Dataset columns: id (string, 2–8 characters), title (string, 1–130 characters), text (string, 0–252k characters), formulas (list, 1–823 items), url (string, 38–44 characters).
7515919
Brauer–Siegel theorem
Asymptotic result on the behaviour of algebraic number fields In mathematics, the Brauer–Siegel theorem, named after Richard Brauer and Carl Ludwig Siegel, is an asymptotic result on the behaviour of algebraic number fields. It attempts to generalise the results known on the class numbers of imaginary quadratic fields to a more general sequence of number fields formula_0 In all cases other than the rational field Q and imaginary quadratic fields, the regulator "R""i" of "K""i" must be taken into account, because "K""i" then has units of infinite order by Dirichlet's unit theorem. The quantitative hypothesis of the standard Brauer–Siegel theorem is that if "D""i" is the discriminant of "K""i", then formula_1 Assuming that, and the algebraic hypothesis that "K""i" is a Galois extension of Q, the conclusion is that formula_2 where "h""i" is the class number of "K""i". If one assumes that all the degrees formula_3 are bounded above by a uniform constant "N", then one may drop the assumption of normality; this is what is actually proved in Brauer's paper. This result is ineffective, as indeed was the result on quadratic fields on which it built. Effective results in the same direction were initiated in work of Harold Stark from the early 1970s.
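As a numerical companion (not part of the original article), the sketch below illustrates the theorem in the simplest setting of imaginary quadratic fields, where the regulator is 1 and the class number can be computed by counting reduced primitive binary quadratic forms; the discriminants are arbitrary examples, and the ratio log h / log sqrt|D| drifts only slowly towards 1, in line with the ineffectivity noted above.

import math

def class_number(D):
    # Class number h(D) of an imaginary quadratic field with fundamental discriminant D < 0,
    # counted as the number of reduced primitive binary quadratic forms (a, b, c) with b^2 - 4ac = D.
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    b = D % 2                      # b has the same parity as D
    while 3 * b * b <= -D:         # reduction forces |b| <= a <= c, hence 3b^2 <= |D|
        ac = (b * b - D) // 4
        a = max(b, 1)
        while a * a <= ac:
            if ac % a == 0:
                c = ac // a
                if math.gcd(math.gcd(a, b), c) == 1:               # primitive forms only
                    h += 1 if (b == 0 or b == a or a == c) else 2  # forms (a, b, c) and (a, -b, c)
            a += 1
        b += 2
    return h

# Brauer-Siegel for imaginary quadratic fields: log(h) / log(sqrt(|D|)) -> 1 as |D| grows.
for D in (-163, -10007, -100003, -1000003):
    h = class_number(D)
    print(D, h, round(math.log(h) / math.log(math.sqrt(-D)), 3))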
[ { "math_id": 0, "text": "K_1, K_2, \\ldots.\\ " }, { "math_id": 1, "text": " \\frac{[K_i : \\mathbf Q]}{\\log|D_i|} \\to 0\\text{ as }i \\to\\infty. " }, { "math_id": 2, "text": " \\frac{ \\log(h_i R_i) }{ \\log\\sqrt{|D_i|} } \\to 1\\text{ as }i \\to\\infty " }, { "math_id": 3, "text": "[K_i : \\mathbf Q]" } ]
https://en.wikipedia.org/wiki?curid=7515919
75161417
Finite subgroups of SU(2)
Use of mathematical groups in magnetochemistry In applied mathematics, finite subgroups of SU(2) are groups composed of rotations and related transformations, employed particularly in the field of physical chemistry. The symmetry group of a physical body generally contains a subgroup (typically finite) of the 3D rotation group. It may occur that the group {±1} with two elements acts also on the body; this is typically the case in magnetism for the exchange of north and south poles, or in quantum mechanics for the change of spin sign. In this case, the symmetry group of a body may be a central extension of the group of spatial symmetries by the group with two elements. Hans Bethe introduced the term "double group" ("Doppelgruppe") for such a group, in which two different elements induce the spatial identity, and a rotation of 360° may correspond to an element of the double group that is not the identity. The classification of the finite double groups and their character tables is therefore physically meaningful and is thus the main part of the theory of double groups. Finite double groups include the binary polyhedral groups. In physical chemistry, double groups are used in the treatment of the magnetochemistry of complexes of metal ions that have a single unpaired electron in the d-shell or f-shell. Instances when a double group is commonly used include 6-coordinate complexes of copper(II), titanium(III) and cerium(III). In these double groups rotation by 360° is treated as a symmetry operation separate from the identity operation; the double group is formed by combining these two symmetry operations with a point group such as a dihedral group or the full octahedral group. Definition and theory. Let Γ be a finite subgroup of SO(3), the three-dimensional rotation group. There is a natural homomorphism f of SU(2) onto SO(3) which has kernel {±I}. This double cover can be realised using the adjoint action of SU(2) on the Lie algebra of traceless 2-by-2 skew-adjoint matrices or using the action by conjugation of unit quaternions. The double group Γ' is defined as f−1 (Γ). By construction {±I} is a central subgroup of Γ' and the quotient is isomorphic to Γ. Thus Γ' is a central extension of the group Γ by {±1}, the cyclic group of order 2. Ordinary representations of Γ' are just mappings of Γ into the general linear group that are homomorphisms up to a sign; equivalently, they are projective representations of Γ with a factor system or Schur multiplier in {±1}. Projective representations of Γ are closed under the tensor product operation, with the corresponding factor systems in {±1} multiplying. The central extensions of Γ by {±1} also have a natural product. The finite subgroups of SU(2) and SO(3) were determined in 1876 by Felix Klein in an article in "Mathematische Annalen", later incorporated in his celebrated 1884 "Lectures on the Icosahedron": for SU(2), the subgroups correspond to the cyclic groups, the binary dihedral groups, the binary tetrahedral group, the binary octahedral group, and the binary icosahedral group; and for SO(3), they correspond to the cyclic groups, the dihedral groups, the tetrahedral group, the octahedral group and the icosahedral group. The correspondence can be found in numerous textbooks, and goes back to the classification of Platonic solids.
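The two-to-one homomorphism f and its quaternionic realisation described above can be checked directly in a few lines. The following sketch is not part of the original article; it is a minimal NumPy illustration that a unit quaternion q and its negative −q induce the same rotation, and that a rotation by 360° lifts to −1 in SU(2), i.e. to a non-identity element of the double group.

import numpy as np

def quaternion_to_rotation(q):
    # Standard map from a unit quaternion (w, x, y, z) to the SO(3) matrix it induces by conjugation.
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def rotation_quaternion(axis, angle):
    # Unit quaternion representing a rotation by `angle` about `axis`.
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

q = rotation_quaternion([0, 0, 1], np.pi / 2)
assert np.allclose(quaternion_to_rotation(q), quaternion_to_rotation(-q))   # q and -q give the same rotation

full_turn = rotation_quaternion([0, 0, 1], 2 * np.pi)
print(np.round(full_turn, 6))                                               # approximately (-1, 0, 0, 0)
print(np.allclose(quaternion_to_rotation(full_turn), np.eye(3)))            # True: projects to the identity rotation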
From Klein's classifications of binary subgroups, it follows that, if Γ a finite subgroup of SO(3), then, up to equivalence, there are exactly two central extensions of Γ by {±1}: the one obtained by lifting the double cover Γ' = f−1 (Γ); and the trivial extension Γ x {±1}. The character tables of the finite subgroups of SU(2) and SO(3) were determined and tabulated by F. G. Frobenius in 1898, with alternative derivations by I. Schur and H. E. Jordan in 1907 independently. Branching rules and tensor product formulas were also determined. For each binary subgroup, i.e. finite subgroup of SU(2), the irreducible representations of Γ are labelled by extended Dynkin diagrams of type A, D and E; the rules for tensoring with the two-dimensional vector representation are given graphically by an undirected graph. By Schur's lemma, irreducible representations of Γ x {±1} are just irreducible representations of Γ multiplied by either the trivial or the sign character of {±1}. Likewise, irreducible representations of Γ' which send –1 to "I" are just ordinary representations of Γ; while those which send –1 to –"I" are genuinely double-valued or spinor representations. Example. For the double icosahedral group, if formula_0 is the golden ratio formula_1 with inverse formula_2, the character table is given below: spinor characters are denoted by asterisks. The character table of the icosahedral group is also given. The tensor product rules for tensoring with the two-dimensional representation are encoded diagrammatically below: The numbering has at the top formula_5 and then below, from left to right, formula_9, formula_6, formula_11, formula_7, formula_10, formula_4, formula_8, and formula_3. Thus, on labelling the vertices by irreducible characters, the result of multiplying formula_8 by a given irreducible character equals the sum of all irreducible characters labelled by an adjacent vertex. The representation theory of SU(2) goes back to the nineteenth century and the theory of invariants of binary forms, with the figures of Alfred Clebsch and Paul Gordan prominent. The irreducible representations of SU(2) are indexed by non-negative half integers j. If V is the two-dimensional vector representation, then Vj = S2"j" V, the 2jth symmetric power of V, a (2j + 1)-dimensional vector space. Letting G be the compact group SU(2), the group G acts irreducibly on each Vj and satisfies the Clebsch-Gordan rules: formula_12 In particular formula_13 for j > 0, and formula_14 By definition, the matrix representing g in Vj is just S2"j" ( g ). Since every g is conjugate to a diagonal matrix with diagonal entries formula_15 and formula_16 (the order being immaterial), in this case S2"j" ( g ) has diagonal entries formula_17, formula_18, ... ,formula_19, formula_20. 
Setting formula_21 this yields the character formula formula_22 Substituting formula_23, it follows that, if g has diagonal entries formula_24 then formula_25 The representation theory of SU(2), including that of SO(3), can be developed in many different ways: * using the complexification "G"c = SL(2,C) and the double coset decomposition "G"c = "B" · "w" · "B" ∐ "B", where "B" denotes upper triangular matrices and formula_26; * using the infinitesimal action of the Lie algebras of SU(2) and SL(2,C) where they appear as raising and lowering operators E, F, H of angular momentum in quantum mechanics: here "E" = formula_27, "F" = "E"* and "H" = ["E","F"] so that ["H","E"] = 2"E" and ["H","F"] = –2"F"; * using integration of class functions over SU(2), identifying the unit quaternions with 3-sphere and Haar measure as the volume form: this reduces to integration over the diagonal matrices, i.e. the circle group T. The properties of matrix coefficients or representative functions of the compact group SU(2) (and SO(3)) are well documented as part of the theory of special functions: the Casimir operator C = H2 + 2 EF + 2 FE commutes with the Lie algebras and groups. The operator C can be identified with the Laplacian Δ, so that on a matrix coefficient φ of Vj, Δφ =(j2 + j)φ. The representative functions A form a non-commutative algebra under convolution with respect to Haar measure μ. The analogue for a finite subgroup of Γ of SU(2) is the finite-dimensional group algebra C[Γ] From the Clebsch-Gordan rules, the convolution algebra A is isomorphic to a direct sum of n x n matrices, with n = 2j + 1 and j ≥ 0. The matrix coefficients for each irreducible representation Vj form a set of matrix units. This direct sum decomposition is the Peter-Weyl theorem. The corresponding result for C[Γ] is Maschke's theorem. The algebra A has eigensubspaces a(gζ) = a(g) or a(g)ζ, exhibiting them as direct sum of Vj, summed over j non-negative integers or positive half-integers – these are examples of induced representations. It allows the computations of branching rules from SU(2) to Γ, so that Vj can be decomposed as direct sums of irreducible representations of Γ. History. Georg Frobenius derived and listed in 1899 the character tables of the finite subgroups of SU(2), the double cover of the rotation group SO(3). In 1875, Felix Klein had already classified these finite "binary" subgroups into the cyclic groups, the binary dihedral groups, the binary tetrahedral group, the binary octahedral group and the binary icosahedral group. Alternative derivations of the character tables were given by Issai Schur and H. E. Jordan in 1907; further branching rules and tensor product formulas were also determined. In a 1929 article on splitting of atoms in crystals, the physicist H. Bethe first coined the term "double group" ("Doppelgruppe"), a concept that allowed double-valued or spinor representations of finite subgroups of the rotation group to be regarded as ordinary linear representations of their double covers. In particular, Bethe applied his theory to relativistic quantum mechanics and crystallographic point groups, where a natural physical restriction to 32 point groups occurs. Subsequently, the non-crystallographic icosahedral case has also been investigated more extensively, resulting most recently in groundbreaking advances on carbon 60 and fullerenes in the 1980s and 90s. 
In 1982–1984, there was another breakthrough involving the icosahedral group, this time through materials scientist Dan Shechtman's remarkable work on quasicrystals, for which he was awarded a Nobel Prize in Chemistry in 2011. Applications. Magnetochemistry. In magnetochemistry, the need for a double group arises in a very particular circumstance, namely, in the treatment of the magnetic properties of complexes of a metal ion in whose electronic structure there is a single unpaired electron (or its equivalent, a single vacancy) in a metal ion's "d"- or "f"- shell. This occurs, for example, with the elements copper, silver and gold in the +2 oxidation state, where there is a single vacancy in the d-electron shell, with titanium(III) which has a single electron in the 3d shell and with cerium(III) which has a single electron in the 4f shell. In group theory, the character formula_28, for rotation, by an angle α, of a wavefunction for half-integer angular momentum is given by formula_29 where angular momentum is the vector sum of spin and orbital momentum, formula_30. This formula applies with angular momentum in general. In atoms with a single unpaired electron the character for a rotation through an angle of formula_31 is equal to formula_32. The change of sign cannot be true for an identity operation in any point group. Therefore, a double group, in which rotation by formula_33 is classified as being distinct from the identity operation, is used. A character table for the double group D'4 is as follows. The new operation is labelled R in this example. The character table for the point group D4 is shown for comparison. In the table for the double group, the symmetry operations such as C4 and C4R belong to the same "class" but the header is shown, for convenience, in two rows, rather than C4, C4R in a single row. Character tables for the double groups T', O', Td', D3h', C6v', D6', D2d', C4v', D4', C3v', D3', C2v', D2' and R(3)' are given in , and . The need for a double group occurs, for example, in the treatment of magnetic properties of 6-coordinate complexes of copper(II). The electronic configuration of the central Cu2+ ion can be written as [Ar]3"d"9. It can be said that there is a single vacancy, or hole, in the copper 3"d"-electron shell, which can contain up to 10 electrons. The ion [Cu(H2O)6]2+ is a typical example of a compound with this characteristic. (1) Six-coordinate complexes of the Cu(II) ion, with the generic formula [CuL6]2+, are subject to the Jahn-Teller effect so that the symmetry is reduced from octahedral (point group "Oh)" to tetragonal (point group "D"4"h"). Since "d" orbitals are centrosymmetric the related atomic term symbols can be classified in the subgroup "D"4 . (2) To a first approximation spin-orbit coupling can be ignored and the magnetic moment is then predicted to be 1.73 Bohr magnetons, the so-called spin-only value. However, for a more accurate prediction spin-orbit coupling must be taken into consideration. This means that the relevant quantum number is "J", where "J" = "L + S". (3) When "J" is half-integer, the "character" for a rotation by an angle of α + 2π radians is equal to minus the "character" for rotation by an angle α. This cannot be true for an identity in a point group. Consequently, a group must be used in which rotations by α + 2π are classed as symmetry operations distinct from rotations by an angle α. This group is known as the double group, "D"4'. 
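The behaviour of formula_29 under a full rotation is easy to verify numerically. The sketch below is not part of the original article; it checks the closed-form character against the direct sum of exponentials and exhibits the sign change under α → α + 2π for half-integer J that motivates the double group.

import numpy as np

def character_sum(J, alpha):
    # Trace of a rotation by alpha in the (2J+1)-dimensional representation:
    # sum of exp(i*M*alpha) over M = -J, -J+1, ..., J (J may be half-integer).
    Ms = np.arange(-J, J + 1)
    return np.exp(1j * Ms * alpha).sum().real

def character_closed_form(J, alpha):
    # The closed form sin((J + 1/2) * alpha) / sin(alpha / 2) quoted above.
    return np.sin((J + 0.5) * alpha) / np.sin(alpha / 2)

alpha = 0.7
for J in (0.5, 1.0, 1.5, 2.0, 2.5):
    assert np.isclose(character_sum(J, alpha), character_closed_form(J, alpha))

# For half-integer J the character flips sign under alpha -> alpha + 2*pi,
# so rotation by 360 degrees cannot be identified with the identity operation.
for J in (0.5, 1.5):
    print(J, character_closed_form(J, alpha), character_closed_form(J, alpha + 2 * np.pi))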
With species such as the square-planar complex of the silver(II) ion [AgF4]2- the relevant double group is also "D"4'; deviations from the spin-only value are greater as the magnitude of spin-orbit coupling is greater for silver(II) than for copper(II). A double group is also used for some compounds of titanium in the +3 oxidation state. Compounds of titanium(III) have a single electron in the 3"d" shell. The magnetic moments of octahedral complexes with the generic formula [TiL6]n+ have been found to lie in the range 1.63 - 1.81 B.M. at room temperature. The double group "O"' is used to classify their electronic states. The cerium(III) ion, Ce3+, has a single electron in the 4"f" shell. The magnetic properties of octahedral complexes of this ion are treated using the double group "O"'. When a cerium(III) ion is encapsulated in a C60 cage, the formula of the endohedral fullerene is written as {Ce3+@C603-}. Free radicals. Double groups may be used in connection with free radicals. This has been illustrated for the species CH3F+ and CH3BF2+ which both contain a single unpaired electron. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "{1\\over 2} (1 +\\sqrt{5})" }, { "math_id": 2, "text": "\\tilde{\\varphi}={1\\over 2}(-1 + \\sqrt{5})" }, { "math_id": 3, "text": "\\chi_{1}" }, { "math_id": 4, "text": "\\chi_{3}" }, { "math_id": 5, "text": "\\chi_{3^\\prime}" }, { "math_id": 6, "text": "\\chi_{4}" }, { "math_id": 7, "text": "\\chi_{5}" }, { "math_id": 8, "text": "\\chi_2^*" }, { "math_id": 9, "text": "\\chi_{2^\\prime}^*" }, { "math_id": 10, "text": "\\chi_4^*" }, { "math_id": 11, "text": "\\chi_6^*" }, { "math_id": 12, "text": "V_i \\otimes V_j \\cong V_{|i-j|} \\oplus V_{|i-j|+1} \\oplus \\cdots \\oplus V_{i+j -1} \\oplus V_{i+j}." }, { "math_id": 13, "text": " V_j \\otimes V_{{1\\over 2}} \\cong V_{j-{1\\over 2}} \\oplus V_{j+{1\\over 2}}" }, { "math_id": 14, "text": "V_0 \\otimes V_{{1\\over 2}} \\cong V_{{1\\over 2}}." }, { "math_id": 15, "text": "\\zeta" }, { "math_id": 16, "text": "\\zeta^{-1}" }, { "math_id": 17, "text": "\\zeta^{2j}" }, { "math_id": 18, "text": "\\zeta^{2j-1}" }, { "math_id": 19, "text": "\\zeta^{-2j+1}" }, { "math_id": 20, "text": "\\zeta^{-2j}" }, { "math_id": 21, "text": "\\pi_j(g) =S^{2j}(g)," }, { "math_id": 22, "text": "\\chi_j(g) := {\\rm Tr} \\, \\pi_j(g) = \\sum_{k=-2j}^{2j} \\zeta^k = {\\zeta^{2j+1} - \\zeta^{-2j-1}\\over \\zeta -\\zeta^{-1}}." }, { "math_id": 23, "text": "\\zeta =e^{i\\alpha/2}" }, { "math_id": 24, "text": "e^{\\pm i\\alpha/2}," }, { "math_id": 25, "text": "\\chi_j(g) = {\\sin (j + {1/2})\\alpha \\over \\sin \\alpha/2}." }, { "math_id": 26, "text": "w = \\begin{pmatrix} 0 & 1\\\\ -1 & 0\\end{pmatrix}" }, { "math_id": 27, "text": "\\begin{pmatrix} 0 & 1\\\\ 0 & 0\\end{pmatrix}" }, { "math_id": 28, "text": "\\chi" }, { "math_id": 29, "text": "\\chi^J (\\alpha) = \\frac{\\sin [J+1/2] \\alpha } {\\sin (1/2) \\alpha }" }, { "math_id": 30, "text": " J= L + S" }, { "math_id": 31, "text": " 2\\pi+\\alpha" }, { "math_id": 32, "text": "-\\chi^J (\\alpha)" }, { "math_id": 33, "text": " 2\\pi" } ]
https://en.wikipedia.org/wiki?curid=75161417
7516582
Hypercycle (geometry)
Type of curve in hyperbolic geometry In hyperbolic geometry, a hypercycle, hypercircle or equidistant curve is a curve whose points have the same orthogonal distance from a given straight line (its axis). Given a straight line L and a point P not on L, one can construct a hypercycle by taking all points Q on the same side of L as P, with perpendicular distance to L equal to that of P. The line L is called the "axis", "center", or "base line" of the hypercycle. The lines perpendicular to L, which are also perpendicular to the hypercycle, are called the "normals" of the hypercycle. The segments of the normals between L and the hypercycle are called the "radii". Their common length is called the "distance" or "radius" of the hypercycle. The hypercycles through a given point that share a tangent through that point converge towards a horocycle as their distances go towards infinity. Properties similar to those of Euclidean lines. Hypercycles in hyperbolic geometry have some properties similar to those of lines in Euclidean geometry: Properties similar to those of Euclidean circles. Hypercycles in hyperbolic geometry have some properties similar to those of circles in Euclidean geometry: Length of an arc. In the hyperbolic plane of constant curvature −1, the length of an arc of a hypercycle can be calculated from the radius r and the distance d between the points where the normals intersect the axis, using the formula "l" = "d" cosh "r". Construction. In the Poincaré disk model of the hyperbolic plane, hypercycles are represented by lines and circle arcs that intersect the boundary circle at non-right angles. The representation of the axis intersects the boundary circle in the same points, but at right angles. In the Poincaré half-plane model of the hyperbolic plane, hypercycles are represented by lines and circle arcs that intersect the boundary line at non-right angles. The representation of the axis intersects the boundary line in the same points, but at right angles. Congruence classes of Steiner parabolas. The congruence classes of Steiner parabolas in the hyperbolic plane are in one-to-one correspondence with the hypercycles in a given half-plane H of a given axis. In an incidence geometry, the Steiner conic at a point P produced by a collineation T is the locus of intersections "L" ∩ "T"("L") for all lines L through P. This is the analogue of Steiner's definition of a conic in the projective plane over a field. The congruence classes of Steiner conics in the hyperbolic plane are determined by the distance s between P and "T"("P") and the angle of rotation φ induced by T about "T"("P"). Each Steiner parabola is the locus of points whose distance from a focus F is equal to the distance to a hypercycle directrix that is not a line. Assuming a common axis for the hypercycles, the location of F is determined by φ as follows. Fixing sinh "s" = 1, the classes of parabolas are in one-to-one correspondence with "φ" ∈ (0, π/2). In the conformal disk model, each point P is a complex number with |"P"| < 1. Let the common axis be the real line and assume the hypercycles are in the half-plane H with Im "P" > 0. Then the vertex of each parabola will be in H, and the parabola is symmetric about the line through the vertex perpendicular to the axis. If the hypercycle is at distance d from the axis, with formula_0 then formula_1 In particular, "F" = 0 when "φ" = π/4. In this case, the focus is on the axis; equivalently, inversion in the corresponding hypercycle leaves H invariant.
This is the "harmonic" case, that is, the representation of the parabola in any inversive model of the hyperbolic plane is a harmonic, genus 1 curve. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\tanh d = \\tan\\tfrac{\\phi}{2}," }, { "math_id": 1, "text": "F = \\left(\\frac{1-\\tan\\phi}{1+\\tan\\phi}\\right)i." } ]
https://en.wikipedia.org/wiki?curid=7516582
7516709
GW approximation
Approximation in many-body systems The "GW" approximation (GWA) is an approximation made in order to calculate the self-energy of a many-body system of electrons. The approximation is that the expansion of the self-energy "Σ" in terms of the single particle Green's function "G" and the screened Coulomb interaction "W" (in units of formula_0) formula_1 can be truncated after the first term: formula_2 In other words, the self-energy is expanded in a formal Taylor series in powers of the screened interaction "W" and the lowest order term is kept in the expansion in GWA. Theory. The above formulae are schematic in nature and show the overall idea of the approximation. More precisely, if we label an electron coordinate with its position, spin, and time and bundle all three into a composite index (the numbers 1, 2, etc.), we have formula_3 where the "+" superscript means the time index is shifted forward by an infinitesimal amount. The GWA is then formula_4 To put this in context, if one replaces "W" by the bare Coulomb interaction (i.e. the usual 1/r interaction), one generates the standard perturbative series for the self-energy found in most many-body textbooks. The GWA with "W" replaced by the bare Coulomb yields nothing other than the Hartree–Fock exchange potential (self-energy). Therefore, loosely speaking, the GWA represents a type of dynamically screened Hartree–Fock self-energy. In a solid state system, the series for the self-energy in terms of "W" should converge much faster than the traditional series in the bare Coulomb interaction. This is because the screening of the medium reduces the effective strength of the Coulomb interaction: for example, if one places an electron at some position in a material and asks what the potential is at some other position in the material, the value is smaller than given by the bare Coulomb interaction (inverse distance between the points) because the other electrons in the medium polarize (move or distort their electronic states) so as to screen the electric field. Therefore, "W" is a smaller quantity than the bare Coulomb interaction so that a series in "W" should have higher hopes of converging quickly. To see the more rapid convergence, we can consider the simplest example involving the homogeneous or uniform electron gas which is characterized by an electron density or equivalently the average electron-electron separation or Wigner–Seitz radius formula_5. (We only present a scaling argument and will not compute numerical prefactors that are order unity.) Here are the key steps: formula_9 where formula_10 is the screening wave number that scales as formula_11 Thus for the bare Coulomb interaction, the ratio of Coulomb to kinetic energy is of order formula_15 which is of order 2-5 for a typical metal and not small at all: in other words, the bare Coulomb interaction is rather strong and makes for a poor perturbative expansion. On the other hand, the ratio of a typical formula_16 to the kinetic energy is greatly reduced by the screening and is of order formula_17 which is well behaved and smaller than unity even for large formula_5: the screened interaction is much weaker and is more likely to give a rapidly converging perturbative series.
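The scaling argument can be tabulated with a rough numerical sketch (not part of the original article): atomic-style units, all order-one prefactors set to 1, and a Thomas-Fermi-like form assumed for the dielectric function, exactly as in the schematic estimates above.

# Schematic comparison of bare vs screened Coulomb coupling for the uniform electron gas.
for r_s in (0.5, 1.0, 2.0, 5.0, 10.0):
    kinetic = 1.0 / r_s**2          # kinetic energy scale ~ 1/r_s^2
    bare = 1.0 / r_s                # bare Coulomb energy scale ~ 1/r_s
    q = 1.0 / r_s                   # typical momentum transfer ~ 1/r_s
    lam_sq = 1.0 / r_s              # (screening wave number)^2, since lambda ~ r_s**-0.5
    eps = 1.0 + lam_sq / q**2       # dielectric function ~ 1 + r_s
    screened = bare / eps           # W = V / eps
    print(f"r_s = {r_s:4.1f}   bare/kinetic = {bare/kinetic:5.2f}   "
          f"screened/kinetic = {screened/kinetic:5.2f}   r_s/(1+r_s) = {r_s/(1+r_s):5.2f}")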
[ { "math_id": 0, "text": " \\hbar=1" }, { "math_id": 1, "text": "\\Sigma = iGW - GWGWG + \\cdots" }, { "math_id": 2, "text": "\\Sigma \\approx iG W" }, { "math_id": 3, "text": " \\Sigma(1,2) = iG(1,2)W(1^+,2) - \\int d3 \\int d4 \\, G(1,3)G(3,4)G(4,2)W(1,4)W(3,2) + ... " }, { "math_id": 4, "text": " \\Sigma(1,2) \\approx iG(1,2)W(1^+,2) " }, { "math_id": 5, "text": " r_s " }, { "math_id": 6, "text": "1/r_s^2" }, { "math_id": 7, "text": "1/r_s" }, { "math_id": 8, "text": " q " }, { "math_id": 9, "text": " \\epsilon(q) = 1 + \\lambda^2/q^2" }, { "math_id": 10, "text": "\\lambda" }, { "math_id": 11, "text": " r_s^{-1/2} " }, { "math_id": 12, "text": "q" }, { "math_id": 13, "text": "\\epsilon \\sim 1 + r_s " }, { "math_id": 14, "text": " W(q) = V(q)/\\epsilon(q)" }, { "math_id": 15, "text": "r_s" }, { "math_id": 16, "text": "W" }, { "math_id": 17, "text": "r_s/(1+r_s)" } ]
https://en.wikipedia.org/wiki?curid=7516709
75168719
Weisfeiler Leman graph isomorphism test
Heuristic test for graph isomorphism In graph theory, the Weisfeiler Leman graph isomorphism test is a heuristic test for the existence of an isomorphism between two graphs "G" and "H". It is a generalization of the color refinement algorithm and has been first described by Weisfeiler and Leman in 1968. The original formulation is based on graph canonization, a normal form for graphs, while there is also a combinatorial interpretation in the spirit of color refinement and a connection to logic. There are several versions of the test (e.g. k-WL and k-FWL) referred to in the literature by various names, which easily leads to confusion. Additionally, Andrey Leman is spelled `Lehman' in several older articles. Weisfeiler-Leman-based Graph Isomorphism heuristics. All variants of color refinement are one-sided heuristics that take as input two graphs "G" and "H" and output a certificate that they are different or 'I don't know'. This means that if the heuristic is able to tell "G" and "H" apart, then they are definitely different, but the other direction does not hold: for every variant of the WL-test (see below) there are non-isomorphic graphs where the difference is not detected. Those graphs are highly symmetric graphs such as regular graphs for 1-WL/color refinement. 1-dimensional Weisfeiler-Leman (1-WL). The 1-dimensional graph isomorphism test is essentially the same as the color refinement algorithm (the difference has to do with non-edges and is irrelevant for all practical purposes as it is trivial to see that graphs with a different number of nodes are non-isomorphic). The algorithm proceeds as follows: Initialization All nodes are initialized with the same color 0 Refinement Two nodes "u,v" get a different color if a) they had a different color before or b) there is a color "c" such that "u" and "v" have a different number of "c"-colored neighbors Termination The algorithm ends if the partition induced by two successive refinement steps is the same. In order to use this algorithm as a graph isomorphism test, one runs the algorithm on two input graphs "G" and "H" in parallel, i.e. using the colors when splitting such that some color "c" (after one iteration) might mean `a node with exactly 5 neighbors of color 0'. In practice this is achieved by running color refinement on the disjoint union graph of "G" and "H". One can then look at the histogram of colors of both graphs (counting the number of nodes after color refinement stabilized) and if they differ, this is a certificate that both graphs are not isomorphic. The algorithm terminates after at most formula_0 rounds where formula_1 is the number of nodes of the input graph as it has to split one partition in every refinement step and this can happen at most until every node has its own color. Note that there are graphs for which one needs this number of iterations, although in practice the number of rounds until terminating tends to be very low (<10). The refinement of the partition in each step is by processing for each node its label and the labels of its nearest neighbors. Therefore WLtest can be viewed as a message passing algorithm which also connects it to graph neural networks. Higher-order Weisfeiler-Leman. This is the place where the aforementioned two variants of the WL algorithm appear. Both the "k"-dimensional Weisfeiler-Leman (k-WL) and the "k"-dimensional "folklore" Weisfeiler-Leman algorithm (k-FWL) are extensions of 1-WL from above operating on k-tuples instead of individual nodes. 
While their difference looks innocent on the first glance, it can be shown that k-WL and (k-1)-FWL (for k>2) distinguish the same pairs of graphs. k-WL (k>1). Input: a graph G = (V,E) # initialize formula_2 for all formula_3 repeat formula_4 for all formula_5 formula_6 until formula_7 (both colorings induce identical partitions of formula_8) return formula_9 Here the neighborhood formula_10 of a k-tuple formula_11 is given by the set of all k-tuples reachable by exchanging the i-th position of formula_11: formula_12 The atomic type formula_13of a tuple encodes the edge information between all pairs of nodes from formula_11. For example, a 2-tuple has only two possible atomic types, namely the two nodes may share an edge, or they do not. Note that if the graph has multiple (different) edge relations or additional node features, membership in those is also represented in formula_13. The key idea of k-WL is to expand the neighborhood notion to k-tuples and then effectively run color refinement on the resulting graph. k-FWL (k>1). Input: a graph G = (V,E) # initialize formula_14) for all formula_3 repeat formula_15 for all formula_16 formula_17 until formula_7 (both colorings induce identical partitions of formula_8) return formula_9 Here formula_18 is the tuple formula_11 where the i-th position is exchanged to be formula_19. Note that there is one major difference between k-WL and k-FWL: k-FWL checks what happens if a single node w is placed at any position of the k-tuple (and then computes the multiset of these k-tuples) while k-WL looks at the multisets that you get when changing the i-th component only of the original k-tuple. It then uses all those multisets in the hash that computes the new color. It can be shown (although only through the connection to logic) that k-FWL and (k+1)-WL are equivalent (for formula_20). Since both algorithms scale exponentially in k (both iterate over all k-tuples), the use of k-FWL is much more efficient than using the equivalent (k+1)-WL. Examples and Code for 1-WL. Code. U = combineTwo(G, H) glabels = initializeLabels(U) # dictionary where every node gets the same label 0 labels = {} # dictionary that will provide translation from a string of labels of a node and its neighbors to an integer newLabel = 1 done = False while not(done): glabelsNew = {} # set up the dictionary of labels for the next step for node in U: label = str(glabels[node]) + str([glabels[x] for x in neighbors of node].sort()) if not(label in labels): # a combination of labels from the node and its neighbors is encountered for the first time labels[label] = newLabel # assign the string of labels to a new number as an abbreviated label newLabel += 1 # increase the counter for assigning new abbreviated labels glabelsNew[node] = labels[label] if (number of different labels in glabels) == (number of different labels in glabelsNew): done = True else: glabels = glabelsNew.copy() certificateG = certificate for G from the sorted labels of the G-part of U certificateH = certificate for H from the sorted labels of the H-part of U if certificateG == certificateH: test = True else: test = False Here is some actual Python code which includes the description of the first examples. 
def combineTwo(g1, g2): n = len(g1) for node in g1: s = set() for neighbor in g1[node]: s.add(neighbor) g[node] = s.copy() for node in g2: s = set() for neighbor in g2[node]: s.add(neighbor + n) g[node + n] = s.copy() return g g = combineTwo(g5_00, g5_02) for i in range(len(g)): glabels[i] = 0 glabelsCount = 1 newlabel = 1 done = False while not (done): glabelsCountNew = 0 for node in g: label = str(glabels[node]) s2 = [] for neighbor in g[node]: s2.append(glabels[neighbor]) s2.sort() for i in range(len(s2)): label += "_" + str(s2[i]) if not (label in labels): labels[label] = newlabel newlabel += 1 glabelsCountNew += 1 glabelsNew[node] = labels[label] if glabelsCount == glabelsCountNew: done = True else: glabelsCount = glabelsCountNew glabels = glabelsNew.copy() print(glabels) g0labels = [] for i in range(len(g0)): g0labels.append(glabels[i]) g0labels.sort() certificate0 = "" for i in range(len(g0)): certificate0 += str(g0labels[i]) + "_" g1labels = [] for i in range(len(g1)): g1labels.append(glabels[i + len(g0)]) g1labels.sort() certificate1 = "" for i in range(len(g1)): certificate1 += str(g1labels[i]) + "_" if certificate0 == certificate1: test = True else: test = False print("Certificate 0:", certificate0) print("Certificate 1:", certificate1) print("Test yields:", test) Examples. The first three examples are for graphs of order 5. WLpair takes 3 rounds on 'G0' and 'G1'. The test succeeds as the certificates agree. WLpair takes 4 rounds on 'G0' and 'G2'. The test fails as the certificates disagree. Indeed 'G0' has a cycle of length 5, while 'G2' doesn't, thus 'G0' and 'G2' cannot be isomorphic. WLpair takes 4 rounds on 'G1' and 'G2'. The test fails as the certificates disagree. From the previous two instances we already know formula_21. Indeed "G0" and "G1" are isomorphic. Any isomorphism must respect the components and therefore the labels. This can be used for kernelization of the graph isomorphism problem. Note that not every map of vertices that respects the labels gives an isomorphism. Let formula_22 and formula_23 be maps given by formula_24 resp. formula_25. While formula_26 is not an isomorphism formula_27 constitutes an isomorphism. When applying WLpair to "G0" and "G2" we get for "G0" the certificate "7_7_8_9_9". But the isomorphic "G1" gets the certificate "7_7_8_8_9" when applying WLpair to "G1" and "G2". This illustrates the phenomenon about labels depending on the execution order of the WLtest on the nodes. Either one finds another relabeling method that keeps uniqueness of labels, which becomes rather technical, or one skips the relabeling altogether and keeps the label strings, which blows up the length of the certificate significantly, or one applies WLtest to the union of the two tested graphs, as we did in the variant WLpair. Note that although "G1" and "G2" can get distinct certificates when WLtest is executed on them separately, they do get the same certificate by WLpair. The next example is about regular graphs. WLtest cannot distinguish regular graphs of equal order, but WLpair can distinguish regular graphs of distinct degree even if they have the same order. In fact WLtest terminates after a single round as seen in these examples of order 8, which are all 3-regular except the last one which is 5-regular. All four graphs are pairwise non-isomorphic. "G8_00" has two connected components, while the others do not. "G8_03" is 5-regular, while the others are 3-regular. "G8_01" has no 3-cycle while "G8_02" does have 3-cycles. 
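For reference, here is a compact, self-contained re-implementation of the pairwise 1-WL test used in the examples above (not the article's own listing; the graph encoding and names are ad hoc): colour refinement is run on the disjoint union of the two graphs and the resulting colour histograms are compared.

from collections import Counter

def wl_pair_test(adj1, adj2):
    # 1-WL pairwise test. Each graph is a dict mapping nodes 0..n-1 to an iterable of neighbours.
    # Returns True if the colour histograms agree ("possibly isomorphic"), False otherwise.
    n1 = len(adj1)
    union = {v: set(ns) for v, ns in adj1.items()}
    union.update({v + n1: {w + n1 for w in ns} for v, ns in adj2.items()})

    colour = {v: 0 for v in union}                     # initialisation: every node gets colour 0
    while True:
        # refinement: new colour = old colour together with the multiset of neighbour colours
        signature = {v: (colour[v], tuple(sorted(colour[w] for w in union[v]))) for v in union}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: relabel[signature[v]] for v in union}
        if len(set(new_colour.values())) == len(set(colour.values())):
            break                                      # termination: the partition did not change
        colour = new_colour

    hist1 = Counter(colour[v] for v in adj1)
    hist2 = Counter(colour[v + n1] for v in adj2)
    return hist1 == hist2

# A 6-cycle and two disjoint triangles are both 2-regular, so 1-WL cannot tell them apart.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(wl_pair_test(cycle6, triangles))   # True, even though the graphs are not isomorphic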
Another example of two non-isomorphic graphs that WLpair cannot distinguish is given here. Applications. The theory behind the Weisfeiler Leman test is applied in graph neural networks. Weisfeiler Leman graph kernels. In machine learning of nonlinear data one uses kernels to represent the data in a high-dimensional feature space, after which linear techniques such as support vector machines can be applied. Data represented as graphs often behave nonlinearly. Graph kernels are a method to preprocess such graph-based nonlinear data to simplify subsequent learning methods. Such graph kernels can be constructed by partially executing a Weisfeiler Leman test and processing the partition that has been constructed up to that point. These Weisfeiler Leman graph kernels have attracted considerable research in the decade after their publication. They are also implemented in dedicated libraries for graph kernels such as GraKeL. Note that kernels for artificial neural networks in the context of machine learning, such as graph kernels, are not to be confused with kernels applied in heuristic algorithms to reduce the computational cost of solving problems of high complexity, such as instances of NP-hard problems in the field of complexity theory. As stated above, the Weisfeiler Leman test can also be applied in the latter context. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "n-1" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "c^0(\\bar v) = \\text{Hash}(\\text{type}(\\bar v))" }, { "math_id": 3, "text": "\\bar v \\in V^k" }, { "math_id": 4, "text": "c^\\ell_i(\\bar v) = \\{\\!\\{c^{\\ell-1}(\\bar w) : \\bar w \\in \\mathcal N_i(\\bar v) \\}\\!\\}" }, { "math_id": 5, "text": "\\bar v \\in V^k, i \\in [k]" }, { "math_id": 6, "text": "c^\\ell(\\bar v) = \\text{Hash}(c^{\\ell-1}(\\bar v), c^{\\ell}_1(\\bar v),\\dots,c^\\ell_k(\\bar v))" }, { "math_id": 7, "text": "c^\\ell \\equiv c^{\\ell-1}" }, { "math_id": 8, "text": "V^k" }, { "math_id": 9, "text": "c^\\ell" }, { "math_id": 10, "text": "\\mathcal N_i(\\bar v)" }, { "math_id": 11, "text": "\\bar v" }, { "math_id": 12, "text": "\\mathcal N_i(\\bar v) = \\{ (v_1,\\dots,v_{i-1},w,v_{i+1},\\dots,v_k) : w \\in V\\}" }, { "math_id": 13, "text": "\\text{type}(\\bar v)" }, { "math_id": 14, "text": "c^0(\\bar v) = \\text{Hash}(\\text{type}(\\bar v)" }, { "math_id": 15, "text": "c^\\ell_w(\\bar v) = \\big(c^{\\ell-1}(\\bar v[1]\\!\\!\\leftarrow\\! w), \\dots, c^{\\ell-1}(\\bar v[k]\\!\\!\\leftarrow\\! w)\\big) " }, { "math_id": 16, "text": "\\bar v \\in V^k, w \\in V" }, { "math_id": 17, "text": "c^\\ell(\\bar v) = \\text{Hash}\\big(c^{\\ell-1}(\\bar v), \\{\\!\\{c^{\\ell}_w(\\bar v) : w\\in V\\}\\!\\}\\big)" }, { "math_id": 18, "text": "\\bar v[i]\\!\\!\\leftarrow\\! w" }, { "math_id": 19, "text": "w" }, { "math_id": 20, "text": "k\\geq 2" }, { "math_id": 21, "text": "G_1\\cong G_0\\not\\cong G_2" }, { "math_id": 22, "text": "\\varphi: G_0\\rightarrow G_1" }, { "math_id": 23, "text": "\\psi: G_0\\rightarrow G_1" }, { "math_id": 24, "text": "\\varphi(a)=D,\\varphi(b)=C,\\varphi(c)=B,\\varphi(d)=E,\\varphi(e)=A" }, { "math_id": 25, "text": "\\psi(a)=B,\\psi(b)=C,\\psi(c)=D,\\psi(d)=E,\\psi(e)=A" }, { "math_id": 26, "text": "\\varphi" }, { "math_id": 27, "text": "\\psi" } ]
https://en.wikipedia.org/wiki?curid=75168719
75172510
Wolfgang Gaschütz
Wolfgang Gaschütz (11 June 1920 – 7 November 2016) was a German mathematician, known for his research in group theory, especially the theory of finite groups. Biography. Gaschütz was born on 11 June 1920 in Karlshof, Oderbruch. He moved with his family in 1931 to Berlin, where he completed his "Abitur" in 1938. He served as an artillery officer in WW II, which ended for him in 1945 near Kiel. There in autumn 1945 he matriculated at the University of Kiel. He was inspired by Andreas Speiser's book "Die Theorie der Gruppen von endlicher Ordnung". Gaschütz received his Ph.D. ("Promotion") in 1949 under the supervision of Karl-Heinrich Weise with doctoral dissertation entitled "(Zur formula_0-Untergruppe endlicher Gruppen)". In 1953 Gaschütz completed his habilitation in Kiel. At the University of Kiel he held the junior academic appointments "Wissenschaftliche Hilfskraft" from 1949 to 1956 and "Diätendozent" from 1956 to 1959. He was "Außerplanmäßiger Professor" from 1959 to 1962, professor extraordinarius from 1962 to 1964, and professor ordinarius (full professor) from 1964 to 1988. He taught at Kiel until his retirement as professor emeritus in 1988. He rejected calls to Karlsruhe and Mainz. He was a visiting professor at various universities in Europe (Queen Mary College London 1965 and 1970, University of Padua 1966, University of Florence 1971, University of Naples Federico II 1974, University of Warwick 1967, 1973 & 1977); in the USA (Michigan State University 1963, University of Chicago 1968); and in Australia (Australian National University in Canberra). Gaschütz created a school of group theorists in Kiel, where there had been a gap in mathematical expertise in algebra since the death of Ernst Steinitz in 1928. Gaschütz, influenced by Helmut Wielandt in the 1950s, is best known for his research on Frattini subgroups, on questions of complementability, on group cohomology, and on the theory of finite solvable groups. In 1959 he gave a formula for the Eulerian function introduced in 1936 by Philip Hall and determined the number of generators of a finite solvable group in terms of structure and embedding of the chief factors of the Eulerian function. In 1962, Gaschütz published his theory of formations, giving a unified theory of Hall subgroups and Carter subgroups. Gaschütz's theory is important for understanding finite solvable groups. He characterized solvable T-groups. He is one of the pioneers of the theory of Fitting classes begun by Bernd Fischer in 1966 and the theory of Schunk classes. Gaschütz organized the Oberwolfach conferences on group theory for many years with Bertram Huppert and Karl W. Gruenberg. In 2000 Gaschütz received an honorary doctorate from Francisk Skorina Gomel State University in Belarus. His doctoral students include Joachim Neubüser. Gaschütz and his wife Gudrun were married in 1943 and became the parents of a son and two daughters. Gaschütz died in Kiel on 7 November 2016, at the age of 96.
[ { "math_id": 0, "text": "\\Phi" } ]
https://en.wikipedia.org/wiki?curid=75172510
751777
Francis turbine
Type of water turbine The Francis turbine is a type of water turbine. It is an inward-flow reaction turbine that combines radial and axial flow concepts. Francis turbines are the most common water turbine in use today, and can achieve over 95% efficiency. The process of arriving at the modern Francis runner design took from 1848 to approximately 1920. It became known as the Francis turbine around 1920, being named after British-American engineer James B. Francis who in 1848 created a new turbine design. Francis turbines are primarily used for producing electricity. The power output of the electric generators generally ranges from just a few kilowatts up to 1000 MW, though mini-hydro installations may be lower. The best performance is seen when the head height is between . Penstock diameters are between . The speeds of different turbine units range from 70 to 1000 rpm. A wicket gate around the outside of the turbine's rotating runner controls the rate of water flow through the turbine for different power production rates. Francis turbines are usually mounted with a vertical shaft, to isolate water from the generator. This also facilitates installation and maintenance. Development. Water wheels of different types have been used for more than 1,000 years to power mills of all types, but they were relatively inefficient. Nineteenth-century efficiency improvements of water turbines allowed them to replace nearly all water wheel applications and compete with steam engines wherever water power was available. After electric generators were developed in the late 1800s, turbines were a natural source of generator power where potential hydropower sources existed. In 1826 the French engineer Benoit Fourneyron developed a high-efficiency (80%) outward-flow water turbine. Water was directed tangentially through the turbine runner, causing it to spin. Another French engineer, Jean-Victor Poncelet, designed an inward-flow turbine in about 1820 that used the same principles. S. B. Howd obtained a US patent in 1838 for a similar design. In 1848 James B. Francis, while working as head engineer of the Locks and Canals company in the water wheel-powered textile factory city of Lowell, Massachusetts, improved on these designs to create more efficient turbines. He applied scientific principles and testing methods to produce a very efficient turbine design. More importantly, his mathematical and graphical calculation methods improved turbine design and engineering. His analytical methods allowed the design of high-efficiency turbines to precisely match a site's water flow and pressure (water head). Components. A Francis turbine consists of the following main parts: Spiral casing: The spiral casing around the runner of the turbine is known as the volute casing or scroll case. Throughout its length, it has numerous openings at regular intervals to allow the working fluid to impinge on the blades of the runner. These openings convert the pressure energy of the fluid into kinetic energy just before the fluid impinges on the blades. This maintains a constant velocity despite the fact that numerous openings have been provided for the fluid to enter the blades, as the cross-sectional area of this casing decreases uniformly along the circumference. Guide and stay vanes: The primary function of the guide and stay vanes is to convert the pressure energy of the fluid into kinetic energy. It also serves to direct the flow at design angles to the runner blades. Runner blades: Runner blades are the heart of any turbine. 
These are the centers where the fluid strikes and the tangential force of the impact produces torque, causing the shaft of the turbine to rotate. Close attention to design of blade angles at inlet and outlet is necessary, as these are major parameters affecting power production. Draft tube: The draft tube is a conduit that connects the runner exit to the tail race where the water is discharged from the turbine. Its primary function is to reduce the velocity of discharged water to minimize the loss of kinetic energy at the outlet. This permits the turbine to be set above the tail water without appreciable drop of available head. Theory of operation. The Francis turbine is a type of reaction turbine, a category of turbine in which the working fluid comes to the turbine under immense pressure and the energy is extracted by the turbine blades from the working fluid. A part of the energy is given up by the fluid because of pressure changes occurring on the blades of the turbine, quantified by the expression of degree of reaction, while the remaining part of the energy is extracted by the volute casing of the turbine. At the exit, water acts on the spinning cup-shaped runner features, leaving at low velocity and low swirl with very little kinetic or potential energy left. The turbine's exit tube is shaped to help decelerate the water flow and recover the pressure. Blade efficiency. Usually the flow velocity (velocity perpendicular to the tangential direction) remains constant throughout, i.e. "V"f1 = "V"f2, and is equal to that at the inlet to the draft tube. Using the Euler turbine equation, "E"/"m" = "e" = "V"w1"U"1, where "e" is the energy transfer to the rotor per unit mass of the fluid. From the inlet velocity triangle, formula_0 and formula_1 Therefore formula_2 The loss of kinetic energy per unit mass at the outlet is "V"f22/2. Therefore, neglecting friction, the blade efficiency becomes formula_3 i.e. formula_4 Degree of reaction. Degree of reaction can be defined as the ratio of pressure energy change in the blades to total energy change of the fluid. This means that it is a ratio indicating the fraction of total change in fluid pressure energy occurring in the blades of the turbine. The rest of the changes occur in the stator blades of the turbine and the volute casing, as it has a varying cross-sectional area. For example, if the degree of reaction is given as 50%, that means that half of the total energy change of the fluid is taking place in the rotor blades and the other half is occurring in the stator blades. If the degree of reaction is zero, it means that the energy change due to the rotor blades is zero, leading to a different turbine design called the Pelton turbine. formula_5 The second equality above holds, since discharge is radial in a Francis turbine. Now, putting in the value of 'e' from above and using formula_6 (as formula_7) formula_8 Application. Francis turbines may be designed for a wide range of heads and flows. This versatility, along with their high efficiency, has made them the most widely used turbine in the world. Francis type units cover a head range from , and their connected generator output power varies from just a few kilowatts up to 1000 MW. Large Francis turbines are individually designed for each site to operate with the given water flow and water head at the highest possible efficiency, typically over 90% (to 99%). In contrast to the Pelton turbine, the Francis turbine operates at its best completely filled with water at all times.
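The blade-efficiency and degree-of-reaction expressions above translate directly into a short calculation. The sketch below is not part of the original article; the guide-vane angle α1, blade inlet angle β1 and flow velocity are arbitrary example values, and the flow velocity is taken constant (formula_7) as in the derivation.

import math

def francis_runner_quantities(alpha1_deg, beta1_deg, v_f1):
    # Energy transfer per unit mass e, blade efficiency and degree of reaction
    # for a Francis runner, following the formulas in the text (V_f1 = V_f2).
    cot_a1 = 1.0 / math.tan(math.radians(alpha1_deg))
    cot_b1 = 1.0 / math.tan(math.radians(beta1_deg))
    e = v_f1**2 * cot_a1 * (cot_a1 + cot_b1)          # Euler work per unit mass
    blade_efficiency = e / (e + v_f1**2 / 2)          # kinetic energy V_f2^2 / 2 is lost at the outlet
    degree_of_reaction = 1 - cot_a1 / (2 * (cot_a1 + cot_b1))
    return e, blade_efficiency, degree_of_reaction

e, eta_b, R = francis_runner_quantities(alpha1_deg=20, beta1_deg=60, v_f1=10.0)
print(f"e = {e:.0f} J/kg, blade efficiency = {eta_b:.3f}, degree of reaction = {R:.3f}")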
The turbine and the outlet channel may be placed lower than the lake or sea level outside, reducing the tendency for cavitation. In addition to electrical production, they may also be used for pumped storage, where a reservoir is filled by the turbine (acting as a pump) driven by the generator acting as a large electrical motor during periods of low power demand, and then reversed and used to generate power during peak demand. These pump storage reservoirs act as large energy storage sources to store "excess" electrical energy in the form of water in elevated reservoirs. This is one of a few methods that allow temporary excess electrical capacity to be stored for later utilization. See also. <templatestyles src="Div col/styles.css"/> Citations. <templatestyles src="Reflist/styles.css" /> General bibliography. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "V_{w1}=V_{f1}\\cot\\alpha_1" }, { "math_id": 1, "text": "U_1=V_{f1}(\\cot\\alpha_1+\\cot\\beta_1)," }, { "math_id": 2, "text": "e=V^2_{f1}\\cot\\alpha_1(\\cot\\alpha_1+\\cot\\beta_1)." }, { "math_id": 3, "text": "\\eta_b={e\\over(e+V^2_{f2}/2)}," }, { "math_id": 4, "text": "\\eta_b=\\frac {2V_{f1}^2(\\cot\\alpha_1(\\cot\\alpha_1+\\cot\\beta_1))}{V_{f2}^2+ 2V_{f1}^2(\\cot\\alpha_1(\\cot\\alpha_1+\\cot\\beta_1))} \\,." }, { "math_id": 5, "text": "R = 1-\\frac{V_1^2-V_{2}^2}{2e} = 1-\\frac{V_1^2-V_{f2}^2}{2e}" }, { "math_id": 6, "text": "V^2_1-V^2_{f2}=V^2_{f1}\\cot\\alpha_2" }, { "math_id": 7, "text": "V_{f2}=V_{f1}" }, { "math_id": 8, "text": "R=1-\\frac{\\cot\\alpha_1}{2(\\cot\\alpha_1+\\cot\\beta_1)}" } ]
https://en.wikipedia.org/wiki?curid=751777
7517878
Rotordynamics
Branch of applied mechanics dealing with rotating structures Rotordynamics (or rotor dynamics) is a specialized branch of applied mechanics concerned with the behavior and diagnosis of rotating structures. It is commonly used to analyze the behavior of structures ranging from jet engines and steam turbines to auto engines and computer disk storage. At its most basic level, rotor dynamics is concerned with one or more mechanical structures (rotors) supported by bearings and influenced by internal phenomena that rotate around a single axis. The supporting structure is called a stator. As the speed of rotation increases the amplitude of vibration often passes through a maximum that is called a critical speed. This amplitude is commonly excited by imbalance of the rotating structure; everyday examples include engine balance and tire balance. If the amplitude of vibration at these critical speeds is excessive, then catastrophic failure occurs. In addition to this, turbomachinery often develop instabilities which are related to the internal makeup of turbomachinery, and which must be corrected. This is the chief concern of engineers who design large rotors. Rotating machinery produces vibrations depending upon the structure of the mechanism involved in the process. Any faults in the machine can increase or excite the vibration signatures. Vibration behavior of the machine due to imbalance is one of the main aspects of rotating machinery which must be studied in detail and considered while designing. All objects including rotating machinery exhibit natural frequency depending on the structure of the object. The critical speed of a rotating machine occurs when the rotational speed matches its natural frequency. The lowest speed at which the natural frequency is first encountered is called the first critical speed, but as the speed increases, additional critical speeds are seen which are the multiples of the natural frequency. Hence, minimizing rotational unbalance and unnecessary external forces are very important to reducing the overall forces which initiate resonance. When the vibration is in resonance, it creates a destructive energy which should be the main concern when designing a rotating machine. The objective here should be to avoid operations that are close to the critical and pass safely through them when in acceleration or deceleration. If this aspect is ignored it might result in loss of the equipment, excessive wear and tear on the machinery, catastrophic breakage beyond repair or even human injury and loss of lives. The real dynamics of the machine is difficult to model theoretically. The calculations are based on simplified models which resemble various structural components (lumped parameters models), equations obtained from solving models numerically (Rayleigh–Ritz method) and finally from the finite element method (FEM), which is another approach for modelling and analysis of the machine for natural frequencies. There are also some analytical methods, such as the distributed transfer function method, which can generate analytical and closed-form natural frequencies, critical speeds and unbalanced mass response. On any machine prototype it is tested to confirm the precise frequencies of resonance and then redesigned to assure that resonance does not occur. Basic principles. 
The equation of motion, in generalized matrix form, for an axially symmetric rotor rotating at a constant spin speed Ω is formula_0 where: M is the symmetric Mass matrix C is the symmetric damping matrix G is the skew-symmetric gyroscopic matrix K is the symmetric bearing or seal stiffness matrix N is the gyroscopic matrix of deflection for inclusion of e.g., centrifugal elements. in which q is the generalized coordinates of the rotor in inertial coordinates and f is a forcing function, usually including the unbalance. The gyroscopic matrix G is proportional to spin speed Ω. The general solution to the above equation involves complex eigenvectors which are spin speed dependent. Engineering specialists in this field rely on the Campbell Diagram to explore these solutions. An interesting feature of the rotordynamic system of equations are the off-diagonal terms of stiffness, damping, and mass. These terms are called cross-coupled stiffness, cross-coupled damping, and cross-coupled mass. When there is a positive cross-coupled stiffness, a deflection will cause a reaction force opposite the direction of deflection to react the load, and also a reaction force in the direction of positive whirl. If this force is large enough compared with the available direct damping and stiffness, the rotor will be unstable. When a rotor is unstable, it will typically require immediate shutdown of the machine to avoid catastrophic failure. Jeffcott rotor. The Jeffcott rotor (named after Henry Homan Jeffcott), also known as the de Laval rotor in Europe, is a simplified lumped parameter model used to solve these equations. A Jeffcott rotor consists of a flexible, massless, uniform shaft mounted on two flexible bearings equidistant from a massive disk rigidly attached to the shaft. The simplest form of the rotor constrains the disk to a plane orthogonal to the axis of rotation. This limits the rotor's response to lateral vibration only. If the disk is perfectly balanced (i.e., its geometric center and center of mass are coincident), then the rotor is analogous to a single-degree-of-freedom undamped oscillator under free vibration. If there is some radial distance between the geometric center and center of mass, then the rotor is unbalanced, which produced a force proportional to the disk's mass, "m", the distance between the two centers (eccentricity, "ε") and the disk's spin speed, "Ω". After calculating the equivalent stiffness, "k", of the system, we can create the following second-order linear ordinary differential equation that describes the radial deflection of the disk from the rotor centerline. formula_1 If we were to graph the radial response, we would see a sine wave with angular frequency formula_2. This lateral oscillation is called 'whirl', and in this case, is highly dependent upon spin speed. Not only does the spin speed influence the amplitude of the forcing function, it can also produce dynamic amplification near the system's natural frequency. While the Jeffcott rotor is a useful tool for introducing rotordynamic concepts, it is important to note that it is a mathematical idealization that only loosely approximates the behavior of real-world rotors. Campbell diagram. The Campbell diagram, also known as "Whirl Speed Map" or a "Frequency Interference Diagram", of a simple rotor system is shown on the right. The pink and blue curves show the backward whirl (BW) and forward whirl (FW) modes, respectively, which diverge as the spin speed increases. 
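To make the unbalance response above concrete, the following sketch (not part of the original article; the mass, stiffness and eccentricity are arbitrary example values) evaluates the steady-state whirl amplitude of the undamped Jeffcott rotor over a sweep of spin speeds, showing the dynamic amplification as Ω approaches the natural frequency.

import math

def whirl_amplitude(m, k, eps, omega):
    # Steady-state amplitude of m*r'' + k*r = m*eps*Omega^2*sin(Omega*t):
    # A = eps * Omega^2 / (w_n^2 - Omega^2), with w_n^2 = k / m (undamped model).
    return eps * omega**2 / (k / m - omega**2)

m, k, eps = 10.0, 1.0e6, 1.0e-4          # kg, N/m, m (example values only)
w_n = math.sqrt(k / m)                   # natural frequency in rad/s
for ratio in (0.25, 0.5, 0.9, 0.99, 1.5):
    omega = ratio * w_n
    # the sign change above w_n reflects the 180-degree phase reversal past the critical speed
    print(f"Omega/w_n = {ratio:4.2f}   amplitude = {whirl_amplitude(m, k, eps, omega):+.4e} m")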
When the BW frequency or the FW frequency equal the spin speed Ω, indicated by the intersections A and B with the synchronous spin speed line, the response of the rotor may show a peak. This is called a critical speed. History. The history of rotordynamics is replete with the interplay of theory and practice. W. J. M. Rankine first performed an analysis of a spinning shaft in 1869, but his model was not adequate and he predicted that supercritical speeds could not be attained. In 1895, Dunkerley published an experimental paper describing supercritical speeds. Gustaf de Laval, a Swedish engineer, ran a steam turbine to supercritical speeds in 1889, and Kerr published a paper showing experimental evidence of a second critical speed in 1916. Henry Jeffcott was commissioned by the Royal Society of London to resolve the conflict between theory and practice. He published a paper now considered classic in the "Philosophical Magazine" in 1919 in which he confirmed the existence of stable supercritical speeds. August Föppl published much the same conclusions in 1895, but history largely ignored his work. Between the work of Jeffcott and the start of World War II there was much work in the area of instabilities and modeling techniques culminating in the work of Nils Otto Myklestad and M. A. Prohl which led to the transfer matrix method (TMM) for analyzing rotors. The most prevalent method used today for rotordynamics analysis is the finite element method. Modern computer models have been commented on in a quote attributed to Dara Childs, "the quality of predictions from a computer code has more to do with the soundness of the basic model and the physical insight of the analyst. ... Superior algorithms or computer codes will not cure bad models or a lack of engineering judgment." Prof. F. Nelson has written extensively on the history of rotordynamics and most of this section is based on his work. Software. There are many software packages that are capable of solving the rotor dynamic system of equations. Rotor dynamic specific codes are more versatile for design purposes. These codes make it easy to add bearing coefficients, side loads, and many other items only a rotordynamicist would need. The non-rotor dynamic specific codes are full featured FEA solvers, and have many years of development in their solving techniques. The non-rotor dynamic specific codes can also be used to calibrate a code designed for rotor dynamics. References. <templatestyles src="Refbegin/styles.css" /> Notes. <templatestyles src="Reflist/styles.css" />
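To connect the Jeffcott rotor of the Basic principles section with the critical-speed behaviour discussed above, the following minimal sketch (assuming NumPy; the mass, stiffness, damping and eccentricity values are purely illustrative, and a small damping term is added to the otherwise undamped model so that the resonance peak stays finite) evaluates the steady-state unbalance response amplitude over a range of spin speeds.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative Jeffcott-rotor parameters (hypothetical values)
m, k, c, eps = 5.0, 1.0e6, 50.0, 1.0e-4   # kg, N/m, N s/m, m

def unbalance_amplitude(omega):
    """Steady-state radial amplitude of m r'' + c r' + k r = m*eps*omega^2*sin(omega*t)."""
    return m * eps * omega**2 / np.sqrt((k - m * omega**2)**2 + (c * omega)**2)

omega_n = np.sqrt(k / m)                   # undamped natural frequency [rad/s]
for factor in (0.5, 0.9, 1.0, 1.1, 2.0):
    w = factor * omega_n
    print(f"omega = {factor:3.1f} * omega_n : amplitude = {unbalance_amplitude(w):.6f} m")
</syntaxhighlight>

The printed amplitudes rise sharply as the spin speed approaches the natural frequency and fall off again above it, which is the critical-speed peak described above; with zero damping the undamped model in the text predicts an unbounded response at resonance.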
[ { "math_id": 0, "text": "\n\\begin{matrix}\n\\mathbf {M}\\ddot{\\mathbf{q}}(t)+(\\mathbf{C}+\\mathbf{G})\\dot{\\mathbf{q}}(t)+(\\mathbf{K}+\\mathbf{N}){\\mathbf{q}}(t)&=&\\mathbf{f}(t)\\\\\n\\end{matrix}\n" }, { "math_id": 1, "text": "m \\mathbb{\\ddot{r}} + k \\mathbb{r} = m \\varepsilon \\Omega^2 sin(\\Omega t)" }, { "math_id": 2, "text": "\\Omega/2\\pi" } ]
https://en.wikipedia.org/wiki?curid=7517878
7519
Convolution
Integral expressing the amount of overlap of one function as it is shifted over another In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions (formula_0 and formula_2) that produces a third function (formula_3). The term "convolution" refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other. Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution (formula_3) differs from cross-correlation (formula_1) only in that either formula_4 or formula_5 is reflected about the y-axis in convolution; thus it is a cross-correlation of formula_6 and formula_4, or formula_7 and formula_5. For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator. Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations. The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures). For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at .) A "discrete convolution" can be defined for functions on the set of integers. Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing. Computing the inverse of the convolution operation is known as deconvolution. Definition. The convolution of formula_0 and formula_2 is written formula_8, denoting the operator with the symbol formula_9. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform: formula_10 An equivalent definition is (see commutativity): formula_11 While the symbol formula_12 is used above, it need not represent the time domain. At each formula_12, the convolution formula can be described as the area under the function formula_13 weighted by the function formula_14 shifted by the amount formula_12. As formula_12 changes, the weighting function formula_15 emphasizes different parts of the input function formula_13; If formula_12 is a positive value, then formula_15 is equal to formula_14 that slides or is shifted along the formula_16-axis toward the right (toward formula_17) by the amount of formula_12, while if formula_12 is a negative value, then formula_15 is equal to formula_14 that slides or is shifted toward the left (toward formula_18) by the amount of formula_19. For functions formula_0, formula_2 supported on only formula_20 (i.e., zero for negative arguments), the integration limits can be truncated, resulting in: formula_21 For the multi-dimensional formulation of convolution, see "domain of definition" (below). Notation. 
A common engineering notational convention is: formula_22 which has to be interpreted carefully to avoid confusion. For instance, formula_23 is equivalent to formula_24, but formula_25 is in fact equivalent to formula_26. Relations with other transforms. Given two functions formula_27 and formula_28 with bilateral Laplace transforms (two-sided Laplace transform) formula_29 and formula_30 respectively, the convolution operation formula_31 can be defined as the inverse Laplace transform of the product of formula_32 and formula_33. More precisely, formula_34 Let formula_35 such that formula_36 Note that formula_37 is the bilateral Laplace transform of formula_31. A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform). The convolution operation also describes the output (in terms of the input) of an important class of operations known as "linear time-invariant" (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. Historical developments. One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in "Recherches sur différents points importants du système du monde," published in 1754. Also, an expression of the type: formula_38 is used by Sylvestre François Lacroix on page 505 of his book entitled "Treatise on differences and series", which is the last of 3 volumes of the encyclopedic series: , Chez Courcier, Paris, 1797–1800. Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as "Faltung" (which means "folding" in German), "composition product", "superposition integral", and "Carson's integral". Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses. The operation: formula_39 is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913. Circular convolution. When a function "g""T" is periodic, with period T, then for functions, f, such that "f" ∗ "g""T" exists, the convolution is also periodic and identical to: formula_40 where "t"0 is an arbitrary choice. The summation is called a periodic summation of the function f. When "g""T" is a periodic summation of another function, g, then "f" ∗ "g""T" is known as a "circular" or "cyclic" convolution of f and g. And if the periodic summation above is replaced by "f""T", the operation is called a "periodic" convolution of "f""T" and "g""T". Discrete convolution. For complex-valued functions "f", "g" defined on the set Z of integers, the "discrete convolution" of f and g is given by: formula_41 or equivalently (see commutativity) by: formula_42 The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. 
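The discrete convolution above translates directly into code. The following minimal sketch (assuming NumPy is installed; the example sequences are arbitrary) implements the finite-support sum and checks it against NumPy's built-in routine.

<syntaxhighlight lang="python">
import numpy as np

def discrete_convolution(f, g):
    """Full linear convolution of two finite sequences:
    (f * g)[n] = sum_m f[m] * g[n - m]."""
    n_out = len(f) + len(g) - 1
    out = [0.0] * n_out
    for n in range(n_out):
        for m in range(len(f)):
            if 0 <= n - m < len(g):
                out[n] += f[m] * g[n - m]
    return out

f = [1.0, 2.0, 3.0]
g = [0.5, -1.0, 2.0, 1.0]
print(discrete_convolution(f, g))    # direct summation
print(np.convolve(f, g).tolist())    # same result via NumPy
</syntaxhighlight>

Because multiplying two polynomials convolves their coefficient sequences (the Cauchy product discussed in the next paragraph), the same routine also multiplies polynomials given by their coefficient lists.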
When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences. Thus when g has finite support in the set formula_43 (representing, for instance, a finite impulse response), a finite summation may be used: formula_44 Circular discrete convolution. When a function formula_45 is periodic, with period formula_46 then for functions, formula_47 such that formula_48 exists, the convolution is also periodic and identical to: formula_49 The summation on formula_50 is called a periodic summation of the function formula_51 If formula_45 is a periodic summation of another function, formula_52 then formula_48 is known as a circular convolution of formula_0 and formula_53 When the non-zero durations of both formula_0 and formula_2 are limited to the interval formula_54  formula_48 reduces to these common forms: The notation formula_55 for "cyclic convolution" denotes convolution over the cyclic group of integers modulo "N". Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm. Fast convolution algorithms. In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (; ). Eq.1 requires N arithmetic operations per output value and "N"2 operations for N outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O(N log N) complexity. The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform, use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT. It significantly speeds up 1D, 2D, and 3D convolution. If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available. Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method. A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations. Domain of definition. The convolution of two complex-valued functions on R"d" is itself a complex-valued function on R"d", defined by: formula_56 and is well-defined only if f and g decay sufficiently rapidly at infinity in order for the integral to exist. 
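As a complement to the fast convolution algorithms described above, the following minimal sketch (assuming NumPy; the random sequences are arbitrary illustrations) shows the FFT route: zero-padding both sequences to length at least N1 + N2 − 1 makes the circular convolution computed via the FFT agree with the ordinary (linear) convolution.

<syntaxhighlight lang="python">
import numpy as np

def fft_convolve(f, g):
    """Linear convolution via the circular convolution theorem:
    pad to >= len(f) + len(g) - 1, multiply FFTs pointwise, invert."""
    n = len(f) + len(g) - 1
    F = np.fft.fft(f, n)      # zero-padded FFTs
    G = np.fft.fft(g, n)
    return np.real(np.fft.ifft(F * G))

rng = np.random.default_rng(0)
f = rng.standard_normal(400)
g = rng.standard_normal(300)

direct = np.convolve(f, g)               # O(N^2) direct summation
fast = fft_convolve(f, g)                # O(N log N) via the FFT
print(np.max(np.abs(direct - fast)))     # agreement up to rounding error
</syntaxhighlight>

For one very long sequence convolved with a short one, the overlap–add and overlap–save methods mentioned above apply the same idea block by block.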
Conditions for the existence of the convolution may be tricky, since a blow-up in g at infinity can be easily offset by sufficiently rapid decay in f. The question of existence thus may involve different conditions on f and g: Compactly supported functions. If f and g are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous . More generally, if either function (say f) is compactly supported and the other is locally integrable, then the convolution "f"∗"g" is well-defined and continuous. Convolution of f and g is also well defined when both functions are locally square integrable on R and supported on an interval of the form ["a", +∞) (or both supported on [−∞, "a"]). Integrable functions. The convolution of f and g exists if f and g are both Lebesgue integrable functions in "L"1(R"d"), and in this case "f"∗"g" is also integrable . This is a consequence of Tonelli's theorem. This is also true for functions in "L"1, under the discrete convolution, or more generally for the convolution on any group. Likewise, if "f" ∈ "L"1(R"d")  and  "g" ∈ "L""p"(R"d")  where 1 ≤ "p" ≤ ∞,  then  "f"*"g" ∈ "L""p"(R"d"),  and formula_57 In the particular case "p" 1, this shows that "L"1 is a Banach algebra under the convolution (and equality of the two sides holds if f and g are non-negative almost everywhere). More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable "L""p" spaces. Specifically, if 1 ≤ "p", "q", "r" ≤ ∞ satisfy: formula_58 then formula_59 so that the convolution is a continuous bilinear mapping from "L""p"×"L""q" to "L""r". The Young inequality for convolution is also true in other contexts (circle group, convolution on Z). The preceding inequality is not sharp on the real line: when 1 < "p", "q", "r" < ∞, there exists a constant "B""p","q" < 1 such that: formula_60 The optimal value of "B""p","q" was discovered in 1975 and independently in 1976, see Brascamp–Lieb inequality. A stronger estimate is true provided 1 < "p", "q", "r" < ∞: formula_61 where formula_62 is the weak "L""q" norm. Convolution also defines a bilinear continuous map formula_63 for formula_64, owing to the weak Young inequality: formula_65 Functions of rapid decay. In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if "f" and "g" both decay rapidly, then "f"∗"g" also decays rapidly. In particular, if "f" and "g" are rapidly decreasing functions, then so is the convolution "f"∗"g". Combined with the fact that convolution commutes with differentiation (see #Properties), it follows that the class of Schwartz functions is closed under convolution . Distributions. If "f" is a smooth function that is compactly supported and "g" is a distribution, then "f"∗"g" is a smooth function defined by formula_66 More generally, it is possible to extend the definition of the convolution in a unique way with formula_67 the same as "f" above, so that the associative law formula_68 remains valid in the case where "f" is a distribution, and "g" a compactly supported distribution . Measures. The convolution of any two Borel measures "μ" and "ν" of bounded variation is the measure formula_69 defined by formula_70 In particular, formula_71 where formula_72 is a measurable set and formula_73 is the indicator function of formula_74. 
This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1 functions when μ and ν are absolutely continuous with respect to the Lebesgue measure. The convolution of measures also satisfies the following version of Young's inequality formula_75 where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions. Properties. Algebraic properties. The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity . Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras. Proof (using convolution theorem): formula_88 formula_89 formula_90 formula_92 Integration. If "f" and "g" are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals: formula_96 This follows from Fubini's theorem. The same result holds if "f" and "g" are only assumed to be nonnegative measurable functions, by Tonelli's theorem. Differentiation. In the one-variable case, formula_97 where formula_98 is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative: formula_99 A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of "f" and "g" is differentiable as many times as "f" and "g" are in total. These identities hold for example under the condition that "f" and "g" are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's convolution inequality. For instance, when "f" is continuously differentiable with compact support, and "g" is an arbitrary locally integrable function, formula_100 These identities also hold much more broadly in the sense of tempered distributions if one of "f" or "g" is a rapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution. In the discrete case, the difference operator "D" "f"("n") = "f"("n" + 1) − "f"("n") satisfies an analogous relationship: formula_101 Convolution theorem. The convolution theorem states that formula_102 where formula_103 denotes the Fourier transform of formula_0. Convolution in other types of transformations. Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform. Convolution on matrices. If formula_104 is the Fourier transform matrix, then formula_105, where formula_106 is face-splitting product, formula_107 denotes Kronecker product, formula_108 denotes Hadamard product (this result is an evolving of count sketch properties). This can be generalized for appropriate matrices formula_109: formula_110 from the properties of the face-splitting product. Translational equivariance. 
The convolution commutes with translations, meaning that formula_111 where τ"x"f is the translation of the function "f" by "x" defined by formula_112 If "f" is a Schwartz function, then "τxf" is the convolution with a translated Dirac delta function "τ""x""f" = "f" ∗ "τ""x" "δ". So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution. Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds Suppose that "S" is a bounded linear operator acting on functions which commutes with translations: "S"("τxf") = "τx"("Sf") for all "x". Then "S" is given as convolution with a function (or distribution) "g""S"; that is "Sf" = "g""S" ∗ "f". Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function "g""S" is the impulse response of the transformation "S". A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that "S" must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on "L"1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on "L""p" for 1 ≤ "p" < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers. Convolutions on groups. If "G" is a suitable group endowed with a measure λ, and if "f" and "g" are real or complex valued integrable functions on "G", then we can define their convolution by formula_113 It is not commutative in general. In typical cases of interest "G" is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless "G" is unimodular, the convolution defined in this way is not the same as formula_114. The preference of one over the other is made so that convolution with a fixed function "g" commutes with left translation in the group: formula_115 Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former. On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed "g" in "L"1(T), we have the following familiar operator acting on the Hilbert space "L"2(T): formula_116 The operator "T" is compact. A direct calculation shows that its adjoint "T* " is convolution with formula_117 By the commutativity property cited above, "T" is normal: "T"* "T" = "TT"* . Also, "T" commutes with the translation operators. Consider the family "S" of operators consisting of all such convolutions and the translation operators. Then "S" is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {"hk"} that simultaneously diagonalizes "S". This characterizes convolutions on the circle. Specifically, we have formula_118 which are precisely the characters of T. 
Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above. A discrete example is a finite cyclic group of order "n". Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform. A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in "L"2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform. Convolution of measures. Let "G" be a (multiplicatively written) topological group. If μ and ν are finite Borel measures on "G", then their convolution "μ"∗"ν" is defined as the pushforward measure of the group action and can be written as formula_119 for each measurable subset "E" of "G". The convolution is also a finite measure, whose total variation satisfies formula_120 In the case when "G" is locally compact with (left-)Haar measure λ, and μ and ν are absolutely continuous with respect to a λ, so that each has a density function, then the convolution μ∗ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. If μ and ν are probability measures on the topological group (R,+), then the convolution "μ"∗"ν" is the probability distribution of the sum "X" + "Y" of two independent random variables "X" and "Y" whose respective distributions are μ and ν. Infimal convolution. In convex analysis, the infimal convolution of proper (not identically formula_17) convex functions formula_121 on formula_122 is defined by: formula_123 It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform is played instead by the Legendre transform: formula_124 We have: formula_125 Bialgebras. Let ("X", Δ, ∇, "ε", "η") be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit "ε". The convolution is a product defined on the endomorphism algebra End("X") as follows. Let "φ", "ψ" ∈ End("X"), that is, "φ", "ψ": "X" → "X" are functions that respect all algebraic structure of "X", then the convolution "φ"∗"ψ" is defined as the composition formula_126 The convolution appears notably in the definition of Hopf algebras . A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism "S" such that formula_127 Applications. Convolution and related operations are found in many applications in science, engineering and mathematics. See also. <templatestyles src="Div col/styles.css"/> Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
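As a small numerical check of the infimal convolution defined above, the following sketch (assuming NumPy; the two quadratics and the grids are arbitrary illustrative choices) evaluates the infimum on a grid and compares it with the known closed form for two convex parabolas.

<syntaxhighlight lang="python">
import numpy as np

a1, a2 = 2.0, 3.0                        # convex quadratics f_i(y) = a_i * y**2 (illustrative)
xs = np.linspace(-2.0, 2.0, 9)           # points at which to evaluate the infimal convolution
ys = np.linspace(-10.0, 10.0, 20001)     # grid over which the infimum is taken

def inf_conv(x):
    """(f1 inf-conv f2)(x) = inf_y { f1(y) + f2(x - y) }, approximated on a grid."""
    return np.min(a1 * ys**2 + a2 * (x - ys)**2)

numeric = np.array([inf_conv(x) for x in xs])
closed_form = (a1 * a2 / (a1 + a2)) * xs**2    # exact infimal convolution of the two quadratics
print(np.max(np.abs(numeric - closed_form)))   # small discretization error
</syntaxhighlight>

The agreement illustrates the smoothing character of the operation: the infimal convolution of the two parabolas is again a parabola whose curvature a1*a2/(a1+a2) combines the two like springs in series.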
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "f \\star g" }, { "math_id": 2, "text": "g" }, { "math_id": 3, "text": "f*g" }, { "math_id": 4, "text": "f(x)" }, { "math_id": 5, "text": "g(x)" }, { "math_id": 6, "text": "g(-x)" }, { "math_id": 7, "text": "f(-x)" }, { "math_id": 8, "text": "f * g" }, { "math_id": 9, "text": "*" }, { "math_id": 10, "text": "(f * g)(t) := \\int_{-\\infty}^\\infty f(\\tau) g(t - \\tau) \\, d\\tau." }, { "math_id": 11, "text": "(f * g)(t) := \\int_{-\\infty}^\\infty f(t - \\tau) g(\\tau)\\, d\\tau." }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": "f(\\tau)" }, { "math_id": 14, "text": "g(-\\tau)" }, { "math_id": 15, "text": "g(t-\\tau)" }, { "math_id": 16, "text": "\\tau" }, { "math_id": 17, "text": "+\\infty" }, { "math_id": 18, "text": "-\\infty" }, { "math_id": 19, "text": "|t|" }, { "math_id": 20, "text": "[0,\\infty)" }, { "math_id": 21, "text": "(f * g)(t) = \\int_{0}^{t} f(\\tau) g(t - \\tau)\\, d\\tau \\quad \\ \\text{for } f, g : [0, \\infty) \\to \\mathbb{R}." }, { "math_id": 22, "text": " f(t) * g(t) \\mathrel{:=} \\underbrace{\\int_{-\\infty}^\\infty f(\\tau) g(t - \\tau)\\, d\\tau}_{(f * g )(t)}," }, { "math_id": 23, "text": "f(t) * g(t-t_0)" }, { "math_id": 24, "text": "(f*g)(t-t_0)" }, { "math_id": 25, "text": "f(t-t_0) * g(t-t_0)" }, { "math_id": 26, "text": "(f * g)(t-2t_0)" }, { "math_id": 27, "text": " f(t) " }, { "math_id": 28, "text": " g(t) " }, { "math_id": 29, "text": " F(s) = \\int_{-\\infty}^\\infty e^{-su} \\ f(u) \\ \\text{d}u " }, { "math_id": 30, "text": " G(s) = \\int_{-\\infty}^\\infty e^{-sv} \\ g(v) \\ \\text{d}v " }, { "math_id": 31, "text": " (f * g)(t) " }, { "math_id": 32, "text": " F(s) " }, { "math_id": 33, "text": " G(s) " }, { "math_id": 34, "text": "\n\\begin{align}\nF(s) \\cdot G(s) &= \\int_{-\\infty}^\\infty e^{-su} \\ f(u) \\ \\text{d}u \\cdot \\int_{-\\infty}^\\infty e^{-sv} \\ g(v) \\ \\text{d}v \\\\\n&= \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty e^{-s(u + v)} \\ f(u) \\ g(v) \\ \\text{d}u \\ \\text{d}v\n\\end{align}\n" }, { "math_id": 35, "text": " t = u + v " }, { "math_id": 36, "text": "\n\\begin{align}\nF(s) \\cdot G(s) &= \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty e^{-st} \\ f(u) \\ g(t - u) \\ \\text{d}u \\ \\text{d}t \\\\\n&= \\int_{-\\infty}^\\infty e^{-st} \\underbrace{\\int_{-\\infty}^\\infty f(u) \\ g(t - u) \\ \\text{d}u}_{(f * g)(t)} \\ \\text{d}t \\\\\n&= \\int_{-\\infty}^\\infty e^{-st} (f * g)(t) \\ \\text{d}t\n\\end{align}\n" }, { "math_id": 37, "text": " F(s) \\cdot G(s) " }, { "math_id": 38, "text": "\\int f(u)\\cdot g(x - u) \\, du" }, { "math_id": 39, "text": "\\int_0^t \\varphi(s)\\psi(t - s) \\, ds,\\quad 0 \\le t < \\infty," }, { "math_id": 40, "text": "(f * g_T)(t) \\equiv \\int_{t_0}^{t_0+T} \\left[\\sum_{k=-\\infty}^\\infty f(\\tau + kT)\\right] g_T(t - \\tau)\\, d\\tau," }, { "math_id": 41, "text": "(f * g)[n] = \\sum_{m=-\\infty}^\\infty f[m] g[n - m]," }, { "math_id": 42, "text": "(f * g)[n] = \\sum_{m=-\\infty}^\\infty f[n-m] g[m]." }, { "math_id": 43, "text": "\\{-M,-M+1,\\dots,M-1,M\\}" }, { "math_id": 44, "text": "(f * g)[n]=\\sum_{m=-M}^M f[n-m]g[m]." }, { "math_id": 45, "text": "g_{_N}" }, { "math_id": 46, "text": "N," }, { "math_id": 47, "text": "f," }, { "math_id": 48, "text": "f*g_{_N}" }, { "math_id": 49, "text": "(f * g_{_N})[n] \\equiv \\sum_{m=0}^{N-1} \\left(\\sum_{k=-\\infty}^\\infty {f}[m + kN]\\right) g_{_N}[n - m]." }, { "math_id": 50, "text": "k" }, { "math_id": 51, "text": "f." 
}, { "math_id": 52, "text": "g," }, { "math_id": 53, "text": "g." }, { "math_id": 54, "text": "[0,N-1]," }, { "math_id": 55, "text": "f *_N g" }, { "math_id": 56, "text": "(f * g )(x) = \\int_{\\mathbf{R}^d} f(y)g(x-y)\\,dy = \\int_{\\mathbf{R}^d} f(x-y)g(y)\\,dy," }, { "math_id": 57, "text": "\\|{f}* g\\|_p\\le \\|f\\|_1\\|g\\|_p." }, { "math_id": 58, "text": "\\frac{1}{p}+\\frac{1}{q}=\\frac{1}{r}+1," }, { "math_id": 59, "text": "\\left\\Vert f*g\\right\\Vert_r\\le\\left\\Vert f\\right\\Vert_p\\left\\Vert g\\right\\Vert_q,\\quad f\\in L^p,\\ g\\in L^q," }, { "math_id": 60, "text": "\\left\\Vert f*g\\right\\Vert_r\\le B_{p,q}\\left\\Vert f\\right\\Vert_p\\left\\Vert g\\right\\Vert_q,\\quad f\\in L^p,\\ g\\in L^q." }, { "math_id": 61, "text": "\\|f * g\\|_r\\le C_{p,q}\\|f\\|_p\\|g\\|_{q,w}" }, { "math_id": 62, "text": "\\|g\\|_{q,w}" }, { "math_id": 63, "text": "L^{p,w}\\times L^{q,w}\\to L^{r,w}" }, { "math_id": 64, "text": "1< p,q,r<\\infty" }, { "math_id": 65, "text": "\\|f * g\\|_{r,w}\\le C_{p,q}\\|f\\|_{p,w}\\|g\\|_{r,w}." }, { "math_id": 66, "text": "\\int_{\\mathbb{R}^d} {f}(y)g(x-y)\\,dy = (f*g)(x) \\in C^\\infty(\\mathbb{R}^d) ." }, { "math_id": 67, "text": "\\varphi" }, { "math_id": 68, "text": "f* (g* \\varphi) = (f* g)* \\varphi" }, { "math_id": 69, "text": "\\mu*\\nu" }, { "math_id": 70, "text": "\\int_{\\mathbf{R}^d} f(x) \\, d(\\mu*\\nu)(x) = \\int_{\\mathbf{R}^d}\\int_{\\mathbf{R}^d}f(x+y)\\,d\\mu(x)\\,d\\nu(y)." }, { "math_id": 71, "text": "(\\mu*\\nu)(A) = \\int_{\\mathbf{R}^d\\times\\mathbf R^d}1_A(x+y)\\, d(\\mu\\times\\nu)(x,y)," }, { "math_id": 72, "text": "A\\subset\\mathbf R^d" }, { "math_id": 73, "text": "1_A" }, { "math_id": 74, "text": "A" }, { "math_id": 75, "text": "\\|\\mu* \\nu\\|\\le \\|\\mu\\|\\|\\nu\\| " }, { "math_id": 76, "text": "f * g = g * f " }, { "math_id": 77, "text": "(f * g)(t) = \\int^\\infty_{-\\infty} f(\\tau)g(t - \\tau)\\, d\\tau" }, { "math_id": 78, "text": "u = t - \\tau" }, { "math_id": 79, "text": "f * (g * h) = (f * g) * h" }, { "math_id": 80, "text": "f * (g + h) = (f * g) + (f * h)" }, { "math_id": 81, "text": "a (f * g) = (a f) * g" }, { "math_id": 82, "text": "a" }, { "math_id": 83, "text": "f * \\delta = f" }, { "math_id": 84, "text": "S^{-1} * S = \\delta" }, { "math_id": 85, "text": "\\overline{f * g} = \\overline{f} * \\overline{g}" }, { "math_id": 86, "text": "q(t) = r(t)*s(t)," }, { "math_id": 87, "text": "q(-t) = r(-t)*s(-t)." }, { "math_id": 88, "text": "q(t) \\ \\stackrel{\\mathcal{F}}{\\Longleftrightarrow}\\ \\ Q(f) = R(f)S(f)" }, { "math_id": 89, "text": "q(-t) \\ \\stackrel{\\mathcal{F}}{\\Longleftrightarrow}\\ \\ Q(-f) = R(-f)S(-f)" }, { "math_id": 90, "text": "\n\\begin{align}\nq(-t) &= \\mathcal{F}^{-1}\\bigg\\{R(-f)S(-f)\\bigg\\}\\\\\n&= \\mathcal{F}^{-1}\\bigg\\{R (-f)\\bigg\\} * \\mathcal{F}^{-1}\\bigg\\{S(-f)\\bigg\\}\\\\\n&= r(-t) * s(-t)\n\\end{align}\n" }, { "math_id": 91, "text": "(f * g)' = f' * g = f * g'" }, { "math_id": 92, "text": "\n\\begin{align}\n(f * g)' & = \\frac{d}{dt} \\int^\\infty_{-\\infty} f(\\tau) g(t - \\tau) \\, d\\tau \\\\\n& =\\int^\\infty_{-\\infty} f(\\tau) \\frac{\\partial}{\\partial t} g(t - \\tau) \\, d\\tau \\\\\n& =\\int^\\infty_{-\\infty} f(\\tau) g'(t - \\tau) \\, d\\tau = f* g'.\n\\end{align}\n" }, { "math_id": 93, "text": "F(t) = \\int^t_{-\\infty} f(\\tau) d\\tau," }, { "math_id": 94, "text": "G(t) = \\int^t_{-\\infty} g(\\tau) \\, d\\tau," }, { "math_id": 95, "text": "(F * g)(t) = (f * G)(t) = \\int^t_{-\\infty}(f * g)(\\tau)\\,d\\tau." 
}, { "math_id": 96, "text": "\\int_{\\mathbf{R}^d}(f * g)(x) \\, dx=\\left(\\int_{\\mathbf{R}^d}f(x) \\, dx\\right) \\left(\\int_{\\mathbf{R}^d}g(x) \\, dx\\right)." }, { "math_id": 97, "text": "\\frac{d}{dx}(f * g) = \\frac{df}{dx} * g = f * \\frac{dg}{dx}" }, { "math_id": 98, "text": "\\frac{d}{dx}" }, { "math_id": 99, "text": "\\frac{\\partial}{\\partial x_i}(f * g) = \\frac{\\partial f}{\\partial x_i} * g = f * \\frac{\\partial g}{\\partial x_i}." }, { "math_id": 100, "text": "\\frac{d}{dx}(f* g) = \\frac{df}{dx} * g." }, { "math_id": 101, "text": "D(f * g) = (Df) * g = f * (Dg)." }, { "math_id": 102, "text": " \\mathcal{F}\\{f * g\\} = \\mathcal{F}\\{f\\}\\cdot \\mathcal{F}\\{g\\}" }, { "math_id": 103, "text": " \\mathcal{F}\\{f\\}" }, { "math_id": 104, "text": "\\mathcal W" }, { "math_id": 105, "text": "\\mathcal W\\left(C^{(1)}x \\ast C^{(2)}y\\right) = \\left(\\mathcal W C^{(1)} \\bull \\mathcal W C^{(2)}\\right)(x \\otimes y) = \\mathcal W C^{(1)}x \\circ \\mathcal W C^{(2)}y" }, { "math_id": 106, "text": " \\bull " }, { "math_id": 107, "text": " \\otimes " }, { "math_id": 108, "text": " \\circ " }, { "math_id": 109, "text": "\\mathbf{A},\\mathbf{B}" }, { "math_id": 110, "text": "\\mathcal W\\left((\\mathbf{A}x) \\ast (\\mathbf{B}y)\\right) = \\left((\\mathcal W \\mathbf{A}) \\bull (\\mathcal W \\mathbf{B})\\right)(x \\otimes y) = (\\mathcal W \\mathbf{A}x) \\circ (\\mathcal W \\mathbf{B}y)" }, { "math_id": 111, "text": "\\tau_x (f * g) = (\\tau_x f) * g = f * (\\tau_x g)" }, { "math_id": 112, "text": "(\\tau_x f)(y) = f(y - x)." }, { "math_id": 113, "text": "(f * g)(x) = \\int_G f(y) g\\left(y^{-1}x\\right)\\,d\\lambda(y)." }, { "math_id": 114, "text": "\\int f\\left(xy^{-1}\\right)g(y) \\, d\\lambda(y)" }, { "math_id": 115, "text": "L_h(f* g) = (L_hf)* g." }, { "math_id": 116, "text": "T {f}(x) = \\frac{1}{2 \\pi} \\int_{\\mathbf{T}} {f}(y) g( x - y) \\, dy." }, { "math_id": 117, "text": "\\bar{g}(-y)." }, { "math_id": 118, "text": "h_k (x) = e^{ikx}, \\quad k \\in \\mathbb{Z},\\;" }, { "math_id": 119, "text": "(\\mu * \\nu)(E) = \\iint 1_E(xy) \\,d\\mu(x) \\,d\\nu(y)" }, { "math_id": 120, "text": "\\|\\mu * \\nu\\| \\le \\left\\|\\mu\\right\\| \\left\\|\\nu\\right\\|." }, { "math_id": 121, "text": "f_1,\\dots,f_m" }, { "math_id": 122, "text": "\\mathbb R^n" }, { "math_id": 123, "text": "(f_1*\\cdots*f_m)(x)=\\inf_x \\{ f_1(x_1)+\\cdots+f_m(x_m) | x_1+\\cdots+x_m = x\\}." }, { "math_id": 124, "text": "\\varphi^*(x) = \\sup_y ( x\\cdot y - \\varphi(y))." }, { "math_id": 125, "text": "(f_1*\\cdots *f_m)^*(x) = f_1^*(x) + \\cdots + f_m^*(x)." }, { "math_id": 126, "text": "X \\mathrel{\\xrightarrow{\\Delta}} X \\otimes X \\mathrel{\\xrightarrow{\\phi\\otimes\\psi}} X \\otimes X \\mathrel{\\xrightarrow{\\nabla}} X." }, { "math_id": 127, "text": "S * \\operatorname{id}_X = \\operatorname{id}_X * S = \\eta\\circ\\varepsilon." }, { "math_id": 128, "text": "i" }, { "math_id": 129, "text": "A_i" }, { "math_id": 130, "text": "A_j" }, { "math_id": 131, "text": "j" } ]
https://en.wikipedia.org/wiki?curid=7519
75192315
24 cm K L/20
1867 German Navy rifled breech loader The 24 cm K L/20 was a 24 cm caliber Krupp gun which was the first heavy Ring Kanone or built-up gun used by Germany. It was a rifled breech loader with a Krupp cylindroprismatic sliding breech and a Broadwell ring. The 24 cm K L/20 was also known as 96-pdr, 9 inch gun and kurze 24 cm Ring Kanone. It saw extensive testing of several new concepts. Background. In 1858, Prussia decided to use rifled breechloading guns of 9, 12 and 15 cm caliber for its artillery. The idea was that the same guns would be used for the navy and coastal defense. However, tests in the early 1860s showed that even the 15 cm rifled breechloader was almost useless against the standard 114 mm ship armor of the time. In mid-April 1862, this led to a recommendation to develop a 36-pdr caliber (17 cm) gun. In late April 1862, this was followed by a recommendation to develop a 19.3 cm (48-pdr caliber) gun. In mid 1864 trials showed that the developed 19.3 cm gun was still too light. In Fall 1864 the Prussian Navy then ordered its first massive steel 21 cm rifled breechloaders, also known as 72-pdrs. These were made according to a design by the Navy Department. To all appearances, the Prussian, Austrian and Russian navies planned to standardize on an 8 inch gun. For Russia, this was a 20.3 cm gun. For Prussia, it was a 21 cm gun, because Prussia used the slightly longer Prussian inch of 26.154 mm. 8 Prussian inches equaled 209.2 mm. Even before the first Prussian 21 cm gun started its trials, the Inspection of the Artillery advised in February 1865 to develop an even heavier caliber gun. This was based on the artillery developments in foreign countries and the general increase in armor thickness. Combined with other developments, this would lead the Prussian Navy to order two 24 cm guns for testing in 1867. Krupp then proposed to use the ring or built-up construction, which led to the first two 24 cm Kanone L/20 being delivered in December 1867. After the 24 cm Kanone L/20 had been tested, a longer version was designed and ordered. This would become the 24 cm Ring Kanone L/22. Somewhat before mid December 1868, Krupp got the order to design this longer version of the 24 cm Kanone L/20 and a longer version of the 21 cm RK L/19. The L/22 was supposed to have the same weight as the L/20 but would become significantly heavier. It was used in coastal defence. Characteristics. Names and models. The names 96-pdr and '9 inch gun' for the 24 cm K L/20 and the longer 24 cm RK L/22 are due to the shift from traditional to newer systems to denote the caliber (inner diameter) of a gun barrel. In the traditional system for smoothbore muzzleloading guns, the caliber was denoted by the weight of the shot in pounds. This made sense, because all shot were the same shape, i.e. round, giving a direct relation between the two. When rifled guns were introduced, the shot became cylindrical. The designation 96-pounder therefore meant: of a caliber that could fire a 96 pound round bullet. The 96-pdr would actually fire a cylindrical shot of about 300 pounds. Due to this marked difference, the English and American rifled guns of about the same caliber were named 300-pounders. The Prussian Navy did not follow this practice, but continued to officially refer to 96-pdrs. When a slightly longer model of the 24 cm K L/20 was made, the two were distinguished by the designations 'kurzer Ring 96-pfdr 180" lang' and 'langer Ring 96-pfdr 200" lang'. 
The name 9 inch gun or 9 zölliges is in line with the current practice of referring to the actual inner diameter of the gun. In January 1867 the Prussian Navy ordered Krupp to design a 9 inch (96-pdr) gun barrel. This was the 24 cm K L/20, which had a caliber of 235.4 mm. From this measurement we know that Krupp designed a gun with a caliber of 9 Prussian inches or Zoll, which explains why Krupp and others liked to refer to the 9 inch gun. When the Prussian Navy shifted to the metric system, the guns received the names 'kurze 24 cm Kanone' and 'lange 24 cm Kanone'. In 1885, the kurze 24 cm Ring Kanone was renamed '24 cm Kanone L/20', abbreviated '24 cm K L/20'. The lange 24 cm Kanone was renamed '24 cm Ring Kanone L/22', abbreviated '24 cm RK L/22'. Characteristics of the 24 cm L/20. The design of the 24 cm K L/20 had been ordered in January 1867. Just like previous Krupp guns built for Prussia, it was a rifled breech loader. Other characteristics were new for the Prussian Navy: The built-up gun barrel consisted of an inner tube strengthened by two more tubes. The breech was Krupp's cylindroprismatic variant of its horizontal sliding wedge breech block. Of the first two guns, one used a Broadwell ring for obturation, the other used copper scales. The caliber was 235.4 mm. This amounted to 9 Prussian inches of 26.154 mm. Overall length was 4.708 m. Weight including breech was 14,500 kg. The breech block itself weighed 625 kg. The length in calibers was 4.708 m / 0.2354 m = 20, hence L/20. At the time of its introduction, the caliber length was not part of the gun's name. There were two carriages: the 24 cm Rahmen Laffete C/76 84 for broadside use, and the 24 cm Pivot Laffete C/85 for use as a pivot gun. The ammunition that the 24 cm K L/20 could fire was: 24 cm grenade C/69 aptirt; 24 cm grenade L/2,7; 24 cm Hartgussgranate C/69 aptirt; and 24 cm Steel grenade. Design, development, and trials. The first built-up guns. Unlike the preceding massive 21 cm RK L/19, the 24 cm Ring Kanone had a built-up gun barrel. Here, Krupp followed the British example. In the 1850s, the Armstrong gun had appeared. It was basically a rifled breech loader that was built up, instead of having been cast in a single piece. The built-up construction allows pre-stressing of the innermost tube of the barrel, allowing it to withstand much higher pressures than when the gun is cast in a single piece. In time, Krupp proved itself to be the best manufacturer of gun barrels due to the famous cast steel that it used. Even the Elswick Ordnance Company, which exported guns designed by Armstrong, used Krupp castings for the inner tubes of its gun barrels. How Krupp came to develop a large caliber built-up breechloader. At the time, it was usual for gun manufacturers to order cast gun barrels at Krupp or another manufacturer, and to turn them into a complete artillery piece on their own premises. The practice that Krupp only cast the gun barrels was changed by orders from Russia. In 1863 Russia ordered sixteen 9 inch (22.86 cm) guns at Krupp and about eighty 8 inch guns, probably all still muzzle loaders. These orders gave Krupp the opportunity to develop itself as an artillery manufacturer. The major doubts about using rifled breech loading built-up guns of larger caliber centered on the breech system and the obturation. The problems were caused by the built-up guns using far higher explosive pressures in the gun barrel. In the United Kingdom, these problems surfaced in the RBL 7-inch Armstrong gun, a.k.a. the 110-pounder. 
This was an early British 178 mm rifled built-up breech loader. When the problems could not be solved, production of the 110-pounder was discontinued in 1864, and the United Kingdom reverted to muzzle loaders for the higher calibers. The main reason was that muzzle loaders were able to withstand the higher pressures needed to fire a projectile with the speed required to penetrate armor. In that same year 1864, Krupp invented the cylindroprismatic variant of its horizontal sliding wedge breech. This was essential for the development of its built-up gun, the so-called Ring Kanone. The cylindroprismatic breech could withstand far higher explosive pressures than the square variant. It also allowed much less metal to be used near the breech, which in turn allowed a rational construction of the Ring Kanone. Krupp's work on the barrel itself used the work of the Russian general Axel Gadolin, who had built on Lamé's work regarding elasticity and developed a practical application. In 1866 Krupp created an 8" built-up gun called 'Ring Kanone' and tested it. That same year Russia ordered 25 8" Ring Kanone, and a single 9" one. The latter was tested in Russia in 1867, and led to an order for 62 more 9" Ring Kanone in 1868. The Prussian 24 cm Ring Kanone. In January 1867, the Prussian Navy ordered its first two 24 cm Ring Kanonen, see above. In fact, this January 1867 order was one to design a 9 inch / 96-pdr gun barrel. After the design drawing of an inner tube reinforced by two ring tubes came in, the Prussian Navy ordered two barrels. Both would get the Krupp simple cylindroprismatic breech. For obturation, gun number 1 used copper scales. Gun number 2 used a Broadwell ring. The powder chamber was first designed for 12.5-15 kg of regular gunpowder, but this was increased after the trials of the 9 inch Ring Kanone for Russia, which were attended by many Prussian officers. Comparative tests (1868). After arriving in Berlin in December 1867, the two 24 cm guns were brought to the Tegel Shooting range in Tegel, Berlin. They were to be tested together with the heavy and light versions of the massive 21 cm breechloaders mentioned above. The targets were mockup ship hulls with armor of 5, 6, 7, 8, and 9 British inch thickness. Behind the armor there were the regular supporting wooden layers of teak, oak and other wood, some of which were 737 mm thick. The armor came from the Brown factory in Sheffield. Preliminary tests of the Krupp 9 inch Ring Kanone. Before the trials, there was separate shooting to determine the velocity of the guns involved. For this a Boulengé chronograph was placed 47 m from the muzzle of the guns. It was determined that with a charge of 21 kg of black gun powder, the 24 cm gun fired a bullet with a speed of 347.5 m/s. This was disappointing, but still much better than the 21 cm massive guns. On 31 March 1868 official trials were held at Tegel. Only 11 shots were fired, 7 of these by the 24 cm guns. The outcome was bad. The 9 inch gun was ineffective against armor over 7 inch thick, even at short distances. It was obvious that the 24 cm gun could only be made effective if the velocity with which it fired shot was increased. There were three options to do this. These options to increase velocity all failed. The only result was a small increase in velocity obtained by increasing the charge of black gun powder to 22.5 kg. Preliminary tests of the rival 9 inch Armstrong gun. As the Prussian Navy was about to arm its first armored ships, it had to make a choice for the main guns. 
The 9 inch Ring Kanone was a logical candidate, but it got a rival when the navy ordered a 9 inch Woolwich gun from the United Kingdom. This was the RML 9-inch Armstrong Gun, which was the Elswick Ordnance Company version of the RML 9-inch 12-ton gun that the British Royal Arsenal in Woolwich made for the Royal Navy. On some main characteristics, the Krupp 9 inch Ring Kanone and the Elswick 9 inch Woolwich gun compared as follows: Caliber 235.4 mm vs. 228.6 mm; Length 4.708 m vs. 3.962 m; Weight 14,650 kg vs. 13,100 kg. The Woolwich gun used the fast burning large grained rifle powder, which was able to create a high velocity in the relatively short barrel. Just like the 24 cm Ring Kanone, the 9 inch Armstrong gun was subjected to some preliminary tests. Trials were held with a shot of 113.5 kg and an explosive charge of 19.5 kg of British gunpowder. These yielded an average velocity of 404 m/s. Some more trials were held in which the Armstrong 9 inch gun fired at a wooden target of 5 by 5 meters at a distance of 900 m. As the biggest deviation was only 1.7 m, the accuracy was very satisfactory. Comparative trials on 2 June 1868. The first comparative trials of the British and Prussian guns against the mock-up armored hulls took place on 2 June 1868. The Krupp 9 inch Ring Kanone fired 152.5 kg shot propelled by 22.5 kg of regular Prussian gun powder and attained a velocity of about 351 m/s. The 9 inch Woolwich gun fired shot of 113.5 kg propelled by a charge of 19.5 kg of British (large grained rifle) powder. It attained a velocity of 404 m/s. The results of these trials against armor were clear. The Woolwich gun penetrated all the mock-up armored hulls. The Krupp 9 inch Ring Kanone was useless against the 7 and 8 inch armor plating. Investigation of the failure of the 9 inch Ring Kanone. For Krupp, the results of the first comparative trials were very disappointing. Two causes were investigated: the higher velocity of the Woolwich gun, and the form of the shot. The recent invention of tools that could measure the velocity of bullets meant that scientists were already able to apply scientific theory to the problem. The kinetic energy or vis viva with which the shot hit the armor plating could be calculated from the weight and speed of the shot. formula_0 It was found that the difference in kinetic energy was not so great that it could by itself explain the superiority of the Woolwich gun. A big part of the underperformance of the Krupp gun seemed to be caused by the heavy lead covering and the form of the Prussian shot. If the weight of this lead covering was subtracted from the weight of the Prussian shot, the calculation of the kinetic energy did explain the results. Measures to improve the gun. In order to increase velocity, Krupp changed the 96-pdr number 2 to use ignition through the breech block. It would also use prismatic gunpowder, which proved crucial. A charge of 24 kg of regular gunpowder had led to a velocity of 357 m/s. The same charge of prismatic gunpowder led to a velocity of 392 m/s. The prismatic gunpowder also led to a dramatic increase in accuracy. It even made the Krupp gun more accurate than the Woolwich gun. To address the problem of the form and heavy lead covering of the Prussian shot, several new shot were tested. The new types of shot had a longer and sharper head, with a diameter intended to keep the hull and bottom of the shot from hitting the armor. The Gruson shot used lead rings that weighed far less than the previous lead covering (mantle). 
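The vis viva comparison described above is easy to reproduce. The following sketch (plain Python; the masses and velocities are the trial figures quoted in this article) computes the kinetic energy formula_0 for the shot fired in the 2 June 1868 trials.

<syntaxhighlight lang="python">
def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy E_k = 1/2 * m * v**2, in joules."""
    return 0.5 * mass_kg * velocity_ms**2

# Figures from the 2 June 1868 comparative trials quoted above
krupp = kinetic_energy(152.5, 351.0)      # 24 cm Ring Kanone, lead-covered shot
woolwich = kinetic_energy(113.5, 404.0)   # 9 inch Woolwich gun

print(f"Krupp:    {krupp / 1e6:.2f} MJ")
print(f"Woolwich: {woolwich / 1e6:.2f} MJ")
</syntaxhighlight>

The two figures come out within a few percent of each other, which is consistent with the observation above that the difference in kinetic energy alone could not explain the superiority of the Woolwich gun, and that the heavy lead covering and the form of the Prussian shot had to be taken into account.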
Comparative trials in July-August 1868. On 7 July 1868 the Krupp gun was again fired against the mock-up ship hulls. With a charge of 24 kg of prismatic gun powder it fired improved shot which used less lead. The new Gruson shot weighed 150-163 kg and were fired with an average velocity of 392 m/s. The new Krupp shot weighed about 125 kg and were fired with an average velocity of 431 m/s. The Woolwich gun was also tested again. The results showed that the Krupp gun was now more effective against the armor plates than the Woolwich gun. This was also confirmed by calculations of the kinetic energy for the higher velocity of the Krupp gun. However, the lead rings of the Gruson shot were found to be very defective. On 4 August new trials were held. They would test a Gruson shot of 153 kg that did again use the heavy lead covering, but retained the other improvements to the shot. It was opposed by a lengthened Palliser shot fired by the Woolwich gun. This test showed that with these shot, the Krupp gun was only about as effective as the Woolwich gun. On the same day, these trials also tested the grenades (i.e. shot that explode after impact) used for the British and Prussian 9 inch guns. In this respect the Prussian grenades proved superior to the Palliser grenades. Further trials. In November 1868 improved shot were tested with both guns. The new Palliser shot were found to be no improvement, even though Armstrong objected that they had not been made by Palliser. The improved Gruson shot had a much thinner lead covering, and weighed only 141 kg. A new steel grenade by Krupp weighed 132 kg and contained 5 kg explosives. A final test of both guns was about their durability. Both guns would have to fire 600 shot with the maximum regular charge. The trial of the 9 inch Woolwich gun proved that it became unsuitable for service after 200-250 shots. The test even had to be stopped after 292 shots, when it became too dangerous to fire the gun due to cracks in the barrel. The Krupp gun fired 676 shots without much problems. However, on 28 November a defective grenade exploded in the barrel, causing a crack and destroying the rifling. Even so, this crack was not yet dangerous. In January 1869, the Prussian War Office then declared the official end of the trials between the Prussian and British guns. Use. The Prussian naval artillery systems. The trials of the 24 cm gun had been concluded with a very positive outcome for the Krupp guns. The authorities then had to decide which guns they wanted to order. For the coastal artillery, the Artillerie Prüfungskommission (Artillery test commission) issued a report on 22 April 1869. It advised to order 15 cm, 21 cm, and 27 (later 28) cm Ring Kanone. The commission judged that the 24 cm Ring Kanone did not differ enough from the 21 cm gun to justify its inclusion in the coastal artillery system. For the on board artillery, there was also the matter of how much gun weight a ship could support. In 1869, the navy department therefore decided that the armored frigates would use the 24 cm Ring Kanone and the 21 cm gun. The turret ships would use a 26 cm and 17 cm gun. Smaller ships would also use 15 and 12 cm guns. Somewhat before mid December 1868, Krupp got the order to design longer versions of the 24 and 21 cm guns, but of the same weight. This would lead to the 24 cm RK L/22's inclusion in the coastal artillery system. On board ships. The armored frigate SMS König Wilhelm had 18 24 cm Kanone L/20 mounted in the broadside. Notes. 
<templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "E_\\text{k} = \\frac{1}{2} mv^2" } ]
https://en.wikipedia.org/wiki?curid=75192315
751933
Method of characteristics
Technique for solving hyperbolic partial differential equations In mathematics, the method of characteristics is a technique for solving partial differential equations. Typically, it applies to first-order equations, although more generally the method of characteristics is valid for any hyperbolic and parabolic partial differential equation. The method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data given on a suitable hypersurface. Characteristics of first-order partial differential equation. For a first-order PDE (partial differential equation), the method of characteristics discovers curves (called characteristic curves or just characteristics) along which the PDE becomes an ordinary differential equation (ODE). Once the ODE is found, it can be solved along the characteristic curves and transformed into a solution for the original PDE. For the sake of simplicity, we confine our attention to the case of a function of two independent variables "x" and "y" for the moment. Consider a quasilinear PDE of the form (1). Suppose that a solution "z" is known, and consider the surface graph "z" = "z"("x","y") in R3. A normal vector to this surface is given by formula_0 As a result, equation (1) is equivalent to the geometrical statement that the vector field formula_1 is tangent to the surface "z" = "z"("x","y") at every point, for the dot product of this vector field with the above normal vector is zero. In other words, the graph of the solution must be a union of integral curves of this vector field. These integral curves are called the characteristic curves of the original partial differential equation and are given by the "Lagrange–Charpit equations" formula_2 A parametrization invariant form of the "Lagrange–Charpit equations" is: formula_3 Linear and quasilinear cases. Consider now a PDE of the form formula_4 For this PDE to be linear, the coefficients "a""i" may be functions of the spatial variables only, and independent of "u". For it to be quasilinear, "a""i" may also depend on the value of the function, but not on any derivatives. The distinction between these two cases is inessential for the discussion here. For a linear or quasilinear PDE, the characteristic curves are given parametrically by formula_5 formula_6 such that the following system of ODEs is satisfied. Equations (2) and (3) give the characteristics of the PDE. Proof for quasilinear case. In the quasilinear case, the use of the method of characteristics is justified by Grönwall's inequality. The above equation may be written as formula_7 We must distinguish between the solutions to the ODE and the solutions to the PDE, which we do not know are equal "a priori." Letting capital letters be the solutions to the ODE we find formula_8 formula_9 Examining formula_10, we find, upon differentiating, that formula_11 which is the same as formula_12 We cannot conclude the above is 0 as we would like, since the PDE only guarantees us that this relationship is satisfied for formula_13, formula_14, and we do not yet know that formula_15. However, we can see that formula_16 since by the PDE, the last term is 0. This equals formula_17 By the triangle inequality, we have formula_18 Assuming formula_19 are at least formula_20, we can bound this for small times. 
Since formula_25, we also have that formula_26 will be in formula_21 for small enough formula_27 by continuity. So, formula_28 and formula_29 for formula_30. Additionally, formula_31 for some formula_32 for formula_30 by compactness. From this, we find the above is bounded as formula_33 for some formula_34. It is a straightforward application of Grönwall's Inequality to show that since formula_35 we have formula_36 for as long as this inequality holds. We have some interval formula_37 such that formula_38 in this interval. Choose the largest formula_39 such that this is true. Then, by continuity, formula_40. Provided the ODE still has a solution in some interval after formula_39, we can repeat the argument above to find that formula_38 in a larger interval. Thus, so long as the ODE has a solution, we have formula_38. Fully nonlinear case. Consider the partial differential equation where the variables "p"i are shorthand for the partial derivatives formula_41 Let ("x"i("s"),"u"("s"),"p"i("s")) be a curve in R2n+1. Suppose that "u" is any solution, and that formula_42 Along a solution, differentiating (4) with respect to "s" gives formula_43 formula_44 formula_45 The second equation follows from applying the chain rule to a solution "u", and the third follows by taking an exterior derivative of the relation formula_46. Manipulating these equations gives formula_47 where λ is a constant. Writing these equations more symmetrically, one obtains the Lagrange–Charpit equations for the characteristic formula_48 Geometrically, the method of characteristics in the fully nonlinear case can be interpreted as requiring that the Monge cone of the differential equation should everywhere be tangent to the graph of the solution. Second-order partial differential equations can also be solved with the Charpit method. Example. As an example, consider the advection equation (this example assumes familiarity with PDE notation, and solutions to basic ODEs). formula_49 where formula_50 is constant and formula_51 is a function of formula_52 and formula_53. We want to transform this linear first-order PDE into an ODE along the appropriate curve; i.e. something of the form formula_54 where formula_55 is a characteristic line. First, we find formula_56 by the chain rule. Now, if we set formula_57 and formula_58 we get formula_59 which is the left hand side of the PDE we started with. Thus formula_60 So, along the characteristic line formula_61, the original PDE becomes the ODE formula_62. That is to say that along the characteristics, the solution is constant. Thus, formula_63 where formula_64 and formula_65 lie on the same characteristic. Therefore, to determine the general solution, it is enough to find the characteristics by solving the characteristic system of ODEs: since formula_58, letting formula_66 we know formula_67; since formula_68, letting formula_69 we know formula_70; and since formula_71, letting formula_72 we know formula_73. In this case, the characteristic lines are straight lines with slope formula_50, and the value of formula_51 remains constant along any characteristic line. Characteristics of linear differential operators. Let "X" be a differentiable manifold and "P" a linear differential operator formula_74 of order "k". In a local coordinate system "x""i", formula_75 in which "α" denotes a multi-index. The principal symbol of "P", denoted "σ""P", is the function on the cotangent bundle T∗"X" defined in these local coordinates by formula_76 where the "ξ""i" are the fiber coordinates on the cotangent bundle induced by the coordinate differentials "dx""i". 
Although this is defined using a particular coordinate system, the transformation law relating the "ξ""i" and the "x""i" ensures that "σ""P" is a well-defined function on the cotangent bundle. The function "σ""P" is homogeneous of degree "k" in the "ξ" variable. The zeros of "σ""P", away from the zero section of T∗"X", are the characteristics of "P". A hypersurface of "X" defined by the equation "F"("x") = "c" is called a characteristic hypersurface at "x" if formula_77 Invariantly, a characteristic hypersurface is a hypersurface whose conormal bundle is in the characteristic set of "P". Qualitative analysis of characteristics. Characteristics are also a powerful tool for gaining qualitative insight into a PDE. One can use the crossings of the characteristics to find shock waves for potential flow in a compressible fluid. Intuitively, we can think of each characteristic line implying a solution to formula_51 along itself. Thus, when two characteristics cross, the function becomes multi-valued resulting in a non-physical solution. Physically, this contradiction is removed by the formation of a shock wave, a tangential discontinuity or a weak discontinuity and can result in non-potential flow, violating the initial assumptions. Characteristics may fail to cover part of the domain of the PDE. This is called a rarefaction, and indicates the solution typically exists only in a weak, i.e. integral equation, sense. The direction of the characteristic lines indicates the flow of values through the solution, as the example above demonstrates. This kind of knowledge is useful when solving PDEs numerically as it can indicate which finite difference scheme is best for the problem. Notes. &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
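The advection example above also lends itself to a short numerical sketch. The following Python fragment (NumPy assumed) traces a single characteristic by stepping dx/ds = a with du/ds = 0, and checks the result against the closed-form solution u(x, t) = f(x - at); the advection speed and the initial profile are illustrative choices, not values taken from the text.

```python
# Minimal sketch: solving u_t + a u_x = 0 by tracing characteristics.
# The advection speed `a` and the initial profile `f` are illustrative.
import numpy as np

a = 1.5                                   # constant advection speed
f = lambda x: np.exp(-x**2)               # initial data u(x, 0) = f(x)

def solve_by_characteristics(x, t):
    """u is constant along each characteristic x(t) = x0 + a*t, so the
    foot of the characteristic through (x, t) is x0 = x - a*t."""
    return f(x - a * t)

# Trace one characteristic explicitly (Euler steps for dx/ds = a, du/ds = 0)
# and compare with the closed form u(x, t) = f(x - a*t).
x0, ds, steps = 0.3, 0.01, 200
x, u = x0, f(x0)
for _ in range(steps):
    x += a * ds          # dx/ds = a
    # du/ds = 0: u is unchanged along the characteristic
t = steps * ds
assert np.isclose(u, solve_by_characteristics(x, t))
print(f"u({x:.2f}, {t:.2f}) = {u:.4f}")
```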
[ { "math_id": 0, "text": "\\left(\\frac{\\partial z}{\\partial x}(x,y),\\frac{\\partial z}{\\partial y}(x,y),-1\\right).\\," }, { "math_id": 1, "text": "(a(x,y,z),b(x,y,z),c(x,y,z))\\," }, { "math_id": 2, "text": "\n\\begin{align}\n\\frac{dx}{dt}&=a(x,y,z),\\\\[8pt]\n\\frac{dy}{dt}&=b(x,y,z),\\\\[8pt]\n\\frac{dz}{dt}&=c(x,y,z).\n\\end{align}\n" }, { "math_id": 3, "text": "\\frac{dx}{a(x,y,z)} = \\frac{dy}{b(x,y,z)} = \\frac{dz}{c(x,y,z)} ." }, { "math_id": 4, "text": "\\sum_{i=1}^n a_i(x_1,\\dots,x_n,u) \\frac{\\partial u}{\\partial x_i}=c(x_1,\\dots,x_n,u)." }, { "math_id": 5, "text": "(x_1,\\dots,x_n,u) = (x_1(s),\\dots,x_n(s),u(s))" }, { "math_id": 6, "text": "u(\\mathbf{X}(s)) = U(s)" }, { "math_id": 7, "text": "\\mathbf{a}(\\mathbf{x},u) \\cdot \\nabla u(\\mathbf{x}) = c(\\mathbf{x},u) " }, { "math_id": 8, "text": "\\mathbf{X}'(s) = \\mathbf{a}(\\mathbf{X}(s),U(s)) " }, { "math_id": 9, "text": "U'(s) = c(\\mathbf{X}(s), U(s)) " }, { "math_id": 10, "text": "\\Delta(s) = |u(\\mathbf{X}(s)) - U(s)|^2 " }, { "math_id": 11, "text": "\\Delta'(s) = 2\\big(u(\\mathbf{X}(s)) - U(s)\\big) \\Big(\\mathbf{X}'(s)\\cdot \\nabla u(\\mathbf{X}(s)) - U'(s)\\Big) " }, { "math_id": 12, "text": "\\Delta'(s) = 2\\big(u(\\mathbf{X}(s)) - U(s)\\big) \\Big(\\mathbf{a}(\\mathbf{X}(s),U(s))\\cdot \\nabla u(\\mathbf{X}(s)) - c(\\mathbf{X}(s),U(s))\\Big) " }, { "math_id": 13, "text": "u(\\mathbf{x})" }, { "math_id": 14, "text": "\\mathbf{a}(\\mathbf{x},u) \\cdot \\nabla u(\\mathbf{x}) = c(\\mathbf{x},u)" }, { "math_id": 15, "text": "U(s) = u(\\mathbf{X}(s))" }, { "math_id": 16, "text": "\\Delta'(s) = 2\\big(u(\\mathbf{X}(s)) - U(s)\\big) \\Big(\\mathbf{a}(\\mathbf{X}(s),U(s))\\cdot \\nabla u(\\mathbf{X}(s)) - c(\\mathbf{X}(s),U(s))-\\big(\\mathbf{a}(\\mathbf{X}(s),u(\\mathbf{X}(s))) \\cdot \\nabla u(\\mathbf{X}(s)) - c(\\mathbf{X}(s),u(\\mathbf{X}(s)))\\big)\\Big) " }, { "math_id": 17, "text": "\\Delta'(s) = 2\\big(u(\\mathbf{X}(s)) - U(s)\\big) \\Big(\\big(\\mathbf{a}(\\mathbf{X}(s),U(s))-\\mathbf{a}(\\mathbf{X}(s),u(\\mathbf{X}(s)))\\big)\\cdot \\nabla u(\\mathbf{X}(s)) - \\big(c(\\mathbf{X}(s),U(s))-c(\\mathbf{X}(s),u(\\mathbf{X}(s)))\\big)\\Big) " }, { "math_id": 18, "text": "|\\Delta'(s)| \\leq 2\\big|u(\\mathbf{X}(s)) - U(s)\\big| \\Big(\\big\\|\\mathbf{a}(\\mathbf{X}(s),U(s))-\\mathbf{a}(\\mathbf{X}(s),u(\\mathbf{X}(s)))\\big\\| \\ \\|\\nabla u(\\mathbf{X}(s))\\| + \\big|c(\\mathbf{X}(s),U(s))-c(\\mathbf{X}(s),u(\\mathbf{X}(s)))\\big|\\Big) " }, { "math_id": 19, "text": "\\mathbf{a},c " }, { "math_id": 20, "text": "C^1 " }, { "math_id": 21, "text": "\\Omega " }, { "math_id": 22, "text": "\\mathbf{X}(0), U(0) " }, { "math_id": 23, "text": "(\\mathbf{X}(s),U(s)) " }, { "math_id": 24, "text": "s\n " }, { "math_id": 25, "text": "U(0) = u(\\mathbf{X}(0)) " }, { "math_id": 26, "text": "(\\mathbf{X}(s), u(\\mathbf{X}(s))) " }, { "math_id": 27, "text": "s " }, { "math_id": 28, "text": "(\\mathbf{X}(s),U(s)) \\in \\Omega " }, { "math_id": 29, "text": "(\\mathbf{X}(s), u(\\mathbf{X}(s))) \\in \\Omega " }, { "math_id": 30, "text": "s \\in [0,s_0] " }, { "math_id": 31, "text": "\\|\\nabla u(\\mathbf{X}(s))\\| \\leq M " }, { "math_id": 32, "text": "M \\in \\R " }, { "math_id": 33, "text": "|\\Delta'(s)| \\leq C|u(\\mathbf{X}(s)) - U(s)|^2 = C |\\Delta(s)| " }, { "math_id": 34, "text": "C \\in \\mathbb{R} " }, { "math_id": 35, "text": "\\Delta(0) = 0 " }, { "math_id": 36, "text": "\\Delta(s) = 0 " }, { "math_id": 37, "text": "[0, \\varepsilon) " }, { "math_id": 38, "text": "u(X(s)) = U(s) " }, { "math_id": 39, "text": 
"\\varepsilon " }, { "math_id": 40, "text": "U(\\varepsilon) = u(\\mathbf{X}(\\varepsilon)) " }, { "math_id": 41, "text": "p_i = \\frac{\\partial u}{\\partial x_i}." }, { "math_id": 42, "text": "u(s) = u(x_1(s),\\dots,x_n(s))." }, { "math_id": 43, "text": "\\sum_i(F_{x_i} + F_u p_i)\\dot{x}_i + \\sum_i F_{p_i}\\dot{p}_i = 0" }, { "math_id": 44, "text": "\\dot{u} - \\sum_i p_i \\dot{x}_i = 0" }, { "math_id": 45, "text": "\\sum_i (\\dot{x}_i dp_i - \\dot{p}_i dx_i)= 0." }, { "math_id": 46, "text": "du - \\sum_i p_i \\, dx_i = 0" }, { "math_id": 47, "text": "\\dot{x}_i=\\lambda F_{p_i},\\quad\\dot{p}_i=-\\lambda(F_{x_i}+F_up_i),\\quad \\dot{u}=\\lambda\\sum_i p_iF_{p_i}" }, { "math_id": 48, "text": "\\frac{\\dot{x}_i}{F_{p_i}}=-\\frac{\\dot{p}_i}{F_{x_i}+F_up_i}=\\frac{\\dot{u}}{\\sum p_iF_{p_i}}." }, { "math_id": 49, "text": "a \\frac{\\partial u}{\\partial x} + \\frac{\\partial u}{\\partial t} = 0" }, { "math_id": 50, "text": "a" }, { "math_id": 51, "text": "u" }, { "math_id": 52, "text": "x" }, { "math_id": 53, "text": "t" }, { "math_id": 54, "text": " \\frac{d}{ds}u(x(s), t(s)) = F(u, x(s), t(s)) ," }, { "math_id": 55, "text": "(x(s),t(s))" }, { "math_id": 56, "text": "\\frac{d}{ds}u(x(s), t(s)) = \\frac{\\partial u}{\\partial x} \\frac{dx}{ds} + \\frac{\\partial u}{\\partial t} \\frac{dt}{ds}" }, { "math_id": 57, "text": " \\frac{dx}{ds} = a" }, { "math_id": 58, "text": "\\frac{dt}{ds} = 1" }, { "math_id": 59, "text": " a \\frac{\\partial u}{\\partial x} + \\frac{\\partial u}{\\partial t} " }, { "math_id": 60, "text": "\\frac{d}{ds}u = a \\frac{\\partial u}{\\partial x} + \\frac{\\partial u}{\\partial t} = 0." }, { "math_id": 61, "text": "(x(s), t(s))" }, { "math_id": 62, "text": "u_s = F(u, x(s), t(s)) = 0" }, { "math_id": 63, "text": "u(x_s, t_s) = u(x_0, 0)" }, { "math_id": 64, "text": "(x_s, t_s)\\," }, { "math_id": 65, "text": "(x_0, 0)" }, { "math_id": 66, "text": "t(0)=0" }, { "math_id": 67, "text": "t=s" }, { "math_id": 68, "text": "\\frac{dx}{ds} = a" }, { "math_id": 69, "text": "x(0)=x_0" }, { "math_id": 70, "text": "x=as+x_0=at+x_0" }, { "math_id": 71, "text": "\\frac{du}{ds} = 0" }, { "math_id": 72, "text": "u(0)=f(x_0)" }, { "math_id": 73, "text": "u(x(t), t)=f(x_0)=f(x-at)" }, { "math_id": 74, "text": "P : C^\\infty(X) \\to C^\\infty(X)" }, { "math_id": 75, "text": "P = \\sum_{|\\alpha|\\le k} P^{\\alpha}(x)\\frac{\\partial}{\\partial x^\\alpha}" }, { "math_id": 76, "text": "\\sigma_P(x,\\xi) = \\sum_{|\\alpha|=k} P^\\alpha(x)\\xi_\\alpha" }, { "math_id": 77, "text": "\\sigma_P(x,dF(x)) = 0." } ]
https://en.wikipedia.org/wiki?curid=751933
75196571
Full entropy
Cryptographic property of random output In cryptography full entropy is a property of an output of a random number generator. The output has full entropy if it cannot practically be distinguished from an output of a theoretical perfect random number source (has almost n bits of entropy for an n-bit output). The term is extensively used in the NIST random generator standards NIST SP 800-90A and NIST SP 800-90B. With full entropy the per-bit entropy in the output of the random number generator is close to one: formula_0, where per NIST a practical formula_1. Some sources use the term to define the ideal random bit string (one bit of entropy per bit of output). In this sense "getting to 100% full entropy is impossible" in the real world. Definition. The mathematical definition relies on a "distinguishing game": an adversary with an unlimited computing power is provided with two sets of random numbers, each containing W elements of length n. One set is "ideal": it contains bit strings from the theoretically perfect random number generator; the other set is "real" and includes bit strings from the practical random number source after a randomness extractor. The full entropy for particular values of W and positive parameter formula_2 is achieved if an adversary cannot guess the real set with probability higher than formula_3. Additional entropy. The practical way to achieve the full entropy is to obtain from an entropy source bit strings longer than n bits, apply to them a high-quality randomness extractor that produces the n-bit result, and build the real set from these results. The ideal elements by nature have an entropy value of n. The inputs of the conditioning function will need to have a higher min-entropy value H to satisfy the full-entropy definition. The number of additional bits of entropy H-n depends on W and formula_2. Randomness extractor requirements. Not every randomness extractor will produce the desired results. For example, the Von Neumann extractor, while providing an unbiased output, does not decorrelate groups of bits, so for serially correlated inputs (typical for many entropy sources) the output bits will not be independent. NIST therefore defines the "vetted conditioning components" in its NIST SP 800-90B standard, including AES-CBC-MAC. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
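The distinguishing-game definition can be illustrated with a toy calculation (an illustration only, not the NIST SP 800-90 assessment procedure). For a single output drawn either from a slightly biased n-bit source or from the ideal uniform source, an adversary with unlimited computing power guesses the origin correctly with probability 1/2 plus half the total-variation distance between the two distributions; the sketch below computes that distance and the min-entropy of the biased source. The bias, the output length, and the restriction to a single sample (rather than sets of W outputs) are illustrative simplifications.

```python
# Toy sketch of the distinguishing game behind "full entropy".
# n and the per-bit bias are illustrative; this is not a NIST evaluation.
from itertools import product
from math import log2

def real_probs(n, p_one):
    """Distribution of an n-bit output made of i.i.d. bits with P(bit=1)=p_one."""
    return [p_one**sum(bits) * (1 - p_one)**(n - sum(bits))
            for bits in product((0, 1), repeat=n)]

n, p_one = 8, 0.51                       # a slightly biased toy source
probs = real_probs(n, p_one)
ideal = 1.0 / 2**n

tv = sum(abs(p - ideal) for p in probs) / 2      # total-variation distance
min_entropy = -log2(max(probs))

print(f"TV distance from ideal        : {tv:.3e}")
print(f"single-sample guess advantage : {tv / 2:.3e}")   # success prob = 1/2 + tv/2
print(f"min-entropy of the real source: {min_entropy:.3f} bits (ideal: {n})")
```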
[ { "math_id": 0, "text": "1-\\epsilon" }, { "math_id": 1, "text": "\\epsilon<2^{-32}" }, { "math_id": 2, "text": "\\delta" }, { "math_id": 3, "text": "\\frac 1 2 + \\delta" } ]
https://en.wikipedia.org/wiki?curid=75196571
7519917
Bayesian linear regression
Method of statistical analysis Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well as other parameters describing the distribution of the regressand) and ultimately allowing the out-of-sample prediction of the regressand (often labelled formula_0) "conditional on" observed values of the regressors (usually formula_1). The simplest and most widely used version of this model is the "normal linear model", in which formula_0 given formula_1 is distributed Gaussian. In this model, and under a particular choice of prior probabilities for the parameters—so-called conjugate priors—the posterior can be found analytically. With more arbitrarily chosen priors, the posteriors generally have to be approximated. Model setup. Consider a standard linear regression problem, in which for formula_2 we specify the mean of the conditional distribution of formula_3 given a formula_4 predictor vector formula_5: formula_6 where formula_7 is a formula_4 vector, and the formula_8 are independent and identically normally distributed random variables: formula_9 This corresponds to the following likelihood function: formula_10 The ordinary least squares solution is used to estimate the coefficient vector using the Moore–Penrose pseudoinverse: formula_11 where formula_12 is the formula_13 design matrix, each row of which is a predictor vector formula_14; and formula_15 is the column formula_16-vector formula_17. This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about formula_7. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes theorem to yield the posterior belief about the parameters formula_7 and formula_18. The prior can take different functional forms depending on the domain and the information that is available "a priori". Since the data comprise both formula_15 and formula_12, the focus only on the distribution of formula_15 conditional on formula_12 needs justification. In fact, a "full" Bayesian analysis would require a joint likelihood formula_19 along with a prior formula_20, where formula_21 symbolizes the parameters of the distribution for formula_12. Only under the assumption of (weak) exogeneity can the joint likelihood be factored into formula_22. The latter part is usually ignored under the assumption of disjoint parameter sets. More so, under classic assumptions formula_12 are considered chosen (for example, in a designed experiment) and therefore has a known probability without parameters. With conjugate priors. Conjugate prior distribution. For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically. A prior formula_23 is conjugate to this likelihood function if it has the same functional form with respect to formula_7 and formula_18. Since the log-likelihood is quadratic in formula_7, the log-likelihood is re-written such that the likelihood becomes normal in formula_24. 
Write formula_25 The likelihood is now re-written as formula_26 where formula_27 where formula_28 is the number of regression coefficients. This suggests a form for the prior: formula_29 where formula_30 is an inverse-gamma distribution formula_31 In the notation introduced in the inverse-gamma distribution article, this is the density of an formula_32 distribution with formula_33 and formula_34 with formula_35 and formula_36 as the prior values of formula_37 and formula_38, respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, formula_39 Further the conditional prior density formula_40 is a normal distribution, formula_41 In the notation of the normal distribution, the conditional prior distribution is formula_42 Posterior distribution. With the prior now specified, the posterior distribution can be expressed as formula_43 With some re-arrangement, the posterior can be re-written so that the posterior mean formula_44 of the parameter vector formula_7 can be expressed in terms of the least squares estimator formula_45 and the prior mean formula_46, with the strength of the prior indicated by the prior precision matrix formula_47 formula_48 To justify that formula_44 is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in formula_49. formula_50 Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution: formula_51 Therefore, the posterior distribution can be parametrized as follows. formula_52 where the two factors correspond to the densities of formula_53 and formula_54 distributions, with the parameters of these given by formula_55 formula_56 which illustrates Bayesian inference being a compromise between the information contained in the prior and the information contained in the sample. Model evidence. The model evidence formula_57 is the probability of the data given the model formula_58. It is also known as the marginal likelihood, and as the "prior predictive density". Here, the model is defined by the likelihood function formula_59 and the prior distribution on the parameters, i.e. formula_60. The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating formula_61 over all possible values of formula_7 and formula_18. formula_62 This integral can be computed analytically and the solution is given in the following equation. formula_63 Here formula_64 denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of formula_7 and formula_18. formula_65 Note that this equation is nothing but a re-arrangement of Bayes theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above. Other cases. In general, it may be impossible or impractical to derive the posterior distribution analytically. 
However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling, INLA or variational Bayes. The special case formula_66 is called ridge regression. A similar analysis can be performed for the general case of the multivariate regression and part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
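The conjugate updates above translate directly into code. The following sketch (NumPy and SciPy assumed) computes the posterior parameters given by formula_55 and formula_56 together with the log of the model evidence from the closed form above; the synthetic data and the prior values are illustrative choices, not prescribed ones.

```python
# Sketch of the conjugate normal-inverse-gamma update for the normal
# linear model described above. Data and prior values are illustrative.
import numpy as np
from numpy.linalg import inv, slogdet
from scipy.special import gammaln

def posterior_params(X, y, mu0, Lambda0, a0, b0):
    """Return (mu_n, Lambda_n, a_n, b_n) of the posterior."""
    n = len(y)
    Lambda_n = X.T @ X + Lambda0
    mu_n = inv(Lambda_n) @ (X.T @ y + Lambda0 @ mu0)   # X'X beta_hat = X'y
    a_n = a0 + n / 2
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    return mu_n, Lambda_n, a_n, b_n

def log_evidence(X, y, mu0, Lambda0, a0, b0):
    """Log marginal likelihood log p(y | m), using the closed form above."""
    n = len(y)
    _, Lambda_n, a_n, b_n = posterior_params(X, y, mu0, Lambda0, a0, b0)
    return (-0.5 * n * np.log(2 * np.pi)
            + 0.5 * (slogdet(Lambda0)[1] - slogdet(Lambda_n)[1])
            + a0 * np.log(b0) - a_n * np.log(b_n)
            + gammaln(a_n) - gammaln(a0))

# Illustrative use on synthetic data
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=50)
mu0, Lambda0, a0, b0 = np.zeros(2), np.eye(2), 1.0, 1.0
print(posterior_params(X, y, mu0, Lambda0, a0, b0)[0])   # posterior mean of beta
print(log_evidence(X, y, mu0, Lambda0, a0, b0))
```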
[ { "math_id": 0, "text": "y" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "i = 1, \\ldots, n" }, { "math_id": 3, "text": "y_i" }, { "math_id": 4, "text": "k \\times 1" }, { "math_id": 5, "text": "\\mathbf{x}_i" }, { "math_id": 6, "text": "y_{i} = \\mathbf{x}_i^\\mathsf{T} \\boldsymbol\\beta + \\varepsilon_i," }, { "math_id": 7, "text": "\\boldsymbol\\beta" }, { "math_id": 8, "text": "\\varepsilon_i" }, { "math_id": 9, "text": "\\varepsilon_{i} \\sim N(0, \\sigma^2)." }, { "math_id": 10, "text": "\\rho(\\mathbf{y}\\mid\\mathbf{X},\\boldsymbol\\beta,\\sigma^{2}) \\propto (\\sigma^2)^{-n/2} \\exp\\left(-\\frac{1}{2\\sigma^2} (\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta)^\\mathsf{T}(\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta)\\right)." }, { "math_id": 11, "text": " \\hat{\\boldsymbol\\beta} = (\\mathbf{X}^\\mathsf{T}\\mathbf{X})^{-1}\\mathbf{X}^\\mathsf{T}\\mathbf{y}" }, { "math_id": 12, "text": "\\mathbf{X}" }, { "math_id": 13, "text": "n \\times k" }, { "math_id": 14, "text": "\\mathbf{x}_i^\\mathsf{T}" }, { "math_id": 15, "text": "\\mathbf{y}" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "[y_1 \\; \\cdots \\; y_n]^\\mathsf{T}" }, { "math_id": 18, "text": "\\sigma" }, { "math_id": 19, "text": "\\rho(\\mathbf{y},\\mathbf{X}\\mid\\boldsymbol\\beta,\\sigma^{2},\\gamma)" }, { "math_id": 20, "text": "\\rho(\\beta,\\sigma^{2},\\gamma)" }, { "math_id": 21, "text": "\\gamma" }, { "math_id": 22, "text": "\\rho(\\mathbf{y}\\mid\\boldsymbol\\mathbf{X},\\beta,\\sigma^{2})\\rho(\\mathbf{X}\\mid\\gamma)" }, { "math_id": 23, "text": "\\rho(\\boldsymbol\\beta,\\sigma^{2})" }, { "math_id": 24, "text": "(\\boldsymbol\\beta-\\hat{\\boldsymbol\\beta})" }, { "math_id": 25, "text": "\\begin{align}\n(\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta)^\\mathsf{T}(\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta) \n&= [(\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta}) + (\\mathbf{X} \\hat{\\boldsymbol\\beta} - \\mathbf{X} \\boldsymbol\\beta)]^\\mathsf{T} [(\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta}) + (\\mathbf{X} \\hat{\\boldsymbol\\beta} - \\mathbf{X} \\boldsymbol\\beta)] \\\\\n&= (\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta})^\\mathsf{T}(\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta}) + (\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})^\\mathsf{T}(\\mathbf{X}^\\mathsf{T}\\mathbf{X})(\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})\n+ \\underbrace{2(\\mathbf{X} \\hat{\\boldsymbol\\beta} - \\mathbf{X} \\boldsymbol\\beta)^\\mathsf{T} (\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta})}_{= \\ 0}\\\\\n&= (\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta})^\\mathsf{T}(\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta}) + (\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})^\\mathsf{T}(\\mathbf{X}^\\mathsf{T}\\mathbf{X})(\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})\\,.\n\\end{align}" }, { "math_id": 26, "text": "\\rho(\\mathbf{y}|\\mathbf{X},\\boldsymbol\\beta,\\sigma^{2}) \\propto (\\sigma^2)^{-\\frac{v}{2}} \\exp\\left(-\\frac{vs^{2}}{2{\\sigma}^{2}}\\right)(\\sigma^2)^{-\\frac{n-v}{2}} \\exp\\left(-\\frac{1}{2{\\sigma}^{2}}(\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})^\\mathsf{T}(\\mathbf{X}^\\mathsf{T}\\mathbf{X})(\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})\\right)," }, { "math_id": 27, "text": "vs^2 =(\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta})^\\mathsf{T}(\\mathbf{y}- \\mathbf{X} \\hat{\\boldsymbol\\beta}) \\quad \\text{ and } \\quad v = n-k," }, { "math_id": 28, "text": "k" }, { "math_id": 29, "text": "\\rho(\\boldsymbol\\beta,\\sigma^2) = 
\\rho(\\sigma^2)\\rho(\\boldsymbol\\beta\\mid\\sigma^2)," }, { "math_id": 30, "text": "\\rho(\\sigma^2)" }, { "math_id": 31, "text": " \\rho(\\sigma^2) \\propto (\\sigma^2)^{-\\frac{v_0}{2}-1} \\exp\\left(-\\frac{v_0 s_0^2}{2\\sigma^2}\\right)." }, { "math_id": 32, "text": " \\text{Inv-Gamma}( a_0, b_0)" }, { "math_id": 33, "text": "a_0=\\tfrac{v_0}{2}" }, { "math_id": 34, "text": "b_0=\\tfrac{1}{2} v_0s_0^2 " }, { "math_id": 35, "text": "v_0" }, { "math_id": 36, "text": "s_0^2" }, { "math_id": 37, "text": "v" }, { "math_id": 38, "text": "s^{2}" }, { "math_id": 39, "text": "\\text{Scale-inv-}\\chi^2(v_0, s_0^2)." }, { "math_id": 40, "text": "\\rho(\\boldsymbol\\beta|\\sigma^{2})" }, { "math_id": 41, "text": " \\rho(\\boldsymbol\\beta\\mid\\sigma^2) \\propto (\\sigma^2)^{-k/2} \\exp\\left(-\\frac{1}{2\\sigma^2}(\\boldsymbol\\beta - \\boldsymbol\\mu_0)^\\mathsf{T} \\mathbf{\\Lambda}_0 (\\boldsymbol\\beta - \\boldsymbol\\mu_0)\\right)." }, { "math_id": 42, "text": " \\mathcal{N}\\left(\\boldsymbol\\mu_0, \\sigma^2 \\boldsymbol\\Lambda_0^{-1}\\right)." }, { "math_id": 43, "text": " \\begin{align}\n\\rho(\\boldsymbol\\beta,\\sigma^2\\mid\\mathbf{y},\\mathbf{X}) &\\propto \\rho(\\mathbf{y}\\mid\\mathbf{X},\\boldsymbol\\beta,\\sigma^2)\\rho(\\boldsymbol\\beta\\mid\\sigma^2)\\rho(\\sigma^2) \\\\\n& \\propto (\\sigma^2)^{-n/2} \\exp\\left(-\\frac{1}{2{\\sigma}^2}(\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta)^\\mathsf{T}(\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta)\\right) (\\sigma^2)^{-k/2} \\exp\\left(-\\frac{1}{2\\sigma^2}(\\boldsymbol\\beta -\\boldsymbol\\mu_0)^\\mathsf{T} \\boldsymbol\\Lambda_0 (\\boldsymbol\\beta - \\boldsymbol\\mu_0)\\right) (\\sigma^2)^{-(a_0+1)} \\exp\\left(-\\frac{b_0}{\\sigma^2}\\right)\n\\end{align}" }, { "math_id": 44, "text": "\\boldsymbol\\mu_n" }, { "math_id": 45, "text": "\\hat{\\boldsymbol\\beta}" }, { "math_id": 46, "text": "\\boldsymbol\\mu_0" }, { "math_id": 47, "text": "\\boldsymbol\\Lambda_0" }, { "math_id": 48, "text": "\\boldsymbol\\mu_n = (\\mathbf{X}^\\mathsf{T}\\mathbf{X}+\\boldsymbol\\Lambda_0)^{-1}(\\mathbf{X}^\\mathsf{T} \\mathbf{X}\\hat{\\boldsymbol\\beta}+\\boldsymbol\\Lambda_0\\boldsymbol\\mu_0) ." }, { "math_id": 49, "text": "\\boldsymbol\\beta - \\boldsymbol\\mu_n" }, { "math_id": 50, "text": " (\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta)^\\mathsf{T}(\\mathbf{y}- \\mathbf{X} \\boldsymbol\\beta) + (\\boldsymbol\\beta - \\boldsymbol\\mu_0)^\\mathsf{T}\\boldsymbol\\Lambda_0(\\boldsymbol\\beta - \\boldsymbol\\mu_0) =(\\boldsymbol\\beta-\\boldsymbol\\mu_n)^\\mathsf{T}(\\mathbf{X}^\\mathsf{T}\\mathbf{X}+\\boldsymbol\\Lambda_0)(\\boldsymbol\\beta-\\boldsymbol\\mu_n)+\\mathbf{y}^\\mathsf{T}\\mathbf{y}-\\boldsymbol\\mu_n^\\mathsf{T}(\\mathbf{X}^\\mathsf{T}\\mathbf{X}+\\boldsymbol\\Lambda_0)\\boldsymbol\\mu_n+\\boldsymbol\\mu_0^\\mathsf{T} \\boldsymbol\\Lambda_0\\boldsymbol\\mu_0 ." }, { "math_id": 51, "text": "\\rho(\\boldsymbol\\beta,\\sigma^2\\mid\\mathbf{y},\\mathbf{X}) \\propto (\\sigma^2)^{-k/2} \\exp\\left(-\\frac{1}{2{\\sigma}^{2}}(\\boldsymbol\\beta - \\boldsymbol\\mu_n)^\\mathsf{T}(\\mathbf{X}^\\mathsf{T} \\mathbf{X}+\\mathbf{\\Lambda}_0)(\\boldsymbol\\beta - \\boldsymbol\\mu_n)\\right) (\\sigma^2)^{-\\frac{n+2a_0}{2}-1} \\exp\\left(-\\frac{2 b_0+\\mathbf{y}^\\mathsf{T}\\mathbf{y}-\\boldsymbol\\mu_n^\\mathsf{T}(\\mathbf{X}^\\mathsf{T} \\mathbf{X}+\\boldsymbol\\Lambda_0)\\boldsymbol\\mu_n+\\boldsymbol\\mu_0^\\mathsf{T} \\boldsymbol\\Lambda_0 \\boldsymbol\\mu_0}{2\\sigma^2}\\right) ." 
}, { "math_id": 52, "text": "\\rho(\\boldsymbol\\beta,\\sigma^2\\mid\\mathbf{y},\\mathbf{X}) \\propto \\rho(\\boldsymbol\\beta \\mid \\sigma^2,\\mathbf{y},\\mathbf{X}) \\rho(\\sigma^2\\mid\\mathbf{y},\\mathbf{X}), " }, { "math_id": 53, "text": " \\mathcal{N}\\left( \\boldsymbol\\mu_n, \\sigma^2\\boldsymbol\\Lambda_n^{-1} \\right)\\," }, { "math_id": 54, "text": " \\text{Inv-Gamma}\\left(a_n,b_n \\right) " }, { "math_id": 55, "text": "\\boldsymbol\\Lambda_n=(\\mathbf{X}^\\mathsf{T}\\mathbf{X}+\\mathbf{\\Lambda}_0), \\quad \\boldsymbol\\mu_n = (\\boldsymbol\\Lambda_n)^{-1}(\\mathbf{X}^\\mathsf{T} \\mathbf{X} \\hat{\\boldsymbol\\beta} + \\boldsymbol\\Lambda_0 \\boldsymbol\\mu_0) ," }, { "math_id": 56, "text": "a_n= a_0 + \\frac{n}{2}, \\qquad b_n=b_0+\\frac{1}{2}(\\mathbf{y}^\\mathsf{T} \\mathbf{y} + \\boldsymbol\\mu_0^\\mathsf{T} \\boldsymbol\\Lambda_0\\boldsymbol\\mu_0-\\boldsymbol\\mu_n^\\mathsf{T} \\boldsymbol\\Lambda_n \\boldsymbol\\mu_n) ." }, { "math_id": 57, "text": "p(\\mathbf{y}\\mid m)" }, { "math_id": 58, "text": "m" }, { "math_id": 59, "text": "p(\\mathbf{y}\\mid\\mathbf{X},\\boldsymbol\\beta,\\sigma)" }, { "math_id": 60, "text": "p(\\boldsymbol\\beta,\\sigma)" }, { "math_id": 61, "text": "p(\\mathbf{y},\\boldsymbol\\beta,\\sigma\\mid\\mathbf{X})" }, { "math_id": 62, "text": "p(\\mathbf{y}|m)=\\int p(\\mathbf{y}\\mid\\mathbf{X},\\boldsymbol\\beta,\\sigma)\\, p(\\boldsymbol\\beta,\\sigma)\\, d\\boldsymbol\\beta\\, d\\sigma" }, { "math_id": 63, "text": "p(\\mathbf{y}\\mid m)=\\frac{1}{(2\\pi)^{n/2}}\\sqrt{\\frac{\\det(\\boldsymbol\\Lambda_0)}{\\det(\\boldsymbol\\Lambda_n)}} \\cdot \\frac{b_0^{a_0}}{b_n^{a_n}} \\cdot \\frac{\\Gamma(a_n)}{\\Gamma(a_0)}" }, { "math_id": 64, "text": "\\Gamma" }, { "math_id": 65, "text": "p(\\mathbf{y}\\mid m)=\\frac{p(\\boldsymbol\\beta,\\sigma|m)\\, p(\\mathbf{y} \\mid \\mathbf{X}, \\boldsymbol\\beta,\\sigma,m)}{p(\\boldsymbol\\beta, \\sigma \\mid \\mathbf{y},\\mathbf{X},m)}" }, { "math_id": 66, "text": "\\boldsymbol\\mu_0=0, \\mathbf{\\Lambda}_0 = c\\mathbf{I}" } ]
https://en.wikipedia.org/wiki?curid=7519917
752040
Hick's law
Time to make a decision as a result of the possible choices Hick's law, or the Hick–Hyman law, named after British and American psychologists William Edmund Hick and Ray Hyman, describes the time it takes for a person to make a decision as a result of the possible choices: increasing the number of choices will increase the decision time logarithmically. The Hick–Hyman law assesses cognitive information capacity in choice reaction experiments. The amount of time taken to process a certain amount of bits in the Hick–Hyman law is known as the "rate of gain of information". The plain language implication of the finding is that increasing the number of choices does not directly increase the time to choose. In other words, twice as many choices does not result in twice as long to choose. Also, because the relationship is logarithmic, the increase in time it takes to choose becomes less and less as the number of choices increases. Background. In 1868, Franciscus Donders reported the relationship between having multiple stimuli and choice reaction time. In 1885, J. Merkel discovered that the response time is longer when a stimulus belongs to a larger set of stimuli. Psychologists began to see similarities between this phenomenon and information theory. Hick first began experimenting with this theory in 1951. In his first experiment, there were 10 lamps arranged circularly around the subject. There were 10 Morse keys, one for each of his fingers, that corresponded to these lamps. A running pre-punched tape roll activated a random lamp every 5 seconds; 4 electric pens recorded this lamp activation on moving paper. When the subject tapped the corresponding key, the 4 pens recorded the response, using the same system. Although Hick notes that his experimental design, using a 4-bit binary recording process, was capable of showing up to 15 positions and "all clear", in his experiment he required the device to give an accurate record of reaction time between 10 options after a stimulus. Hick performed a second experiment using the same task, while keeping the number of alternatives at 10. The participant performed the task the first two times with the instruction to perform the task as accurately as possible. For the last task, the participant was asked to perform the task as quickly as possible. While Hick was stating that the relationship between reaction time and the number of choices was logarithmic, Hyman wanted to better understand the relationship between the reaction time and the mean number of choices. In Hyman’s experiment, he had eight different lights arranged in a 6x6 matrix. Each of these different lights was given a name, so the participant was timed in the time it took to say the name of the light after it was lit. Further experiments changed the number of each different type of light. Hyman was responsible for determining a linear relation between reaction time and the information transmitted. Law. Given "n" equally probable choices, the average reaction time "T" required to choose among the choices is approximately: formula_0 where "b" is a constant that can be determined empirically by fitting a line to measured data. The logarithm expresses depth of "choice tree" hierarchy – log2 indicates binary search was performed. Addition of 1 to "n" takes into account the "uncertainty about whether to respond or not, as well as about which response to make." 
In the case of choices with unequal probabilities, the law can be generalized as: formula_1 where "H" is strongly related to the information-theoretic entropy of the decision, defined as formula_2 where "pi" refers to the probability of the "i"th alternative yielding the information-theoretic entropy. Hick's law is similar in form to Fitts's law. Hick's law has a logarithmic form because people subdivide the total collection of choices into categories, eliminating about half of the remaining choices at each step, rather than considering each and every choice one-by-one, which would require linear time. Relation to IQ. E. Roth (1964) demonstrated a correlation between IQ and information processing speed, which is the reciprocal of the slope of the function: formula_3 where "n" is the number of choices. The time it takes to come to a decision is proportional to: formula_4 Stimulus–response compatibility. The stimulus–response compatibility is known to also affect the choice reaction time for the Hick–Hyman law. This means that the response should be similar to the stimulus itself (such as turning a steering wheel to turn the wheels of the car). The action the user performs is similar to the response the driver receives from the car. Exceptions. Studies suggest that the search for a word within a randomly ordered list—in which the reaction time increases linearly according to the number of items—does not allow for the generalization of the scientific law, considering that, in other conditions, the reaction time may not be linearly associated with the logarithm of the number of elements or even show other variations of the basic plane. Exceptions to Hick's law have been identified in studies of verbal response to familiar stimuli, where there is no relationship or only a subtle increase in the reaction time associated with an increased number of elements, and saccade responses, where it was shown that there is either no relationship, or a decrease in the saccadic time with the increase of the number of elements, thus an antagonistic effect to that postulated by Hick's law. The generalization of Hick's law was also tested in studies on the predictability of transitions associated with the reaction time of elements that appeared in a structured sequence. This process was first described as being in accordance with Hick's law, but more recently it was shown that the relationship between predictability and reaction time is sigmoid, not linear, and is associated with different modes of action. Hick's law is sometimes cited to justify menu design decisions. For example, to find a given word (e.g. the name of a command) in a randomly ordered word list (e.g. a menu), scanning of each word in the list is required, consuming linear time, so Hick's law does not apply. However, if the list is alphabetical and the user knows the name of the command, he or she may be able to use a subdividing strategy that works in logarithmic time. Notes. &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
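A short numerical sketch of the law is given below (Python with NumPy assumed). It evaluates formula_0 and the entropy-based generalization formula_1, and fits the constant "b" by least squares; the reaction times used are made-up illustrative values, not data from Hick's or Hyman's experiments.

```python
# Sketch of the Hick-Hyman relation T = b * log2(n + 1) and a simple fit
# of b. The reaction times below are illustrative, not experimental data.
import numpy as np

def hick_time(n, b):
    """Mean decision time for n equally probable choices."""
    return b * np.log2(n + 1)

def hick_time_unequal(probs, b):
    """Generalization T = b * H with H = sum_i p_i * log2(1/p_i + 1)."""
    probs = np.asarray(probs, dtype=float)
    return b * np.sum(probs * np.log2(1.0 / probs + 1.0))

# Fit b from (n, reaction time) pairs: T is linear in log2(n + 1).
n_choices = np.array([1, 2, 4, 8])
rt_seconds = np.array([0.22, 0.35, 0.48, 0.63])          # illustrative
x = np.log2(n_choices + 1)
b = float(x @ rt_seconds / (x @ x))                      # least-squares slope
print(f"fitted b = {b:.3f} s/bit")
print(f"predicted T for 6 choices: {hick_time(6, b):.3f} s")
print(f"predicted T for probabilities (0.5, 0.25, 0.25): "
      f"{hick_time_unequal([0.5, 0.25, 0.25], b):.3f} s")
```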
[ { "math_id": 0, "text": "T = b \\cdot \\log_{2}(n + 1)" }, { "math_id": 1, "text": "T = b H" }, { "math_id": 2, "text": "H = \\sum_i^n p_i \\log_{2}(1/p_i + 1)" }, { "math_id": 3, "text": "\\text{Reaction Time} = \\text{Movement Time} + \\frac{ \\log_{2}(n) } {\\text{Processing Speed} }" }, { "math_id": 4, "text": "\\frac{ \\log_{2}(n) } {\\text{Processing Speed}} " } ]
https://en.wikipedia.org/wiki?curid=752040
75204916
Utility assessment
Utility assessment, also called utility measurement, is a process by which the utility function of individuals or groups can be estimated. There are many different methods for utility assessment. Assessing single-attribute utility functions. A single-attribute utility function maps the amount of money a person has (or gains), to a number representing the subjective satisfaction he derives from it. The motivation to define a utility function comes from the St. Petersburg paradox: the observation that people are not willing to pay much for a lottery, even if its expected monetary gain is infinite. The classical solution to this paradox, suggested by Daniel Bernoulli and Gabriel Cramer, is that most people have a utility function that is strictly concave, and they aim to maximize their expected utility, rather than their expected gain. Power-log utility. Bernoulli himself assumed that the utility is logarithmic, that is, u("x")=log("x") where x is the amount of money; this was sufficient for solving the St. Petersburg paradox. Gustav Fechner also supplied psychophysical justification for the logarithmic function (known as the Weber–Fechner law). But Stanley Smith Stevens showed that the relation between physical stimulus and psychological perception can be better explained by a power function, that is, u("x")=x"p", with exponent "p" between 0.3 and 2. Many investigators tried to determine whether utility is better represented by logarithmic functions or by power functions. Using various methods, they showed that power functions fit utility data better. As a result, power functions were incorporated into psychological decision theories, such as Cumulative prospect theory, rank-affected multiplicative (RAM) weights model, and transfer of attention exchange (TAX) model. Some economic applications still use logarithmic functions though. Wakker noted that power functions can have a negative exponent, but in this case their sign should change so that they remain increasing. One way to define this generalized family of functions is: formula_0 which is increasing for any exponent "r" ≠ 0. Moreover, the limit of this function when "r" → 0 is exactly the logarithmic function: formula_1. Therefore, the family of functions "ur"("x") for all real "r" is sometimes called "power-log utility". Procedures for assessing utility. Utility functions are usually assessed in experiments checking subjects' preferences over "lotteries". Two general types of procedures have been used: queries in which the subject adjusts one lottery until he is indifferent between the two, each answer yielding an equation of the form formula_2 (for example, indifference between [100%: $10] and [60%: $20, 40%: $0] yields formula_3, which for a power utility function becomes formula_4, giving formula_5); and queries in which the subject merely chooses the lottery he prefers, each answer yielding an inequality of the form formula_6. There are several problems with these procedures. First, they assume people weight events by their true (objective) probabilities, p1,i and p2,i. In fact, much evidence shows that people weight events by "subjective probabilities". In particular, people tend to overweight small probabilities and underweight medium to large probabilities (see Prospect theory). The non-linearity in the subjective probability may be confounded with the concavity of the utility function. For example, the person indifferent between the lotteries [100%: $10] and [60%:$20, 40%:0] can be modeled by a "linear" utility function, if we assume that he underweights the probability 60% to around 50%. One way to avoid this confounding is using equal probabilities in all queries; this was done, for example, by Coombs and Komorita. This trick works for "non-configural weight theories", which assume that the subjective probability is a function of the objective probability (that is, every objective probability is translated to a unique subjective probability). 
In this case, when all probabilities in the queries are equal, they cancel out in the equations. The equations involve only the utilities, and we can again use them to infer the form of the utility function. However, "configural weight theories", motivated by the Allais paradox, show that subjective probability may depend both on the objective probability and on the outcome. Kirby presented a way to design the queries such that, for power-log utilities and negative-exponential utilities, the predictions do not depend on canceling subjective probabilities. A second problem is that some experiments use both gains and losses. However, later research shows that the concavity of the utility function may be different between gains and losses (see prospect theory and loss aversion). Combining gain and loss domains may yield an incorrect utility function. A possible solution is to measure each of these two domains separately. Eugene Galanter devised another solution, for both the first and the second problem. He conducted experiments in which no probabilities were used; instead, he asked questions such as "how much money would you need in order to feel twice as happy as $10"? If the answer is e.g. $18, then we get an equation such as formula_7, which gives information on the utility function, without any dependence on probabilities and risk attitudes. His experiments consistently showed that power functions better fit the data than log functions. A third problem is that most experiments compare the "relative" fit of different utility models to the data. For example, they can show that power functions fit the data better than logarithmic functions, but such comparisons cannot, by themselves, establish whether power functions actually fit the data. Kirby presented a novel experiment design that allowed him to get point-predictions for each model separately. His experiments indicate that both power-log functions and negative-exponent functions do "not" fit the data. He leaves finding a better-fitting function as an open problem. Assessing multi-attribute utility functions. A multi-attribute utility (MAU) function maps a bundle with two or more attributes (e.g. money and free time) to a number representing the subjective satisfaction from that bundle. Assessing MAU is relevant even in conditions of certainty. For example, whereas most people prefer $12,000 for sure to $10,000 for sure, different people may have different preferences between the bundles ($10,000 salary, 8 work hours per day) and ($12,000 salary, 9 work hours per day), even when both bundles are certain. A procedure for assessing MAU in conditions of certainty is presented in Ordinal utility#assessment. Assessing MAU in conditions of uncertainty is more complex; see multi-attribute utility#assessment for details. In health. Assessment of MAU functions is particularly relevant in Health economics. It is often required to choose among different possible treatments, where each treatment has different attributes regarding life expectancy, life quality, safety, and cost. Following extensive surveys, MAU functions for health-related conditions were developed; see Quality-adjusted life year and EQ-5D#assessment. The most common method for assessing health-related utilities is time trade-off. To enable decision-making at the national level, a MAU function for health is constructed at the national level, as an "average" utility function of all patients in the country. 
The utility functions are usually normalized such that a utility of 1 means "full health", and a utility of 0 means "death". Negative utility values are possible, for situations considered "worse than death". As an example, here is a description of a protocol for constructing a value-set for EQ-5D-Y (the EuroQol 5-Dimensional scale for Young people). The construction is done in two steps: an online Discrete Choice Experiment (DCE) survey, and a face-to-face composite time-tradeoff interview (cTTO). In both steps, the subjects are adults, and they are asked to answer the queries from the point-of-view of a 10-year-old child. The reasons for asking adults, rather than children, were (1) adults are the taxpayers, and they should decide how the health budget is used; (2) there are queries about death, which may be inappropriate for children; (3) children may misunderstand the questions. The above protocol was first executed in Slovenia. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
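Returning to the single-attribute case, the indifference-based procedure can be illustrated with a short Python sketch. Assuming, as in the discussion above, that the subject weights events by their objective probabilities and has a power utility function, a single indifference response pins down the exponent; the fragment below solves the indifference between [100%: $10] and [60%: $20, 40%: $0] mentioned earlier, giving r = log(0.6)/log(0.5) ≈ 0.74.

```python
# Sketch: recovering a power-utility exponent from one indifference
# response, under the assumption of objective probability weighting.
from math import log, isclose

def exponent_from_indifference(sure_amount, p_win, win_amount):
    """Solve sure_amount**r = p_win * win_amount**r for the exponent r."""
    return log(p_win) / log(sure_amount / win_amount)

r = exponent_from_indifference(10.0, 0.6, 20.0)
assert isclose(10.0**r, 0.6 * 20.0**r)
print(f"r = {r:.3f}")        # about 0.737, i.e. log(0.6)/log(0.5)
```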
[ { "math_id": 0, "text": "u_r(x) = x^r/r" }, { "math_id": 1, "text": "u_0(x) = \\ln(x)" }, { "math_id": 2, "text": "\\sum_i p_{1,i} u(x_{1,i}) = \\sum_i p_{2,i} u(x_{2,i})" }, { "math_id": 3, "text": "u(10) = 0.6 u(20) + 0.4 u(0)" }, { "math_id": 4, "text": "{10}^r = 0.6 \\cdot {20}^r" }, { "math_id": 5, "text": "r=\\log(0.6)/\\log(0.5)\\approx 0.74" }, { "math_id": 6, "text": "\\sum_i p_{1,i} u(x_{1,i}) > \\sum_i p_{2,i} u(x_{2,i})" }, { "math_id": 7, "text": "u(18) = 2 u(10)" } ]
https://en.wikipedia.org/wiki?curid=75204916
7522
Calorimetry
Determining heat transfer in a system by measuring its other properties In chemistry and thermodynamics, calorimetry (from Latin "calor" 'heat' and Greek "μέτρον" ("metron") 'measure') is the science or act of measuring changes in "state variables" of a body for the purpose of deriving the heat transfer associated with changes of its state due, for example, to chemical reactions, physical changes, or phase transitions under specified constraints. Calorimetry is performed with a calorimeter. Scottish physician and scientist Joseph Black, who was the first to recognize the distinction between heat and temperature, is said to be the founder of the science of calorimetry. Indirect calorimetry calculates heat that living organisms produce by measuring either their production of carbon dioxide and nitrogen waste (frequently ammonia in aquatic organisms, or urea in terrestrial ones), or their consumption of oxygen. Lavoisier noted in 1780 that heat production can be predicted from oxygen consumption this way, using multiple regression. The dynamic energy budget theory explains why this procedure is correct. Heat generated by living organisms may also be measured by "direct calorimetry", in which the entire organism is placed inside the calorimeter for the measurement. A widely used modern instrument is the differential scanning calorimeter, a device which allows thermal data to be obtained on small amounts of material. It involves heating the sample at a controlled rate and recording the heat flow either into or from the specimen. Classical calorimetric calculation of heat. Cases with differentiable equation of state for a one-component body. Basic classical calculation with respect to volume. Calorimetry requires that a reference material that changes temperature have known definite thermal constitutive properties. The classical rule, recognized by Clausius and Kelvin, is that the pressure exerted by the calorimetric material is fully and rapidly determined solely by its temperature and volume; this rule is for changes that do not involve phase change, such as melting of ice. There are many materials that do not comply with this rule, and for them, the present formula of classical calorimetry does not provide an adequate account. Here the classical rule is assumed to hold for the calorimetric material being used, and the propositions are mathematically written: The thermal response of the calorimetric material is fully described by its pressure formula_0 as the value of its constitutive function formula_1 of just the volume formula_2 and the temperature formula_3. All increments are here required to be very small. This calculation refers to a domain of volume and temperature of the body in which no phase change occurs, and there is only one phase present. An important assumption here is continuity of property relations. A different analysis is needed for phase change. When a small increment of heat is gained by a calorimetric body, with small increments, formula_4 of its volume, and formula_5 of its temperature, the increment of heat, formula_6, gained by the body of calorimetric material, is given by formula_7 where formula_8 denotes the latent heat with respect to volume, of the calorimetric material at constant controlled temperature formula_9. The surroundings' pressure on the material is instrumentally adjusted to impose a chosen volume change, with initial volume formula_2. To determine this latent heat, the volume change is effectively the independently instrumentally varied quantity. 
This latent heat is not one of the widely used ones, but is of theoretical or conceptual interest. formula_10 denotes the heat capacity, of the calorimetric material at fixed constant volume formula_2, while the pressure of the material is allowed to vary freely, with initial temperature formula_3. The temperature is forced to change by exposure to a suitable heat bath. It is customary to write formula_10 simply as formula_11, or even more briefly as formula_12. This latent heat is one of the two widely used ones. The latent heat with respect to volume is the heat required for unit increment in volume at constant temperature. It can be said to be 'measured along an isotherm', and the pressure the material exerts is allowed to vary freely, according to its constitutive law formula_13. For a given material, it can have a positive or negative sign or exceptionally it can be zero, and this can depend on the temperature, as it does for water about 4 C. The concept of latent heat with respect to volume was perhaps first recognized by Joseph Black in 1762. The term 'latent heat of expansion' is also used. The latent heat with respect to volume can also be called the 'latent energy with respect to volume'. For all of these usages of 'latent heat', a more systematic terminology uses 'latent heat capacity'. The heat capacity at constant volume is the heat required for unit increment in temperature at constant volume. It can be said to be 'measured along an isochor', and again, the pressure the material exerts is allowed to vary freely. It always has a positive sign. This means that for an increase in the temperature of a body without change of its volume, heat must be supplied to it. This is consistent with common experience. Quantities like formula_6 are sometimes called 'curve differentials', because they are measured along curves in the formula_14 surface. Classical theory for constant-volume (isochoric) calorimetry. Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. Heat is still measured by the above-stated principle of calorimetry. This means that in a suitably constructed calorimeter, called a bomb calorimeter, the increment of volume formula_4 can be made to vanish, formula_15. For constant-volume calorimetry: formula_16 where formula_5 denotes the increment in temperature and formula_12 denotes the heat capacity at constant volume. Classical heat calculation with respect to pressure. From the above rule of calculation of heat with respect to volume, there follows one with respect to pressure. In a process of small increments, formula_17 of its pressure, and formula_5 of its temperature, the increment of heat, formula_6, gained by the body of calorimetric material, is given by formula_18 where formula_19 denotes the latent heat with respect to pressure, of the calorimetric material at constant temperature, while the volume and pressure of the body are allowed to vary freely, at pressure formula_0 and temperature formula_3; formula_20 denotes the heat capacity, of the calorimetric material at constant pressure, while the temperature and volume of the body are allowed to vary freely, at pressure formula_0 and temperature formula_3. It is customary to write formula_20 simply as formula_21, or even more briefly as formula_22. 
The new quantities here are related to the previous ones: formula_23 formula_24 where formula_25 denotes the partial derivative of formula_1 with respect to formula_2 evaluated for formula_14 and formula_26 denotes the partial derivative of formula_1 with respect to formula_3 evaluated for formula_14. The latent heats formula_8 and formula_19 are always of opposite sign. It is common to refer to the ratio of specific heats as formula_27 often just written as formula_28. Calorimetry through phase change, equation of state shows one jump discontinuity. An early calorimeter was that used by Laplace and Lavoisier, as shown in the figure above. It worked at constant temperature, and at atmospheric pressure. The latent heat involved was then not a latent heat with respect to volume or with respect to pressure, as in the above account for calorimetry without phase change. The latent heat involved in this calorimeter was with respect to phase change, naturally occurring at constant temperature. This kind of calorimeter worked by measurement of mass of water produced by the melting of ice, which is a phase change. Cumulation of heating. For a time-dependent process of heating of the calorimetric material, defined by a continuous joint progression formula_29 of formula_30 and formula_31, starting at time formula_32 and ending at time formula_33, there can be calculated an accumulated quantity of heat delivered, formula_34 . This calculation is done by mathematical integration along the progression with respect to time. This is because increments of heat are 'additive'; but this does not mean that heat is a conservative quantity. The idea that heat was a conservative quantity was invented by Lavoisier, and is called the 'caloric theory'; by the middle of the nineteenth century it was recognized as mistaken. Written with the symbol formula_35, the quantity formula_34 is not at all restricted to be an increment with very small values; this is in contrast with formula_6. One can write formula_36 formula_37 formula_38. This expression uses quantities such as formula_39 which are defined in the section below headed 'Mathematical aspects of the above rules'. Mathematical aspects of the above rules. The use of 'very small' quantities such as formula_6 is related to the physical requirement for the quantity formula_1 to be 'rapidly determined' by formula_2 and formula_3; such 'rapid determination' refers to a physical process. These 'very small' quantities are used in the Leibniz approach to the infinitesimal calculus. The Newton approach uses instead 'fluxions' such as formula_40, which makes it more obvious that formula_1 must be 'rapidly determined'. In terms of fluxions, the above first rule of calculation can be written formula_41 where formula_42 denotes the time formula_39 denotes the time rate of heating of the calorimetric material at time formula_42 formula_43 denotes the time rate of change of volume of the calorimetric material at time formula_42 formula_44 denotes the time rate of change of temperature of the calorimetric material. The increment formula_6 and the fluxion formula_39 are obtained for a particular time formula_42 that determines the values of the quantities on the righthand sides of the above rules. But this is not a reason to expect that there should exist a mathematical function formula_45. For this reason, the increment formula_6 is said to be an 'imperfect differential' or an 'inexact differential'. Some books indicate this by writing formula_46 instead of formula_6. 
Also, the notation "đQ" is used in some books. Carelessness about this can lead to error (Planck 1923/1926, page 57). The quantity formula_36 is properly said to be a functional of the continuous joint progression formula_29 of formula_30 and formula_31, but, in the mathematical definition of a function, formula_36 is not a function of formula_14. Although the fluxion formula_39 is defined here as a function of time formula_42, the symbols formula_47 and formula_45 respectively standing alone are not defined here. Physical scope of the above rules of calorimetry. The above rules refer only to suitable calorimetric materials. The terms 'rapidly' and 'very small' call for empirical physical checking of the domain of validity of the above rules. The above rules for the calculation of heat belong to pure calorimetry. They make no reference to thermodynamics, and were mostly understood before the advent of thermodynamics. They are the basis of the 'thermo' contribution to thermodynamics. The 'dynamics' contribution is based on the idea of work, which is not used in the above rules of calculation. Experimentally conveniently measured coefficients. Empirically, it is convenient to measure properties of calorimetric materials under experimentally controlled conditions. Pressure increase at constant volume. For measurements at experimentally controlled volume, one can use the assumption, stated above, that the pressure of the body of calorimetric material can be expressed as a function of its volume and temperature. For measurement at constant experimentally controlled volume, the isochoric coefficient of pressure rise with temperature is defined by formula_48 Expansion at constant pressure. For measurements at experimentally controlled pressure, it is assumed that the volume formula_2 of the body of calorimetric material can be expressed as a function formula_49 of its temperature formula_3 and pressure formula_0. This assumption is related to, but is not the same as, the above used assumption that the pressure of the body of calorimetric material is known as a function of its volume and temperature; anomalous behaviour of materials can affect this relation. The quantity that is conveniently measured at constant experimentally controlled pressure, the isobaric volume expansion coefficient, is defined by formula_50 Compressibility at constant temperature. For measurements at experimentally controlled temperature, it is again assumed that the volume formula_2 of the body of calorimetric material can be expressed as a function formula_49 of its temperature formula_3 and pressure formula_0, with the same provisos as mentioned just above. The quantity that is conveniently measured at constant experimentally controlled temperature, the isothermal compressibility, is defined by formula_51 Relation between classical calorimetric quantities. Assuming that the rule formula_13 is known, one can derive the function of formula_52 that is used above in the classical heat calculation with respect to pressure. This function can be found experimentally from the coefficients formula_53 and formula_54 through the mathematically deducible relation formula_55. Connection between calorimetry and thermodynamics. Thermodynamics developed gradually over the first half of the nineteenth century, building on the above theory of calorimetry which had been worked out before it, and on other discoveries. 
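Before developing that connection, the coefficients defined in the preceding subsections can be checked with a small numerical sketch (plain Python). It evaluates them by central finite differences for an ideal-gas equation of state, an illustrative choice, and verifies the standard identity equating the isochoric pressure coefficient to the ratio of the expansion coefficient and the isothermal compressibility, which is presumably the relation referred to just above.

```python
# Numerical sketch of the three experimentally convenient coefficients,
# evaluated for an ideal gas p = nRT/V (an illustrative equation of state).
R, n_mol = 8.314, 1.0
p_of = lambda V, T: n_mol * R * T / V      # constitutive rule p(V, T)
V_of = lambda T, p: n_mol * R * T / p      # volume as a function V(T, p)

T0, p0 = 300.0, 1.0e5
V0 = V_of(T0, p0)
dT, dp = 1e-3, 1.0                         # small steps for the differences

pressure_coeff = (p_of(V0, T0 + dT) - p_of(V0, T0 - dT)) / (2 * dT)
expansion = (V_of(T0 + dT, p0) - V_of(T0 - dT, p0)) / (2 * dT) / V0
compressibility = -(V_of(T0, p0 + dp) - V_of(T0, p0 - dp)) / (2 * dp) / V0

# For the ideal gas both sides equal p0 / T0.
print(pressure_coeff, expansion / compressibility)
```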
According to Gislason and Craig (2005): "Most thermodynamic data come from calorimetry..." According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories." In terms of thermodynamics, the internal energy formula_56 of the calorimetric material can be considered as the value of a function formula_57 of formula_14, with partial derivatives formula_58 and formula_59. Then it can be shown that one can write a thermodynamic version of the above calorimetric rules: formula_60 with formula_61 and formula_62. Again, further in terms of thermodynamics, the internal energy formula_56 of the calorimetric material can sometimes, depending on the calorimetric material, be considered as the value of a function formula_63 of formula_64, with partial derivatives formula_65 and formula_59, and with formula_2 being expressible as the value of a function formula_66 of formula_64, with partial derivatives formula_67 and formula_68. Then, according to Adkins (1975), it can be shown that one can write a further thermodynamic version of the above calorimetric rules: formula_69 with formula_70 and formula_71. Beyond the calorimetric fact noted above that the latent heats formula_8 and formula_19 are always of opposite sign, it may be shown, using the thermodynamic concept of work, that also formula_72 Special interest of thermodynamics in calorimetry: the isothermal segments of a Carnot cycle. Calorimetry has a special benefit for thermodynamics. It tells about the heat absorbed or emitted in the isothermal segment of a Carnot cycle. A Carnot cycle is a special kind of cyclic process affecting a body composed of material suitable for use in a heat engine. Such a material is of the kind considered in calorimetry, as noted above, that exerts a pressure that is very rapidly determined just by temperature and volume. Such a body is said to change reversibly. A Carnot cycle consists of four successive stages or segments: (1) an isothermal change in volume from a volume formula_73 to a volume formula_74 at constant temperature formula_75 such as to incur a flow of heat into the body; (2) an adiabatic change in volume from formula_74 to a volume formula_76 such as to bring the body to the lower temperature formula_78; (3) another isothermal change in volume from formula_76 to a volume formula_77 at constant temperature formula_78 such as to incur a flow of heat out of the body and just such as to precisely prepare for the following change; (4) another adiabatic change of volume from formula_77 back to formula_73 just such as to return the body to its starting temperature formula_75. In isothermal segment (1), the heat that flows into the body is given by formula_79 and in isothermal segment (3) the heat that flows out of the body is given by formula_80. Because the segments (2) and (4) are adiabats, no heat flows into or out of the body during them, and consequently the net heat supplied to the body during the cycle is given by formula_81. This quantity is used by thermodynamics and is related in a special way to the net work done by the body during the Carnot cycle. The net change of the body's internal energy during the Carnot cycle, formula_82, is equal to zero, because the material of the working body has the special properties noted above. 
In the light of thermodynamics, the classical calorimetric quantity is revealed as being tightly linked to the calorimetric material's equation of state formula_13. Provided that the temperature formula_83 is measured in the thermodynamic absolute scale, the relation is expressed in the formula formula_84. Difference of specific heats. Advanced thermodynamics provides the relation formula_85. From this, further mathematical and thermodynamic reasoning leads to another relation between classical calorimetric quantities. The difference of specific heats is given by formula_86. Practical constant-volume calorimetry (bomb calorimetry) for thermodynamic studies. Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. No work is performed in constant-volume calorimetry, so the heat measured equals the change in internal energy of the system. The heat capacity at constant volume is assumed to be independent of temperature. Heat is measured by the principle of calorimetry. formula_87 where Δ"U" is change in internal energy, Δ"T" is change in temperature and "CV" is the heat capacity at constant volume. In "constant-volume calorimetry" the pressure is not held constant. If there is a pressure difference between initial and final states, the heat measured needs adjustment to provide the "enthalpy change". One then has formula_88 where Δ"H" is change in enthalpy and "V" is the unchanging volume of the sample chamber. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Books. &lt;templatestyles src="Refbegin/styles.css" /&gt;
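The constant-volume relations just stated lend themselves to a short worked example. The following Python sketch is not part of the article; the function name and every numerical value are hypothetical, chosen only to illustrate the arithmetic of q = C_V ΔT = ΔU followed by the correction ΔH = ΔU + V ΔP.

def bomb_calorimetry(c_v, delta_t, volume, delta_p):
    """Return (delta_u, delta_h) for a constant-volume (bomb) experiment.

    c_v     -- heat capacity of the calorimeter at constant volume, J/K
    delta_t -- measured temperature rise, K
    volume  -- fixed volume of the sample chamber, m^3
    delta_p -- pressure difference between final and initial states, Pa
    """
    delta_u = c_v * delta_t               # q = C_V * dT equals the change in internal energy
    delta_h = delta_u + volume * delta_p  # the enthalpy change needs the V * dP adjustment
    return delta_u, delta_h

# Illustrative numbers: a 10 kJ/K bomb, a 2.5 K rise, a 0.3 L chamber, a 50 kPa pressure rise.
du, dh = bomb_calorimetry(c_v=10_000.0, delta_t=2.5, volume=3.0e-4, delta_p=5.0e4)
print(f"delta U = {du:.1f} J, delta H = {dh:.1f} J")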
[ { "math_id": 0, "text": "p\\ " }, { "math_id": 1, "text": "p(V,T)\\ " }, { "math_id": 2, "text": "V\\ " }, { "math_id": 3, "text": "T\\ " }, { "math_id": 4, "text": "\\delta V\\ " }, { "math_id": 5, "text": "\\delta T\\ " }, { "math_id": 6, "text": "\\delta Q\\ " }, { "math_id": 7, "text": "\\delta Q\\ =C^{(V)}_T(V,T)\\, \\delta V\\,+\\,C^{(T)}_V(V,T)\\,\\delta T" }, { "math_id": 8, "text": "C^{(V)}_T(V,T)\\ " }, { "math_id": 9, "text": "T" }, { "math_id": 10, "text": "C^{(T)}_V(V,T)\\ " }, { "math_id": 11, "text": "C_V(V,T)\\ " }, { "math_id": 12, "text": "C_V\\ " }, { "math_id": 13, "text": "p=p(V,T)\\ " }, { "math_id": 14, "text": "(V,T)\\ " }, { "math_id": 15, "text": "\\delta V=0\\ " }, { "math_id": 16, "text": "\\delta Q = C_V \\delta T\\ " }, { "math_id": 17, "text": "\\delta p\\ " }, { "math_id": 18, "text": "\\delta Q\\ =C^{(p)}_T(p,T)\\, \\delta p\\,+\\,C^{(T)}_p(p,T)\\,\\delta T" }, { "math_id": 19, "text": "C^{(p)}_T(p,T)\\ " }, { "math_id": 20, "text": "C^{(T)}_p(p,T)\\ " }, { "math_id": 21, "text": "C_p(p,T)\\ " }, { "math_id": 22, "text": "C_p\\ " }, { "math_id": 23, "text": "C^{(p)}_T(p,T)=\\frac{C^{(V)}_T(V,T)}{\\left.\\cfrac{\\partial p}{\\partial V}\\right|_{(V,T)}} " }, { "math_id": 24, "text": "C^{(T)}_p(p,T)=C^{(T)}_V(V,T)-C^{(V)}_T(V,T) \\frac{\\left.\\cfrac{\\partial p}{\\partial T}\\right|_{(V,T)}}{\\left.\\cfrac{\\partial p}{\\partial V}\\right|_{(V,T)}} " }, { "math_id": 25, "text": "\\left.\\frac{\\partial p}{\\partial V}\\right|_{(V,T)}" }, { "math_id": 26, "text": "\\left.\\frac{\\partial p}{\\partial T}\\right|_{(V,T)}" }, { "math_id": 27, "text": "\\gamma(V,T)=\\frac{C^{(T)}_p(p,T)}{C^{(T)}_V(V,T)}" }, { "math_id": 28, "text": "\\gamma=\\frac{C_p}{C_V}" }, { "math_id": 29, "text": "P(t_1,t_2)\\ " }, { "math_id": 30, "text": "V(t)\\ " }, { "math_id": 31, "text": "T(t)\\ " }, { "math_id": 32, "text": "t_1\\ " }, { "math_id": 33, "text": "t_2\\ " }, { "math_id": 34, "text": "\\Delta Q(P(t_1,t_2))\\, " }, { "math_id": 35, "text": "\\Delta\\ " }, { "math_id": 36, "text": "\\Delta Q(P(t_1,t_2))\\ " }, { "math_id": 37, "text": "=\\int_{P(t_1,t_2)} \\dot Q(t)dt" }, { "math_id": 38, "text": "=\\int_{P(t_1,t_2)} C^{(V)}_T(V,T)\\, \\dot V(t)\\, dt\\,+\\,\\int_{P(t_1,t_2)}C^{(T)}_V(V,T)\\,\\dot T(t)\\,dt " }, { "math_id": 39, "text": "\\dot Q(t)\\ " }, { "math_id": 40, "text": "\\dot V(t) = \\left.\\frac{dV}{dt}\\right|_t" }, { "math_id": 41, "text": "\\dot Q(t)\\ =C^{(V)}_T(V,T)\\, \\dot V(t)\\,+\\,C^{(T)}_V(V,T)\\,\\dot T(t)" }, { "math_id": 42, "text": "t\\ " }, { "math_id": 43, "text": "\\dot V(t)\\ " }, { "math_id": 44, "text": "\\dot T(t)\\ " }, { "math_id": 45, "text": "Q(V,T)\\ " }, { "math_id": 46, "text": "q\\ " }, { "math_id": 47, "text": "Q\\ " }, { "math_id": 48, "text": "\\alpha _V(V,T)\\ = \\frac{1}{p(V,T)}{\\left.\\cfrac{\\partial p}{\\partial V}\\right|_{(V,T)}} " }, { "math_id": 49, "text": "V(T,p)\\ " }, { "math_id": 50, "text": "\\beta _p(T,p)\\ = \\frac{1}{V(T,p)}{\\left.\\cfrac{\\partial V}{\\partial T}\\right|_{(T,p)}} " }, { "math_id": 51, "text": "\\kappa _T(T,p)\\ = -\\frac{1}{V(T,p)}{\\left.\\cfrac{\\partial V}{\\partial p}\\right|_{(T,p)}} " }, { "math_id": 52, "text": "\\frac{\\partial p}{\\partial T}\\ " }, { "math_id": 53, "text": "\\beta _p(T,p)\\ " }, { "math_id": 54, "text": "\\kappa _T(T,p)\\ " }, { "math_id": 55, "text": "\\frac{\\partial p}{\\partial T}=\\frac{\\beta _p(T,p)}{\\kappa _T(T,p)}" }, { "math_id": 56, "text": "U\\ " }, { "math_id": 57, "text": "U(V,T)\\ " }, { "math_id": 58, "text": "\\frac{\\partial U}{\\partial V}\\ " 
}, { "math_id": 59, "text": "\\frac{\\partial U}{\\partial T}\\ " }, { "math_id": 60, "text": "\\delta Q\\ =\\left [p(V,T)\\,+\\,\\left.\\frac{\\partial U}{\\partial V}\\right|_{(V,T)}\\right ]\\, \\delta V\\,+\\,\\left.\\frac{\\partial U}{\\partial T}\\right|_{(V,T)}\\,\\delta T" }, { "math_id": 61, "text": "C^{(V)}_T(V,T)=p(V,T)\\,+\\,\\left.\\frac{\\partial U}{\\partial V}\\right|_{(V,T)}\\ " }, { "math_id": 62, "text": "C^{(T)}_V(V,T)=\\left.\\frac{\\partial U}{\\partial T}\\right|_{(V,T)}\\ " }, { "math_id": 63, "text": "U(p,T)\\ " }, { "math_id": 64, "text": "(p,T)\\ " }, { "math_id": 65, "text": "\\frac{\\partial U}{\\partial p}\\ " }, { "math_id": 66, "text": "V(p,T)\\ " }, { "math_id": 67, "text": "\\frac{\\partial V}{\\partial p}\\ " }, { "math_id": 68, "text": "\\frac{\\partial V}{\\partial T}\\ " }, { "math_id": 69, "text": "\\delta Q\\ =\\left [\\left. \\frac{\\partial U}{\\partial p}\\right |_{(p,T)}\\,+\\,p \\left.\\frac{\\partial V}{\\partial p}\\right |_{(p,T)}\\right ]\\delta p\\,+\\,\\left [ \\left.\\frac{\\partial U}{\\partial T}\\right|_{(p,T)}\\,+\\,p \\left.\\frac{\\partial V}{\\partial T}\\right |_{(p,T)}\\right ]\\delta T" }, { "math_id": 70, "text": "C^{(p)}_T(p,T)=\\left.\\frac{\\partial U}{\\partial p}\\right|_{(p,T)}\\,+\\,p\\left.\\frac{\\partial V}{\\partial p}\\right|_{(p,T)}\\ " }, { "math_id": 71, "text": "C^{(T)}_p(p,T)=\\left.\\frac{\\partial U}{\\partial T}\\right|_{(p,T)}\\,+\\,p\\left.\\frac{\\partial V}{\\partial T}\\right|_{(p,T)}\\ " }, { "math_id": 72, "text": "C^{(V)}_T(V,T)\\,\\left.\\frac{\\partial p}{\\partial T}\\right|_{(V,T)} \\geq 0\\,." }, { "math_id": 73, "text": "V_a\\ " }, { "math_id": 74, "text": "V_b\\ " }, { "math_id": 75, "text": "T^+\\ " }, { "math_id": 76, "text": "V_c\\ " }, { "math_id": 77, "text": "V_d\\ " }, { "math_id": 78, "text": "T^-\\ " }, { "math_id": 79, "text": "\\Delta Q(V_a,V_b;T^+)\\,=\\,\\,\\,\\,\\,\\,\\,\\,\\int_{V_a}^{V_b} C^{(V)}_T(V,T^+)\\, dV\\ " }, { "math_id": 80, "text": "-\\Delta Q(V_c,V_d;T^-)\\,=\\,-\\int_{V_c}^{V_d} C^{(V)}_T(V,T^-)\\, dV\\ " }, { "math_id": 81, "text": "\\Delta Q(V_a,V_b;T^+;V_c,V_d;T^-)\\,=\\,\\Delta Q(V_a,V_b;T^+)\\,+\\,\\Delta Q(V_c,V_d;T^-)\\,=\\,\\int_{V_a}^{V_b} C^{(V)}_T(V,T^+)\\, dV\\,+\\,\\int_{V_c}^{V_d} C^{(V)}_T(V,T^-)\\, dV\\ " }, { "math_id": 82, "text": "\\Delta U(V_a,V_b;T^+;V_c,V_d;T^-)\\ " }, { "math_id": 83, "text": "T\\, " }, { "math_id": 84, "text": "C^{(V)}_T(V,T)=T \\left.\\frac{\\partial p}{\\partial T}\\right|_{(V,T)}\\ " }, { "math_id": 85, "text": "C_p(p,T)-C_V(V,T)=\\left [p(V,T)\\,+\\,\\left.\\frac{\\partial U}{\\partial V}\\right|_{(V,T)}\\right ]\\, \\left.\\frac{\\partial V}{\\partial T}\\right|_{(p,T)}" }, { "math_id": 86, "text": "C_p(p,T)-C_V(V,T)=\\frac{TV\\,\\beta _p^2(T,p)}{\\kappa _T(T,p)}" }, { "math_id": 87, "text": "q = C_V \\Delta T = \\Delta U \\,," }, { "math_id": 88, "text": "\\Delta H = \\Delta U + \\Delta (PV) = \\Delta U + V \\Delta P \\,," } ]
https://en.wikipedia.org/wiki?curid=7522
75221052
Spinach (software)
Magnetic resonance simulation package Spinach is an open-source magnetic resonance simulation package initially released in 2011 and continuously updated since. The package is written in "Matlab" and makes use of the built-in parallel computing and GPU interfaces of "Matlab". The name of the package whimsically refers to the physical concept of spin and to Popeye the Sailor who, in the eponymous comic books, becomes stronger after consuming spinach. Overview. "Spinach" implements magnetic resonance spectroscopy and imaging simulations by solving the equation of motion for the density matrix formula_0 in the time domain: formula_1 where the Liouvillian superoperator formula_2 is a sum of the Hamiltonian commutation superoperator formula_3, relaxation superoperator formula_4, kinetics superoperator formula_5, and potentially other terms that govern spatial dynamics and coupling to other degrees of freedom: formula_6 Computational efficiency is achieved through the use of reduced state spaces, sparse matrix arithmetic, on-the-fly trajectory analysis, and dynamic parallelization. Standard functionality. As of 2023, "Spinach" is cited in over 300 academic publications. According to the documentation and academic papers citing its features, the most recent version 2.8 of the package performs: Common models of spin relaxation (Redfield theory, stochastic Liouville equation, Lindblad theory) and chemical kinetics are supported, and a library of powder averaging grids is included with the package. Optimal control module. "Spinach" contains an implementation of the gradient ascent pulse engineering (GRAPE) algorithm for quantum optimal control. The documentation and the book describing the optimal control module of the package list the following features: Dissipative background evolution generators and control operators are supported, as well as ensemble control over distributions in common instrument calibration parameters, such as control channel power and offset.
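The equation of motion quoted above can be illustrated numerically. The sketch below is not Spinach code (the package itself is written in MATLAB) and does not use its interfaces; it is a generic NumPy/SciPy illustration for a single spin-1/2 in which the Liouvillian is reduced to the Hamiltonian commutation superoperator, and the Zeeman frequency, time step and variable names are all assumptions made for the example.

import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

omega = 2 * np.pi * 100.0        # illustrative Zeeman frequency, rad/s
H = omega * sz                   # Hamiltonian of the single spin
I2 = np.eye(2)

# Hamiltonian commutation superoperator acting on the row-major vec(rho):
# vec([H, rho]) = (H kron I - I kron H^T) vec(rho)
L = np.kron(H, I2) - np.kron(I2, H.T)

rho = sx.copy()                  # start with transverse magnetisation
rho_vec = rho.reshape(-1)        # row-major vectorisation of the density matrix

dt, nsteps = 1.0e-4, 100
P = expm(-1j * L * dt)           # one-step propagator, rho(t + dt) = exp(-i L dt) rho(t)

sx_expectation = []
for _ in range(nsteps):
    rho_vec = P @ rho_vec
    rho = rho_vec.reshape(2, 2)
    sx_expectation.append(np.real(np.trace(sx @ rho)))

print(sx_expectation[:5])        # <Sx>(t) oscillates at the Zeeman frequency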
[ { "math_id": 0, "text": "\\mathbf{\\rho }\\left( t \\right)" }, { "math_id": 1, "text": "\\begin{matrix}\n \\frac{\\partial }{\\partial t}\\mathbf{\\rho }\\left( t \\right)=-i\\mathbf{L}\\left( t \\right)\\mathbf{\\rho }\\left( t \\right) \\\\ \n \\Downarrow \\\\ \n \\mathbf{\\rho }\\left( t+dt \\right)=\\exp \\left[ -i\\mathbf{L}\\left( t \\right)dt \\right]\\mathbf{\\rho }\\left( t \\right) \\\\ \n\\end{matrix}" }, { "math_id": 2, "text": "\\mathbf{L}\\left( t \\right)" }, { "math_id": 3, "text": "\\mathbf{H}\\left( t \\right)" }, { "math_id": 4, "text": "\\mathbf{R}" }, { "math_id": 5, "text": "\\mathbf{K}" }, { "math_id": 6, "text": "\\mathbf{L}\\left( t \\right)=\\mathbf{H}\\left( t \\right)+i\\mathbf{R}+i\\mathbf{K}+..." } ]
https://en.wikipedia.org/wiki?curid=75221052
75224605
307 (number)
Natural number 307 is the natural number following 306 and preceding 308. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Q}(\\sqrt{-n})" } ]
https://en.wikipedia.org/wiki?curid=75224605
7522539
Bayesian multivariate linear regression
Bayesian approach to multivariate linear regression In statistics, Bayesian multivariate linear regression is a Bayesian approach to multivariate linear regression, i.e. linear regression where the predicted outcome is a vector of correlated random variables rather than a single scalar random variable. A more general treatment of this approach can be found in the article MMSE estimator. Details. Consider a regression problem where the dependent variable to be predicted is not a single real-valued scalar but an "m"-length vector of correlated real numbers. As in the standard regression setup, there are "n" observations, where each observation "i" consists of "k"−1 explanatory variables, grouped into a vector formula_0 of length "k" (where a dummy variable with a value of 1 has been added to allow for an intercept coefficient). This can be viewed as a set of "m" related regression problems for each observation "i": formula_1 where the set of errors formula_2 are all correlated. Equivalently, it can be viewed as a single regression problem where the outcome is a row vector formula_3 and the regression coefficient vectors are stacked next to each other, as follows: formula_4 The coefficient matrix B is a formula_5 matrix where the coefficient vectors formula_6 for each regression problem are stacked horizontally: formula_7 The noise vector formula_8 for each observation "i" is jointly normal, so that the outcomes for a given observation are correlated: formula_9 We can write the entire regression problem in matrix form as: formula_10 where Y and E are formula_11 matrices. The design matrix X is an formula_12 matrix with the observations stacked vertically, as in the standard linear regression setup: formula_13 The classical, frequentist linear least squares solution is to simply estimate the matrix of regression coefficients formula_14 using the Moore-Penrose pseudoinverse: formula_15 To obtain the Bayesian solution, we need to specify the conditional likelihood and then find the appropriate conjugate prior. As with the univariate case of linear Bayesian regression, we will find that we can specify a natural conditional conjugate prior (which is scale dependent). Let us write our conditional likelihood as formula_16; writing the error formula_17 in terms of formula_18 and formula_19 yields formula_20 We seek a natural conjugate prior, a joint density formula_21 which is of the same functional form as the likelihood. Since the likelihood is quadratic in formula_19, we re-write the likelihood so it is normal in formula_22 (the deviation from the classical sample estimate). Using the same technique as with Bayesian linear regression, we decompose the exponential term using a matrix form of the sum-of-squares technique. Here, however, we will also need to use the Matrix Differential Calculus (Kronecker product and vectorization transformations). First, let us apply sum-of-squares to obtain a new expression for the likelihood: formula_23 formula_24 We would like to develop a conditional form for the priors: formula_25 where formula_26 is an inverse-Wishart distribution and formula_27 is some form of normal distribution in the matrix formula_19. This is accomplished using the vectorization transformation, which converts the likelihood from a function of the matrices formula_28 to a function of the vectors formula_29. 
Write formula_30 Let formula_31 where formula_32 denotes the Kronecker product of matrices A and B, a generalization of the outer product which multiplies an formula_33 matrix by a formula_34 matrix to generate an formula_35 matrix, consisting of every combination of products of elements from the two matrices. Then formula_36 which will lead to a likelihood which is normal in formula_37. With the likelihood in a more tractable form, we can now find a natural (conditional) conjugate prior. Conjugate prior distribution. The natural conjugate prior using the vectorized variable formula_38 is of the form: formula_39 where formula_40 and formula_41 Posterior distribution. Using the above prior and likelihood, the posterior distribution can be expressed as: formula_42 where formula_43. The terms involving formula_19 can be grouped (with formula_44) using: formula_45 with formula_46 This now allows us to write the posterior in a more useful form: formula_47 This takes the form of an inverse-Wishart distribution times a Matrix normal distribution: formula_48 and formula_49 The parameters of this posterior are given by: formula_50 formula_51 formula_52 formula_53 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
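The posterior parameters listed above translate directly into a few lines of linear algebra. The following NumPy sketch is not part of the article; the function name, the toy data and the prior hyperparameter values are all illustrative assumptions, and it only computes the hyperparameters of the inverse-Wishart times matrix-normal posterior rather than drawing samples from it.

import numpy as np

def posterior_params(X, Y, B0, Lambda0, V0, nu0):
    """Conjugate update for Bayesian multivariate linear regression.

    Returns (Bn, Lambda_n, Vn, nu_n), the hyperparameters of the
    matrix-normal (for B) and inverse-Wishart (for Sigma_eps) posterior.
    """
    n = X.shape[0]
    Lambda_n = X.T @ X + Lambda0
    Bn = np.linalg.solve(Lambda_n, X.T @ Y + Lambda0 @ B0)
    resid = Y - X @ Bn
    Vn = V0 + resid.T @ resid + (Bn - B0).T @ Lambda0 @ (Bn - B0)
    nu_n = nu0 + n
    return Bn, Lambda_n, Vn, nu_n

# Toy problem: n = 50 observations, k = 3 regressors (with intercept), m = 2 outcomes.
rng = np.random.default_rng(0)
n, k, m = 50, 3, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
B_true = rng.normal(size=(k, m))
Y = X @ B_true + 0.1 * rng.normal(size=(n, m))

B0 = np.zeros((k, m))          # weak prior centred at zero
Lambda0 = 1e-3 * np.eye(k)     # small prior precision
V0, nu0 = np.eye(m), m + 2     # inverse-Wishart scale and degrees of freedom

Bn, Lambda_n, Vn, nu_n = posterior_params(X, Y, B0, Lambda0, V0, nu0)
print(np.round(Bn - B_true, 3))  # posterior mean should be close to the generating coefficients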
[ { "math_id": 0, "text": "\\mathbf{x}_i" }, { "math_id": 1, "text": "\\begin{align}\ny_{i,1} &= \\mathbf{x}_i^\\mathsf{T}\\boldsymbol\\beta_{1} + \\epsilon_{i,1} \\\\\n&\\;\\;\\vdots \\\\\ny_{i,m} &= \\mathbf{x}_i^\\mathsf{T}\\boldsymbol\\beta_{m} + \\epsilon_{i,m}\n\\end{align}" }, { "math_id": 2, "text": "\\{ \\epsilon_{i,1}, \\ldots, \\epsilon_{i,m}\\}" }, { "math_id": 3, "text": "\\mathbf{y}_i^\\mathsf{T}" }, { "math_id": 4, "text": "\\mathbf{y}_i^\\mathsf{T} = \\mathbf{x}_i^\\mathsf{T}\\mathbf{B} + \\boldsymbol\\epsilon_{i}^\\mathsf{T}." }, { "math_id": 5, "text": "k \\times m" }, { "math_id": 6, "text": "\\boldsymbol\\beta_1,\\ldots,\\boldsymbol\\beta_m" }, { "math_id": 7, "text": "\\mathbf{B} =\n\\begin{bmatrix}\n\\begin{pmatrix} \\\\ \\boldsymbol\\beta_1 \\\\ \\\\ \\end{pmatrix}\n\\cdots\n\\begin{pmatrix} \\\\ \\boldsymbol\\beta_m \\\\ \\\\ \\end{pmatrix}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\begin{pmatrix}\n\\beta_{1,1} \\\\ \\vdots \\\\ \\beta_{k,1}\n\\end{pmatrix}\n\\cdots\n\\begin{pmatrix}\n\\beta_{1,m} \\\\ \\vdots \\\\ \\beta_{k,m}\n\\end{pmatrix}\n\\end{bmatrix}\n." }, { "math_id": 8, "text": "\\boldsymbol\\epsilon_{i}" }, { "math_id": 9, "text": "\\boldsymbol\\epsilon_i \\sim N(0, \\boldsymbol\\Sigma_{\\epsilon})." }, { "math_id": 10, "text": "\\mathbf{Y} = \\mathbf{X}\\mathbf{B} + \\mathbf{E}," }, { "math_id": 11, "text": "n \\times m" }, { "math_id": 12, "text": "n \\times k" }, { "math_id": 13, "text": "\n \\mathbf{X} = \\begin{bmatrix} \\mathbf{x}^\\mathsf{T}_1 \\\\ \\mathbf{x}^\\mathsf{T}_2 \\\\ \\vdots \\\\ \\mathbf{x}^\\mathsf{T}_n \\end{bmatrix}\n = \\begin{bmatrix} x_{1,1} & \\cdots & x_{1,k} \\\\\n x_{2,1} & \\cdots & x_{2,k} \\\\\n \\vdots & \\ddots & \\vdots \\\\\n x_{n,1} & \\cdots & x_{n,k}\n \\end{bmatrix}.\n" }, { "math_id": 14, "text": "\\hat{\\mathbf{B}}" }, { "math_id": 15, "text": " \\hat{\\mathbf{B}} = (\\mathbf{X}^\\mathsf{T}\\mathbf{X})^{-1}\\mathbf{X}^\\mathsf{T}\\mathbf{Y}." 
}, { "math_id": 16, "text": "\\rho(\\mathbf{E}|\\boldsymbol\\Sigma_{\\epsilon}) \\propto |\\boldsymbol\\Sigma_{\\epsilon}|^{-n/2} \\exp\\left(-\\tfrac{1}{2} \\operatorname{tr}\\left(\\mathbf{E}^\\mathsf{T} \\mathbf{E} \\boldsymbol\\Sigma_{\\epsilon}^{-1}\\right) \\right) ," }, { "math_id": 17, "text": "\\mathbf{E}" }, { "math_id": 18, "text": "\\mathbf{Y},\\mathbf{X}," }, { "math_id": 19, "text": "\\mathbf{B}" }, { "math_id": 20, "text": "\\rho(\\mathbf{Y}|\\mathbf{X},\\mathbf{B},\\boldsymbol\\Sigma_{\\epsilon}) \\propto |\\boldsymbol\\Sigma_{\\epsilon}|^{-n/2} \\exp(-\\tfrac{1}{2} \\operatorname{tr}((\\mathbf{Y}-\\mathbf{X} \\mathbf{B})^\\mathsf{T} (\\mathbf{Y}-\\mathbf{X} \\mathbf{B}) \\boldsymbol\\Sigma_{\\epsilon}^{-1} ) ) ," }, { "math_id": 21, "text": "\\rho(\\mathbf{B},\\Sigma_{\\epsilon})" }, { "math_id": 22, "text": "(\\mathbf{B}-\\hat{\\mathbf{B}})" }, { "math_id": 23, "text": "\\rho(\\mathbf{Y}|\\mathbf{X},\\mathbf{B},\\boldsymbol\\Sigma_{\\epsilon}) \\propto |\\boldsymbol\\Sigma_{\\epsilon}|^{-(n-k)/2} \\exp(-\\operatorname{tr}(\\tfrac{1}{2}\\mathbf{S}^\\mathsf{T} \\mathbf{S} \\boldsymbol\\Sigma_{\\epsilon}^{-1})) \n|\\boldsymbol\\Sigma_{\\epsilon}|^{-k/2} \\exp(-\\tfrac{1}{2} \\operatorname{tr}((\\mathbf{B}-\\hat{\\mathbf{B}})^\\mathsf{T} \\mathbf{X}^\\mathsf{T} \\mathbf{X}(\\mathbf{B}-\\hat{\\mathbf{B}}) \\boldsymbol\\Sigma_{\\epsilon}^{-1} ) )\n," }, { "math_id": 24, "text": "\\mathbf{S} = \\mathbf{Y} - \\mathbf{X}\\hat{\\mathbf{B}}" }, { "math_id": 25, "text": "\\rho(\\mathbf{B},\\boldsymbol\\Sigma_{\\epsilon}) = \\rho(\\boldsymbol\\Sigma_{\\epsilon})\\rho(\\mathbf{B}|\\boldsymbol\\Sigma_{\\epsilon})," }, { "math_id": 26, "text": "\\rho(\\boldsymbol\\Sigma_{\\epsilon})" }, { "math_id": 27, "text": "\\rho(\\mathbf{B}|\\boldsymbol\\Sigma_{\\epsilon})" }, { "math_id": 28, "text": "\\mathbf{B}, \\hat{\\mathbf{B}}" }, { "math_id": 29, "text": "\\boldsymbol\\beta = \\operatorname{vec}(\\mathbf{B}), \\hat{\\boldsymbol\\beta} = \\operatorname{vec}(\\hat{\\mathbf{B}})" }, { "math_id": 30, "text": "\\operatorname{tr}((\\mathbf{B} - \\hat{\\mathbf{B}})^\\mathsf{T}\\mathbf{X}^\\mathsf{T} \\mathbf{X}(\\mathbf{B} - \\hat{\\mathbf{B}}) \\boldsymbol\\Sigma_\\epsilon^{-1}) = \\operatorname{vec}(\\mathbf{B} - \\hat{\\mathbf{B}})^\\mathsf{T} \\operatorname{vec}(\\mathbf{X}^\\mathsf{T} \\mathbf{X}(\\mathbf{B} - \\hat{\\mathbf{B}}) \\boldsymbol\\Sigma_{\\epsilon}^{-1} )" }, { "math_id": 31, "text": " \\operatorname{vec}(\\mathbf{X}^\\mathsf{T} \\mathbf{X}(\\mathbf{B} - \\hat{\\mathbf{B}}) \\boldsymbol\\Sigma_{\\epsilon}^{-1} ) = (\\boldsymbol\\Sigma_{\\epsilon}^{-1} \\otimes \\mathbf{X}^\\mathsf{T}\\mathbf{X} )\\operatorname{vec}(\\mathbf{B} - \\hat{\\mathbf{B}}), " }, { "math_id": 32, "text": "\\mathbf{A} \\otimes \\mathbf{B}" }, { "math_id": 33, "text": "m \\times n" }, { "math_id": 34, "text": "p \\times q" }, { "math_id": 35, "text": "mp \\times nq" }, { "math_id": 36, "text": "\\begin{align}\n&\\operatorname{vec}(\\mathbf{B} - \\hat{\\mathbf{B}})^\\mathsf{T} (\\boldsymbol\\Sigma_{\\epsilon}^{-1} \\otimes \\mathbf{X}^\\mathsf{T}\\mathbf{X} )\\operatorname{vec}(\\mathbf{B} - \\hat{\\mathbf{B}}) \\\\\n&= (\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})^\\mathsf{T}(\\boldsymbol\\Sigma_{\\epsilon}^{-1} \\otimes \\mathbf{X}^\\mathsf{T}\\mathbf{X} )(\\boldsymbol\\beta-\\hat{\\boldsymbol\\beta})\n\\end{align}" }, { "math_id": 37, "text": "(\\boldsymbol\\beta - \\hat{\\boldsymbol\\beta})" }, { "math_id": 38, "text": "\\boldsymbol\\beta" }, { "math_id": 39, "text": "\\rho(\\boldsymbol\\beta, 
\\boldsymbol\\Sigma_{\\epsilon}) = \\rho(\\boldsymbol\\Sigma_{\\epsilon})\\rho(\\boldsymbol\\beta|\\boldsymbol\\Sigma_{\\epsilon})," }, { "math_id": 40, "text": " \\rho(\\boldsymbol\\Sigma_{\\epsilon}) \\sim \\mathcal{W}^{-1}(\\mathbf V_0,\\boldsymbol\\nu_0)" }, { "math_id": 41, "text": " \\rho(\\boldsymbol\\beta|\\boldsymbol\\Sigma_{\\epsilon}) \\sim N(\\boldsymbol\\beta_0, \\boldsymbol\\Sigma_{\\epsilon} \\otimes \\boldsymbol\\Lambda_0^{-1})." }, { "math_id": 42, "text": "\\begin{align}\n\\rho(\\boldsymbol\\beta,\\boldsymbol\\Sigma_{\\epsilon}|\\mathbf{Y},\\mathbf{X})\n\\propto{}& |\\boldsymbol\\Sigma_{\\epsilon}|^{-(\\boldsymbol\\nu_0 + m + 1)/2}\\exp{(-\\tfrac{1}{2}\\operatorname{tr}(\\mathbf V_0 \\boldsymbol\\Sigma_{\\epsilon}^{-1}))} \\\\\n&\\times|\\boldsymbol\\Sigma_{\\epsilon}|^{-k/2}\\exp{(-\\tfrac{1}{2} \\operatorname{tr}((\\mathbf{B}-\\mathbf B_0)^\\mathsf{T}\\boldsymbol\\Lambda_0(\\mathbf{B}-\\mathbf B_0)\\boldsymbol\\Sigma_{\\epsilon}^{-1}))} \\\\\n&\\times|\\boldsymbol\\Sigma_{\\epsilon}|^{-n/2}\\exp{(-\\tfrac{1}{2}\\operatorname{tr}((\\mathbf{Y}-\\mathbf{XB})^\\mathsf{T}(\\mathbf{Y}-\\mathbf{XB})\\boldsymbol\\Sigma_{\\epsilon}^{-1}))},\n\\end{align}" }, { "math_id": 43, "text": "\\operatorname{vec}(\\mathbf B_0) = \\boldsymbol\\beta_0" }, { "math_id": 44, "text": "\\boldsymbol\\Lambda_0 = \\mathbf{U}^\\mathsf{T}\\mathbf{U}" }, { "math_id": 45, "text": "\\begin{align}\n& \\left(\\mathbf{B} - \\mathbf B_0\\right)^\\mathsf{T} \\boldsymbol\\Lambda_0 \\left(\\mathbf{B} - \\mathbf B_0\\right) + \\left(\\mathbf{Y} - \\mathbf{XB}\\right)^\\mathsf{T} \\left(\\mathbf{Y} - \\mathbf{XB}\\right) \\\\\n={}& \\left(\\begin{bmatrix}\\mathbf Y \\\\ \\mathbf U \\mathbf B_0\\end{bmatrix} - \\begin{bmatrix}\\mathbf{X}\\\\ \\mathbf{U}\\end{bmatrix}\\mathbf{B}\\right)^\\mathsf{T} \\left(\\begin{bmatrix}\\mathbf{Y}\\\\ \\mathbf U \\mathbf B_0\\end{bmatrix}-\\begin{bmatrix}\\mathbf{X}\\\\ \\mathbf{U}\\end{bmatrix}\\mathbf{B}\\right) \\\\\n={}& \\left(\\begin{bmatrix}\\mathbf Y \\\\ \\mathbf U \\mathbf B_0\\end{bmatrix} - \\begin{bmatrix}\\mathbf{X}\\\\ \\mathbf{U}\\end{bmatrix}\\mathbf B_n\\right)^\\mathsf{T}\\left(\\begin{bmatrix}\\mathbf{Y}\\\\ \\mathbf U \\mathbf B_0\\end{bmatrix}-\\begin{bmatrix}\\mathbf{X}\\\\ \\mathbf{U}\\end{bmatrix}\\mathbf B_n\\right) + \\left(\\mathbf B - \\mathbf B_n\\right)^\\mathsf{T} \\left(\\mathbf{X}^\\mathsf{T} \\mathbf{X} + \\boldsymbol\\Lambda_0\\right) \\left(\\mathbf{B}-\\mathbf B_n\\right) \\\\\n={}& \\left(\\mathbf{Y} - \\mathbf X \\mathbf B_n \\right)^\\mathsf{T} \\left(\\mathbf{Y} - \\mathbf X \\mathbf B_n\\right) + \\left(\\mathbf B_0 - \\mathbf B_n\\right)^\\mathsf{T} \\boldsymbol\\Lambda_0 \\left(\\mathbf B_0 - \\mathbf B_n\\right) + \\left(\\mathbf{B} - \\mathbf B_n\\right)^\\mathsf{T} \\left(\\mathbf{X}^\\mathsf{T} \\mathbf{X} + \\boldsymbol\\Lambda_0\\right)\\left(\\mathbf B - \\mathbf B_n\\right),\n\\end{align}" }, { "math_id": 46, "text": "\\mathbf B_n = \\left(\\mathbf{X}^\\mathsf{T}\\mathbf{X} + \\boldsymbol\\Lambda_0\\right)^{-1}\\left(\\mathbf{X}^\\mathsf{T} \\mathbf{X} \\hat{\\mathbf{B}} + \\boldsymbol\\Lambda_0\\mathbf B_0\\right) = \\left(\\mathbf{X}^\\mathsf{T} \\mathbf{X} + \\boldsymbol\\Lambda_0\\right)^{-1}\\left(\\mathbf{X}^\\mathsf{T} \\mathbf{Y} + \\boldsymbol\\Lambda_0 \\mathbf B_0\\right)." 
}, { "math_id": 47, "text": "\\begin{align}\n\\rho(\\boldsymbol\\beta,\\boldsymbol\\Sigma_{\\epsilon}|\\mathbf{Y},\\mathbf{X})\n\\propto{}&|\\boldsymbol\\Sigma_{\\epsilon}|^{-(\\boldsymbol\\nu_0 + m + n + 1)/2}\\exp{(-\\tfrac{1}{2}\\operatorname{tr}((\\mathbf V_0 + (\\mathbf{Y}-\\mathbf{XB_n})^\\mathsf{T} (\\mathbf{Y}-\\mathbf{XB_n}) + (\\mathbf B_n-\\mathbf B_0)^\\mathsf{T}\\boldsymbol\\Lambda_0(\\mathbf B_n-\\mathbf B_0))\\boldsymbol\\Sigma_{\\epsilon}^{-1}))} \\\\\n&\\times|\\boldsymbol\\Sigma_{\\epsilon}|^{-k/2}\\exp{(-\\tfrac{1}{2}\\operatorname{tr}((\\mathbf{B}-\\mathbf B_n)^\\mathsf{T} (\\mathbf{X}^T\\mathbf{X} + \\boldsymbol\\Lambda_0) (\\mathbf{B}-\\mathbf B_n)\\boldsymbol\\Sigma_{\\epsilon}^{-1}))}.\n\\end{align}" }, { "math_id": 48, "text": "\\rho(\\boldsymbol\\Sigma_{\\epsilon}|\\mathbf{Y},\\mathbf{X}) \\sim \\mathcal{W}^{-1}(\\mathbf V_n,\\boldsymbol\\nu_n)" }, { "math_id": 49, "text": " \\rho(\\mathbf{B}|\\mathbf{Y},\\mathbf{X},\\boldsymbol\\Sigma_{\\epsilon}) \\sim \\mathcal{MN}_{k,m}(\\mathbf B_n, \\boldsymbol\\Lambda_n^{-1}, \\boldsymbol\\Sigma_{\\epsilon})." }, { "math_id": 50, "text": "\\mathbf V_n = \\mathbf V_0 + (\\mathbf{Y}-\\mathbf{XB_n})^\\mathsf{T}(\\mathbf{Y}-\\mathbf{XB_n}) + (\\mathbf B_n - \\mathbf B_0)^\\mathsf{T}\\boldsymbol\\Lambda_0(\\mathbf B_n-\\mathbf B_0)" }, { "math_id": 51, "text": "\\boldsymbol\\nu_n = \\boldsymbol\\nu_0 + n" }, { "math_id": 52, "text": "\\mathbf B_n = (\\mathbf{X}^\\mathsf{T}\\mathbf{X} + \\boldsymbol\\Lambda_0)^{-1}(\\mathbf{X}^\\mathsf{T} \\mathbf{Y} + \\boldsymbol\\Lambda_0\\mathbf B_0)" }, { "math_id": 53, "text": "\\boldsymbol\\Lambda_n = \\mathbf{X}^\\mathsf{T} \\mathbf{X} + \\boldsymbol\\Lambda_0" } ]
https://en.wikipedia.org/wiki?curid=7522539
7522685
Continuous group action
In topology, a continuous group action on a topological space "X" is a group action of a topological group "G" that is continuous: i.e., formula_0 is a continuous map. Together with the group action, "X" is called a "G"-space. If formula_1 is a continuous group homomorphism of topological groups and if "X" is a "G"-space, then "H" can act on "X" "by restriction": formula_2, making "X" an "H"-space. Often "f" is either an inclusion or a quotient map. In particular, any topological space may be thought of as a "G"-space via formula_3 (and "G" would act trivially). Two basic operations are that of taking the space of points fixed by a subgroup "H" and that of forming a quotient by "H". We write formula_4 for the set of all "x" in "X" such that formula_5. For example, if we write formula_6 for the set of continuous maps from a "G"-space "X" to another "G"-space "Y", then, with the action formula_7, formula_8 consists of "f" such that formula_9; i.e., "f" is an equivariant map. We write formula_10. Note, for example, for a "G"-space "X" and a closed subgroup "H", formula_11.
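The last identity can be checked directly; the following LaTeX fragment (not part of the article, with ad hoc names for the maps) sketches the standard argument via evaluation at the identity coset.

% Sketch of the identification F_G(G/H, X) = X^H via evaluation at the identity coset.
\[
  \Phi \colon F_G(G/H, X) \longrightarrow X^{H}, \qquad \Phi(f) = f(eH).
\]
% The value f(eH) is H-fixed because f is equivariant:
\[
  h \cdot f(eH) = f(h \cdot eH) = f(hH) = f(eH) \quad \text{for all } h \in H.
\]
% Conversely, an H-fixed point x determines the equivariant map
\[
  \Phi^{-1}(x) \colon gH \longmapsto g \cdot x,
\]
% which is well defined since (gh) \cdot x = g \cdot (h \cdot x) = g \cdot x for h in H,
% and is continuous because G -> G/H is a quotient map. The two assignments are mutually inverse.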
[ { "math_id": 0, "text": "G \\times X \\to X, \\quad (g, x) \\mapsto g \\cdot x" }, { "math_id": 1, "text": "f: H \\to G" }, { "math_id": 2, "text": "h \\cdot x = f(h) x" }, { "math_id": 3, "text": "G \\to 1" }, { "math_id": 4, "text": "X^H" }, { "math_id": 5, "text": "hx = x" }, { "math_id": 6, "text": "F(X, Y)" }, { "math_id": 7, "text": "(g \\cdot f)(x) = g f(g^{-1} x)" }, { "math_id": 8, "text": "F(X, Y)^G" }, { "math_id": 9, "text": "f(g x) = g f(x)" }, { "math_id": 10, "text": "F_G(X, Y) = F(X, Y)^G" }, { "math_id": 11, "text": "F_G(G/H, X) = X^H" } ]
https://en.wikipedia.org/wiki?curid=7522685
75230578
Tomer Schlank
Israeli mathematician Tomer Moshe Schlank (born 1982) is an Israeli mathematician and a professor at the University of Chicago. Previously, he was a professor at Hebrew University of Jerusalem. He primarily works in homotopy theory, algebraic geometry, and number theory. In 2022 he won the Erdős prize in mathematics and in 2023 he was awarded a European Research Council consolidator grant. He is an editor for the Israel Journal of Mathematics. Biography. Schlank was born on July 29, 1982, in Jerusalem, Israel. He graduated with a bachelor's degree from Tel Aviv University in 2001 and a master's degree from Tel Aviv University in 2008. He received his PhD from Hebrew University of Jerusalem in January 2013, working under the supervision of Ehud de Shalit. His education was also influenced by the close proximity of David Kazhdan and Emmanuel Dror Farjoun. After completing his PhD, Schlank was hired as a Simons postdoctoral fellow at MIT. Afterwards he moved back to the Hebrew University in Jerusalem. Schlank is the great-grandson of the scientist Maria Pogonowska. Research. Schlank is primarily known for his work on chromatic homotopy theory. Together with Robert Burklund, Jeremy Hahn, and Ishan Levy, he disproved the telescope conjecture for all heights greater than 1 and for all primes. This was the last outstanding conjecture among Ravenel's conjectures. The disproof made use of his work on ambidexterity of the T(n)-local category and cyclotomic extensions of the T(n)-local sphere with Ben-Moshe, Carmeli, and Yanovski. With Barthel, Stapleton, and Weinstein, he calculated the homotopy groups of the rationalization of the K(n)-local sphere. With Burklund and Yuan, Schlank proved the "chromatic nullstellensatz", a version of Hilbert's nullstellensatz for the T(n)-local category in which Morava E-theories play the role of algebraically closed fields. This work resolved the Ausoni–Rognes redshift conjecture for formula_0-ring spectra and also produced formula_0-orientations of Morava E-theory. Schlank's early work was a synthesis of homotopy theory and number theory. With Harpaz, he developed homotopy obstructions to the existence of rational points on smooth varieties over number fields and related these homotopy obstructions to the Manin obstruction. He wrote his thesis, titled "Applications of homotopy theory to the study of obstructions to existence of rational points", on this topic. Schlank is known for the breadth of his work and for bringing together seemingly unrelated concepts from different fields to solve problems. In mathematics, he has published papers in algebraic geometry, algebraic topology, category theory, combinatorics, dynamical systems, geometric topology, number theory, and representation theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_\\infty" } ]
https://en.wikipedia.org/wiki?curid=75230578
75233
Hilary Putnam
American mathematician and philosopher (1926–2016) Hilary Whitehall Putnam (July 31, 1926 – March 13, 2016) was an American philosopher, mathematician, computer scientist, and a major figure in analytic philosophy in the second half of the 20th century. He contributed to the studies of philosophy of mind, philosophy of language, philosophy of mathematics, and philosophy of science. Outside philosophy, Putnam contributed to mathematics and computer science. Together with Martin Davis, he developed the Davis–Putnam algorithm for the Boolean satisfiability problem and he helped demonstrate the unsolvability of Hilbert's tenth problem. Putnam applied equal scrutiny to his own philosophical positions as to those of others, subjecting each position to rigorous analysis until he exposed its flaws. As a result, he acquired a reputation for frequently changing his positions. In philosophy of mind, Putnam argued against the type-identity of mental and physical states based on his hypothesis of the multiple realizability of the mental, and for the concept of functionalism, an influential theory regarding the mind–body problem. In philosophy of language, along with Saul Kripke and others, he developed the causal theory of reference, and formulated an original theory of meaning, introducing the notion of semantic externalism based on a thought experiment called Twin Earth. In philosophy of mathematics, Putnam and W. V. O. Quine developed the Quine–Putnam indispensability argument, an argument for the reality of mathematical entities, later espousing the view that mathematics is not purely logical, but "quasi-empirical". In epistemology, Putnam criticized the "brain in a vat" thought experiment, which appears to provide a powerful argument for epistemological skepticism, by challenging its coherence. In metaphysics, he originally espoused a position called metaphysical realism, but eventually became one of its most outspoken critics, first adopting a view he called "internal realism", which he later abandoned. Despite these changes of view, throughout his career Putnam remained committed to scientific realism, roughly the view that mature scientific theories are approximately true descriptions of ways things are. In his later work, Putnam became increasingly interested in American pragmatism, Jewish philosophy, and ethics, engaging with a wider array of philosophical traditions. He also displayed an interest in metaphilosophy, seeking to "renew philosophy" from what he identified as narrow and inflated concerns. He was at times a politically controversial figure, especially for his involvement with the Progressive Labor Party in the late 1960s and early 1970s. Life. Hilary Whitehall Putnam was born on July 31, 1926, in Chicago, Illinois. His father, Samuel Putnam, was a scholar of Romance languages, columnist, and translator who wrote for the "Daily Worker", a publication of the American Communist Party, from 1936 to 1946. Because of his father's commitment to communism, Putnam had a secular upbringing, although his mother, Riva, was Jewish. In early 1927, six months after Hilary's birth, the family moved to France, where Samuel was under contract to translate the surviving works of François Rabelais. In a 2015 autobiographical essay, Putnam said that his first childhood memories were from his life in France, and his first language was French. Putnam completed the first two years of his primary education in France before he and his parents returned to the U.S. 
in 1933, settling in Philadelphia. There, he attended Central High School, where he met Noam Chomsky, who was a year behind him. The two remained friends—and often intellectual opponents—for the rest of Putnam's life. Putnam studied philosophy at the University of Pennsylvania, receiving his B.A. degree and becoming a member of the Philomathean Society, the country's oldest continually existing collegiate literary society. He did graduate work in philosophy at Harvard University and later at UCLA's philosophy department, where he received his Ph.D. in 1951 for his dissertation, "The Meaning of the Concept of Probability in Application to Finite Sequences". Putnam's dissertation supervisor Hans Reichenbach was a leading figure in logical positivism, the dominant school of philosophy of the day; one of Putnam's most consistent positions was his rejection of logical positivism as self-defeating. Over the course of his life, Putnam was his own philosophical adversary, changing his positions on philosophical questions and critiquing his previous views. After obtaining his PhD, Putnam taught at Northwestern University (1951–52), Princeton University (1953–61), and MIT (1961–65). For the rest of his career, Putnam taught at Harvard's philosophy department, becoming Cogan University Professor. In 1962, he married fellow philosopher Ruth Anna Putnam (born Ruth Anna Jacobs), who took a teaching position in philosophy at Wellesley College. Rebelling against the antisemitism they experienced during their youth, the Putnams decided to establish a traditional Jewish home for their children. Since they had no experience with the rituals of Judaism, they sought out invitations to other Jewish homes for Seder. They began to study Jewish rituals and Hebrew, became more interested in Judaism, self-identified as Jews, and actively practiced Judaism. In 1994, Hilary celebrated a belated bar mitzvah service; Ruth Anna's bat mitzvah was celebrated four years later. In the 1960s and early 1970s, Putnam was an active supporter of the American Civil Rights Movement and he was also an active opponent of the Vietnam War. In 1963, he organized one of MIT's first faculty and student anti-war committees. After moving to Harvard in 1965, he organized campus protests and began teaching courses on Marxism. Putnam became an official faculty advisor to the Students for a Democratic Society and in 1968 a member of the Progressive Labor Party (PLP). He was elected a Fellow of the American Academy of Arts and Sciences in 1965. After 1968, his political activities centered on the PLP. The Harvard administration considered these activities disruptive and attempted to censure Putnam. Putnam permanently severed his relationship with the PLP in 1972. In 1997, at a meeting of former draft resistance activists at Boston's Arlington Street Church, he called his involvement with the PLP a mistake. He said he had been impressed at first with the PLP's commitment to alliance-building and its willingness to attempt to organize from within the armed forces. In 1976, Putnam was elected president of the American Philosophical Association. The next year, he was selected as Walter Beverly Pearson Professor of Mathematical Logic in recognition of his contributions to the philosophy of logic and mathematics. While breaking with his radical past, Putnam never abandoned his belief that academics have a particular social and ethical responsibility toward society. 
He continued to be forthright and progressive in his political views, as expressed in the articles "How Not to Solve Ethical Problems" (1983) and "Education for Democracy" (1993). Putnam was a Corresponding Fellow of the British Academy. He was elected to the American Philosophical Society in 1999. He retired from teaching in June 2000, becoming Cogan University Professor Emeritus, but as of 2009 continued to give a seminar almost yearly at Tel Aviv University. He also held the Spinoza Chair of Philosophy at the University of Amsterdam in 2001. His corpus includes five volumes of collected works, seven books, and more than 200 articles. Putnam's renewed interest in Judaism inspired him to publish several books and essays on the topic. With his wife, he co-authored several essays and a book on the late-19th-century American pragmatist movement. For his contributions in philosophy and logic, Putnam was awarded the Rolf Schock Prize in 2011 and the Nicholas Rescher Prize for Systematic Philosophy in 2015. Putnam died at his home in Arlington, Massachusetts, on March 13, 2016. At the time of his death, Putnam was Cogan University Professor Emeritus at Harvard University. Philosophy of mind. Multiple realizability. Putnam's best-known work concerns philosophy of mind. His most noted original contributions to that field came in several key papers published in the late 1960s that set out the hypothesis of multiple realizability. In these papers, Putnam argues that, contrary to the famous claim of the type-identity theory, pain may correspond to utterly different physical states of the nervous system in different organisms even if they all experience the same mental state of "being in pain". Putnam cited examples from the animal kingdom to illustrate his thesis. He asked whether it was likely that the brain structures of diverse types of animals realize pain, or other mental states, the same way. If they do not share the same brain structures, they cannot share the same mental states and properties, in which case mental states must be realized by different physical states in different species. Putnam then took his argument a step further, asking about such things as the nervous systems of alien beings, artificially intelligent robots and other silicon-based life forms. These hypothetical entities, he contended, should not be considered incapable of experiencing pain just because they lack human neurochemistry. Putnam concluded that type-identity theorists had been making an "ambitious" and "highly implausible" conjecture that could be disproved by one example of multiple realizability. This is sometimes called the "likelihood argument", as it focuses on the claim that multiple realizability is more likely than type-identity theory.640 Putnam also formulated an "a priori" argument in favor of multiple realizability based on what he called "functional isomorphism". He defined the concept in these terms: "Two systems are functionally isomorphic if 'there is a correspondence between the states of one and the states of the other that preserves functional relations'." In the case of computers, two machines are functionally isomorphic if and only if the sequential relations among states in the first exactly mirror the sequential relations among states in the other. Therefore, a computer made of silicon chips and one made of cogs and wheels can be functionally isomorphic but constitutionally diverse. 
Functional isomorphism implies multiple realizability.637 Putnam, Jerry Fodor, and others argued that along with being an effective argument against type-identity theories, multiple realizability implies that any low-level explanation of higher-level mental phenomena is insufficiently abstract and general. Functionalism, which identifies mental kinds with functional kinds that are characterized exclusively in terms of causes and effects, abstracts from the level of microphysics, and therefore seemed to be a better explanation of the relation between mind and body. In fact, there are many functional kinds, including mousetraps and eyes, that are multiply realized at the physical level.6 Multiple realizability has been criticized on the grounds that, if it were true, research and experimentation in the neurosciences would be impossible. According to William Bechtel and Jennifer Mundale, to be able to conduct such research in the neurosciences, universal consistencies must either exist or be assumed to exist in brain structures. It is the similarity (or homology) of brain structures that allows us to generalize across species. If multiple realizability were an empirical fact, results from experiments conducted on one species of animal (or one organism) would not be meaningful when generalized to explain the behavior of another species (or organism of the same species). Jaegwon Kim, David Lewis, Robert Richardson and Patricia Churchland have also criticized multiple realizability. Machine state functionalism. Putnam himself put forth the first formulation of such a functionalist theory. This formulation, now called "machine-state functionalism", was inspired by analogies Putnam and others made between the mind and Turing machines. The key point for functionalism is the nature of the states of the Turing machine. Each state can be defined in terms of its relations to the other states and to the inputs and outputs, and the details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant. According to machine-state functionalism, the nature of a mental state is just like the nature of a Turing machine state. Just as "state one" simply is the state in which, given a particular input, such-and-such happens, so being in pain is the state which disposes one to cry "ouch", become distracted, wonder what the cause is, and so forth. Rejection of functionalism. Ian Hacking called "Representation and Reality" (1988) a book that "will mostly be read as Putnam’s denunciation of his former philosophical psychology, to which he gave the name 'functionalism'." Writing in "Noûs", Barbara Hannon described "the inventor of functionalism" as arguing "against his own former computationalist views". Putnam's change of mind was primarily due to the difficulties computational theories have in explaining certain intuitions with respect to the externalism of mental content. This is illustrated by his Twin Earth thought experiment. In 1988 Putnam also developed a separate argument against functionalism based on Fodor's generalized version of multiple realizability. Asserting that functionalism is really a watered-down identity theory in which mental kinds are identified with functional kinds, he argued that mental kinds may be multiply realizable over functional kinds. The argument against functionalism is that the same mental state could be implemented by the different states of a universal Turing machine. 
Despite Putnam's rejection of functionalism, it has continued to flourish and been developed into numerous versions by Fodor, David Marr, Daniel Dennett, and David Lewis, among others. Functionalism helped lay the foundations for modern cognitive science and is the dominant theory of mind in philosophy today. By 2012 Putnam accepted a modification of functionalism called "liberal functionalism". The view holds that "what matters for consciousness and for mental properties generally is the right sort of functional capacities and not the particular matter that subserves those capacities". The specification of these capacities may refer to what goes on outside the organism's "brain", may include intentional idioms, and need not describe a capacity to compute something or other. Putnam himself formulated one of the main arguments against functionalism, the Twin Earth thought experiment, though there have been additional criticisms. John Searle's Chinese room argument (1980) is a direct attack on the claim that thought can be represented as a set of functions. It is designed to show that it is possible to mimic intelligent action with a purely functional system, without any interpretation or understanding. Searle describes a situation in which a person who speaks only English is locked in a room with Chinese symbols in baskets and a rule book in English for moving the symbols around. People outside the room instruct the person inside to follow the rule book for sending certain symbols out of the room when given certain symbols. The people outside the room speak Chinese and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside "knows" Chinese based on these syntactic processes alone. This argument attempts to show that systems that operate merely on syntactic processes cannot realize any semantics (meaning) or intentionality (aboutness). Searle thus attacks the idea that thought can be equated with following a set of syntactic rules and concludes that functionalism is an inadequate theory of the mind. Ned Block has advanced several other arguments against functionalism. Philosophy of language. Semantic externalism. One of Putnam's contributions to philosophy of language is his semantic externalism, the claim that terms' meanings are determined by factors outside the mind, encapsulated in his slogan that "meaning just ain't in the head". His views on meaning, first laid out in "Meaning and Reference" (1973), then in" The Meaning of "Meaning"" (1975), use his "Twin Earth" thought experiment to defend this thesis. Twin Earth shows this, according to Putnam, since on Twin Earth everything is identical to Earth, except that its lakes, rivers and oceans are filled with XYZ rather than H2O. Consequently, when an earthling, Fredrick, uses the Earth-English word "water", it has a different meaning from the Twin Earth-English word "water" when used by his physically identical twin, Frodrick, on Twin Earth. Since Fredrick and Frodrick are physically indistinguishable when they utter their respective words, and since their words have different meanings, meaning cannot be determined solely by what is in their heads. This led Putnam to adopt a version of semantic externalism with regard to meaning and mental content. 
The philosopher of mind and language Donald Davidson, despite his many differences of opinion with Putnam, wrote that semantic externalism constituted an "anti-subjectivist revolution" in philosophers' way of seeing the world. Since Descartes's time, philosophers had been concerned with proving knowledge from the basis of subjective experience. Thanks to Putnam, Saul Kripke, Tyler Burge and others, Davidson said, philosophy could now take the objective realm for granted and start questioning the alleged "truths" of subjective experience. Theory of meaning. Along with Kripke, Keith Donnellan, and others, Putnam contributed to what is known as the causal theory of reference. In particular, he maintained in "The Meaning of "Meaning"" that the objects referred to by natural kind terms—such as "tiger", "water", and "tree"—are the principal elements of the meaning of such terms. There is a linguistic division of labor, analogous to Adam Smith's economic division of labor, according to which such terms have their references fixed by the "experts" in the particular field of science to which the terms belong. So, for example, the reference of the term "lion" is fixed by the community of zoologists, the reference of the term "elm tree" is fixed by the community of botanists, and chemists fix the reference of the term "table salt" as sodium chloride. These referents are considered rigid designators in the Kripkean sense and are disseminated outward to the linguistic community. Putnam specifies a finite sequence of elements (a vector) for the description of the meaning of every term in the language. Such a vector consists of four components: Such a "meaning-vector" provides a description of the reference and use of an expression within a particular linguistic community. It provides the conditions for its correct usage and makes it possible to judge whether a single speaker attributes the appropriate meaning to it or whether its use has changed enough to cause a difference in its meaning. According to Putnam, it is legitimate to speak of a change in the meaning of an expression only if the reference of the term, and not its stereotype, has changed. But since no possible algorithm can determine which aspect—the stereotype or the reference—has changed in a particular case, it is necessary to consider the usage of other expressions of the language. Since there is no limit to the number of such expressions to be considered, Putnam embraced a form of semantic holism. Despite the many changes in his other positions, Putnam consistently adhered to semantic holism. Michael Dummett, Jerry Fodor, Ernest Lepore, and others have identified problems with this position. In the first place, they suggest that, if semantic holism is true, it is impossible to understand how a speaker of a language can learn the meaning of an expression in the language. Given the limits of our cognitive abilities, we will never be able to master the whole of the English (or any other) language, even based on the (false) assumption that languages are static and immutable entities. Thus, if one must understand all of a natural language to understand a single word or expression, language learning is simply impossible. Semantic holism also fails to explain how two speakers can mean the same thing when using the same expression, and therefore how any communication is possible between them. 
Given a sentence "P", since Fred and Mary have each mastered different parts of the English language and "P" is related in different ways to the sentences in each part, "P" means one thing to Fred and something else to Mary. Moreover, if "P" derives its meaning from its relations with all the sentences of a language, as soon as the vocabulary of an individual changes by the addition or elimination of a sentence, the totality of relations changes, and therefore also the meaning of "P". As this is a common phenomenon, the result is that "P" has two different meanings in two different moments in the life of the same person. Consequently, if one accepts the truth of a sentence and then rejects it later on, the meaning of what one rejected and what one accepted are completely different and therefore one cannot change opinions with regard to the same sentences. Philosophy of mathematics. In the philosophy of mathematics, Putnam has utilized indispensability arguments to argue for a realist interpretation of mathematics. In his 1971 book "Philosophy of Logic", he presented what has since been called the "locus classicus" of the Quine–Putnam indispensability argument. The argument, which he attributed to Willard Van Orman Quine, is presented in the book as "quantification over mathematical entities is indispensable for science, both formal and physical; therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question." According to Charles Parsons, Putnam "very likely" endorsed this version of the argument in his early work, but later came to deny some of the views present in it. In 1975, Putnam formulated his own indispensability argument based on the no miracles argument in the philosophy of science, saying, "I believe that the positive argument for realism [in science] has an analogue in the case of mathematical realism. Here too, I believe, realism is the only philosophy that doesn't make the success of the science a miracle". According to Putnam, Quine's version of the argument was an argument for the existence of abstract mathematical objects, while Putnam's own argument was simply for a realist interpretation of mathematics, which he believed could be provided by a "mathematics as modal logic" interpretation that need not imply the existence of abstract objects. Putnam also held the view that mathematics, like physics and other empirical sciences, uses both strict logical proofs and "quasi-empirical" methods.150 For example, Fermat's Last Theorem states that for no integer formula_0 are there positive integer values of "x", "y", and "z" such that formula_1. Before Andrew Wiles proved this for all formula_0 in 1995, it had been proved for many values of "n". These proofs inspired further research in the area, and formed a quasi-empirical consensus for the theorem. Even though such knowledge is more conjectural than a strictly proved theorem, it was still used in developing other mathematical ideas. The Quine–Putnam indispensability argument has been extremely influential in the philosophy of mathematics, inspiring continued debate and development of the argument in contemporary philosophy of mathematics. According to the "Stanford Encyclopedia of Philosophy", many in the field consider it the best argument for mathematical realism. 
Prominent counterarguments come from Hartry Field, who argues that mathematics is not indispensable to science, and Penelope Maddy and Elliott Sober, who dispute whether we are committed to mathematical realism even if it is indispensable to science. Mathematics and computer science. Putnam has contributed to scientific fields not directly related to his work in philosophy. As a mathematician, he contributed to the resolution of Hilbert's tenth problem. This problem (now known as Matiyasevich's theorem or the MRDP theorem) was settled by Yuri Matiyasevich in 1970, with a proof that relied heavily on previous research by Putnam, Julia Robinson and Martin Davis. In computability theory, Putnam investigated the structure of the ramified analytical hierarchy, its connection with the constructible hierarchy and its Turing degrees. He showed that there are many levels of the constructible hierarchy that add no subsets of the integers. Later, with his student George Boolos, he showed that the first such "non-index" is the ordinal formula_2 of ramified analysis (this is the smallest formula_3 such that formula_4 is a model of full second-order comprehension). Also, in a separate paper written with his student Richard Boyd and with Gustav Hensel, he demonstrated how the Davis–Mostowski–Kleene hyperarithmetical hierarchy of arithmetical degrees can be naturally extended up to formula_2. In computer science, Putnam is known for the Davis–Putnam algorithm for the Boolean satisfiability problem (SAT), developed with Martin Davis in 1960. The algorithm determines whether there is an assignment of true and false values to the variables that makes a given Boolean expression true. In 1962, they further refined the algorithm with the help of George Logemann and Donald W. Loveland. It became known as the DPLL algorithm. It is efficient and still forms the basis of most complete SAT solvers. Epistemology. In epistemology, Putnam is known for his argument against skeptical scenarios based on the "brain in a vat" thought experiment (a modernized version of Descartes's evil demon hypothesis). The argument is that one cannot coherently suspect that one is a disembodied "brain in a vat" placed there by some "mad scientist". This follows from the causal theory of reference. Words always refer to the kinds of things they were coined to refer to, the kinds of things their user, or the user's ancestors, experienced. So, if some person, Mary, is a "brain in a vat", whose every experience is received through wiring and other gadgetry created by the mad scientist, then Mary's idea of a brain does not refer to a real brain, since she and her linguistic community have never encountered such a thing. To her a brain is actually an image fed to her through the wiring. Nor does her idea of a vat refer to a real vat. So if, as a brain in a vat, she says, "I'm a brain in a vat", she is actually saying, "I'm a brain-image in a vat-image", which is incoherent. On the other hand, if she is not a brain in a vat, then saying that she is a brain in a vat is still incoherent, because she actually means the opposite. This is a form of epistemological externalism: knowledge or justification depends on factors outside the mind and is not solely determined internally. Putnam has clarified that his real target in this argument was never skepticism, but metaphysical realism, which he thought implied such skeptical scenarios were possible.
Since realism of this kind assumes the existence of a gap between how one conceives the world and the way the world really is, skeptical scenarios such as this one (or Descartes's evil demon) present a formidable challenge. By arguing that such a scenario is impossible, Putnam attempts to show that this notion of a gap between one's concept of the world and the way it is is absurd. One cannot have a "God's-eye" view of reality. One is limited to one's conceptual schemes, and metaphysical realism is therefore false. Putnam's brain in a vat argument has been criticized. Crispin Wright argues that Putnam's formulation of the brain-in-a-vat scenario is too narrow to refute global skepticism. The possibility that one is a recently disembodied brain in a vat is not undermined by semantic externalism. If a person has lived her entire life outside the vat—speaking the English language and interacting normally with the outside world—prior to her "envatment" by a mad scientist, when she wakes up inside the vat, her words and thoughts (e.g., "tree" and "grass") will still refer to the objects or events in the external world that they referred to before her envatment. Metaphilosophy and ontology. In the late 1970s and the 1980s, stimulated by results from mathematical logic and by some of Quine's ideas, Putnam abandoned his long-standing defense of metaphysical realism—the view that the categories and structures of the external world are both causally and ontologically independent of the conceptualizations of the human mind—and adopted a rather different view, which he called "internal realism" or "pragmatic realism".404 Internal realism is the view that, although the world may be "causally" independent of the human mind, the world's structure—its division into kinds, individuals and categories—is a function of the human mind, and hence the world is not "ontologically" independent. The general idea is influenced by Immanuel Kant's idea of the dependence of our knowledge of the world on the categories of thought. According to Putnam, the problem with metaphysical realism is that it fails to explain the possibility of reference and truth. According to the metaphysical realist, our concepts and categories refer because they match up in some mysterious manner with the categories, kinds and individuals inherent in the external world. But how is it possible that the world "carves up" into certain structures and categories, the mind carves up the world into its own categories and structures, and the two carvings perfectly coincide? The answer must be that the world does not come pre-structured but that the human mind and its conceptual schemes impose structure on it. In "Reason, Truth, and History", Putnam identified truth with what he termed "idealized rational acceptability." The theory is that a belief is true if it would be accepted by anyone under ideal epistemic conditions.§7.1 Nelson Goodman formulated a similar notion in "Fact, Fiction and Forecast" (1956). "We have come to think of the actual as one among many possible worlds. We need to repaint that picture. All possible worlds lie within the actual one", Goodman wrote. Putnam rejected this form of social constructivism, but retained the idea that there can be many correct descriptions of reality. None of these descriptions can be scientifically proven to be the "one, true" description of the world. 
He thus accepted "conceptual relativity"—the view that it may be a matter of choice or convention, e.g., whether mereological sums exist, or whether spacetime points are individuals or mere limits. Curtis Brown has criticized Putnam's internal realism as a disguised form of subjective idealism, in which case it is subject to the traditional arguments against that position. In particular, it falls into the trap of solipsism. That is, if existence depends on experience, as subjective idealism maintains, and if one's consciousness ceased to exist, then the rest of the universe would also cease to exist. In his reply to Simon Blackburn in the volume "Reading Putnam", Putnam renounced internal realism because it assumed a "cognitive interface" model of the relation between the mind and the world. Under the increasing influence of William James and the pragmatists, he adopted a direct realist view of this relation. Although he abandoned internal realism, Putnam still resisted the idea that any given thing or system of things can be described in exactly one complete and correct way. He came to accept metaphysical realism in a broader sense, rejecting all forms of verificationism and all talk of our "making" the world. In the philosophy of perception, Putnam came to endorse direct realism, according to which perceptual experiences directly present one with the external world. He once further held that there are no mental representations, sense data, or other intermediaries between the mind and the world. By 2012, however, he rejected this commitment in favor of "transactionalism", a view that accepts both that perceptual experiences are world-involving transactions, and that these transactions are functionally describable (provided that worldly items and intentional states may be referred to in the specification of the function). Such transactions can further involve qualia. Quantum mechanics. During his career, Putnam espoused various positions on the interpretation of quantum mechanics. In the 1960s and 1970s, he contributed to the quantum logic tradition, holding that the way to resolve quantum theory's apparent paradoxes is to modify the logical rules by which propositions' truth values are deduced. Putnam's first foray into this topic was "A Philosopher Looks at Quantum Mechanics" in 1965, followed by his 1969 essay "Is Logic Empirical?". He advanced different versions of quantum logic over the years, and eventually turned away from it in the 1990s, due to critiques by Nancy Cartwright, Michael Redhead, and others. In 2005, he wrote that he rejected the many-worlds interpretation because he could see no way for it to yield meaningful probabilities. He found both de Broglie–Bohm theory and the spontaneous collapse theory of Ghirardi, Rimini, and Weber to be promising, yet also dissatisfying, since it was not clear that either could be made fully consistent with special relativity's symmetry requirements. Neopragmatism and Wittgenstein. In the mid-1970s, Putnam became increasingly disillusioned with what he perceived as modern analytic philosophy's "scientism" and focus on metaphysics over ethics and everyday concerns. 
He also became convinced by his readings of James and John Dewey that there is no fact–value dichotomy; that is, normative (e.g., ethical and aesthetic) judgments often have a factual basis, while scientific judgments have a normative element.240 For a time, under Ludwig Wittgenstein's influence, Putnam adopted a pluralist view of philosophy itself and came to view most philosophical problems as no more than conceptual or linguistic confusions philosophers created by using ordinary language out of context. A book of articles on pragmatism by Ruth Anna Putnam and Hilary Putnam, "Pragmatism as a Way of Life: The Lasting Legacy of William James and John Dewey", edited by David Macarthur, was published in 2017. Many of Putnam's last works addressed the concerns of ordinary people, particularly social problems. For example, he wrote about the nature of democracy, social justice and religion. He also discussed Jürgen Habermas's ideas, and wrote articles influenced by continental philosophy. Works. Select papers, book chapters and essays. An exhaustive bibliography of Putnam's writings, compiled by John R. Shook, can be found in "The Philosophy Of Hilary Putnam" (2015). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n>2" }, { "math_id": 1, "text": "x^n+y^n=z^n" }, { "math_id": 2, "text": "\\beta_0" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": "L_\\beta" } ]
https://en.wikipedia.org/wiki?curid=75233
75237428
Hashrate
Measure of the speed of blockchain miners Proof-of-work distributed computing schemes, including Bitcoin, frequently use repeated cryptographic hashing as their proof-of-work algorithm. Hashrate is a measure of the total computational power of all participating nodes expressed in units of hash calculations per second. The hash/second unit is small, so multiples are usually used; for large networks the preferred unit is the terahash (1 trillion hashes). For example, in 2023 the Bitcoin hashrate was about 300,000,000 terahashes per second (that is, 300 exahashes, or formula_0 hash calculations every second). Impact on network security. A higher hashrate signifies a stronger and more secure blockchain network. Increased computational power dedicated to mining operations acts as a defense mechanism, making it more challenging for malicious entities to disrupt network operations. It serves as a barrier against potential attacks, particularly the significant concern of a 51% attack. Mining difficulty. Mining difficulty, intrinsically connected to hashrate, indicates the challenge miners face in producing a hash lower than the target hash. It is purposefully designed to adjust periodically, ensuring a consistent addition of blocks to the blockchain. Hashrate and miner participation. An increase in the number of miners results in a higher hashrate. This surge is often driven by the attractiveness of potential returns when demand for cryptocurrencies, such as Bitcoin or Ethereum, rises. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
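As a back-of-the-envelope illustration of the unit conversion above, the following minimal sketch (Python) is not part of the article's sources; the difficulty value is hypothetical, and the hashes-per-block rule of thumb (difficulty times 2^32) is an assumption of the sketch rather than a statement from the article.

# Illustrative only: unit conversion and a rough block-time estimate.
network_hashrate = 300_000_000 * 1e12   # 300,000,000 TH/s, i.e. 3e20 hashes per second
print(network_hashrate)                  # 3e+20

# Assumption of this sketch: expected hashes per Bitcoin block is about difficulty * 2**32.
difficulty = 42e12                       # hypothetical network difficulty
expected_hashes = difficulty * 2**32
expected_seconds = expected_hashes / network_hashrate
print(expected_seconds / 60)             # roughly 10 minutes, the Bitcoin block target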
[ { "math_id": 0, "text": "3 \\cdot {10}^{20}" } ]
https://en.wikipedia.org/wiki?curid=75237428
7523925
Conjugate (square roots)
Change of the sign of a square root In mathematics, the conjugate of an expression of the form formula_0 is formula_1 provided that formula_2 does not appear in a and b. One says also that the two expressions are conjugate. In particular, the two solutions of a quadratic equation are conjugate, as per the formula_3 in the quadratic formula formula_4. Complex conjugation is the special case where the square root is formula_5 the imaginary unit. Properties. As formula_6 and formula_7 the sum and the product of conjugate expressions do not involve the square root anymore. This property is used for removing a square root from a denominator, by multiplying the numerator and the denominator of a fraction by the conjugate of the denominator (see Rationalisation). An example of this usage is: formula_8 Hence: formula_9 A corollary property is that the subtraction: formula_10 leaves only a term containing the root.
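A quick numerical check of the rationalisation identity above, as a minimal sketch (Python); the particular values of a, b and d are arbitrary choices for illustration.

from math import sqrt, isclose

# Rationalising 1/(a + b*sqrt(d)) as (a - b*sqrt(d)) / (a**2 - d*b**2)
a, b, d = 3.0, 2.0, 5.0
direct = 1.0 / (a + b * sqrt(d))
rationalised = (a - b * sqrt(d)) / (a**2 - d * b**2)
print(isclose(direct, rationalised))  # True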
[ { "math_id": 0, "text": "a + b \\sqrt d" }, { "math_id": 1, "text": "a - b \\sqrt d," }, { "math_id": 2, "text": "\\sqrt d" }, { "math_id": 3, "text": "\\pm" }, { "math_id": 4, "text": "x = \\frac{-b \\pm \\sqrt{b^2 - 4ac} }{2a}" }, { "math_id": 5, "text": "i = \\sqrt{-1}," }, { "math_id": 6, "text": "(a + b \\sqrt d)(a - b \\sqrt d) = a^2 - b^2 d" }, { "math_id": 7, "text": "(a + b \\sqrt d) + (a - b \\sqrt d) = 2a," }, { "math_id": 8, "text": "\\frac{a + b \\sqrt d}{x + y\\sqrt d} = \\frac{(a + b \\sqrt d)(x - y \\sqrt d)}{(x + y \\sqrt d)(x - y \\sqrt d)} \n= \\frac{ax - dby + (xb - ay) \\sqrt d}{x^2 - y^2 d}." }, { "math_id": 9, "text": "\\frac{1}{a + b \\sqrt d} = \\frac{a - b \\sqrt d}{a^2 - db^2}." }, { "math_id": 10, "text": "(a+b\\sqrt d) - (a-b\\sqrt d)= 2b\\sqrt d," } ]
https://en.wikipedia.org/wiki?curid=7523925
75245734
Shai Haran
Israeli mathematician and professor Shai Haran (born 1958) is an Israeli mathematician and professor at the Technion – Israel Institute of Technology. He is known for his work in p-adic analysis, p-adic quantum mechanics, and non-additive geometry, including the field with one element, in relation to strategies for proving the Riemann Hypothesis. Life. Born in Jerusalem on October 8, 1958, Haran graduated from the Hebrew University in 1979, and, in 1983, received his PhD in mathematics from the Massachusetts Institute of Technology (MIT) on "p-Adic L-functions for Elliptic Curves over CM Fields" under his advisor Barry Mazur from Harvard University, and his mentors Michael Artin and Daniel Quillen from MIT. Haran is a professor at the Technion – Israel Institute of Technology. He was a frequent visitor at Stanford University, MIT, Harvard and Columbia University, the Institut des Hautes Études Scientifiques, Max-Planck Institute, Kyushu University and the Tokyo Institute of Technology, among other institutions. Work. His early work was in the construction of p-adic L-functions for modular forms on GL(2) over any number field. He gave a formula for the explicit sums of arithmetic functions expressing in a uniform way the contribution of a prime, finite or real, as the derivative at formula_0 of the Riesz potential of order formula_1. This formula is one of the inspirations for the non-commutative geometry approach to the Riemann Hypothesis of Alain Connes. He then developed potential theory and quantum mechanics over the p-adic numbers, and is currently an editor of the journal "p-Adic Numbers, Ultrametric Analysis and Applications". Haran also studied the tree structure of the p-adic integers within the real and complex numbers and showed that it is given by the theory of classic orthogonal polynomials. He constructed Markov chains over the p-adic, real, and complex numbers, giving finite approximations to the harmonic beta measure. In particular, he showed that there is a q-analogue theory that interpolates between the p-adic theory and the real and complex theory. With his students Uri Onn and Uri Badder, he developed the higher rank theory for GL(n). His recent work is focused on the development of mathematical foundations for non-additive geometry, a geometric theory that is not based on commutative rings. In this theory, the field with one element formula_2 is defined as the category of finite sets with partial bijections, or equivalently, of finite pointed sets with maps that preserve the distinguished points. The non-additive geometry is then developed using two languages, formula_3 and "generalized rings", to replace commutative rings in usual algebraic geometry. In this theory, it is possible to consider the compactification of the spectrum of formula_4 and a model for the arithmetic plane that does not reduce to the diagonal formula_4. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha=0" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\mathbb{F}" }, { "math_id": 3, "text": "\\mathbb{F}-\\mathcal{R}\\text{ings}" }, { "math_id": 4, "text": "\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=75245734
75247706
Random utility model
In economics, a random utility model (RUM), also called a stochastic utility model, is a mathematical description of the preferences of a person, whose choices are not deterministic, but depend on a random state variable. Background. A basic assumption in classic economics is that the choices of a rational person are guided by a preference relation, which can usually be described by a utility function. When faced with several alternatives, the person will choose the alternative with the highest utility. The utility function is not visible; however, by observing the choices made by the person, we can "reverse-engineer" his utility function. This is the goal of revealed preference theory. In practice, however, people are not rational. Ample empirical evidence shows that, when faced with the same set of alternatives, people may make different choices. To an outside observer, their choices may appear random. One way to model this behavior is called stochastic rationality. It is assumed that each agent has an unobserved "state", which can be considered a random variable. Given that state, the agent behaves rationally. In other words: each agent has, not a single preference-relation, but a "distribution" over preference-relations (or utility functions). The representation problem. Block and Marschak presented the following problem. Suppose we are given, as input, a set of "choice probabilities" "Pa,B", describing the probability that an agent chooses alternative "a" from the set "B". We want to "rationalize" the agent's behavior by a probability distribution over preference relations. That is: we want to find a distribution such that, for all pairs "a,B" given in the input, "Pa,B" = Prob[a is weakly preferred to all alternatives in B]. What conditions on the set of probabilities "Pa,B" guarantee the existence of such a distribution? Falmagne solved this problem for the case in which the set of alternatives is finite: he proved that a probability distribution exists iff a set of polynomials derived from the choice-probabilities, denoted "Block-Marschak polynomials," are nonnegative. His solution is constructive, and provides an algorithm for computing the distribution. Barbera and Pattanaik extend this result to settings in which the agent may choose sets of alternatives, rather than just singletons. Uniqueness. Block and Marschak proved that, when there are at most 3 alternatives, the random utility model is unique ("identified"); however, when there are 4 or more alternatives, the model may be non-unique. For example, we can compute the probability that the agent prefers w to x (w&gt;x), and the probability that y&gt;z, but may not be able to know the probability that both w&gt;x and y&gt;z. There are even distributions with disjoint supports, which induce the same set of choice probabilities. Some conditions for uniqueness were given by Falmagne. Turansick presents two characterizations for the existence of a unique random utility representation. Models. There are various RUMs, which differ in the assumptions on the probability distributions of the agent's utility. A popular RUM was developed by Luce and Plackett. The Plackett-Luce model was applied in econometrics, for example, to analyze automobile prices in market equilibrium. It was also applied in machine learning and information retrieval. It was also applied in social choice, to analyze an opinion poll conducted during the Irish presidential election. 
Efficient methods for expectation-maximization and expectation propagation exist for the Plackett-Luce model. Application to social choice. RUMs can be used not only for modeling the behavior of a single agent, but also for decision-making among a society of agents. One approach to social choice, first formalized by Condorcet's jury theorem, is that there is a "ground truth" - a true ranking of the alternatives. Each agent in society receives a noisy signal of this true ranking. The best way to approach the ground truth is using maximum likelihood estimation: construct a social ranking which maximizes the likelihood of the set of individual rankings. Condorcet's original model assumes that the probabilities of agents' mistakes in pairwise comparisons are independent and identically distributed: all mistakes have the same probability "p". This model has several drawbacks: it ignores the strength of the agents' preferences, and it allows the derived collective preferences to be cyclic. RUM provides an alternative model: there is a ground-truth vector of utilities; each agent draws a utility for each alternative, based on a probability distribution whose mean value is the ground-truth. This model captures the strength of preferences, and rules out cyclic preferences. Moreover, for some common probability distributions (particularly, the Plackett-Luce model), the maximum likelihood estimators can be computed efficiently. Generalizations. Walker and Ben-Akiva generalize the classic RUM in several ways, aiming to improve the accuracy of forecasts. Blavatzkyy studies stochastic utility theory based on choices between lotteries. The input is a set of "choice probabilities", which indicate the likelihood that the agent chooses one lottery over the other.
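To make the Plackett-Luce model mentioned above concrete, the following is a minimal sketch (Python), not taken from the cited literature; the item scores and function names are illustrative assumptions. Under this model, the probability of ranking a first, then b, then c with scores 3, 2 and 1 is (3/6)*(2/3)*(1/1) = 1/2, which is what the first call prints.

import random

def ranking_probability(ranking, scores):
    # Probability of observing `ranking` (best to worst) under the
    # Plackett-Luce model with positive item scores.
    remaining = sum(scores[i] for i in ranking)
    p = 1.0
    for item in ranking:
        p *= scores[item] / remaining
        remaining -= scores[item]
    return p

def sample_ranking(scores):
    # Sample a ranking by repeatedly choosing among the remaining items
    # with probability proportional to their scores.
    items = list(scores)
    ranking = []
    while items:
        weights = [scores[i] for i in items]
        choice = random.choices(items, weights=weights)[0]
        ranking.append(choice)
        items.remove(choice)
    return ranking

scores = {"a": 3.0, "b": 2.0, "c": 1.0}
print(ranking_probability(["a", "b", "c"], scores))  # 0.5
print(sample_ranking(scores))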
[ { "math_id": 0, "text": "\\Theta^P_2" } ]
https://en.wikipedia.org/wiki?curid=75247706
75252094
Flip distance
Flip distance in triangulations In discrete mathematics and theoretical computer science, the flip distance between two triangulations of the same point set is the number of flips required to transform one triangulation into another. A flip removes an edge between two triangles in the triangulation and then adds the other diagonal in the edge's enclosing quadrilateral, forming a different triangulation of the same point set. This problem is known to be NP-hard. However, the computational complexity of determining the flip distance between triangulations of a convex polygon, a special case of this problem, is unknown. Computing the flip distance between convex polygon triangulations is also equivalent to rotation distance, the number of rotations required to transform one binary tree into another. Definition. Given a family of triangulations of some geometric object, a "flip" is an operation that transforms one triangulation to another by removing an edge between two triangles and adding the opposite diagonal to the resulting quadrilateral. The flip distance between two triangulations is the minimum number of flips needed to transform one triangulation into another. It can also be described as the shortest path distance in a "flip graph", a graph that has a vertex for each triangulation and an edge for each flip between two triangulations. Flips and flip distances can be defined in this way for several different kinds of triangulations, including triangulations of sets of points in the Euclidean plane, triangulations of polygons, and triangulations of abstract manifolds. Feasibility. The flip distance is well-defined only if any triangulation can be converted to any other triangulation via a sequence of flips. An equivalent condition is that the flip graph must be connected. In 1936, Klaus Wagner showed that any maximal planar graph on a sphere can be transformed into any other maximal planar graph with the same vertices through flips. A. K. Dewdney generalized this result to triangulations on the surface of a torus, while Charles Lawson did the same for triangulations of a point set in the 2-dimensional plane. For triangulations of a point set in dimension 5 or above, there exist examples where the flip graph is disconnected and a triangulation cannot be obtained from other triangulations via flips. Whether all flip graphs of finite 3- or 4-dimensional point sets are connected is an open problem. Diameter of the flip graph. The maximum number of flips required to transform a triangulation into another is the diameter of the flip graph. The diameter of the flip graph of a convex formula_0-gon has been obtained by Daniel Sleator, Robert Tarjan, and William Thurston when formula_0 is sufficiently large and by Lionel Pournin for all formula_0. This diameter is equal to formula_1 when formula_2. The diameter of other flip graphs has been studied. For instance, Klaus Wagner provided a quadratic upper bound on the diameter of the flip graph of a set of formula_0 unmarked points on the sphere. The current upper bound on the diameter is formula_3, while the best-known lower bound is formula_4. The diameters of the flip graphs of arbitrary topological surfaces with boundary have also been studied, and their exact values are known in several cases. Equivalence with other problems. The flip distance between triangulations of a convex polygon is equivalent to the rotation distance between two binary trees. Computational complexity. 
&lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: What is the complexity of computing the flip distance between two triangulations of a convex polygon? Computing the flip distance between triangulations of a point set is both NP-complete and APX-hard. However, it is fixed-parameter tractable (FPT), and several FPT algorithms that run in exponential time have been proposed. Computing the flip distance between triangulations of a simple polygon is also NP-hard. The complexity of computing the flip distance between triangulations of a convex polygon remains an open problem. Algorithms. Let n be the number of points in the point set and k be the flip distance. The current best FPT algorithm runs in formula_5. A faster FPT algorithm exists for the flip distance between convex polygon triangulations; it has time complexity formula_6 If no five points of a point set form an empty pentagon, there exists a formula_7 algorithm for the flip distance between triangulations of this point set. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
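The equivalence with rotation distance noted above can be made concrete by a brute-force search over the rotation graph. The following minimal sketch (Python) is illustrative only and is not an algorithm from the cited works: binary trees are encoded as nested pairs with None for leaves, the function names are ours, and the breadth-first search is exponential, so it is usable only for very small trees. The two example trees are the left and right "combs" with three internal nodes, corresponding to the two fan triangulations of a convex pentagon; their rotation (flip) distance is 2.

from collections import deque

def rotations(tree):
    # Yield all trees reachable from `tree` by a single rotation.
    # A tree is either a leaf (None) or a pair (left, right).
    if tree is None:
        return
    left, right = tree
    # Rotations at the root: a right rotation needs an internal left child,
    # a left rotation needs an internal right child.
    if left is not None:
        ll, lr = left
        yield (ll, (lr, right))
    if right is not None:
        rl, rr = right
        yield ((left, rl), rr)
    # Rotations inside the left or right subtree.
    for sub in rotations(left):
        yield (sub, right)
    for sub in rotations(right):
        yield (left, sub)

def rotation_distance(t1, t2):
    # Breadth-first search over the rotation (flip) graph.
    if t1 == t2:
        return 0
    seen = {t1}
    queue = deque([(t1, 0)])
    while queue:
        tree, d = queue.popleft()
        for nxt in rotations(tree):
            if nxt == t2:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # only happens if the two trees have different sizes

a = (((None, None), None), None)   # left comb
b = (None, (None, (None, None)))   # right comb
print(rotation_distance(a, b))     # prints 2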
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "2n-10" }, { "math_id": 2, "text": "n\\geq13" }, { "math_id": 3, "text": "5.2n - 33.6" }, { "math_id": 4, "text": "7n/3+\\Theta(1)" }, { "math_id": 5, "text": "O(n + k \\cdot 32^k)" }, { "math_id": 6, "text": "O(3.82^k)" }, { "math_id": 7, "text": "O(n^2)" } ]
https://en.wikipedia.org/wiki?curid=75252094
7525735
Spread option
Financial derivatives trading strategy In finance, a spread option is a type of option where the payoff is based on the difference in price between two underlying assets. For example, the two assets could be crude oil and heating oil; trading such an option might be of interest to oil refineries, whose profits are a function of the difference between these two prices. Spread options are generally traded over the counter, rather than on exchange. A 'spread option' is not the same as an 'option spread'. A spread option is a new, relatively rare type of exotic option on two underlyings, while an option spread is a combination trade: the purchase of one (vanilla) option and the sale of another option on the same underlying. Spread option valuation. For a spread call, the payoff can be written as formula_0 where S1 and S2 are the prices of the two assets and K is a constant called the strike price. For a spread put it is formula_1. When K equals zero a spread option is the same as an option to exchange one asset for another. An explicit solution, Margrabe's formula, is available in this case, and this type of option is also known as a Margrabe option or an outperformance option. In 1995, Kirk's approximation, a formula valid when K is small but non-zero, was published. This amounts to a modification of the standard Black–Scholes formula, with a special expression for the sigma (volatility) to be used, which is based on the volatilities and the correlation of the two assets. Kirk's approximation can also be derived explicitly from Margrabe's formula. The same year Pearson published an algorithm requiring a one-dimensional numerical integration to compute the option value. Choi (2018) showed that, with an appropriate rotation of the domain and Gauss–Hermite quadrature, the numerical integration can be carried out very efficiently. Li, Deng, and Zhou (2006) published accurate approximation formulas for both spread option prices and their Greeks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
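As an illustration of the modified Black–Scholes form described above, here is a minimal sketch (Python) of Kirk's approximation in the form in which it is commonly quoted; the function and variable names are ours, F1 and F2 denote the forward prices of the two assets, and the example inputs are arbitrary. When K = 0 the effective volatility reduces to that used in Margrabe's formula.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
    # Approximate price of a spread call paying max(0, S1 - S2 - K),
    # using Kirk's approximation with forward prices F1 and F2.
    a = F2 / (F2 + K)                       # weight of the second asset
    sigma = sqrt(sigma1**2 - 2*rho*sigma1*sigma2*a + (sigma2*a)**2)
    d1 = (log(F1 / (F2 + K)) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return exp(-r * T) * (F1 * norm_cdf(d1) - (F2 + K) * norm_cdf(d2))

# Example: a spread between two correlated forwards with a small strike
print(kirk_spread_call(F1=110.0, F2=100.0, K=5.0,
                       sigma1=0.3, sigma2=0.25, rho=0.8, T=1.0, r=0.02))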
[ { "math_id": 0, "text": "C = \\max(0,S_1-S_2-K)" }, { "math_id": 1, "text": "P = \\max(0,K-S_1+S_2)" } ]
https://en.wikipedia.org/wiki?curid=7525735
75271665
L-H mode transition
Low to High Confinement Mode Transition, more commonly referred to as L-H transition, is a phenomenon in the fields of plasma physics and magnetic confinement fusion, signifying the transition from less efficient plasma confinement to highly efficient modes. The L-H transition, a milestone in the development of nuclear fusion, enables the confinement of high-temperature plasmas (ionized gases at extremely high temperatures). The transition is dependent on many factors such as density, magnetic field strength, heating method, plasma fueling, and edge plasma control, and is made possible through mechanisms such as edge turbulence, E×B shear, edge electric field, and edge current and plasma flow. Researchers studying this field use tools such as Electron Cyclotron Emission, Thomson Scattering, magnetic diagnostics, and Langmuir probes to gauge the PLH (the heating power needed for the transition) and seek to lower this value. This confinement is a necessary condition for sustaining the fusion reactions, which involve the combination of atomic nuclei, leading to the release of vast amounts of energy. Background. Key terms and concepts needed to comprehend the L-H transition include plasma and fusion. Plasma. Plasma is one of the four fundamental states of matter, alongside solid, liquid, and gas. In contrast to the other states, plasma is composed of ionized gas in which electrons have been separated from atoms or molecules, creating an electrically conductive medium. It occurs in phenomena like lightning, stars, and fusion plasmas. Fusion. Fusion is a nuclear process in which two atomic nuclei combine to form a single heavier nucleus. This phenomenon releases a substantial amount of energy and is the process that powers stars. On Earth, controlled nuclear fusion is being pursued as a clean and virtually limitless energy source. It involves the fusion of isotopes like deuterium (a hydrogen nucleus with one neutron) and tritium (a hydrogen nucleus with two neutrons), and generates energy in the form of kinetic energy (energy of motion) of released particles, such as neutrons, and intense heat. The principle is based on Einstein's equation E=mc^2: the resulting helium is marginally lighter than the two original hydrogen nuclei, and this difference in mass, known as the mass defect, is converted into energy. It is this energy that can be converted into clean electricity without producing waste. Overview of Confinement Modes. Plasmas in L-Mode and H-Mode exhibit distinct characteristics related to turbulence, control, power thresholds, energy efficiency, and confinement durations. PLH (H-Mode Power Threshold). PLH. PLH (H-mode power threshold) is an essential parameter in nuclear fusion. It represents the minimum power input required to trigger the transition from a low-confinement mode (L-Mode) to a high-confinement mode (H-Mode) in plasma confinement devices, such as tokamaks or stellarators. The PLH signifies the point at which the plasma attains the conditions necessary for enhanced energy confinement, reduced turbulence, and improved stability characteristic of H-Mode. Controlled nuclear fusion requires understanding and precise control of the PLH in order to facilitate the continuous generation of energy from the fusion process. Factors Influencing PLH. Plasma Density and Magnetic Field Strength. The H-Mode power threshold (PLH) in experimental controlled nuclear fusion is highly dependent on both plasma density and magnetic field strength. 
Higher plasma densities and stronger magnetic fields correlate with an elevated PLH. formula_0 Higher plasma densities result in increased particle collisions, enhancing the confinement of energy and increasing the plasma's stability. The greater the density, the higher the threshold of power (PLH) required to transition from L-Mode to H-Mode. The increased particle density allows for improved plasma confinement, which is vital for sustaining fusion reactions efficiently. Similarly, stronger magnetic fields serve to contain and shape the plasma, mitigating its loss and preventing contact with the reactor's walls, which would ultimately lead to the reaction's failure. This magnetic confinement is essential for preventing energy losses and ensuring that the plasma reaches the conditions necessary for the L-Mode to H-Mode transition. Heating Method. The heating methods used in fusion devices significantly impact the PLH. Various techniques, such as neutral beam injection (the introduction of high-energy neutral particles to raise the plasma temperature), radio frequency heating (the use of radio-frequency waves to increase the kinetic energy of particles), and magnetic confinement (the use of magnetic fields to control extremely hot plasma), are employed to heat the plasma to the required temperatures for H-Mode. The choice of heating method and the effectiveness of energy transfer to the plasma are key factors in determining the PLH. Plasma Fueling. Plasma fueling, which involves introducing additional fuel into the plasma, is another factor influencing the PLH. By injecting fuel, researchers can alter the plasma's density and temperature. An efficient and well-calibrated fueling system can elevate the plasma density, increasing the number of particles within the plasma, which is essential for enhancing confinement and stability. Additionally, effective fueling contributes to the rise in plasma temperature, a vital factor in achieving the conditions required for the L-Mode to H-Mode transition. Edge Plasma Control. Edge plasma control is an important aspect of achieving and maintaining H-Mode in fusion devices. The edge plasma region, located at the outer boundary of the plasma confinement area, is susceptible to instabilities and turbulence. The edge plasma is sensitive to disturbances because it is close to the magnetic confinement boundaries, where the plasma interacts with the walls of the containment vessel. These disturbances can lead to issues like uneven heat and particle movement or localized turbulence, which affect the transition to H-Mode. To tackle this, techniques such as magnetic shaping and advanced control tools can be used to stabilize the edge plasma. The aim is to reduce these disturbances and make the edge plasma more stable. By regulating factors such as temperature, density, and impurities in the edge plasma, researchers can influence the PLH (H-Mode Power Threshold). Effective control of these factors ensures that the conditions for transitioning from L-Mode to H-Mode are met and maintained. Methods for Measuring and Determining PLH. Electron Cyclotron Emission (ECE). Electron Cyclotron Emission (ECE) diagnostics involve observing the radiation emitted by electrons as they undergo cyclotron motion (the circular motion of charged particles around magnetic field lines) in the magnetic field. This technique provides valuable insights into plasma parameters, including electron temperature and density. 
By analyzing the emitted radiation's spectral characteristics, researchers can precisely measure these properties, aiding in the determination of PLH. Thomson Scattering. Thomson scattering employs laser beams to scatter off plasma electrons. The scattered light's characteristics show data on the velocity and temperature of these electrons, providing critical information about the plasma's thermal energy. Magnetic Diagnostics. Magnetic sensors and probes are employed to map the magnetic fields within the plasma confinement device. Knowledge of the magnetic field's strength and configuration is fundamental for determining PLH, as it directly affects plasma stability and confinement. Langmuir Probes. Langmuir probes are small electrodes inserted into the plasma to measure its properties, including electron temperature, density, and plasma potential. These measurements are critical for evaluating PLH and understanding the behavior of the plasma. Transition Mechanisms. A few key processes that make the transition between L-H transition possible and allow for the improved stability of H-mode are edge turbulence, E×B shear, edge electric field, edge current, and plasma flow. Mechanisms Driving L-H Transition. Edge Turbulence. The behavior of edge turbulence, a common feature in plasmas, is closely linked to the L-H transition. Researchers study how turbulence responds to changes in parameters like E×B shear, Er gradients, and other variables. E×B Shear. One of the mechanisms thought to be responsible for triggering the L-H transition is the phenomenon known as E×B shear stabilization of turbulence. This refers to the rotation of the plasma resulting from the interaction between the electric field (E) and the magnetic field (B). As the plasma approaches the transition point, the E×B shear increases, creating a shearing (moving in a way that opposes the turbulent transport of particles, heat, and energy) motion within the plasma. This shearing motion suppresses turbulent transport (turbulent structures, such as eddies and vortices, within the plasma), promoting stability and improved confinement characteristic of H-mode. Edge Electric Field (Er). The behavior of the plasma at its edge, specifically the edge electric field (Er), plays a role in the L-H transition. As the transition approaches, there is the emergence of increasingly steep Er gradients near the plasma's edge. These gradient changes are closely associated with the suppression of turbulent transport, which refers to the erratic movement of particles and heat within the plasma. This suppression marks the shift to the H-mode, a state of plasma confinement that is significantly more efficient and stable, making it a key goal in nuclear fusion research. Edge Current and Plasma Flow. The L-H transition's characteristics are further influenced by edge current and the toroidal flow of plasma. The complex interactions between these two elements can introduce variability in the threshold conditions for the transition to the more efficient H-mode. Future Implications. L-H transition in nuclear fusion, if understood and used correctly, has the potential for clean energy and sustainable power plants. Importance of Understanding L-H Transition in Nuclear Fusion. Enhanced Confinement. The transition to H-Mode brings about an improvement in plasma confinement. This leads to increased energy production and more efficient fusion reactions. Pedestal Formation. H-Mode is associated with the development of a "pedestal" in the plasma profile. 
This pedestal acts as a protective barrier, preventing the plasma from contacting the reactor walls. The pedestal enhances stability and enables the plasma to reach the conditions necessary for sustained fusion reactions. PLH Optimization. Achieving and maintaining H-Mode requires reaching the PLH (H-Mode Power Threshold). Understanding the factors that influence PLH, such as plasma density, magnetic field strength, heating methods, and edge plasma control, is essential for ensuring a smooth transition and sustained H-Mode operation. Future Energy Solutions. Controlled nuclear fusion has the potential to revolutionize the energy sector. It offers a clean and virtually limitless energy source, significantly reducing greenhouse gas emissions and addressing energy demands. The L-H transition is a critical step towards harnessing the immense energy release of fusion reactions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tau=(n*V)/(2*B)" } ]
https://en.wikipedia.org/wiki?curid=75271665
7527818
Proofs of trigonometric identities
Collection of proofs of equations involving trigonometric functions There are several equivalent ways for defining trigonometric functions, and the proofs of the trigonometric identities between them depend on the chosen definition. The oldest and most elementary definitions are based on the geometry of right triangles and the ratio between their sides. The proofs given in this article use these definitions, and thus apply to non-negative angles not greater than a right angle. For greater and negative angles, see Trigonometric functions. Other definitions, and therefore other proofs are based on the Taylor series of sine and cosine, or on the differential equation formula_0 to which they are solutions. Elementary trigonometric identities. Definitions. The six trigonometric functions are defined for every real number, except, for some of them, for angles that differ from 0 by a multiple of the right angle (90°). Referring to the diagram at the right, the six trigonometric functions of θ are, for angles smaller than the right angle: formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 Ratio identities. In the case of angles smaller than a right angle, the following identities are direct consequences of above definitions through the division identity formula_7 They remain valid for angles greater than 90° and for negative angles. formula_8 formula_9 formula_10 formula_11 formula_12 Or formula_13 formula_14 Complementary angle identities. Two angles whose sum is π/2 radians (90 degrees) are "complementary". In the diagram, the angles at vertices A and B are complementary, so we can exchange a and b, and change θ to π/2 − θ, obtaining: formula_15 formula_16 formula_17 formula_18 formula_19 formula_20 Pythagorean identities. Identity 1: formula_21 The following two results follow from this and the ratio identities. To obtain the first, divide both sides of formula_21 by formula_22; for the second, divide by formula_23. formula_24 formula_25 Similarly formula_26 formula_27 Identity 2: The following accounts for all three reciprocal functions. formula_28 Proof 2: Refer to the triangle diagram above. Note that formula_29 by Pythagorean theorem. formula_30 Substituting with appropriate functions - formula_31 Rearranging gives: formula_28 Angle sum identities. Sine. Draw a horizontal line (the "x"-axis); mark an origin O. Draw a line from O at an angle formula_32 above the horizontal line and a second line at an angle formula_33 above that; the angle between the second line and the "x"-axis is formula_34. Place P on the line defined by formula_34 at a unit distance from the origin. Let PQ be a line perpendicular to line OQ defined by angle formula_32, drawn from point Q on this line to point P. formula_35 OQP is a right angle. Let QA be a perpendicular from point A on the "x"-axis to Q and PB be a perpendicular from point B on the "x"-axis to P. formula_35 OAQ and OBP are right angles. Draw R on PB so that QR is parallel to the "x"-axis. Now angle formula_36 (because formula_37, making formula_38, and finally formula_36) formula_39 formula_40 formula_41 formula_42 formula_43, so formula_44 formula_45, so formula_46 formula_47 By substituting formula_48 for formula_33 and using the reflection identities of even and odd functions, we also get: formula_49 formula_50 Cosine. 
Using the figure above, formula_40 formula_41 formula_42 formula_51, so formula_52 formula_53, so formula_54 formula_55 By substituting formula_48 for formula_33 and using the reflection identities of even and odd functions, we also get: formula_56 formula_57 Also, using the complementary angle formulae, formula_58 Tangent and cotangent. From the sine and cosine formulae, we get formula_59 Dividing both numerator and denominator by formula_60, we get formula_61 Subtracting formula_62 from formula_63, using formula_64, formula_65 Similarly, from the sine and cosine formulae, we get formula_66 Then by dividing both numerator and denominator by formula_67, we get formula_68 Or, using formula_69, formula_70 Using formula_71, formula_72 Double-angle identities. From the angle sum identities, we get formula_73 and formula_74 The Pythagorean identities give the two alternative forms for the latter of these: formula_75 formula_76 The angle sum identities also give formula_77 formula_78 It can also be proved using Euler's formula formula_79 Squaring both sides yields formula_80 But replacing the angle with its doubled version, which achieves the same result in the left side of the equation, yields formula_81 It follows that formula_82. Expanding the square and simplifying on the left hand side of the equation gives formula_83. Because the imaginary and real parts have to be the same, we are left with the original identities formula_84, and also formula_85. Half-angle identities. The two identities giving the alternative forms for cos 2θ lead to the following equations: formula_86 formula_87 The sign of the square root needs to be chosen properly—note that if 2π is added to θ, the quantities inside the square roots are unchanged, but the left-hand-sides of the equations change sign. Therefore, the correct sign to use depends on the value of θ. For the tan function, the equation is: formula_88 Then multiplying the numerator and denominator inside the square root by (1 + cos θ) and using Pythagorean identities leads to: formula_89 Also, if the numerator and denominator are both multiplied by (1 - cos θ), the result is: formula_90 This also gives: formula_91 Similar manipulations for the cot function give: formula_92 Miscellaneous – the triple tangent identity. If formula_93 half circle (for example, formula_94, formula_95 and formula_96 are the angles of a triangle), formula_97 Proof: formula_98 Miscellaneous – the triple cotangent identity. If formula_99 quarter circle, formula_100. Proof: Replace each of formula_101, formula_102, and formula_103 with their complementary angles, so cotangents turn into tangents and vice versa. Given formula_104 formula_105 so the result follows from the triple tangent identity. Sum to product identities. Proof of sine identities. First, start with the sum-angle identities: formula_109 formula_50 By adding these together, formula_110 Similarly, by subtracting the two sum-angle identities, formula_111 Let formula_112 and formula_113, formula_114 and formula_115 Substitute formula_95 and formula_96 formula_116 formula_117 Therefore, formula_118 Proof of cosine identities. Similarly for cosine, start with the sum-angle identities: formula_119 formula_57 Again, by adding and subtracting formula_120 formula_121 Substitute formula_95 and formula_96 as before, formula_122 formula_123 Inequalities. The figure at the right shows a sector of a circle with radius 1. The sector is "θ"/(2π) of the whole circle, so its area is "θ"/2. We assume here that "θ" &lt; π/2. 
formula_124 formula_125 formula_126 The area of triangle "OAD" is "AB"/2, or sin("θ")/2. The area of triangle "OCD" is "CD"/2, or tan("θ")/2. Since triangle "OAD" lies completely inside the sector, which in turn lies completely inside triangle "OCD", we have formula_127 This geometric argument relies on definitions of arc length and area, which act as assumptions, so it is a condition imposed in the construction of the trigonometric functions rather than a provable property. For the sine function, we can handle other values. If "θ" &gt; π/2, then "θ" &gt; 1. But sin "θ" ≤ 1 (because of the Pythagorean identity), so sin "θ" &lt; "θ". So we have formula_128 For negative values of "θ" we have, by the symmetry of the sine function formula_129 Hence formula_130 and formula_131 formula_132 formula_133 formula_134 Identities involving calculus. Sine and angle ratio identity. The ratio sin("θ")/"θ" tends to 1 as "θ" tends to 0. In other words, the function sine is differentiable at 0, and its derivative is 1. Proof: From the previous inequalities, we have, for small angles formula_135, Therefore, formula_136, Consider the right-hand inequality. Since formula_137 formula_138 Multiply through by formula_139 formula_140 Combining with the left-hand inequality: formula_141 Taking formula_139 to the limit as formula_142 formula_133 Therefore, formula_134 formula_143 Cosine and angle ratio identity. The ratio (cos("θ") − 1)/"θ" tends to 0 as "θ" tends to 0. Proof: formula_144 The limits of those three quantities are 1, 0, and 1/2, so the resultant limit is zero. formula_145 Cosine and square of angle ratio identity. The ratio (1 − cos("θ")) divided by the square of "θ" tends to 1/2 as "θ" tends to 0. Proof: As in the preceding proof, formula_146 The limits of those three quantities are 1, 1, and 1/2, so the resultant limit is 1/2. Proof of compositions of trig and inverse trig functions. All these functions follow from the Pythagorean trigonometric identity. We can prove, for instance, the identity formula_147 Proof: We start from formula_148 (I) Then we divide this equation (I) by formula_22 formula_149 (II) formula_150 Then use the substitution formula_151: formula_152 formula_153 Then we use the identity formula_154 formula_155 (III) recovering the initial Pythagorean trigonometric identity. Similarly, if we divide this equation (I) by formula_23 formula_156 (II) formula_157 Then use the substitution formula_151: formula_153 Then we use the identity formula_154 formula_155 (III) again recovering the initial Pythagorean trigonometric identity. formula_158 formula_159 formula_160 (IV) Suppose that we have to prove: formula_161 formula_162 (V) Substituting (V) into (IV): formula_163 formula_164 So the claim holds: formula_165 and the conjectured statement was true: formula_161 formula_166 Now y can be written as x, and we have arcsin expressed through arctan... formula_167 Similarly, if we seek formula_168... formula_169 formula_170 formula_171 formula_172 formula_173 From formula_174... formula_175 formula_176 And finally we have arccos expressed through arctan... formula_177 See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
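The identities derived above can also be spot-checked numerically. The following minimal sketch (Python) exercises the angle-sum, sum-to-product, and triple tangent identities at arbitrarily chosen sample angles; it is a sanity check, not a proof.

from math import sin, cos, tan, pi, isclose

alpha, beta = 0.7, 0.4

# Angle sum identities
assert isclose(sin(alpha + beta), sin(alpha)*cos(beta) + cos(alpha)*sin(beta))
assert isclose(cos(alpha + beta), cos(alpha)*cos(beta) - sin(alpha)*sin(beta))

# Sum-to-product identity for sines
theta, phi = 1.1, 0.3
assert isclose(sin(theta) + sin(phi), 2*sin((theta + phi)/2)*cos((theta - phi)/2))

# Triple tangent identity for the angles of a triangle (psi + theta + phi = pi)
psi = pi - theta - phi
assert isclose(tan(psi) + tan(theta) + tan(phi), tan(psi)*tan(theta)*tan(phi))
print("all identities check out numerically")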
[ { "math_id": 0, "text": "f''+f=0" }, { "math_id": 1, "text": " \\sin \\theta = \\frac {\\mathrm{opposite}}{\\mathrm{hypotenuse}} = \\frac {a}{h}" }, { "math_id": 2, "text": " \\cos \\theta = \\frac {\\mathrm{adjacent}}{\\mathrm{hypotenuse}} = \\frac {b}{h}" }, { "math_id": 3, "text": " \\tan \\theta = \\frac {\\mathrm{opposite}}{\\mathrm{adjacent}} = \\frac {a}{b}" }, { "math_id": 4, "text": " \\cot \\theta = \\frac {\\mathrm{adjacent}}{\\mathrm{opposite}} = \\frac {b}{a}" }, { "math_id": 5, "text": " \\sec \\theta = \\frac {\\mathrm{hypotenuse}}{\\mathrm{adjacent}} = \\frac {h}{b}" }, { "math_id": 6, "text": " \\csc \\theta = \\frac {\\mathrm{hypotenuse}}{\\mathrm{opposite}} = \\frac {h}{a}" }, { "math_id": 7, "text": " \\frac {a}{b}= \\frac {\\left(\\frac {a}{h}\\right)} {\\left(\\frac {b}{h}\\right) }." }, { "math_id": 8, "text": " \\tan \\theta\n= \\frac{\\mathrm{opposite}}{\\mathrm{adjacent}}\n= \\frac { \\left( \\frac{\\mathrm{opposite}}{\\mathrm{hypotenuse}} \\right) } { \\left( \\frac{\\mathrm{adjacent}}{\\mathrm{hypotenuse}}\\right) }\n= \\frac {\\sin \\theta} {\\cos \\theta} " }, { "math_id": 9, "text": " \\cot \\theta =\\frac{\\mathrm{adjacent}}{\\mathrm{opposite}}\n= \\frac { \\left( \\frac{\\mathrm{adjacent}}{\\mathrm{adjacent}} \\right) } { \\left( \\frac {\\mathrm{opposite}}{\\mathrm{adjacent}} \\right) }\n= \\frac {1}{\\tan \\theta} = \\frac {\\cos \\theta}{\\sin \\theta} " }, { "math_id": 10, "text": " \\sec \\theta = \\frac {1}{\\cos \\theta} = \\frac{\\mathrm{hypotenuse}}{\\mathrm{adjacent}} " }, { "math_id": 11, "text": " \\csc \\theta = \\frac {1}{\\sin \\theta} = \\frac{\\mathrm{hypotenuse}}{\\mathrm{opposite}} " }, { "math_id": 12, "text": " \\tan \\theta = \\frac{\\mathrm{opposite}}{\\mathrm{adjacent}}\n= \\frac{\\left(\\frac{\\mathrm{opposite} \\times \\mathrm{hypotenuse}}{\\mathrm{opposite} \\times \\mathrm{adjacent}} \\right) } { \\left( \\frac {\\mathrm{adjacent} \\times \\mathrm{hypotenuse}} {\\mathrm{opposite} \\times \\mathrm{adjacent} } \\right) }\n= \\frac{\\left( \\frac{\\mathrm{hypotenuse}}{\\mathrm{adjacent}} \\right)} { \\left( \\frac{\\mathrm{hypotenuse}}{\\mathrm{opposite}} \\right)}\n= \\frac {\\sec \\theta}{\\csc \\theta} " }, { "math_id": 13, "text": " \\tan \\theta = \\frac{\\sin \\theta}{\\cos \\theta}\n= \\frac{\\left( \\frac{1}{\\csc \\theta} \\right) }{\\left( \\frac{1}{\\sec \\theta} \\right) }\n= \\frac{\\left( \\frac{\\csc \\theta \\sec \\theta}{\\csc \\theta} \\right) }{\\left( \\frac{\\csc \\theta \\sec \\theta}{\\sec \\theta} \\right) }\n= \\frac{\\sec \\theta}{\\csc \\theta} " }, { "math_id": 14, "text": " \\cot \\theta = \\frac {\\csc \\theta}{\\sec \\theta}" }, { "math_id": 15, "text": " \\sin\\left( \\pi/2-\\theta\\right) = \\cos \\theta" }, { "math_id": 16, "text": " \\cos\\left( \\pi/2-\\theta\\right) = \\sin \\theta" }, { "math_id": 17, "text": " \\tan\\left( \\pi/2-\\theta\\right) = \\cot \\theta" }, { "math_id": 18, "text": " \\cot\\left( \\pi/2-\\theta\\right) = \\tan \\theta" }, { "math_id": 19, "text": " \\sec\\left( \\pi/2-\\theta\\right) = \\csc \\theta" }, { "math_id": 20, "text": " \\csc\\left( \\pi/2-\\theta\\right) = \\sec \\theta" }, { "math_id": 21, "text": "\\sin^2\\theta + \\cos^2\\theta = 1" }, { "math_id": 22, "text": "\\cos^2\\theta" }, { "math_id": 23, "text": "\\sin^2\\theta" }, { "math_id": 24, "text": "\\tan^2\\theta + 1\\ = \\sec^2\\theta " }, { "math_id": 25, "text": "\\sec^2\\theta - \\tan^2\\theta = 1 " }, { "math_id": 26, "text": "1\\ + \\cot^2\\theta = \\csc^2\\theta " }, { "math_id": 27, "text": 
"\\csc^2\\theta - \\cot^2\\theta = 1" }, { "math_id": 28, "text": " \\csc^2\\theta + \\sec^2\\theta - \\cot^2\\theta = 2\\ + \\tan^2\\theta " }, { "math_id": 29, "text": "a^2+b^2=h^2" }, { "math_id": 30, "text": "\\csc^2\\theta + \\sec^2\\theta = \\frac{h^2}{a^2} + \\frac{h^2}{b^2} = \\frac{a^2+b^2}{a^2} + \\frac{a^2+b^2}{b^2} = 2\\ + \\frac{b^2}{a^2} + \\frac{a^2}{b^2}" }, { "math_id": 31, "text": " 2\\ + \\frac{b^2}{a^2} + \\frac{a^2}{b^2} = 2\\ + \\tan^2\\theta+ \\cot^2\\theta " }, { "math_id": 32, "text": "\\alpha" }, { "math_id": 33, "text": "\\beta" }, { "math_id": 34, "text": "\\alpha + \\beta" }, { "math_id": 35, "text": "\\therefore" }, { "math_id": 36, "text": "RPQ = \\alpha" }, { "math_id": 37, "text": "OQA = \\frac{\\pi}{2} - \\alpha" }, { "math_id": 38, "text": "RQO = \\alpha, RQP = \\frac{\\pi}{2}-\\alpha" }, { "math_id": 39, "text": "RPQ = \\tfrac{\\pi}{2} - RQP = \\tfrac{\\pi}{2} - (\\tfrac{\\pi}{2} - RQO) = RQO = \\alpha" }, { "math_id": 40, "text": "OP = 1" }, { "math_id": 41, "text": "PQ = \\sin \\beta" }, { "math_id": 42, "text": "OQ = \\cos \\beta" }, { "math_id": 43, "text": "\\frac{AQ}{OQ} = \\sin \\alpha" }, { "math_id": 44, "text": "AQ = \\sin \\alpha \\cos \\beta" }, { "math_id": 45, "text": "\\frac{PR}{PQ} = \\cos \\alpha" }, { "math_id": 46, "text": "PR = \\cos \\alpha \\sin \\beta" }, { "math_id": 47, "text": "\\sin (\\alpha + \\beta) = PB = RB+PR = AQ+PR = \\sin \\alpha \\cos \\beta + \\cos \\alpha \\sin \\beta" }, { "math_id": 48, "text": "-\\beta" }, { "math_id": 49, "text": "\\sin (\\alpha - \\beta) = \\sin \\alpha \\cos (-\\beta) + \\cos \\alpha \\sin (-\\beta)" }, { "math_id": 50, "text": "\\sin (\\alpha - \\beta) = \\sin \\alpha \\cos \\beta - \\cos \\alpha \\sin \\beta" }, { "math_id": 51, "text": "\\frac{OA}{OQ} = \\cos \\alpha" }, { "math_id": 52, "text": "OA = \\cos \\alpha \\cos \\beta" }, { "math_id": 53, "text": "\\frac{RQ}{PQ} = \\sin \\alpha" }, { "math_id": 54, "text": "RQ = \\sin \\alpha \\sin \\beta" }, { "math_id": 55, "text": "\\cos (\\alpha + \\beta) = OB = OA-BA = OA-RQ = \\cos \\alpha \\cos \\beta\\ - \\sin \\alpha \\sin \\beta" }, { "math_id": 56, "text": "\\cos (\\alpha - \\beta) = \\cos \\alpha \\cos (-\\beta) - \\sin \\alpha \\sin (-\\beta)," }, { "math_id": 57, "text": "\\cos (\\alpha - \\beta) = \\cos \\alpha \\cos \\beta + \\sin \\alpha \\sin \\beta" }, { "math_id": 58, "text": "\n\\begin{align}\n\\cos (\\alpha + \\beta) & = \\sin\\left( \\pi/2-(\\alpha + \\beta)\\right) \\\\\n& = \\sin\\left( (\\pi/2-\\alpha) - \\beta\\right) \\\\\n& = \\sin\\left( \\pi/2-\\alpha\\right) \\cos \\beta - \\cos\\left( \\pi/2-\\alpha\\right) \\sin \\beta \\\\\n& = \\cos \\alpha \\cos \\beta - \\sin \\alpha \\sin \\beta \\\\\n\\end{align}\n" }, { "math_id": 59, "text": "\\tan (\\alpha + \\beta) = \\frac{\\sin (\\alpha + \\beta)}{\\cos (\\alpha + \\beta)}\n= \\frac{\\sin \\alpha \\cos \\beta + \\cos \\alpha \\sin \\beta}{\\cos \\alpha \\cos \\beta - \\sin \\alpha \\sin \\beta}" }, { "math_id": 60, "text": " \\cos \\alpha \\cos \\beta " }, { "math_id": 61, "text": "\\tan (\\alpha + \\beta) = \\frac{\\tan \\alpha + \\tan \\beta}{1 - \\tan \\alpha \\tan \\beta}" }, { "math_id": 62, "text": " \\beta " }, { "math_id": 63, "text": " \\alpha " }, { "math_id": 64, "text": "\\tan (- \\beta) = -\\tan \\beta " }, { "math_id": 65, "text": "\\tan (\\alpha - \\beta) = \\frac{\\tan \\alpha + \\tan (-\\beta)}{1 - \\tan \\alpha \\tan (-\\beta)} = \\frac{\\tan \\alpha - \\tan \\beta}{1 + \\tan \\alpha \\tan \\beta}" }, { "math_id": 66, "text": "\\cot (\\alpha + \\beta) 
= \\frac{\\cos (\\alpha + \\beta)}{\\sin (\\alpha + \\beta)}\n= \\frac{\\cos \\alpha \\cos \\beta - \\sin \\alpha \\sin \\beta}{\\sin \\alpha \\cos \\beta + \\cos \\alpha \\sin \\beta}" }, { "math_id": 67, "text": " \\sin \\alpha \\sin \\beta " }, { "math_id": 68, "text": "\\cot (\\alpha + \\beta) = \\frac{\\cot \\alpha \\cot \\beta - 1}{\\cot \\alpha + \\cot \\beta}" }, { "math_id": 69, "text": " \\cot \\theta = \\frac{1}{\\tan \\theta} " }, { "math_id": 70, "text": "\\cot (\\alpha + \\beta) = \\frac{1 - \\tan \\alpha \\tan \\beta}{\\tan \\alpha + \\tan \\beta}\n= \\frac{\\frac{1}{\\tan \\alpha \\tan \\beta} - 1}{\\frac{1}{\\tan \\alpha} + \\frac{1}{\\tan \\beta}}\n= \\frac{\\cot \\alpha \\cot \\beta - 1}{\\cot \\alpha + \\cot \\beta} " }, { "math_id": 71, "text": "\\cot (- \\beta) = -\\cot \\beta " }, { "math_id": 72, "text": "\\cot (\\alpha - \\beta) = \\frac{\\cot \\alpha \\cot (-\\beta) - 1}{ \\cot \\alpha + \\cot (-\\beta) } = \\frac{\\cot \\alpha \\cot \\beta + 1}{\\cot \\beta - \\cot \\alpha}" }, { "math_id": 73, "text": "\\sin (2 \\theta) = 2 \\sin \\theta \\cos \\theta" }, { "math_id": 74, "text": "\\cos (2 \\theta) = \\cos^2 \\theta - \\sin^2 \\theta" }, { "math_id": 75, "text": "\\cos (2 \\theta) = 2 \\cos^2 \\theta - 1" }, { "math_id": 76, "text": "\\cos (2 \\theta) = 1 - 2 \\sin^2 \\theta" }, { "math_id": 77, "text": "\\tan (2 \\theta) = \\frac{2 \\tan \\theta}{1 - \\tan^2 \\theta} = \\frac{2}{\\cot \\theta - \\tan \\theta}" }, { "math_id": 78, "text": "\\cot (2 \\theta) = \\frac{\\cot^2 \\theta - 1}{2 \\cot \\theta} = \\frac{\\cot \\theta - \\tan \\theta}{2}" }, { "math_id": 79, "text": " e^{i \\varphi}=\\cos \\varphi +i \\sin \\varphi" }, { "math_id": 80, "text": " e^{i 2\\varphi}=(\\cos \\varphi +i \\sin \\varphi)^{2}" }, { "math_id": 81, "text": " e^{i 2\\varphi}=\\cos 2\\varphi +i \\sin 2\\varphi" }, { "math_id": 82, "text": "(\\cos \\varphi +i \\sin \\varphi)^{2}=\\cos 2\\varphi +i \\sin 2\\varphi" }, { "math_id": 83, "text": "i(2 \\sin \\varphi \\cos \\varphi) + \\cos^2 \\varphi - \\sin^2 \\varphi\\ = \\cos 2\\varphi +i \\sin 2\\varphi" }, { "math_id": 84, "text": "\\cos^2 \\varphi - \\sin^2 \\varphi\\ = \\cos 2\\varphi" }, { "math_id": 85, "text": "2 \\sin \\varphi \\cos \\varphi = \\sin 2\\varphi" }, { "math_id": 86, "text": "\\cos \\frac{\\theta}{2} = \\pm\\, \\sqrt\\frac{1 + \\cos \\theta}{2}," }, { "math_id": 87, "text": "\\sin \\frac{\\theta}{2} = \\pm\\, \\sqrt\\frac{1 - \\cos \\theta}{2}." }, { "math_id": 88, "text": "\\tan \\frac{\\theta}{2} = \\pm\\, \\sqrt\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}." }, { "math_id": 89, "text": "\\tan \\frac{\\theta}{2} = \\frac{\\sin \\theta}{1 + \\cos \\theta}." }, { "math_id": 90, "text": "\\tan \\frac{\\theta}{2} = \\frac{1 - \\cos \\theta}{\\sin \\theta}." }, { "math_id": 91, "text": "\\tan \\frac{\\theta}{2} = \\csc \\theta - \\cot \\theta." }, { "math_id": 92, "text": "\\cot \\frac{\\theta}{2} = \\pm\\, \\sqrt\\frac{1 + \\cos \\theta}{1 - \\cos \\theta} = \\frac{1 + \\cos \\theta}{\\sin \\theta} = \\frac{\\sin \\theta}{1 - \\cos \\theta} = \\csc \\theta + \\cot \\theta." }, { "math_id": 93, "text": "\\psi + \\theta + \\phi = \\pi = " }, { "math_id": 94, "text": "\\psi" }, { "math_id": 95, "text": "\\theta" }, { "math_id": 96, "text": "\\phi" }, { "math_id": 97, "text": "\\tan(\\psi) + \\tan(\\theta) + \\tan(\\phi) = \\tan(\\psi)\\tan(\\theta)\\tan(\\phi)." 
}, { "math_id": 98, "text": "\n\\begin{align}\n\\psi & = \\pi - \\theta - \\phi \\\\\n\\tan(\\psi) & = \\tan(\\pi - \\theta - \\phi) \\\\\n& = - \\tan(\\theta + \\phi) \\\\\n& = \\frac{- \\tan\\theta - \\tan\\phi}{1 - \\tan\\theta \\tan\\phi} \\\\\n& = \\frac{\\tan\\theta + \\tan\\phi}{\\tan\\theta \\tan\\phi - 1} \\\\\n(\\tan\\theta \\tan\\phi - 1) \\tan\\psi & = \\tan\\theta + \\tan\\phi \\\\\n\\tan\\psi \\tan\\theta \\tan\\phi - \\tan\\psi & = \\tan\\theta + \\tan\\phi \\\\\n\\tan\\psi \\tan\\theta \\tan\\phi & = \\tan\\psi + \\tan\\theta + \\tan\\phi \\\\\n\\end{align}\n" }, { "math_id": 99, "text": "\\psi + \\theta + \\phi = \\tfrac{\\pi}{2} = " }, { "math_id": 100, "text": " \\cot(\\psi) + \\cot(\\theta) + \\cot(\\phi) = \\cot(\\psi)\\cot(\\theta)\\cot(\\phi)" }, { "math_id": 101, "text": "\\psi " }, { "math_id": 102, "text": "\\theta " }, { "math_id": 103, "text": "\\phi " }, { "math_id": 104, "text": "\\psi + \\theta + \\phi = \\tfrac{\\pi}{2}" }, { "math_id": 105, "text": "\\therefore (\\tfrac{\\pi}{2}-\\psi) + (\\tfrac{\\pi}{2}-\\theta) + (\\tfrac{\\pi}{2}-\\phi) = \\tfrac{3\\pi}{2} - (\\psi+\\theta+\\phi) = \\tfrac{3\\pi}{2} - \\tfrac{\\pi}{2} = \\pi " }, { "math_id": 106, "text": "\\sin \\theta \\pm \\sin \\phi = 2 \\sin \\left ( \\frac{\\theta\\pm \\phi}2 \\right ) \\cos \\left ( \\frac{\\theta\\mp \\phi}2 \\right )" }, { "math_id": 107, "text": "\\cos \\theta + \\cos \\phi = 2 \\cos \\left ( \\frac{\\theta+\\phi}2 \\right ) \\cos \\left ( \\frac{\\theta-\\phi}2 \\right ) " }, { "math_id": 108, "text": "\\cos \\theta - \\cos \\phi = -2 \\sin \\left ( \\frac{\\theta+\\phi}2 \\right ) \\sin \\left ( \\frac{\\theta-\\phi}2 \\right ) " }, { "math_id": 109, "text": "\\sin (\\alpha + \\beta) = \\sin \\alpha \\cos \\beta + \\cos \\alpha \\sin \\beta" }, { "math_id": 110, "text": "\\sin (\\alpha + \\beta) + \\sin (\\alpha - \\beta) = \\sin \\alpha \\cos \\beta + \\cos \\alpha \\sin \\beta + \\sin \\alpha \\cos \\beta - \\cos \\alpha \\sin \\beta\n= 2 \\sin \\alpha \\cos \\beta " }, { "math_id": 111, "text": "\\sin (\\alpha + \\beta) - \\sin (\\alpha - \\beta) = \\sin \\alpha \\cos \\beta + \\cos \\alpha \\sin \\beta - \\sin \\alpha \\cos \\beta + \\cos \\alpha \\sin \\beta\n= 2 \\cos \\alpha \\sin \\beta " }, { "math_id": 112, "text": "\\alpha + \\beta = \\theta" }, { "math_id": 113, "text": "\\alpha - \\beta = \\phi" }, { "math_id": 114, "text": "\\therefore \\alpha = \\frac{\\theta + \\phi}2 " }, { "math_id": 115, "text": "\\beta = \\frac{\\theta - \\phi}2 " }, { "math_id": 116, "text": "\\sin \\theta + \\sin \\phi = 2 \\sin \\left( \\frac{\\theta + \\phi}2 \\right) \\cos \\left( \\frac{\\theta - \\phi}2 \\right) " }, { "math_id": 117, "text": "\\sin \\theta - \\sin \\phi = 2 \\cos \\left( \\frac{\\theta + \\phi}2 \\right) \\sin \\left( \\frac{\\theta - \\phi}2 \\right) = 2 \\sin \\left( \\frac{\\theta - \\phi}2 \\right) \\cos \\left( \\frac{\\theta + \\phi}2 \\right) " }, { "math_id": 118, "text": "\\sin \\theta \\pm \\sin \\phi = 2 \\sin \\left( \\frac{\\theta\\pm \\phi}2 \\right) \\cos \\left( \\frac{\\theta\\mp \\phi}2 \\right) " }, { "math_id": 119, "text": "\\cos (\\alpha + \\beta) = \\cos \\alpha \\cos \\beta\\ - \\sin \\alpha \\sin \\beta" }, { "math_id": 120, "text": "\\cos (\\alpha + \\beta) + \\cos (\\alpha - \\beta) = \\cos \\alpha \\cos \\beta\\ - \\sin \\alpha \\sin \\beta + \\cos \\alpha \\cos \\beta + \\sin \\alpha \\sin \\beta = 2\\cos \\alpha \\cos \\beta" }, { "math_id": 121, "text": "\\cos (\\alpha + \\beta) - \\cos (\\alpha - \\beta) = \\cos \\alpha \\cos 
\\beta\\ - \\sin \\alpha \\sin \\beta - \\cos \\alpha \\cos \\beta - \\sin \\alpha \\sin \\beta\n= -2 \\sin \\alpha \\sin \\beta" }, { "math_id": 122, "text": "\\cos \\theta + \\cos \\phi = 2 \\cos \\left( \\frac{\\theta+\\phi}2 \\right) \\cos \\left( \\frac{\\theta-\\phi}2 \\right) " }, { "math_id": 123, "text": "\\cos \\theta - \\cos \\phi = -2 \\sin \\left( \\frac{\\theta+\\phi}2 \\right) \\sin \\left( \\frac{\\theta-\\phi}2 \\right) " }, { "math_id": 124, "text": "OA = OD = 1" }, { "math_id": 125, "text": "AB = \\sin \\theta" }, { "math_id": 126, "text": "CD = \\tan \\theta" }, { "math_id": 127, "text": "\\sin \\theta < \\theta < \\tan \\theta." }, { "math_id": 128, "text": "\\frac{\\sin \\theta}{\\theta} < 1\\ \\ \\ \\mathrm{if}\\ \\ \\ 0 < \\theta." }, { "math_id": 129, "text": "\\frac{\\sin \\theta}{\\theta} = \\frac{\\sin (-\\theta)}{-\\theta} < 1." }, { "math_id": 130, "text": "\\frac{\\sin \\theta}{\\theta} < 1\\quad \\text{if }\\quad \\theta \\ne 0," }, { "math_id": 131, "text": "\\frac{\\tan \\theta}{\\theta} > 1\\quad \\text{if }\\quad 0 < \\theta < \\frac{\\pi}{2}." }, { "math_id": 132, "text": "\\lim_{\\theta \\to 0}{\\sin \\theta} = 0" }, { "math_id": 133, "text": "\\lim_{\\theta \\to 0}{\\cos \\theta} = 1" }, { "math_id": 134, "text": "\\lim_{\\theta \\to 0}{\\frac{\\sin \\theta}{\\theta}} = 1" }, { "math_id": 135, "text": "\\sin \\theta < \\theta < \\tan \\theta" }, { "math_id": 136, "text": "\\frac{\\sin \\theta}{\\theta} < 1 < \\frac{\\tan \\theta}{\\theta}" }, { "math_id": 137, "text": "\\tan \\theta = \\frac{\\sin \\theta}{\\cos \\theta} " }, { "math_id": 138, "text": "\\therefore 1 < \\frac{\\sin \\theta}{\\theta \\cos \\theta} " }, { "math_id": 139, "text": "\\cos \\theta " }, { "math_id": 140, "text": "\\cos \\theta < \\frac{\\sin \\theta}{\\theta} " }, { "math_id": 141, "text": "\\cos \\theta < \\frac{\\sin \\theta}{\\theta} < 1 " }, { "math_id": 142, "text": " \\theta \\to 0 " }, { "math_id": 143, "text": "\\lim_{\\theta \\to 0}\\frac{1 - \\cos \\theta}{\\theta} = 0" }, { "math_id": 144, "text": "\n\\begin{align}\n\\frac{1 - \\cos \\theta}{\\theta} & = \\frac{1 - \\cos^2 \\theta}{\\theta (1 + \\cos \\theta)}\\\\\n& = \\frac{\\sin^2 \\theta}{\\theta (1 + \\cos \\theta)}\\\\\n& = \\left( \\frac{\\sin \\theta}{\\theta} \\right) \\times \\sin \\theta \\times \\left( \\frac{1}{1 + \\cos \\theta} \\right)\\\\\n\\end{align}\n" }, { "math_id": 145, "text": " \\lim_{\\theta \\to 0}\\frac{1 - \\cos \\theta}{\\theta^2} = \\frac{1}{2} " }, { "math_id": 146, "text": "\\frac{1 - \\cos \\theta}{\\theta^2} = \\frac{\\sin \\theta}{\\theta} \\times \\frac{\\sin \\theta}{\\theta} \\times \\frac{1}{1 + \\cos \\theta}." 
}, { "math_id": 147, "text": "\\sin[\\arctan(x)]=\\frac{x}{\\sqrt{1+x^2}}" }, { "math_id": 148, "text": "\\sin^2\\theta+\\cos^2\\theta=1" }, { "math_id": 149, "text": "\\cos^2\\theta=\\frac{1}{\\tan^2\\theta+1}" }, { "math_id": 150, "text": "1-\\sin^2\\theta=\\frac{1}{\\tan^2\\theta+1}" }, { "math_id": 151, "text": "\\theta=\\arctan(x)" }, { "math_id": 152, "text": "1-\\sin^2[\\arctan(x)]=\\frac{1}{\\tan^2[\\arctan(x)]+1}" }, { "math_id": 153, "text": "\\sin^2[\\arctan(x)]=\\frac{\\tan^2[\\arctan(x)]}{\\tan^2[\\arctan(x)]+1}" }, { "math_id": 154, "text": "\\tan[\\arctan(x)]\\equiv x" }, { "math_id": 155, "text": "\\sin[\\arctan(x)]=\\frac{x}{\\sqrt{x^2+1}}" }, { "math_id": 156, "text": "\\sin^2\\theta=\\frac{\\frac{1}{1}}{1+\\frac{1}{\\tan^2\\theta}}" }, { "math_id": 157, "text": "\\sin^2\\theta=\\frac{\\tan^2\\theta}{\\tan^2\\theta+1}" }, { "math_id": 158, "text": "[\\arctan(x)]=[\\arcsin(\\frac{x}{\\sqrt{x^2+1}})]" }, { "math_id": 159, "text": "y=\\frac{x}{\\sqrt{x^2+1}}" }, { "math_id": 160, "text": "y^2=\\frac{x^2}{x^2+1}" }, { "math_id": 161, "text": "x=\\frac{y}{\\sqrt{1-y^2}}" }, { "math_id": 162, "text": "x^2=\\frac{y^2}{1-y^2}" }, { "math_id": 163, "text": "y^2=\\frac{\\frac{y^2}{(1-y^2)}}{\\frac{y^2}{(1-y^2)}+1}" }, { "math_id": 164, "text": "y^2=\\frac{\\frac{y^2}{(1-y^2)}}{\\frac{1}{(1-y^2)}}" }, { "math_id": 165, "text": "y^2=y^2" }, { "math_id": 166, "text": "[\\arctan(x)]=[\\arcsin(\\frac{x}{\\sqrt{x^2+1}})]=[\\arcsin(y)]=[\\arctan(\\frac{y}{\\sqrt{1-y^2}})]" }, { "math_id": 167, "text": "[\\arcsin(x)]=[\\arctan(\\frac{x}{\\sqrt{1-x^2}})]" }, { "math_id": 168, "text": "[\\arccos(x)]" }, { "math_id": 169, "text": "\\cos[\\arccos(x)]=x" }, { "math_id": 170, "text": "\\cos(\\frac{\\pi}{2}-(\\frac{\\pi}{2}-[\\arccos(x)]))=x" }, { "math_id": 171, "text": "\\sin(\\frac{\\pi}{2}-[\\arccos(x)])=x" }, { "math_id": 172, "text": "\\frac{\\pi}{2}-[\\arccos(x)]=[\\arcsin(x)]" }, { "math_id": 173, "text": "[\\arccos(x)]=\\frac{\\pi}{2}-[\\arcsin(x)]" }, { "math_id": 174, "text": "[\\arcsin(x)]" }, { "math_id": 175, "text": "[\\arccos(x)]=\\frac{\\pi}{2}-[\\arctan(\\frac{x}{\\sqrt{1-x^2}})]" }, { "math_id": 176, "text": "[\\arccos(x)]=\\frac{\\pi}{2}-[\\arccot(\\frac{\\sqrt{1-x^2}}{x})]" }, { "math_id": 177, "text": "[\\arccos(x)]=[\\arctan(\\frac{\\sqrt{1-x^2}}{x})]" } ]
https://en.wikipedia.org/wiki?curid=7527818
7528635
Magnetic dip
Angle made with the horizontal by Earth's magnetic field lines Magnetic dip, dip angle, or magnetic inclination is the angle made with the horizontal by Earth's magnetic field lines. This angle varies at different points on Earth's surface. Positive values of inclination indicate that the magnetic field of Earth is pointing downward, into Earth, at the point of measurement, and negative values indicate that it is pointing upward. The dip angle is in principle the angle made by the needle of a vertically held compass, though in practice ordinary compass needles may be weighted against dip or may be unable to move freely in the correct plane. The value can be measured more reliably with a special instrument typically known as a dip circle. Dip angle was discovered by the German engineer Georg Hartmann in 1544. A method of measuring it with a dip circle was described by Robert Norman in England in 1581. Explanation. Magnetic dip results from the tendency of a magnet to align itself with lines of magnetic field. As Earth's magnetic field lines are not parallel to the surface, the north end of a compass needle will point upward in the Southern Hemisphere (negative dip) or downward in the Northern Hemisphere (positive dip). The range of dip is from -90 degrees (at the South Magnetic Pole) to +90 degrees (at the North Magnetic Pole). Contour lines along which the dip measured at Earth's surface is equal are referred to as isoclinic lines. The locus of the points having zero dip is called the "magnetic equator" or aclinic line. Calculation for a given latitude. The inclination formula_0 is defined locally for the magnetic field due to Earth's core, and has a positive value if the field points below the horizontal (i.e. into Earth). Here we show how to determine the value of formula_0 at a given latitude, following the treatment given by Fowler. Outside Earth's core we consider Maxwell's equations in a vacuum, formula_1 and formula_2 where formula_3 and the subscript formula_4 denotes the core as the origin of these fields. The first means we can introduce the scalar potential formula_5 such that formula_6, while the second means the potential satisfies the Laplace equation formula_7. Solving to leading order gives the magnetic dipole potential formula_8 and hence the field formula_9 for magnetic moment formula_10 and position vector formula_11 on Earth's surface. From here it can be shown that the inclination formula_0 as defined above satisfies (from formula_12) formula_13 where formula_14 is the latitude of the point on Earth's surface. Practical importance. The phenomenon is especially important in aviation. Magnetic compasses on airplanes are made so that the center of gravity is significantly lower than the pivot point. As a result, the vertical component of the magnetic force is too weak to tilt the compass card significantly out of the horizontal plane, thus minimizing the dip angle shown in the compass. However, this also causes the airplane's compass to give erroneous readings during banked turns (turning error) and airspeed changes (acceleration error). Turning error. Magnetic dip shifts the center of gravity of the compass card, causing temporary inaccurate readings when turning north or south. As the aircraft turns, the force that results from the magnetic dip causes the float assembly to swing in the same direction that the float turns. This compass error is amplified with the proximity to either magnetic pole. 
To compensate for turning errors, pilots in the Northern Hemisphere have to "undershoot" the turn when turning north, stopping the turn before the compass reaches the desired heading, and "overshoot" the turn when turning south, stopping the turn only after the compass has passed the desired heading. The effect is the opposite in the Southern Hemisphere. Acceleration error. The acceleration errors occur because the compass card tilts on its mount when under acceleration. In the Northern Hemisphere, when accelerating on either an easterly or westerly heading, the error appears as a turn indication toward the north. When decelerating on either of these headings, the compass indicates a turn toward the south. The effect is the opposite in the Southern Hemisphere. Balancing. Compass needles are often weighted during manufacture to compensate for magnetic dip, so that they will balance roughly horizontally. This balancing is latitude-dependent; see Compass balancing (magnetic dip). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
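As a quick numerical illustration of the dipole relation formula_13 derived in the calculation section above, the following Python sketch (an illustrative aid added here, not part of the original article; the function name and sample latitudes are chosen for demonstration only) converts geomagnetic latitude to dip angle.

```python
import math

def dip_angle_deg(geomagnetic_latitude_deg):
    """Dip (inclination) I from the dipole relation tan(I) = 2*tan(lambda)."""
    lam = math.radians(geomagnetic_latitude_deg)
    return math.degrees(math.atan(2.0 * math.tan(lam)))

# The dip is 0 at the geomagnetic equator and approaches +/-90 degrees near the poles.
for lat in (0.0, 30.0, 45.0, 60.0, 89.9):
    print(f"latitude {lat:5.1f} deg  ->  dip {dip_angle_deg(lat):6.2f} deg")
```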
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "\\nabla \\times \\textbf{H}_c = \\textbf{0}" }, { "math_id": 2, "text": "\\nabla \\cdot \\textbf{B}_c = 0" }, { "math_id": 3, "text": "\\textbf{B}_c = \\mu_0\\textbf{H}_c" }, { "math_id": 4, "text": "c" }, { "math_id": 5, "text": "\\phi_c" }, { "math_id": 6, "text": "\\textbf{H}_c = -\\nabla \\phi_c" }, { "math_id": 7, "text": "\\nabla^2 \\phi_c = 0" }, { "math_id": 8, "text": "\\phi_c = \\frac{\\textbf{m}\\cdot \\textbf{r}}{4 \\pi r^3}" }, { "math_id": 9, "text": "\\textbf{B}_c = -\\mu_o \\nabla \\phi_c = \\frac{\\mu_o}{4\\pi}\\big[ \\frac{ 3\\hat{\\textbf{r}}(\\hat{\\textbf{r}}\\cdot \\textbf{m})-\\textbf{m}} {r^3} \\big]" }, { "math_id": 10, "text": "\\textbf{m}" }, { "math_id": 11, "text": "\\textbf{r}" }, { "math_id": 12, "text": "\\tan I = B_r / B_{\\theta}" }, { "math_id": 13, "text": "\\tan I = 2\\tan \\lambda" }, { "math_id": 14, "text": "\\lambda" } ]
https://en.wikipedia.org/wiki?curid=7528635
75311176
Median voting rule
Method for group decision-making The median voting rule or median mechanism is a rule for group decision-making along a one-dimensional domain. Each person votes by writing down his/her ideal value, and the rule selects a single value which is (in the basic mechanism) the "median" of all votes. Motivation. Many scenarios of group decision making involve a one-dimensional domain. Some examples are: Each member has in mind an ideal decision, called his "peak". Each agent prefers the actual amount to be as close as possible to his peak. A simple way to decide is the "average voting rule": ask each member what is his peak, and take the average of all peaks. But this rule is easily manipulable. For example, suppose Alice's peak is 30, George's peak is 40, and Chana's peak is 50. If all voters report their true peaks, the actual amount will be 40. But Alice may manipulate and say that her peak is actually 0; then the average will be 30, which is Alice's actual peak. Thus, Alice has gained from the manipulation. Similarly, any agent whose peak is different from the outcome has an incentive to manipulate and report a false peak. In contrast, the median rule determines the actual budget at the "median" of all votes. This simple change makes the rule strategyproof: no voter can gain by reporting a false peak. In the above example, the median is 40, and it remains 40 even if Alice reports 0. In fact, as Alice's true peak is below the median, no false report by Alice can potentially decrease the median; Alice can only increase the median, but this will make her worse-off. Preconditions. The median voting rule is applicable in any setting in which the agents have single-peaked preferences. This means that there exists some linear ordering &gt; of the alternatives, such that for each agent "i" with peak "pi": Once such a linear order exists, the median of any set of peaks can be computed by ordering the peaks along this linear order. Note that single-peakedness does not imply any particular distance-measure between the alternatives, and does not imply anything on alternatives at different sides of the peak. In particular, if a &gt; "pi" &gt; b, then the agent may prefer either a to b or b to a. Procedure. Each agent "i" in 1, ..., "n" is asked to report the value of "pi". The values are sorted in ascending order "p"1 ≤ ... ≤ "pn". In the basic mechanism, the chosen value when "n" is odd is "p"(n+1)/2, which equals the median of values (when "n" is even, the chosen value is "p"n/2): choice = median("p"1, ..., "pn"). Proof of strategyproofness. Here is a sketch of the proof that the median rule is strategyproof: consider a voter whose peak lies below the chosen median (the case of a peak above the median is symmetric). Reporting an even lower value leaves the median unchanged, while reporting a higher value either leaves the median unchanged or moves it further above the voter's peak; in neither case does the voter gain. Using similar reasoning, one can prove that the median rule is also group-strategyproof, that is: no coalition has a coordinated manipulation that improves the utility of one of them without harming the others. Generalized median rules. Median with phantoms. The median rule is not the only strategyproof rule. One can construct alternative rules by adding fixed votes, that do not depend on the citizen votes. These fixed votes are called "phantoms". For every set of phantoms, the rule that chooses the median of the set of real votes + phantoms is group-strategyproof. For example, suppose the votes are 30, 40, and 50. Without phantoms, the median rule selects 40. If we add two phantoms at 0, then the median rule selects 30; if we add two phantoms at 100, the median rule selects 50; if we add phantoms at 20 and 35, the median rule selects 35.
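To make the procedure and the phantom construction above concrete, here is a small Python sketch (illustrative only; the helper name median_rule is not from any standard library). It reproduces the Alice/George/Chana example and the phantom variants described above; for an even total number of votes it simply returns the lower median.

```python
def median_rule(peaks, phantoms=()):
    """Return the median of the reported peaks together with any phantom votes."""
    votes = sorted(list(peaks) + list(phantoms))
    return votes[(len(votes) - 1) // 2]        # lower median if the count is even

peaks = [30, 40, 50]                           # Alice, George, Chana
print(median_rule(peaks))                      # 40
print(median_rule([0, 40, 50]))                # still 40: Alice gains nothing by reporting 0
print(median_rule(peaks, phantoms=[0, 0]))     # 30
print(median_rule(peaks, phantoms=[100, 100])) # 50
print(median_rule(peaks, phantoms=[20, 35]))   # 35
```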
Here are some special cases of phantom-median rules, assuming all the votes are between 0 and 100: Moulin proved the following characterizations: Additional characterizations. Moulin's characterizations consider only rules that are "peak only", that is, the rule depends only on the "n" peaks. Ching proved that all rules that are strategyproof "and continuous", even if they are not "peak only", are augmented median rules, that is, can be described by a variant of the median rule with some 2"n" parameters. Moulin's characterizations require the rules to handle "all" single-peaked preferences. Several other works allow rules that handle only a subset of single-peaked preferences: Border and Jordan (Sections 6–7) generalize the notions of single-peaked preferences and median voting rules to multidimensional settings. They consider three classes of preferences: "separable" (formula_1, where each "vi,j" is a single-peaked utility function); "quadratic" (formula_2 where A is a symmetric positive definite matrix); and their intersection "separable quadratic" (formula_3, where "ai,j" are positive constants). In quadratic non-separable domains, the only strategyproof mechanisms are dictatorial. But in separable domains, there are multidimensional strategyproof mechanisms that are composed of one-dimensional strategyproof mechanisms, one for each coordinate. Berga and Serizawa (Section 4) seek rules that are both strategyproof and satisfy a condition they call "no vetoer": no individual should be able to prevent any alternative from being chosen as the outcome by declaring some preference. They characterize generalized median rules as the only strategyproof rules on "minimally-rich domains". They proved that the unique maximal domain that includes a minimally-rich domain, which allows for the existence of strategyproof rules satisfying the "no vetoer" condition, is the domain of convex preferences. Barbera, Gul and Stacchetti also generalize the notions of single-peaked preferences and median voting rules to multidimensional settings. Barbera and Jackson characterized strategyproof rules for "weakly-single-peaked" preferences, in which the maximal set may contain two alternatives. Moulin characterized strategyproof rules on "single-plateau preferences" - a generalization of single-peaked preferences in which each agent is allowed to have an entire interval of ideal points. Application in the oil industry. In 1954, the Iranian Oil Consortium adopted a median-like rule to determine Iran's total annual oil output. Annually, each member company's role was weighted by its fixed share of the total output. The chosen output, x, was the highest level such that the sum of the shares of members voting for levels as high as x was at least 70%.103-108 Related concepts. The median voter theorem relates to ranked voting mechanisms, in which each agent reports his/her full ranking over alternatives. The theorem says that, if the agents' preferences are single-peaked, then every Condorcet method always selects the candidate preferred by the median voter (the candidate closest to the voter whose peak is the median of all peaks). Highest median voting rules are an attempt at applying the same voting rule to elections by asking voters to submit judgments (scores) for each candidate. However, the strategyproof nature of the median voting rule does not extend to choosing candidates unless the voters have single-peaked preferences over each candidate's final "score."
This may be a reasonable model of expressive voting, but the rule will not be strategyproof in situations where voters have single-peaked preferences over the "outcome" (winner) of the election. The Gibbard–Satterthwaite theorem says that every strategyproof rule on three or more alternatives must be a dictatorship. The median rule apparently contradicts this theorem, because it is strategyproof and it is not a dictatorship. In fact, there is no contradiction: the Gibbard-Satterthwaite theorem applies only to rules that operate on the entire preference domain (that is, only to voting rules that can handle any set of preference rankings). In contrast, the median rule applies only to a restricted preference domain, the domain of single-peaked preferences. Dummett and Farquharson present a sufficient condition for stability in voting games. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "u_i(x) = -|x-p_i|" }, { "math_id": 1, "text": "u_i(x) = \\sum_{j=1}^m u_{i,j}(x_j)" }, { "math_id": 2, "text": "u_i(x) = -(x-p)^T A (x-p)" }, { "math_id": 3, "text": "u_i(x) = - \\sum_{j=1}^m a_{i,j}\\cdot (x_j - p_j)^2" } ]
https://en.wikipedia.org/wiki?curid=75311176
753145
Del in cylindrical and spherical coordinates
Mathematical gradient operator in certain coordinate systems This is a list of some vector calculus formulae for working with common curvilinear coordinate systems. Coordinate conversions. Note that the operation formula_2 must be interpreted as the two-argument inverse tangent, atan2. &lt;templatestyles src="Citation/styles.css"/&gt;^α This page uses formula_3 for the polar angle and formula_4 for the azimuthal angle, which is common notation in physics. The source that is used for these formulae uses formula_3 for the azimuthal angle and formula_4 for the polar angle, which is common mathematical notation. In order to get the mathematics formulae, switch formula_3 and formula_4 in the formulae shown in the table above. &lt;templatestyles src="Citation/styles.css"/&gt;^β Defined in Cartesian coordinates as formula_5. An alternative definition is formula_6. &lt;templatestyles src="Citation/styles.css"/&gt;^γ Defined in Cartesian coordinates as formula_7. An alternative definition is formula_8. Cartesian derivation. formula_15 formula_16 The expressions for formula_17 and formula_18 are found in the same way. Cylindrical derivation. formula_19 formula_20 formula_21 formula_22 formula_23 Spherical derivation. formula_24 formula_25 formula_26 formula_27 formula_28 Unit vector conversion formula. The unit vector of a coordinate parameter "u" is defined in such a way that a small positive change in "u" causes the position vector formula_29 to change in the formula_30 direction. Therefore, formula_31 where "s" is the arc-length parameter. For two sets of coordinates formula_32 and formula_33, according to the chain rule, formula_34 Now, we isolate the formula_35th component. For formula_36, let formula_37. Then divide on both sides by formula_38 to get: formula_39 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
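As an illustrative numerical check of the unit-vector relations above (using this page's physics convention: formula_3 is the polar angle and formula_4 the azimuthal angle), the following Python sketch, written for this article rather than taken from the cited source, builds the Cartesian components of the spherical unit vectors and verifies that they form an orthonormal basis.

```python
import numpy as np

def spherical_unit_vectors(theta, phi):
    """Cartesian components of (r_hat, theta_hat, phi_hat); theta = polar, phi = azimuthal."""
    r_hat = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    theta_hat = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)])
    phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return r_hat, theta_hat, phi_hat

theta, phi = 0.7, 1.9                           # an arbitrary test point
basis = np.vstack(spherical_unit_vectors(theta, phi))
print(np.allclose(basis @ basis.T, np.eye(3)))  # True: the three unit vectors are orthonormal
```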
[ { "math_id": 0, "text": "\\theta \\in [0, \\pi]" }, { "math_id": 1, "text": "\\varphi \\in [0, 2\\pi]" }, { "math_id": 2, "text": "\\arctan\\left(\\frac{A}{B}\\right)" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "\\varphi" }, { "math_id": 5, "text": "\\partial_i \\mathbf{A} \\otimes \\mathbf{e}_i" }, { "math_id": 6, "text": "\\mathbf{e}_i \\otimes \\partial_i \\mathbf{A}" }, { "math_id": 7, "text": "\\mathbf{e}_i \\cdot \\partial_i \\mathbf{T}" }, { "math_id": 8, "text": "\\partial_i \\mathbf{T} \\cdot \\mathbf{e}_i" }, { "math_id": 9, "text": "\\operatorname{div} \\, \\operatorname{grad} f \\equiv \\nabla \\cdot \\nabla f \\equiv \\nabla^2 f" }, { "math_id": 10, "text": "\\operatorname{curl} \\, \\operatorname{grad} f \\equiv \\nabla \\times \\nabla f = \\mathbf 0" }, { "math_id": 11, "text": "\\operatorname{div} \\, \\operatorname{curl} \\mathbf{A} \\equiv \\nabla \\cdot (\\nabla \\times \\mathbf{A}) = 0" }, { "math_id": 12, "text": "\\operatorname{curl} \\, \\operatorname{curl} \\mathbf{A} \\equiv \\nabla \\times (\\nabla \\times \\mathbf{A}) = \\nabla (\\nabla \\cdot \\mathbf{A}) - \\nabla^2 \\mathbf{A}" }, { "math_id": 13, "text": "\\nabla^2 (f g) = f \\nabla^2 g + 2 \\nabla f \\cdot \\nabla g + g \\nabla^2 f" }, { "math_id": 14, "text": "\\nabla^{2}\\left(\\mathbf{P}\\cdot\\mathbf{Q}\\right)=\\mathbf{Q}\\cdot\\nabla^{2}\\mathbf{P}-\\mathbf{P}\\cdot\\nabla^{2}\\mathbf{Q}+2\\nabla\\cdot\\left[\\left(\\mathbf{P}\\cdot\\nabla\\right)\\mathbf{Q}+\\mathbf{P}\\times\\nabla\\times\\mathbf{Q}\\right]\\quad" }, { "math_id": 15, "text": "\\begin{align}\n\\operatorname{div} \\mathbf A = \\lim_{V\\to 0} \\frac{\\iint_{\\partial V} \\mathbf A \\cdot d\\mathbf{S}}{\\iiint_V dV}\n\n&= \\frac{A_x(x+dx)\\,dy\\,dz - A_x(x)\\,dy\\,dz + A_y(y+dy)\\,dx\\,dz - A_y(y)\\,dx\\,dz + A_z(z+dz)\\,dx\\,dy - A_z(z)\\,dx\\,dy}{dx\\,dy\\,dz} \\\\\n\n&= \\frac{\\partial A_x}{\\partial x} + \\frac{\\partial A_y}{\\partial y} + \\frac{\\partial A_z}{\\partial z}\n\\end{align}" }, { "math_id": 16, "text": "\\begin{align}\n(\\operatorname{curl} \\mathbf A)_x = \\lim_{S^{\\perp \\mathbf{\\hat x}}\\to 0} \\frac{\\int_{\\partial S} \\mathbf A \\cdot d\\mathbf{\\ell}}{\\iint_{S} dS}\n&= \\frac{A_z(y+dy)\\,dz - A_z(y)\\,dz + A_y(z)\\,dy - A_y(z+dz)\\,dy }{dy\\,dz} \\\\\n&= \\frac{\\partial A_z}{\\partial y} - \\frac{\\partial A_y}{\\partial z}\n\\end{align}" }, { "math_id": 17, "text": "(\\operatorname{curl} \\mathbf A)_y" }, { "math_id": 18, "text": "(\\operatorname{curl} \\mathbf A)_z" }, { "math_id": 19, "text": "\\begin{align}\n\\operatorname{div} \\mathbf A &= \\lim_{V\\to 0} \\frac{\\iint_{\\partial V} \\mathbf A \\cdot d\\mathbf{S}}{\\iiint_V dV} \\\\\n&= \\frac{A_\\rho(\\rho+d\\rho)(\\rho+d\\rho)\\,d\\phi\\, dz - A_\\rho(\\rho)\\rho \\,d\\phi \\,dz + A_\\phi(\\phi+d\\phi)\\,d\\rho\\, dz - A_\\phi(\\phi)\\,d\\rho\\, dz + A_z(z+dz)\\,d\\rho\\, (\\rho +d\\rho/2)\\,d\\phi - A_z(z)\\,d\\rho (\\rho +d\\rho/2)\\, d\\phi}{\\rho \\,d\\phi \\,d\\rho\\, dz} \\\\\n&= \\frac 1 \\rho \\frac{\\partial (\\rho A_\\rho)}{\\partial \\rho} + \\frac 1 \\rho \\frac{\\partial A_\\phi}{\\partial \\phi} + \\frac{\\partial A_z}{\\partial z}\n\\end{align}" }, { "math_id": 20, "text": "\\begin{align}\n(\\operatorname{curl} \\mathbf A)_\\rho\n&= \\lim_{S^{\\perp \\hat{\\boldsymbol \\rho}}\\to 0} \\frac{\\int_{\\partial S} \\mathbf A \\cdot d\\boldsymbol{\\ell}}{\\iint_{S} dS} \\\\[1ex]\n&= \\frac{A_\\phi (z) \\left(\\rho+d\\rho\\right)\\,d\\phi - A_\\phi(z+dz) \\left(\\rho+d\\rho\\right)\\,d\\phi + A_z(\\phi + d\\phi)\\,dz - 
A_z(\\phi)\\,dz}{\\left(\\rho+d\\rho\\right)\\,d\\phi \\,dz} \\\\[1ex]\n&= -\\frac{\\partial A_\\phi}{\\partial z} + \\frac{1}{\\rho} \\frac{\\partial A_z}{\\partial \\phi}\n\\end{align}" }, { "math_id": 21, "text": "\\begin{align}\n(\\operatorname{curl} \\mathbf A)_\\phi &= \\lim_{S^{\\perp \\boldsymbol{\\hat \\phi}}\\to 0} \\frac{\\int_{\\partial S} \\mathbf A \\cdot d\\boldsymbol{\\ell}}{\\iint_{S} dS} \\\\\n&= \\frac{A_z (\\rho)\\,dz - A_z(\\rho + d\\rho)\\,dz + A_\\rho(z+dz)\\,d\\rho - A_\\rho(z)\\,d\\rho}{d\\rho \\,dz} \\\\\n&= -\\frac{\\partial A_z}{\\partial \\rho} + \\frac{\\partial A_\\rho}{\\partial z}\n\\end{align}" }, { "math_id": 22, "text": "\\begin{align}\n(\\operatorname{curl} \\mathbf A)_z &= \\lim_{S^{\\perp \\hat{\\boldsymbol z}}\\to 0} \\frac{\\int_{\\partial S} \\mathbf A \\cdot d\\mathbf{\\ell}}{\\iint_{S} dS} \\\\[1ex]\n&= \\frac{A_\\rho(\\phi)\\,d\\rho - A_\\rho(\\phi + d\\phi)\\,d\\rho + A_\\phi(\\rho + d\\rho)(\\rho + d\\rho)\\,d\\phi - A_\\phi(\\rho)\\rho \\,d\\phi}{\\rho \\,d\\rho \\,d\\phi} \\\\[1ex]\n&= -\\frac{1}{\\rho}\\frac{\\partial A_\\rho}{\\partial \\phi} + \\frac{1}{\\rho} \\frac{\\partial (\\rho A_\\phi)}{\\partial \\rho}\n\\end{align}" }, { "math_id": 23, "text": "\\begin{align}\n\\operatorname{curl} \\mathbf A &= (\\operatorname{curl} \\mathbf A)_\\rho \\hat{\\boldsymbol \\rho} + (\\operatorname{curl} \\mathbf A)_\\phi \\hat{\\boldsymbol \\phi} + (\\operatorname{curl} \\mathbf A)_z \\hat{\\boldsymbol z} \\\\[1ex]\n&= \\left(\\frac{1}{\\rho} \\frac{\\partial A_z}{\\partial \\phi} -\\frac{\\partial A_\\phi}{\\partial z} \\right) \\hat{\\boldsymbol \\rho} + \\left(\\frac{\\partial A_\\rho}{\\partial z}-\\frac{\\partial A_z}{\\partial \\rho} \\right) \\hat{\\boldsymbol \\phi} + \\frac{1}{\\rho}\\left(\\frac{\\partial (\\rho A_\\phi)}{\\partial \\rho} - \\frac{\\partial A_\\rho}{\\partial \\phi} \\right) \\hat{\\boldsymbol z}\n\\end{align}" }, { "math_id": 24, "text": "\\begin{align}\n\\operatorname{div} \\mathbf A &= \\lim_{V\\to 0} \\frac{\\iint_{\\partial V} \\mathbf A \\cdot d\\mathbf{S}}{\\iiint_V dV} \\\\ \n&= \\frac{A_r(r+dr)(r+dr)\\,d\\theta\\, (r+dr)\\sin\\theta \\,d\\phi - A_r(r)r\\,d\\theta\\, r\\sin\\theta \\,d\\phi + A_\\theta(\\theta+d\\theta)\\sin(\\theta + d\\theta)r\\, dr\\, d\\phi - A_\\theta(\\theta)\\sin(\\theta)r \\,dr \\,d\\phi + A_\\phi(\\phi + d\\phi)r\\,dr\\, d\\theta - A_\\phi(\\phi)r\\,dr \\,d\\theta}{dr\\,r\\,d\\theta\\,r\\sin\\theta\\, d\\phi} \\\\\n&= \\frac{1}{r^2}\\frac{\\partial (r^2A_r)}{\\partial r} + \\frac{1}{r \\sin\\theta} \\frac{\\partial(A_\\theta\\sin\\theta)}{\\partial \\theta} + \\frac{1}{r \\sin\\theta} \\frac{\\partial A_\\phi}{\\partial \\phi}\n\\end{align}" }, { "math_id": 25, "text": "\\begin{align}\n(\\operatorname{curl} \\mathbf A)_r = \\lim_{S^{\\perp \\boldsymbol{\\hat r}}\\to 0} \\frac{\\int_{\\partial S} \\mathbf A \\cdot d\\mathbf{\\ell}}{\\iint_{S} dS}\n&= \\frac{A_\\theta(\\phi)r \\,d\\theta + A_\\phi(\\theta + d\\theta)r \\sin(\\theta + d\\theta)\\, d\\phi\n - A_\\theta(\\phi + d\\phi)r \\,d\\theta - A_\\phi(\\theta)r\\sin(\\theta)\\, d\\phi}{r\\, d\\theta\\,r\\sin\\theta \\,d\\phi} \\\\\n&= \\frac{1}{r\\sin\\theta}\\frac{\\partial(A_\\phi \\sin\\theta)}{\\partial \\theta}\n - \\frac{1}{r\\sin\\theta} \\frac{\\partial A_\\theta}{\\partial \\phi}\n\\end{align}" }, { "math_id": 26, "text": "\\begin{align}\n(\\operatorname{curl} \\mathbf A)_\\theta = \\lim_{S^{\\perp \\boldsymbol{\\hat \\theta}}\\to 0} \\frac{\\int_{\\partial S} \\mathbf A \\cdot d\\mathbf{\\ell}}{\\iint_{S} dS}\n&= 
\\frac{A_\\phi(r)r \\sin\\theta \\,d\\phi + A_r(\\phi + d\\phi)\\,dr\n - A_\\phi(r+dr)(r+dr)\\sin\\theta \\,d\\phi - A_r(\\phi)\\,dr}{dr \\, r \\sin \\theta \\,d\\phi} \\\\\n&= \\frac{1}{r\\sin\\theta}\\frac{\\partial A_r}{\\partial \\phi}\n - \\frac{1}{r} \\frac{\\partial (rA_\\phi)}{\\partial r}\n\\end{align}" }, { "math_id": 27, "text": "\\begin{align}\n(\\operatorname{curl} \\mathbf A)_\\phi = \\lim_{S^{\\perp \\boldsymbol{\\hat \\phi}}\\to 0} \\frac{\\int_{\\partial S} \\mathbf A \\cdot d\\mathbf{\\ell}}{\\iint_{S} dS}\n&= \\frac{A_r(\\theta)\\,dr + A_\\theta(r+dr)(r+dr)\\,d\\theta\n - A_r(\\theta+d\\theta)\\,dr - A_\\theta(r) r \\,d\\theta}{r\\,dr\\, d\\theta} \\\\\n&= \\frac{1}{r}\\frac{\\partial(rA_\\theta)}{\\partial r}\n - \\frac{1}{r} \\frac{\\partial A_r}{\\partial \\theta}\n\\end{align}" }, { "math_id": 28, "text": "\\begin{align}\n\\operatorname{curl} \\mathbf A\n&= (\\operatorname{curl} \\mathbf A)_r \\, \\hat{\\boldsymbol r} + (\\operatorname{curl} \\mathbf A)_\\theta \\, \\hat{\\boldsymbol \\theta} + (\\operatorname{curl} \\mathbf A)_\\phi \\, \\hat{\\boldsymbol \\phi} \\\\[1ex]\n&= \\frac{1}{r\\sin\\theta} \\left(\\frac{\\partial(A_\\phi \\sin\\theta)}{\\partial \\theta}-\\frac{\\partial A_\\theta}{\\partial \\phi} \\right) \\hat{\\boldsymbol r} +\\frac{1}{r} \\left(\\frac{1}{\\sin\\theta}\\frac{\\partial A_r}{\\partial \\phi} - \\frac{\\partial (rA_\\phi)}{\\partial r} \\right) \\hat{\\boldsymbol \\theta} + \\frac{1}{r}\\left(\\frac{\\partial(rA_\\theta)}{\\partial r} - \\frac{\\partial A_r}{\\partial \\theta} \\right) \\hat{\\boldsymbol \\phi}\n\\end{align}" }, { "math_id": 29, "text": "\\mathbf r" }, { "math_id": 30, "text": "\\mathbf u" }, { "math_id": 31, "text": "\\frac{\\partial {\\mathbf r}}{\\partial u} = \\frac{\\partial{s}}{\\partial u} \\mathbf u" }, { "math_id": 32, "text": "u_i" }, { "math_id": 33, "text": "v_j" }, { "math_id": 34, "text": "d\\mathbf r = \\sum_{i} \\frac{\\partial \\mathbf r}{\\partial u_i} \\, du_i = \\sum_{i} \\frac{\\partial s}{\\partial u_i} \\hat{\\mathbf u}_i du_i = \\sum_{j} \\frac{\\partial s}{\\partial v_j} \\hat{\\mathbf v}_j \\, dv_j = \\sum_{j}\\frac{\\partial s}{\\partial v_j} \\hat{\\mathbf v}_j \\sum_{i} \\frac{\\partial v_j}{\\partial u_i} \\, du_i = \\sum_{i} \\sum_{j} \\frac{\\partial s}{\\partial v_j} \\frac{\\partial v_j}{\\partial u_i} \\hat{\\mathbf v}_j \\, du_i." }, { "math_id": 35, "text": "i" }, { "math_id": 36, "text": "i{\\neq}k" }, { "math_id": 37, "text": "\\mathrm d u_k=0" }, { "math_id": 38, "text": "\\mathrm d u_i" }, { "math_id": 39, "text": "\\frac{\\partial s}{\\partial u_i} \\hat{\\mathbf u}_i = \\sum_{j} \\frac{\\partial s}{\\partial v_j} \\frac{\\partial v_j}{\\partial u_i} \\hat{\\mathbf v}_j." } ]
https://en.wikipedia.org/wiki?curid=753145
7532012
Bram van Leer
Dutch mathematician Bram van Leer is Arthur B. Modine Emeritus Professor of aerospace engineering at the University of Michigan, in Ann Arbor. He specializes in "Computational fluid dynamics (CFD)", "fluid dynamics", and "numerical analysis." His most influential work lies in CFD, a field he helped modernize from 1970 onwards. An appraisal of his early work has been given by C. Hirsch (1979). An astrophysicist by education, van Leer made lasting contributions to CFD in his five-part article series “Towards the Ultimate Conservative Difference Scheme (1972-1979),” where he extended Godunov's finite-volume scheme to second order (MUSCL). Also in the series, he developed non-oscillatory interpolation using limiters, an approximate Riemann solver, and discontinuous-Galerkin schemes for unsteady advection. Since joining the University of Michigan's Aerospace Engineering Department (1986), he has worked on convergence acceleration by local preconditioning and multigrid relaxation for Euler and Navier-Stokes problems, unsteady adaptive grids, space-environment modeling, atmospheric flow modeling, extended hydrodynamics for rarefied flows, and discontinuous-Galerkin methods. He retired in 2012, forced to give up research because of progressive blindness. Throughout his career, van Leer's work has had an interdisciplinary character. Starting from astrophysics, he first made an impact on weapons research, followed by aeronautics, then space-weather modeling, atmospheric modeling, surface-water modeling and automotive engine modeling, to name the most important fields. Personal interests. Van Leer is also an accomplished musician, playing the piano at the age of 5 and composing at 7. His musical education includes two years at the Royal Conservatory for Music of The Hague, Netherlands. As a pianist he was featured in the Winter '96 issue of Michigan Engineering (Engineering and the Arts). As a carillonist, he has played the carillon of Burton Memorial Tower on many football Saturdays. He was the world's first and only CJ (carillon-jockey) based on the North Campus carillon, live streaming from the Lurie Tower. In 1993 he gave a full-hour recital on the carillon of the City Hall in Leiden, the town of his alma mater. Van Leer enjoys improvising in the Dutch carillon-playing style; one of his improvisations is included on a 1998 CD featuring both of the University of Michigan's carillons. His carillon composition "Lament" was published in the UM School of Music's carillon music series on the occasion of the annual congress of The Guild of Carillonneurs in North America, Ann Arbor, June 2002. A flute composition by van Leer was performed twice in 1997 by University of Michigan Professor Leone Buyse. Research work. Bram van Leer was a doctoral student in astrophysics at Leiden Observatory (1966–1970) when he got interested in Computational Fluid Dynamics (CFD) for the sake of solving cosmic flow problems. His first major result in CFD was the formulation of the upwind numerical flux function for a hyperbolic system of conservation laws: formula_0 Here the matrix formula_1 appears for the first time in CFD, defined as the matrix that has the same eigenvectors as the flux Jacobian formula_2, but the corresponding eigenvalues are the moduli of those of formula_2. The subscript formula_3 indicates a representative or average value on the interval formula_4; it was no less than 10 years later that Philip L. Roe first presented his much-used averaging formulas.
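For a single linear advection equation the flux function above reduces to a scalar expression in which formula_1 is simply the absolute value of the advection speed. The following Python sketch (an illustration written for this article, not van Leer's own code) evaluates that scalar analogue and shows that it picks up the upwind state.

```python
def upwind_flux(u_left, u_right, a):
    """First-order upwind flux for u_t + a*u_x = 0 at a cell interface:
    the average of the two fluxes minus |a|/2 times the jump in u."""
    f_left, f_right = a * u_left, a * u_right
    return 0.5 * (f_left + f_right) - 0.5 * abs(a) * (u_right - u_left)

print(upwind_flux(1.0, 0.0, a=+2.0))   # 2.0 -> information is taken from the left state
print(upwind_flux(1.0, 0.0, a=-2.0))   # 0.0 -> information is taken from the right state
```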
Next, van Leer succeeded in circumventing Godunov's barrier theorem (i.e., a monotonicity-preserving advection scheme cannot be better than first-order accurate) by limiting the second-order term in the Lax-Wendroff scheme as a function of the non-smoothness of the numerical solution itself. This is a non-linear technique even for a linear equation. Having discovered this basic principle, he planned a series of three articles titled "Towards the ultimate conservative difference scheme", which advanced from scalar non-conservative but non-oscillatory (part I) via scalar conservative non-oscillatory (part II) to conservative non-oscillatory Euler (part III). The finite-difference schemes for the Euler equations turned out to be unattractive because of their many terms; a switch to the finite-volume formulation completely cleared this up and led to Part IV (finite-volume scalar) and, finally, Part V (finite-volume Lagrange and Euler) titled "A second-order sequel to Godunov's method", which is his most cited article (approaching 6000 citations on November 1, 2017). This paper was reprinted in 1997 in the 30th anniversary issue of the Journal of Computational Physics with an introduction by Charles Hirsch. The series contains several original techniques that have found their way into the CFD community. In Part II two limiters are presented, later called by van Leer "double minmod" (after Osher's "minmod" limiter) and its smoothed version "harmonic"; the latter limiter is sometimes referred to in the literature as "van Leer's limiter" (an illustrative sketch of it is given below). Part IV, "A new approach to numerical convection," describes a group of 6 second- and third-order schemes that includes two discontinuous-Galerkin schemes with exact time integration. Van Leer was not the only one to break Godunov's barrier using nonlinear limiting; similar techniques were developed independently around the same time by Boris and by V.P. Kolgan, a Russian researcher unknown in the West. In 2011, van Leer devoted an article to Kolgan's contributions and had Kolgan's 1972 TsAGI report reprinted in translation in the Journal of Computational Physics. After the publication of the series (1972–1979), van Leer spent two years at ICASE (NASA LaRC), where he was engaged by NASA engineers interested in his numerical expertise. This led to van Leer's differentiable flux-vector splitting and the development of the block-structured codes CFL2D and CFL3D, which still are heavily used. Other contributions from these years are the review of upwind methods with Harten and Lax, the AMS workshop paper detailing the differences and resemblances between upwind fluxes and Jameson's flux formula, and the conference paper with Mulder on upwind relaxation methods; the latter includes the concept of Switched Evolution-Relaxation (SER) for automatically choosing the time step in an implicit marching scheme. After permanently moving to the U.S., van Leer's first influential paper was "A comparison of numerical flux formulas for the Euler and Navier-Stokes equations," which analyzes numerical flux functions and their suitability for resolving boundary layers in Navier-Stokes calculations. In 1988, he embarked on a very large project, to achieve steady Euler solutions in O(N) operations by a purely explicit methodology. There were three crucial components to this strategy: 1. Optimally smoothing multistage single-grid schemes for advection 2. Local preconditioning of the Euler equations 3.
Semi-coarsened multigrid relaxation. The first subject was developed in collaboration with his doctoral student, C.H. Tai. The second subject was needed to make the Euler equations resemble a scalar equation as much as possible. The preconditioning was developed with doctoral student W.-T. Lee. In order to apply this to the discrete scheme, crucial modifications had to be made to the original discretization. It turned out that applying the preconditioning to an Euler discretization required a reformulation of the numerical flux function for the sake of preserving accuracy at low Mach numbers. Combining the optimal single-grid schemes with the preconditioned Euler discretization was achieved by doctoral student J. F. Lynn. The same strategy for the Navier-Stokes discretization was pursued by D. Lee. The third component, semi-coarsened multigrid relaxation, was developed by van Leer's former student W. A. Mulder (Mulder 1989). This technique is needed to damp certain combinations of high- and low-frequency modes when the grid is aligned with the flow. In 1994, van Leer teamed up with Darmofal, a post-doctoral fellow at the University of Michigan at the time, to finish the project. The goal of the project was first reached by Darmofal and Siu (Darmofal and Siu 1999), and later was achieved more efficiently by van Leer and Nishikawa. While the multi-grid project was going on, van Leer worked on two more subjects: multi-dimensional Riemann solvers, and time-dependent adaptive Cartesian grids. After conclusion of the multigrid project, van Leer continued to work on local preconditioning of the Navier-Stokes equations together with C. Depcik. A 1-D preconditioning was derived that is optimal for all Mach and Reynolds numbers. There is, however, a narrow domain in the (M, Re)-plane where the preconditioned equations admit a growing mode. In practice, such a mode, if it were to arise, should be damped by the time-marching scheme, e.g., an implicit scheme. In the last decade of his career, van Leer occupied himself with extended hydrodynamics and the discontinuous-Galerkin method. The goal of the first project was to describe rarefied flow up to and including intermediate Knudsen numbers (Kn~1) by a hyperbolic-relaxation system. This works well for subsonic flows and weak shock waves, but stronger shock waves acquire the wrong internal structure. For low-speed flow, van Leer's doctoral student H. L. Khieu tested the accuracy of the hyperbolic-relaxation formulation by comparing simulations with the numerical results of a full-kinetic solver based on the Boltzmann equation. Recent research has demonstrated that a system of second-order PDEs derived from the hyperbolic relaxation systems can be entirely successful; for details see Myong Over-reach 2014. The second project was the development of discontinuous Galerkin (DG) methods for diffusion operators. It started with the discovery of the recovery method for representing the 1D diffusion operator. Starting in 2004, the recovery-based DG (RDG) has been shown to attain an accuracy of order 3p+1 or 3p+2 for even or odd polynomial-space degree p. This result holds for Cartesian grids in 1, 2, or 3 dimensions, for linear and non-linear diffusion equations that may or may not contain shear terms. On unstructured grids, the RDG was predicted to achieve the order of accuracy of 2p+2; this research unfortunately was not completed before van Leer retired.
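The "harmonic" limiter mentioned above in connection with the Towards series is usually written in the literature as φ(r) = (r + |r|) / (1 + |r|), where r is the ratio of consecutive solution differences. The following Python sketch (an illustrative rendering based on that commonly quoted form, not van Leer's own code) evaluates it; the limiter vanishes near extrema (r ≤ 0) and approaches 2 for very smooth, one-sided data.

```python
def van_leer_limiter(r):
    """Smooth ('harmonic') limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + abs(r)) / (1.0 + abs(r))

for r in (-1.0, 0.0, 0.5, 1.0, 4.0):
    print(r, van_leer_limiter(r))
# r <= 0 gives 0 (no correction near an extremum), r = 1 gives 1 (smooth data),
# and the value tends to 2 as r grows large.
```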
Van Leer's early work, especially the series “Towards the ultimate conservative difference scheme” motivated by the needs of astrophysical modeling, has influenced a wide range of other disciplines; such interdisciplinary knowledge transfer is not self-evident. Exporting scientific ideas from one discipline to another is best done through personal contact. For instance, Van Leer's presence at NASA Langley Research Center from 1979 to 1981 and then in the summers of '81 to '83 led to the development of NASA's CFL2D code and ultimately CFL3D. The transition of ideas between disciplines through publications is a much slower process, as most researchers do not read journals based in fields other than their own expertise. A case in point is the way Van Leer's ideas, contained in the series "Towards the Ultimate Conservative Difference Scheme," made their way into Atmospheric General Circulation Modeling (GCM). Although published in the Journal of Computational Physics, which in its early years published key atmospheric research articles, the series seems to have gone unnoticed by the GCM community. Thus, the second-order DG advection Scheme III from Towards IV was rediscovered by G.L. Russell and J.A. Lerner in 1981, while the third-order DG advection scheme VI was rediscovered by M.J. Prather in 1986. But monotonicity-preserving limiters were not included in these works. It was not until the atmospheric scientist R.B. Rood from NASA's Goddard Space Flight Center published a comprehensive review of publications on advection schemes in 1987 that Van Leer's articles were unlocked to the GCM community. The first application of a monotonicity-preserving advection scheme to atmospheric transport was due to D.J. Allen, A.R. Douglass, R.B. Rood, and P.D. Guthrie in 1991. Subsequently, in 1997, Shian-Jiann (S. J.) Lin and Rood, both at NASA Goddard, published a predictor-corrector version of the second-order Godunov method for use in atmospheric dynamics and implemented it in a shallow-water model. Finally, Lin, now at the Princeton Geophysical Fluid Dynamics Laboratory (GFDL), put these ideas into a full non-hydrostatic atmospheric description with Eulerian horizontal and Lagrangian vertical discretizations, named FV3 (Finite-Volume Cubed-Sphere Dynamical Core). This dynamical core has found its way into the main national weather- and climate-prediction codes. Specifically, FV3 has been chosen as the dynamical core for the Next Generation Global Prediction System project (NGGPS), the latest NCAR Community Climate System Model CESM4, the NOAA-GFDL CM4.0 model, and NASA's GEOS5 model. In addition to the above narrative, we list some subjects and papers related to van Leer's interdisciplinary research efforts: Three significant review papers by van Leer are: In 2010, van Leer received the AIAA Fluid Dynamics Award for his lifetime achievement. On this occasion, van Leer presented a plenary lecture titled “History of CFD Part II,” which covers the period from 1970 to 1995. Below is the poster van Leer and his doctoral student Lo designed for this occasion. Education and training. Source: "https://aero.engin.umich.edu/people/bram-van-leer/" Professional experience. Source: "https://aero.engin.umich.edu/people/bram-van-leer/" Honors and awards. Source: "https://aero.engin.umich.edu/people/bram-van-leer/" Recent publications. The following articles all relate to the discontinuous Galerkin method for diffusion equations: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " F^\\hbox{up} = \\frac{1}{2} (F_j + F_{j+1} ) - \\frac{1}{2}|A|_{j+\\frac{1}{2}} (u_{j+1} - u_j)." }, { "math_id": 1, "text": "|A|" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "j+\\frac{1}{2}" }, { "math_id": 4, "text": "(x_j,x_{j+1})" } ]
https://en.wikipedia.org/wiki?curid=7532012
753225
Uses of trigonometry
Applications of trigonometry Amongst the lay public of non-mathematicians and non-scientists, trigonometry is known chiefly for its application to measurement problems, yet is also often used in ways that are far more subtle, such as its place in the theory of music; still other uses are more technical, such as in number theory. The mathematical topics of Fourier series and Fourier transforms rely heavily on knowledge of trigonometric functions and find application in a number of areas, including statistics. Thomas Paine's statement. In Chapter XI of The Age of Reason, the American revolutionary and Enlightenment thinker Thomas Paine wrote: "The scientific principles that man employs to obtain the foreknowledge of an eclipse, or of any thing else relating to the motion of the heavenly bodies, are contained chiefly in that part of science that is called trigonometry, or the properties of a triangle, which, when applied to the study of the heavenly bodies, is called astronomy; when applied to direct the course of a ship on the ocean, it is called navigation; when applied to the construction of figures drawn by a ruler and compass, it is called geometry; when applied to the construction of plans of edifices, it is called architecture; when applied to the measurement of any portion of the surface of the earth, it is called land-surveying. In fine, it is the soul of science. It is an eternal truth: it contains the "mathematical demonstration" of which man speaks, and the extent of its uses are unknown." History. Great Trigonometrical Survey. From 1802 until 1871, the Great Trigonometrical Survey was a project to survey the Indian subcontinent with high precision. Starting from the coastal baseline, mathematicians and geographers triangulated vast distances across the country. One of the key achievements was measuring the heights of the Himalayan mountains, and determining that Mount Everest is the highest point on Earth. Historical use for multiplication. For the 25 years preceding the invention of the logarithm in 1614, prosthaphaeresis was the only known generally applicable way of approximating products quickly. It used the identities for the trigonometric functions of sums and differences of angles in terms of the products of trigonometric functions of those angles. Some modern uses. Scientific fields that make use of trigonometry include: acoustics, architecture, astronomy, cartography, civil engineering, geophysics, crystallography, electrical engineering, electronics, land surveying and geodesy, many physical sciences, mechanical engineering, machining, medical imaging, number theory, oceanography, optics, pharmacology, probability theory, seismology, statistics, and visual perception. That these fields involve trigonometry does not mean knowledge of trigonometry is needed in order to learn anything about them. It "does" mean that "some" things in these fields cannot be understood without trigonometry. For example, a professor of music may perhaps know nothing of mathematics, but would probably know that Pythagoras was the earliest known contributor to the mathematical theory of music. In "some" of the fields of endeavor listed above it is easy to imagine how trigonometry could be used. For example, in navigation and land surveying, the occasions for the use of trigonometry are in at least some cases simple enough that they can be described in a beginning trigonometry textbook.
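As a simple illustration of the surveying use just mentioned (a made-up textbook-style example, not data from the Great Trigonometrical Survey), the height of a distant peak can be estimated from two angle-of-elevation sightings taken a known baseline apart; the Python sketch below does the arithmetic.

```python
import math

def height_from_two_sightings(baseline, angle_near_deg, angle_far_deg):
    """Height of a peak from elevation angles measured at two stations on a line
    toward it, separated by `baseline` (result is in the same units as the baseline)."""
    tan_near = math.tan(math.radians(angle_near_deg))
    tan_far = math.tan(math.radians(angle_far_deg))
    # h = d*tan_near and h = (d + baseline)*tan_far, with d the unknown distance
    # from the nearer station; eliminate d and solve for h.
    distance_near = baseline * tan_far / (tan_near - tan_far)
    return distance_near * tan_near

print(round(height_from_two_sightings(1000.0, 35.0, 30.0), 1))  # illustrative numbers
```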
In the case of music theory, the application of trigonometry is related to work begun by Pythagoras, who observed that the sounds made by plucking two strings of different lengths are consonant if both lengths are small integer multiples of a common length. The resemblance between the shape of a vibrating string and the graph of the sine function is no mere coincidence. In oceanography, the resemblance between the shapes of some waves and the graph of the sine function is also not coincidental. In some other fields, among them climatology, biology, and economics, there are seasonal periodicities. The study of these often involves the periodic nature of the sine and cosine functions. Fourier series. Many fields make use of trigonometry in more advanced ways than can be discussed in a single article. Often those involve what are called the Fourier series, after the 18th- and 19th-century French mathematician and physicist Joseph Fourier. Fourier series have a surprisingly diverse array of applications in many scientific fields, in particular in all of the phenomena involving seasonal periodicities mentioned above, and in wave motion, and hence in the study of radiation, of acoustics, of seismology, of modulation of radio waves in electronics, and of electric power engineering. A Fourier series is a sum of this form: formula_0 where each of the squares (formula_1) is a different number, and one is adding infinitely many terms. Fourier used these for studying heat flow and diffusion (diffusion is the process whereby, when you drop a sugar cube into a gallon of water, the sugar gradually spreads through the water, a pollutant spreads through the air, or any dissolved substance spreads through any fluid). Fourier series are also applicable to subjects whose connection with wave motion is far from obvious. One ubiquitous example is digital compression whereby images, audio and video data are compressed into a much smaller size which makes their transmission feasible over telephone, internet and broadcast networks. Another example, mentioned above, is diffusion. Among others are: the geometry of numbers, isoperimetric problems, recurrence of random walks, quadratic reciprocity, the central limit theorem, Heisenberg's inequality. Fourier transforms. A more abstract concept than Fourier series is the idea of the Fourier transform. Fourier transforms involve integrals rather than sums, and are used in a similarly diverse array of scientific fields. Many natural laws are expressed by relating "rates of change" of quantities to the quantities themselves. For example: The rate of population change is sometimes jointly proportional to (1) the present population and (2) the amount by which the present population falls short of the carrying capacity. This kind of relationship is called a differential equation. If, given this information, one tries to express population as a function of time, one is trying to "solve" the differential equation. Fourier transforms may be used to convert some differential equations to algebraic equations for which methods of solving them are known. Fourier transforms have many uses. In almost any scientific context in which the words spectrum, harmonic, or resonance are encountered, Fourier transforms or Fourier series are nearby. Statistics, including mathematical psychology. Intelligence quotients are sometimes held to be distributed according to a bell-shaped curve.
About 40% of the area under the curve is in the interval from 100 to 120; correspondingly, about 40% of the population scores between 100 and 120 on IQ tests. Nearly 9% of the area under the curve is in the interval from 120 to 140; correspondingly, about 9% of the population scores between 120 and 140 on IQ tests, etc. Similarly, many other things are distributed according to the "bell-shaped curve", including measurement errors in many physical measurements. Why the ubiquity of the "bell-shaped curve"? There is a theoretical reason for this, and it involves Fourier transforms and hence trigonometric functions. That is one of a variety of applications of Fourier transforms to statistics. Trigonometric functions are also applied when statisticians study seasonal periodicities, which are often represented by Fourier series. Number theory. There is a hint of a connection between trigonometry and number theory. Loosely speaking, one could say that number theory deals with qualitative properties rather than quantitative properties of numbers. For example, consider the fractions between 0 and 1 whose denominator is 42: formula_2 Discard the ones that are not in lowest terms; keep only those that are in lowest terms: formula_3 Then bring in trigonometry: formula_4 The value of the sum is −1, because 42 has an "odd" number of prime factors and none of them is repeated: 42 = 2 × 3 × 7. (If there had been an "even" number of non-repeated factors then the sum would have been 1; if there had been any repeated prime factors (e.g., 60 = 2 × 2 × 3 × 5) then the sum would have been 0; the sum is the Möbius function evaluated at 42.) This hints at the possibility of applying Fourier analysis to number theory. Solving non-trigonometric equations. Various types of equations can be solved using trigonometry. For example, a linear difference equation or linear differential equation with constant coefficients has solutions expressed in terms of the eigenvalues of its characteristic equation; if some of the eigenvalues are complex, the complex terms can be replaced by trigonometric functions of real terms, showing that the dynamic variable exhibits oscillations. Similarly, cubic equations with three real solutions have an algebraic solution that is unhelpful in that it contains cube roots of complex numbers; again an alternative solution exists in terms of trigonometric functions of real terms. References. <templatestyles src="Reflist/styles.css" />
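The cosine sum from the number theory example above is easy to check numerically. The following sketch (illustrative only; the function name is made up) evaluates the sum over the reduced fractions with a given denominator and recovers the Möbius values quoted in the text.

```python
import math

def mobius_cosine_sum(n):
    """Sum of cos(2*pi*k/n) over the fractions k/n in lowest terms (1 <= k < n)."""
    return sum(math.cos(2 * math.pi * k / n)
               for k in range(1, n) if math.gcd(k, n) == 1)

print(round(mobius_cosine_sum(42)))  # -1: 42 = 2*3*7 has an odd number of non-repeated prime factors
print(round(mobius_cosine_sum(60)))  #  0: 60 = 2*2*3*5 has a repeated prime factor
print(round(mobius_cosine_sum(35)))  #  1: 35 = 5*7 has an even number of non-repeated prime factors
```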
[ { "math_id": 0, "text": " \\square + \\underbrace{\\square \\cos\\theta + \\square\\sin\\theta}_1 + \\underbrace{\\square \\cos(2\\theta) + \\square\\sin(2\\theta)}_2 + \\underbrace{\\square \\cos(3\\theta) + \\square\\sin(3\\theta)}_3 + \\cdots \\, " }, { "math_id": 1, "text": "\\square" }, { "math_id": 2, "text": "\n\\frac{1}{42}, \\qquad \\frac{2}{42}, \\qquad \\frac{3}{42}, \\qquad\n\\dots\\dots, \\qquad \\frac{39}{42}, \\qquad \\frac{40}{42}, \\qquad\n\\frac{41}{42}.\n" }, { "math_id": 3, "text": "\n\\frac{1}{42}, \\qquad \\frac{5}{42}, \\qquad \\frac{11}{42}, \\qquad\n\\dots, \\qquad \\frac{31}{42}, \\qquad \\frac{37}{42}, \\qquad\n\\frac{41}{42}.\n" }, { "math_id": 4, "text": "\n\\cos\\left(2\\pi\\cdot\\frac{1}{42}\\right)+\n\\cos\\left(2\\pi\\cdot\\frac{5}{42}\\right)+\n\\cdots+\n\\cos\\left(2\\pi\\cdot\\frac{37}{42}\\right)+\n\\cos\\left(2\\pi\\cdot\\frac{41}{42}\\right)\n" } ]
https://en.wikipedia.org/wiki?curid=753225
753230
Alternating Turing machine
Abstract computation model In computational complexity theory, an alternating Turing machine (ATM) is a non-deterministic Turing machine (NTM) with a rule for accepting computations that generalizes the rules used in the definition of the complexity classes NP and co-NP. The concept of an ATM was set forth by Chandra and Stockmeyer and independently by Kozen in 1976, with a joint journal publication in 1981. Definitions. Informal description. The definition of NP uses the "existential mode" of computation: if "any" choice leads to an accepting state, then the whole computation accepts. The definition of co-NP uses the "universal mode" of computation: only if "all" choices lead to an accepting state does the whole computation accept. An alternating Turing machine (or to be more precise, the definition of acceptance for such a machine) alternates between these modes. An alternating Turing machine is a non-deterministic Turing machine whose states are divided into two sets: existential states and universal states. An existential state is accepting if some transition leads to an accepting state; a universal state is accepting if every transition leads to an accepting state. (Thus a universal state with no transitions accepts unconditionally; an existential state with no transitions rejects unconditionally). The machine as a whole accepts if the initial state is accepting. Formal definition. Formally, a (one-tape) alternating Turing machine is a 5-tuple formula_0 where formula_1 is the finite set of states, formula_2 is the finite tape alphabet, formula_3 is the transition function ("L" shifts the head left and "R" shifts the head right), formula_4 is the initial state, and formula_5 specifies the type of each state. If "M" is in a state formula_6 with formula_7 then that configuration is said to be "accepting", and if formula_8 the configuration is said to be "rejecting". A configuration with formula_9 is said to be accepting if all configurations reachable in one step are accepting, and rejecting if some configuration reachable in one step is rejecting. A configuration with formula_10 is said to be accepting when there exists some configuration reachable in one step that is accepting and rejecting when all configurations reachable in one step are rejecting (this is the type of all states in a classical NTM except the final state). "M" is said to accept an input string "w" if the initial configuration of "M" (the state of "M" is formula_11, the head is at the left end of the tape, and the tape contains "w") is accepting, and to reject if the initial configuration is rejecting. Note that it is impossible for a configuration to be both accepting and rejecting; however, some configurations may be neither accepting nor rejecting, due to the possibility of nonterminating computations. Resource bounds. When deciding if a configuration of an ATM is accepting or rejecting using the above definition, it is not always necessary to examine all configurations reachable from the current configuration. In particular, an existential configuration can be labelled as accepting if any successor configuration is found to be accepting, and a universal configuration can be labelled as rejecting if any successor configuration is found to be rejecting. An ATM decides a formal language in time formula_12 if, on any input of length n, examining configurations only up to formula_12 steps is sufficient to label the initial configuration as accepting or rejecting. An ATM decides a language in space formula_13 if examining configurations that do not modify tape cells beyond the formula_13 cell from the left is sufficient.
A language that is decided by some ATM in time formula_14 for some constant formula_15 is said to be in the class formula_16, and a language decided in space formula_17 is said to be in the class formula_18. Example. Perhaps the most natural problem for alternating machines to solve is the quantified Boolean formula problem, which is a generalization of the Boolean satisfiability problem in which each variable can be bound by either an existential or a universal quantifier. The alternating machine branches existentially to try all possible values of an existentially quantified variable and universally to try all possible values of a universally quantified variable, in the left-to-right order in which they are bound. After deciding a value for all quantified variables, the machine accepts if the resulting Boolean formula evaluates to true, and rejects if it evaluates to false. Thus at an existentially quantified variable the machine is accepting if a value can be substituted for the variable that renders the remaining problem satisfiable, and at a universally quantified variable the machine is accepting if every value that can be substituted leaves the remaining problem satisfiable. Such a machine decides quantified Boolean formulas in time formula_19 and space formula_20. The Boolean satisfiability problem can be viewed as the special case where all variables are existentially quantified, allowing ordinary nondeterminism, which uses only existential branching, to solve it efficiently. Complexity classes and comparison to deterministic Turing machines. The following complexity classes are useful to define for ATMs: formula_21 (alternating polynomial time), formula_22 (alternating polynomial space), and formula_23 (alternating exponential time). These are similar to the definitions of P, PSPACE, and EXPTIME, considering the resources used by an ATM rather than a deterministic Turing machine. Chandra, Kozen, and Stockmeyer proved the theorems formula_24, formula_25 and formula_26 when formula_27 and formula_28. A more general form of these relationships is expressed by the parallel computation thesis. Bounded alternation. Definition. An alternating Turing machine with "k" alternations is an alternating Turing machine that switches from an existential to a universal state or vice versa no more than "k"−1 times. (It is an alternating Turing machine whose states are divided into "k" sets. The states in even-numbered sets are universal and the states in odd-numbered sets are existential (or vice versa). The machine has no transitions between a state in set "i" and a state in set "j" < "i".) formula_29 is the class of languages decidable in time formula_30 by a machine beginning in an existential state and alternating at most formula_31 times. It is called the jth level of the formula_32 hierarchy. formula_33 is defined in the same way, but beginning in a universal state; it consists of the complements of the languages in formula_34. formula_35 is defined similarly for space bounded computation. Example. Consider the circuit minimization problem: given a circuit "A" computing a Boolean function "f" and a number "n", determine if there is a circuit with at most "n" gates that computes the same function "f". An alternating Turing machine, with one alternation, starting in an existential state, can solve this problem in polynomial time (by guessing a circuit "B" with at most "n" gates, then switching to a universal state, guessing an input, and checking that the output of "B" on that input matches the output of "A" on that input). Collapsing classes.
It is said that a hierarchy "collapses" to level j if every language in level formula_36 of the hierarchy is in its level j. As a corollary of the Immerman–Szelepcsényi theorem, the logarithmic space hierarchy collapses to its first level. As a corollary the formula_37 hierarchy collapses to its first level when formula_38 is space constructible. Special cases. An alternating Turing machine in polynomial time with "k" alternations, starting in an existential (respectively, universal) state can decide all the problems in the class formula_39 (respectively, formula_40). These classes are sometimes denoted formula_41 and formula_42, respectively. See the polynomial hierarchy article for details. Another special case of time hierarchies is the logarithmic hierarchy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
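Returning to the quantified Boolean formula example above, the accepting condition of an alternating machine can be sketched directly as a recursive evaluator: existential quantifiers accept when some branch accepts, universal quantifiers when all branches accept. The code below is an illustration of this rule only (it explores the whole assignment tree, so it runs in exponential time, unlike the polynomial-time alternating machine); the function and variable names are made up for the example.

```python
def eval_qbf(prefix, matrix, env=None):
    """prefix: list of ('E', var) or ('A', var) pairs, left to right; matrix: assignment dict -> bool."""
    env = dict(env or {})
    if not prefix:                       # no quantifiers left: evaluate the matrix
        return matrix(env)
    quantifier, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**env, var: value}) for value in (False, True))
    return any(branches) if quantifier == 'E' else all(branches)   # existential vs universal state

# exists x . forall y . (x or y) and (x or not y)   -- true, witnessed by x = True
print(eval_qbf([('E', 'x'), ('A', 'y')],
               lambda e: (e['x'] or e['y']) and (e['x'] or not e['y'])))   # True
# forall x . exists y . (x != y)   -- true, since y can always be chosen different from x
print(eval_qbf([('A', 'x'), ('E', 'y')], lambda e: e['x'] != e['y']))      # True
```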
[ { "math_id": 0, "text": "M=(Q,\\Gamma,\\delta,q_0,g)" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "\\Gamma" }, { "math_id": 3, "text": "\\delta:Q\\times\\Gamma\\rightarrow\\mathcal{P}(Q\\times\\Gamma\\times\\{L,R\\})" }, { "math_id": 4, "text": "q_0\\in Q" }, { "math_id": 5, "text": "g:Q\\rightarrow\\{\\wedge,\\vee,accept,reject\\}" }, { "math_id": 6, "text": "q\\in Q" }, { "math_id": 7, "text": "g(q)=accept" }, { "math_id": 8, "text": "g(q)=reject" }, { "math_id": 9, "text": "g(q)=\\wedge" }, { "math_id": 10, "text": "g(q)=\\vee" }, { "math_id": 11, "text": "q_0" }, { "math_id": 12, "text": "t(n)" }, { "math_id": 13, "text": "s(n)" }, { "math_id": 14, "text": "c\\cdot t(n)" }, { "math_id": 15, "text": "c>0" }, { "math_id": 16, "text": "\\mathsf{ATIME}(t(n))" }, { "math_id": 17, "text": "c\\cdot s(n)" }, { "math_id": 18, "text": "\\mathsf{ASPACE}(s(n))" }, { "math_id": 19, "text": "n^2" }, { "math_id": 20, "text": "n" }, { "math_id": 21, "text": "\\mathsf{AP}=\\bigcup_{k>0}\\mathsf{ATIME}(n^k)" }, { "math_id": 22, "text": "\\mathsf{APSPACE}=\\bigcup_{k>0}\\mathsf{ASPACE}(n^k)" }, { "math_id": 23, "text": "\\mathsf{AEXPTIME}=\\bigcup_{k>0}\\mathsf{ATIME}(2^{n^k})" }, { "math_id": 24, "text": "\\mathsf{ASPACE}(f(n))=\\bigcup_{c>0}\\mathsf{DTIME}(2^{cf(n)})=\\mathsf{DTIME}(2^{O(f(n))})" }, { "math_id": 25, "text": "\\mathsf{ATIME}(g(n))\\subseteq \\mathsf{DSPACE}(g(n))" }, { "math_id": 26, "text": "\\mathsf{NSPACE}(g(n))\\subseteq\\bigcup_{c>0}\\mathsf{ATIME}(c\\times g(n)^2)," }, { "math_id": 27, "text": "f(n)\\ge\\log(n)" }, { "math_id": 28, "text": "g(n)\\ge\\log(n)" }, { "math_id": 29, "text": "\\mathsf{ATIME}(C,j)=\\Sigma_j \\mathsf{TIME}(C)" }, { "math_id": 30, "text": "f\\in C" }, { "math_id": 31, "text": "j-1" }, { "math_id": 32, "text": "\\mathsf{TIME}(C)" }, { "math_id": 33, "text": "\\mathsf{coATIME}(C,j)=\\Pi_j \\mathsf{TIME}(C)" }, { "math_id": 34, "text": "\\mathsf{ATIME}(f,j)" }, { "math_id": 35, "text": "\\mathsf{ASPACE}(C,j)=\\Sigma_j \\mathsf{SPACE}(C)" }, { "math_id": 36, "text": "k\\ge j" }, { "math_id": 37, "text": "\\mathsf{SPACE}(f)" }, { "math_id": 38, "text": "f=\\Omega(\\log)" }, { "math_id": 39, "text": "\\Sigma_k^p" }, { "math_id": 40, "text": "\\Pi_k^p" }, { "math_id": 41, "text": "\\Sigma_k\\rm{P}" }, { "math_id": 42, "text": "\\Pi_k\\rm{P}" } ]
https://en.wikipedia.org/wiki?curid=753230
7532405
Nested intervals
In mathematics, a sequence of nested intervals can be intuitively understood as an ordered collection of intervals formula_0 on the real number line with natural numbers formula_1 as an index. In order for a sequence of intervals to be considered nested intervals, two conditions have to be met: first, every interval in the sequence is contained in the previous one (formula_2 is always a subset of formula_0); second, the lengths of the intervals become arbitrarily small (falling below every possible threshold formula_3 after a certain index formula_4). In other words, the left bound of the interval formula_0 can only increase (formula_5), and the right bound can only decrease (formula_6). Historically - long before anyone defined nested intervals in a textbook - people implicitly constructed such nestings for concrete calculation purposes. For example, the ancient Babylonians discovered a method for computing square roots of numbers. In contrast, the famed Archimedes constructed sequences of polygons that inscribed and circumscribed a unit circle, in order to get a lower and upper bound for the circle's circumference - which is the circle number Pi (formula_7). The central question to be posed is the nature of the intersection over all the natural numbers, or, put differently, the set of numbers that are found in every interval formula_0 (thus, for all formula_8). In modern mathematics, nested intervals are used as a construction method for the real numbers (in order to complete the field of rational numbers). Historic motivation. As stated in the introduction, historic users of mathematics discovered the nesting of intervals and closely related algorithms as methods for specific calculations. Some variations and modern interpretations of these ancient techniques will be introduced here: Computation of square roots. When trying to find the square root of a number formula_9, one can be certain that formula_10, which gives the first interval formula_11, in which formula_12 has to be found. If one knows the next higher perfect square formula_13, one can get an even better candidate for the first interval: formula_14. The other intervals formula_15 can now be defined recursively by looking at the sequence of midpoints formula_16. Given the interval formula_0 is already known (starting at formula_17), one can define formula_18 To put this into words, one can compare the midpoint of formula_19 to formula_20 in order to determine whether the midpoint is smaller or larger than formula_20. If the midpoint is smaller, one can set it as the lower bound of the next interval formula_21, and if the midpoint is larger, one can set it as the upper bound of the next interval. This guarantees that formula_22. With this construction the intervals are nested and their length formula_23 gets halved in every step of the recursion. Therefore, it is possible to get lower and upper bounds for formula_24 with arbitrarily good precision (given enough computational time). One can also compute formula_25, when formula_26. In this case formula_27, and the algorithm can be used by setting formula_28 and calculating the reciprocal after the desired level of precision has been acquired. Example. To demonstrate this algorithm, here is an example of how it can be used to find the value of formula_29. Note that since formula_30, the first interval for the algorithm can be defined as formula_31, since formula_29 must certainly be found within this interval.
Thus, using this interval, one can continue to the next step of the algorithm by calculating the midpoint of the interval, determining whether the square of the midpoint is greater than or less than 19, and setting the boundaries of the next interval accordingly before repeating the process: formula_32 Each time a new midpoint is calculated, the range of possible values for formula_29 can be constricted so that the values that remain within the interval are closer and closer to the actual value of formula_33. That is to say, each successive change in the bounds of the interval within which formula_29 must lie allows the value of formula_29 to be estimated with a greater precision, either by increasing the lower bounds of the interval or decreasing the upper bounds of the interval. This procedure can be repeated as many times as needed to attain the desired level of precision. Theoretically, by repeating the steps indefinitely, one can arrive at the true value of this square root. Heron's method. The Babylonian method uses an even more efficient algorithm that yields accurate approximations of formula_20 for formula_34 even faster. The modern description using nested intervals is similar to the algorithm above, but instead of using a sequence of midpoints, one uses a sequence formula_35 given by formula_36. This results in a sequence of intervals given by formula_37 and formula_38, where formula_39; these intervals provide accurate upper and lower bounds for formula_20 very quickly. In practice, only formula_40 has to be considered, which converges to formula_20 (as does of course the lower interval bound). This algorithm is a special case of Newton's method. Archimedes' circle measurement. Lower and upper bounds for the circumference of a circle can be obtained with inscribed and circumscribed regular polygons. When examining a circle with diameter formula_41, the circumference is (by definition of Pi) the circle number formula_7. Around 250 BCE Archimedes of Syracuse started with regular hexagons, whose side lengths (and therefore circumference) can be directly calculated from the circle diameter. Furthermore, a way to compute the side length of a regular formula_42-gon from the previous formula_43-gon can be found, starting at the regular hexagon (formula_44-gon). By successively doubling the number of edges until reaching 96-sided polygons, Archimedes reached an interval with formula_45. The upper bound formula_46 is still often used as a rough but pragmatic approximation of formula_7. Around the year 1600 CE, Archimedes' method was still the gold standard for calculating Pi and was used by Dutch mathematician Ludolph van Ceulen to compute more than thirty digits of formula_7, which took him decades. Soon after, more powerful methods for the computation were found. Other implementations. Early uses of sequences of nested intervals (or what can be described as such with modern mathematics) can be found in the predecessors of calculus (differentiation and integration). In computer science, sequences of nested intervals are used in algorithms for numerical computation. For example, the bisection method can be used for calculating the roots of continuous functions. In contrast to mathematically infinite sequences, an applied computational algorithm terminates at some point, when the desired zero has been found or sufficiently well approximated. The construction of the real numbers.
In mathematical analysis, nested intervals provide one method of axiomatically introducing the real numbers as the completion of the rational numbers, being a necessity for discussing the concepts of continuity and differentiability. Historically, Isaac Newton's and Gottfried Wilhelm Leibniz's discovery of differential and integral calculus from the late 1600s posed a huge challenge for mathematicians trying to prove their methods rigorously, despite their success in physics, engineering and other sciences. The axiomatic description of nested intervals (or an equivalent axiom) has become an important foundation for the modern understanding of calculus. In the context of this article, formula_47 in conjunction with formula_48 and formula_49 is an Archimedean ordered field, meaning the axioms of order and the Archimedean property hold. Definition. Let formula_50 be a sequence of closed intervals of the type formula_51, where formula_52 denotes the length of such an interval. One can call formula_50 a sequence of nested intervals, if the following two properties hold: (1) formula_53 and (2) formula_54. Put into words, property 1 means that the intervals are nested according to their index. The second property formalizes the notion that interval sizes get arbitrarily small; meaning that for an arbitrary constant formula_55 one can always find an interval (with index formula_4) with a length strictly smaller than that number formula_3. It is also worth noting that property 1 immediately implies that every interval with an index formula_56 must also have a length formula_57. Remark. Note that some authors refer to such interval-sequences, satisfying both properties above, as "shrinking nested intervals". In this case a sequence of nested intervals refers to a sequence that only satisfies property 1. Axiom of completeness. If formula_50 is a sequence of nested intervals, there always exists a real number that is contained in every interval formula_0. In formal notation this axiom guarantees that formula_58. Theorem. The intersection of each sequence formula_50 of nested intervals contains exactly one real number formula_59. "Proof:" This statement can easily be verified by contradiction. Assume that there exist two different numbers formula_60. From formula_61 it follows that they differ by formula_62 Since both numbers have to be contained in every interval, it follows that formula_63 for all formula_8. This contradicts property 2 from the definition of nested intervals; therefore, the intersection can contain at most one number formula_59. The completeness axiom guarantees that such a real number formula_59 exists. formula_64 Direct consequences of the axiom. Existence of roots. By generalizing the algorithm shown above for square roots, one can prove that in the real numbers, the equation formula_67 can always be solved for formula_68. This means there exists a unique real number formula_69, such that formula_70. Comparing to the section above, one achieves a sequence of nested intervals for the formula_71-th root of formula_59, namely formula_72, by looking at whether the midpoint formula_73 of the formula_43-th interval is lower or equal or greater than formula_74. Existence of infimum and supremum in bounded sets. Definition. If formula_75 has an upper bound, i.e. there exists a number formula_76, such that formula_77 for all formula_78, one can call the number formula_79 the supremum of formula_80, if the following two conditions hold: (1) formula_82 and (2) formula_83. Only one such number formula_81 can exist.
Analogously one can define the infimum (formula_84) of a set formula_85 that is bounded from below, as the greatest lower bound of that set. Theorem. Each set formula_75 has a supremum (infimum), if it is bounded from above (below). "Proof:" Without loss of generality one can look at a set formula_75 that has an upper bound. One can now construct a sequence formula_50 of nested intervals formula_51, that has the following two properties: (1) every formula_86 is an upper bound of formula_80, and (2) formula_87 is never an upper bound of formula_80. The construction follows a recursion by starting with any number formula_88 that is not an upper bound (e.g. formula_89, where formula_90 and an arbitrary upper bound formula_91 of formula_80). Given formula_51 for some formula_8 one can compute the midpoint formula_92 and define formula_93 Note that this interval sequence is well defined and obviously a sequence of nested intervals by construction. Now let formula_94 be the number in every interval (whose existence is guaranteed by the axiom). formula_81 is an upper bound of formula_80, otherwise there exists a number formula_78, such that formula_95. Furthermore, this would imply the existence of an interval formula_96 with formula_97, from which formula_98 follows, due to formula_99 also being an element of formula_100. But this is a contradiction to property 1 of the construction (meaning formula_101 for all formula_102). Therefore formula_81 is in fact an upper bound of formula_80. Assume that there exists a smaller upper bound formula_103 of formula_80. Since formula_50 is a sequence of nested intervals, the interval lengths get arbitrarily small; in particular, there exists an interval with a length smaller than formula_104. But from formula_105 one gets formula_106 and therefore formula_107. Following the rules of this construction, formula_87 would have to be an upper bound of formula_80, contradicting property 2 of the construction. In two steps, it has been shown that formula_81 is an upper bound of formula_80 and that a smaller upper bound cannot exist. Therefore formula_81 is the supremum of formula_80 by definition. Remark. As was seen, the existence of suprema and infima of bounded sets is a consequence of the completeness of formula_47. In effect the two are actually equivalent, meaning that either of the two can be introduced axiomatically. "Proof:" Let formula_50 with formula_51 be a sequence of nested intervals. Then the set formula_108 is bounded from above, where every formula_86 is an upper bound. This implies that the least upper bound formula_79 fulfills formula_109 for all formula_8. Therefore formula_105 for all formula_8, respectively formula_110. Further consequences. After formally defining the convergence of sequences and accumulation points of sequences, one can also prove the Bolzano–Weierstrass theorem using nested intervals. In a follow-up, the fact that Cauchy sequences are convergent (and that all convergent sequences are Cauchy sequences) can be proven. This in turn allows for a proof of the completeness property above, showing their equivalence. Further discussion of related aspects. Without specifying what is meant by interval, all that can be said about the intersection formula_111 over all the naturals (i.e. the set of all points common to each interval) is that it is either the empty set formula_66, a point on the number line (called a singleton formula_112), or some interval. The possibility of an empty intersection can be illustrated by looking at a sequence of open intervals formula_113.
In this case, the empty set formula_66 results from the intersection formula_111. This result comes from the fact that, for any number formula_114 there exists some value of formula_8 (namely any formula_115), such that formula_116. This is given by the Archimedean property of the real numbers. Therefore, no matter how small formula_117, one can always find intervals formula_118 in the sequence, such that formula_119 implying that the intersection has to be empty. The situation is different for closed intervals. If one changes the situation above by looking at closed intervals of the type formula_120, one can see this very clearly. Now for each formula_114 one still can always find intervals not containing said formula_59, but for formula_121, the property formula_122 holds true for any formula_8. One can conclude that, in this case, formula_123. One can also consider the complement of each interval, written as formula_124 - which, in our last example, is formula_125. By De Morgan's laws, the complement of the intersection is a union of two disjoint open sets. By the connectedness of the real line there must be something between them. This shows that the intersection of (even an uncountable number of) nested, closed, and bounded intervals is nonempty. Higher dimensions. In two dimensions there is a similar result: nested closed disks in the plane must have a common intersection. This result was shown by Hermann Weyl to classify the singular behaviour of certain differential equations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
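The two square-root constructions described above translate directly into code. The sketch below (illustrative only; the names and the choice of 50 bisection steps are arbitrary) produces the nested intervals for formula_29 by repeated halving and, for comparison, runs a few steps of the much faster Heron iteration.

```python
def sqrt_by_nested_intervals(x, steps=50):
    """Bisection on [1, x] for x > 1: halve the interval containing sqrt(x) at every step."""
    a, b = 1.0, x                       # sqrt(x) lies in [1, x] when x > 1
    for _ in range(steps):
        m = (a + b) / 2.0
        if m * m <= x:
            a = m                       # midpoint becomes the new lower bound
        else:
            b = m                       # midpoint becomes the new upper bound
    return a, b                         # nested interval of length (x - 1) / 2**steps

def sqrt_by_heron(x, steps=6):
    """Babylonian/Heron iteration c -> (c + x/c) / 2, a special case of Newton's method."""
    c = x
    for _ in range(steps):
        c = (c + x / c) / 2.0
    return c

print(sqrt_by_nested_intervals(19.0))   # both endpoints approximately 4.35889894...
print(sqrt_by_heron(19.0))              # approximately 4.3589 after only six steps
```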
[ { "math_id": 0, "text": "I_n" }, { "math_id": 1, "text": "n=1,2,3,\\dots" }, { "math_id": 2, "text": "I_{n+1}" }, { "math_id": 3, "text": "\\varepsilon" }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "a_{n+1}\\geq a_n" }, { "math_id": 6, "text": "b_{n+1}\\leq b_n" }, { "math_id": 7, "text": "\\pi" }, { "math_id": 8, "text": "n\\in\\mathbb{N}" }, { "math_id": 9, "text": " x>1 " }, { "math_id": 10, "text": "1\\leq \\sqrt{x} \\leq x " }, { "math_id": 11, "text": "I_1=[1, x]" }, { "math_id": 12, "text": " x " }, { "math_id": 13, "text": "k^2 > x " }, { "math_id": 14, "text": "I_1=[1, k]" }, { "math_id": 15, "text": "I_n=[a_n, b_n], n\\in\\mathbb{N}" }, { "math_id": 16, "text": "m_n=\\frac{a_n + b_n}{2}" }, { "math_id": 17, "text": "I_1" }, { "math_id": 18, "text": "I_{n+1} := \\left\\{\\begin{matrix}\n\\left[m_n, b_n\\right] && \\text{if}\\;\\; m_n^2 \\leq x \\\\\n\\left[a_n, m_n\\right] && \\text{if}\\;\\; m_n^2 > x\n\\end{matrix}\\right." }, { "math_id": 19, "text": "I_{n} " }, { "math_id": 20, "text": "\\sqrt{x}" }, { "math_id": 21, "text": "I_{n+1} " }, { "math_id": 22, "text": " \\sqrt{x}\\in I_{n+1} " }, { "math_id": 23, "text": "|I_n|" }, { "math_id": 24, "text": "\\sqrt{x} " }, { "math_id": 25, "text": "\\sqrt{y}" }, { "math_id": 26, "text": "0<y<1" }, { "math_id": 27, "text": "1/y>1" }, { "math_id": 28, "text": "x:=1/y" }, { "math_id": 29, "text": "\\sqrt{19}" }, { "math_id": 30, "text": "1^2<19<5^2" }, { "math_id": 31, "text": "I_1:=[1,5]" }, { "math_id": 32, "text": "\\begin{aligned}\nm_1&=\\dfrac{1+5}{2}=3 &&\\Rightarrow\\; m_1^2=9 \\leq 19 &&\\Rightarrow\\; I_2=[3, 5]\\\\\nm_2&=\\dfrac{3+5}{2}=4 &&\\Rightarrow\\; m_2^2=16 \\leq 19 &&\\Rightarrow\\; I_3=[4, 5]\\\\\nm_3&=\\dfrac{4+5}{2}=4.5 &&\\Rightarrow\\; m_3^2=20.25 > 19 &&\\Rightarrow\\; I_4=[4, 4.5]\\\\\nm_4&=\\dfrac{4+4.5}{2}=4.25 &&\\Rightarrow\\; m_4^2=18.0625 \\leq 19 &&\\Rightarrow\\; I_5=[4.25, 4.5]\\\\\nm_5&=\\dfrac{4.25+4.5}{2}=4.375 &&\\Rightarrow\\; m_5^2=19.140625 > 19 &&\\Rightarrow\\; I_5=[4.25, 4.375]\\\\\n&\\vdots & &\n\\end{aligned}" }, { "math_id": 33, "text": "\\sqrt{19}=4.35889894\\dots" }, { "math_id": 34, "text": "x>0" }, { "math_id": 35, "text": "(c_n)_{n\\in\\mathbb{N}}" }, { "math_id": 36, "text": "c_{n+1}:=\\frac{1}{2}\\cdot\\left(c_n + \\frac{x}{c_n}\\right)" }, { "math_id": 37, "text": "I_{n+1}:=\\left[\\frac{x}{c_n}, c_n\\right]" }, { "math_id": 38, "text": "I_1=[0, k]" }, { "math_id": 39, "text": " k^2>x " }, { "math_id": 40, "text": "c_n" }, { "math_id": 41, "text": "1" }, { "math_id": 42, "text": "2n" }, { "math_id": 43, "text": "n" }, { "math_id": 44, "text": "6" }, { "math_id": 45, "text": "\\tfrac{223}{71}< \\pi < \\tfrac{22}{7} " }, { "math_id": 46, "text": "22/7 \\approx 3.143 " }, { "math_id": 47, "text": "\\mathbb{R}" }, { "math_id": 48, "text": "+" }, { "math_id": 49, "text": "\\cdot" }, { "math_id": 50, "text": "(I_n)_{n\\in\\mathbb{N}}" }, { "math_id": 51, "text": "I_n=[a_n, b_n]" }, { "math_id": 52, "text": "|I_n|:=b_n - a_n" }, { "math_id": 53, "text": "\\quad \\forall n \\in \\mathbb{N}: \\;\\; I_{n+1} \\subseteq I_n" }, { "math_id": 54, "text": "\\quad \\forall \\varepsilon > 0 \\; \\exists N\\in\\mathbb{N}: \\;\\; |I_N| < \\varepsilon " }, { "math_id": 55, "text": "\\varepsilon > 0" }, { "math_id": 56, "text": "n \\geq N" }, { "math_id": 57, "text": "|I_n| < \\varepsilon" }, { "math_id": 58, "text": "\\exists x\\in\\mathbb{R}: \\;x\\in\\bigcap_{n\\in\\mathbb{N}} I_n" }, { "math_id": 59, "text": "x" }, { "math_id": 60, "text": "x,y\\in\\cap_{n\\in\\mathbb{N}} I_n" }, { 
"math_id": 61, "text": "x\\neq y" }, { "math_id": 62, "text": "|x-y|>0." }, { "math_id": 63, "text": "|I_n|\\geq |x-y|" }, { "math_id": 64, "text": "\\; \\square" }, { "math_id": 65, "text": "\\cap_{n\\in\\mathbb{N}}I_n" }, { "math_id": 66, "text": "\\emptyset" }, { "math_id": 67, "text": "x=y^j,\\; j\\in\\mathbb{N}, x>0" }, { "math_id": 68, "text": "y=\\sqrt[j]{x}=x^{1/j}" }, { "math_id": 69, "text": "y>0 " }, { "math_id": 70, "text": "x=y^k" }, { "math_id": 71, "text": "k" }, { "math_id": 72, "text": "y" }, { "math_id": 73, "text": "m_n" }, { "math_id": 74, "text": "m_n^k" }, { "math_id": 75, "text": "A\\subset \\mathbb{R}" }, { "math_id": 76, "text": "b" }, { "math_id": 77, "text": "x\\leq b" }, { "math_id": 78, "text": "x\\in A" }, { "math_id": 79, "text": "s=\\sup(A)" }, { "math_id": 80, "text": "A" }, { "math_id": 81, "text": "s" }, { "math_id": 82, "text": "\\forall x \\in A: \\; x\\leq s" }, { "math_id": 83, "text": "\\forall \\sigma < s : \\; \\exists x\\in A: \\; x >\\sigma" }, { "math_id": 84, "text": "\\inf(B)" }, { "math_id": 85, "text": "B\\subset \\mathbb{R} " }, { "math_id": 86, "text": "b_n" }, { "math_id": 87, "text": "a_n" }, { "math_id": 88, "text": "a_1" }, { "math_id": 89, "text": "a_1=c - 1" }, { "math_id": 90, "text": "c\\in A" }, { "math_id": 91, "text": "b_1" }, { "math_id": 92, "text": "m_n:= \\frac{a_n+b_n}{2}" }, { "math_id": 93, "text": "I_{n+1} := \\left\\{\\begin{matrix}\n\\left[a_n, m_n\\right] && \\text{if}\\; m_n \\;\\text{is an upper bound of}\\; A \\\\\n\\left[m_n, b_n\\right] && \\text{if}\\; m_n \\;\\text{is not an upper bound}\n\\end{matrix}\\right." }, { "math_id": 94, "text": " s " }, { "math_id": 95, "text": "x>s" }, { "math_id": 96, "text": "I_m=[a_m, b_m]" }, { "math_id": 97, "text": "b_m - a_m < x-s" }, { "math_id": 98, "text": "b_m - s < x-s" }, { "math_id": 99, "text": " s" }, { "math_id": 100, "text": "I_m" }, { "math_id": 101, "text": "b_m<s" }, { "math_id": 102, "text": "m\\in\\mathbb{N}" }, { "math_id": 103, "text": "\\sigma < s" }, { "math_id": 104, "text": "s-\\sigma" }, { "math_id": 105, "text": "s\\in I_n" }, { "math_id": 106, "text": "s-a_n<s-\\sigma" }, { "math_id": 107, "text": "a_n>\\sigma" }, { "math_id": 108, "text": "A:=\\{a_1, a_2,\\dots\\}" }, { "math_id": 109, "text": "a_n\\leq s\\leq b_n" }, { "math_id": 110, "text": "s\\in\\cap_{n\\in\\mathbb{N}} I_n" }, { "math_id": 111, "text": "\\cap_{n\\in\\mathbb{N}} I_n" }, { "math_id": 112, "text": "\\{x\\}" }, { "math_id": 113, "text": "I_n=\\left(0, \\frac{1}{n}\\right) = \\left\\{x\\in\\mathbb{R}:0<x<\\frac{1}{n}\\right\\}" }, { "math_id": 114, "text": "x>0 " }, { "math_id": 115, "text": "n>1/x" }, { "math_id": 116, "text": " 1/n<x " }, { "math_id": 117, "text": " x > 0 " }, { "math_id": 118, "text": " I_n " }, { "math_id": 119, "text": " x\\notin I_n, " }, { "math_id": 120, "text": "I_n=\\left[0, \\frac{1}{n}\\right] = \\left\\{x\\in\\mathbb{R}:0 \\leq x \\leq \\frac{1}{n}\\right\\}" }, { "math_id": 121, "text": "x=0" }, { "math_id": 122, "text": "0\\leq x \\leq 1/n" }, { "math_id": 123, "text": "\\cap_{n\\in\\mathbb{N}} I_n = \\{0\\}" }, { "math_id": 124, "text": "(-\\infty,a_n) \\cup (b_n, \\infty)" }, { "math_id": 125, "text": "(-\\infty,0) \\cup (1/n, \\infty)" } ]
https://en.wikipedia.org/wiki?curid=7532405
75324191
Epstein drag
Equation for the drag on a sphere in high Knudsen-number flow In fluid dynamics, Epstein drag is a theoretical result for the drag force exerted on spheres in high Knudsen number flow (i.e., rarefied gas flow). This may apply, for example, to sub-micron droplets in air, or to larger spherical objects moving in gases more rarefied than air at standard temperature and pressure. Note that while they may be small by some criteria, the spheres must nevertheless be much more massive than the species (molecules, atoms) in the gas that are colliding with the sphere, in order for Epstein drag to apply. The reason for this is to ensure that the change in the sphere's momentum due to individual collisions with gas species is not large enough to substantially alter the sphere's motion, such as occurs in Brownian motion. The result was obtained by Paul Sophus Epstein in 1924. His result was used for high-precision measurements of the charge on the electron in the oil drop experiment performed by Robert A. Millikan, as cited by Millikan in his 1930 review paper on the subject. For the early work on that experiment, the drag was assumed to follow Stokes' law. However, for droplets substantially below the submicron scale, the drag approaches Epstein drag instead of Stokes drag, since the mean free path of air species (atoms and molecules) is roughly of order of a tenth of a micron. Statement of the law. The magnitude of the force on a sphere moving through a rarefied gas, in which the diameter of the sphere is of order or less than the collisional mean free path in the gas, is formula_0 where "a" is the radius of the spherical particle, "n" is the number density of gas species, "m" is their mass, formula_1 is the arithmetic mean speed of gas species, and "u" is the relative speed of the sphere with respect to the rest frame of the gas. The factor formula_2 encompasses the microphysics of the gas-sphere interaction and the resultant distribution of velocities of the reflected particles, which is not a trivial problem. It is not uncommon to assume formula_3 (see below), presumably in part because empirically formula_2 is found to be close to 1 numerically, and in part because in many applications, the uncertainty due to formula_2 is dwarfed by other uncertainties in the problem. For this reason, one sometimes encounters Epstein drag written with the factor formula_2 left absent. The force acts in a direction opposite to the direction of motion of the sphere. Forces acting normal to the direction of motion are known as "lift", not "drag", and in any case are not present in the stated problem when the sphere is not rotating. For mixtures of gases (e.g. air), the total force is simply the sum of the forces due to each component of the gas, noting with care that each component (species) will have a different formula_4, a different formula_5 and a different formula_1. Note that formula_6 where formula_7 is the gas density, noting again, with care, that in the case of multiple species, there are multiple different such densities contributing to the overall force. The net force is due both to momentum transfer to the sphere due to species impinging on it, and momentum transfer due to species leaving, due either to reflection, evaporation, or some combination of the two.
Additionally, the force due to reflection depends upon whether the reflection is purely specular or, by contrast, partly or fully diffuse, and the force also depends upon whether the reflection is purely elastic, or inelastic, or some other assumption regarding the velocity distribution of reflecting particles, since the particles are, after all, in thermal contact - albeit briefly - with the surface. All of these effects are combined in Epstein's work in an overall prefactor "formula_2". Theoretically, formula_8 for purely elastic specular reflection, but may be less than or greater than unity in other circumstances. For reference, note that kinetic theory gives formula_9 For the specific cases considered by Epstein, formula_2 ranges from a minimum value of 1 up to a maximum value of 1.444. For example, Epstein predicts formula_10 for diffuse elastic collisions. One may sometimes encounter formula_11 where formula_12 is the accommodation coefficient, which appears in the Maxwell model for the interaction of gas species with surfaces, characterizing the fraction of reflection events that are diffuse (as opposed to specular). (There are other accommodation coefficients that describe thermal energy transfer as well, but are beyond the scope of this article.) In-line with theory, an empirical measurement, for example, for melamine-formaldehyde spheres in argon gas, gives formula_13 as measured by one method, and formula_14 by another method, as reported by the same authors in the same paper. According to Epstein himself, Millikan found formula_15 for oil drops, whereas Knudsen found formula_16 for glass spheres. In his paper, Epstein also considered modifications to allow for nontrivial formula_17. That is, he treated the leading terms in what happens if the flow is not fully in the rarefied regime. Also, he considered the effects due to rotation of the sphere. Normally, by "Epstein drag," one does not include such effects. As noted by Epstein himself, previous work on this problem had been performed by Langevin by Cunningham, and by Lenard. These previous results were in error, however, as shown by Epstein; as such, Epstein's work is viewed as definitive, and the result goes by his name. Applications. As mentioned above, the original practical application of Epstein drag was to refined estimates of the charge on the electron in the Millikan oil-drop experiment. Several substantive practical applications have ensued. One application among many in astrophysics is the problem of gas-dust coupling in protostellar disks. See also section 4.1.1, "Epstein drag," page 110-111 of. Another application is the drag on stellar dust in red giant atmospheres, which counteracts the acceleration due to radiation pressure Another application is to dusty plasmas. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
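For orientation, the statement of the law is easy to evaluate numerically. The sketch below (illustrative only; the particle and gas parameters are made-up round numbers, not data from any of the cited applications) combines the kinetic-theory mean speed quoted above with the Epstein formula, using the common assumption formula_3.

```python
import math

K_B = 1.380649e-23                      # Boltzmann constant, J/K

def mean_speed(temperature_k, species_mass_kg):
    """Arithmetic mean speed of gas species from kinetic theory."""
    return math.sqrt(8.0 * K_B * temperature_k / (math.pi * species_mass_kg))

def epstein_drag(radius_m, number_density_m3, species_mass_kg,
                 mean_speed_ms, relative_speed_ms, delta=1.0):
    """Magnitude of the Epstein drag force; delta encodes the reflection model."""
    return (delta * (4.0 * math.pi / 3.0) * radius_m ** 2
            * number_density_m3 * species_mass_kg * mean_speed_ms * relative_speed_ms)

# A 0.1 micron sphere moving at 1 m/s through a dilute nitrogen-like gas at 300 K.
m = 4.65e-26                            # kg, roughly the mass of one N2 molecule
n = 2.5e23                              # m^-3, about one percent of sea-level air density
print(epstein_drag(1e-7, n, m, mean_speed(300.0, m), 1.0))   # force in newtons
```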
[ { "math_id": 0, "text": "F = \\delta \\frac{4 \\pi}{3} a^2 n m \\bar c u " }, { "math_id": 1, "text": "{\\bar c}" }, { "math_id": 2, "text": "\\delta" }, { "math_id": 3, "text": "\\delta = 1" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": " n m = \\rho " }, { "math_id": 7, "text": "\\rho" }, { "math_id": 8, "text": "\\delta=1" }, { "math_id": 9, "text": " \\bar c = \\sqrt{\\frac {8}{\\pi} \\cdot \\frac{k_\\mathrm{B} T}{m}}." }, { "math_id": 10, "text": "\\delta=1+\\pi/8\\simeq 1.39" }, { "math_id": 11, "text": "\n\\delta = 1+ \\alpha \\frac{\\pi}{8}\n" }, { "math_id": 12, "text": "\\alpha" }, { "math_id": 13, "text": "\\delta=1.26\\pm0.13" }, { "math_id": 14, "text": "\\delta=1.44\\pm0.19" }, { "math_id": 15, "text": "\\delta=1.154" }, { "math_id": 16, "text": "\\delta=1.164" }, { "math_id": 17, "text": "{\\rm Kn}^{-1}" } ]
https://en.wikipedia.org/wiki?curid=75324191
75328204
Nilsequence
In mathematics, a nilsequence is a type of numerical sequence playing a role in ergodic theory and additive combinatorics. The concept is related to nilpotent Lie groups and almost periodicity. The name arises from the part played in the theory by compact nilmanifolds of the type formula_0 where formula_1 is a nilpotent Lie group and formula_2 a lattice in it. The idea of a basic nilsequence defined by an element formula_3 of formula_1 and continuous function formula_4 on formula_0 is to take formula_5, for formula_6 an integer, as formula_7. General nilsequences are then uniform limits of basic nilsequences. For the statement of conjectures and theorems, technical side conditions and quantifications of complexity are introduced. Much of the combinatorial importance of nilsequences reflects their close connection with the Gowers norm. As explained by Host and Kra, nilsequences originate in evaluating functions on orbits in a "nilsystem"; and nilsystems are "characteristic for multiple correlations". Case of the circle group. The circle group arises as the special case of the real line and its subgroup of the integers. It has nilpotency class equal to 1, being abelian, and the requirements of the general theory are to generalise to nilpotency class formula_8 The semi-open unit interval [0,1) is a fundamental domain, and for that reason the fractional part function is involved in the theory. Functions involving the fractional part formula_9 of the variable in the circle group occur, under the name "bracket polynomials". Since the theory is in the setting of Lipschitz functions, which are "a fortiori" continuous, the discontinuity of the fractional part at 0 has to be managed. That said, the sequences formula_10, where formula_11 is a given irrational real number, and formula_6 an integer, and studied in diophantine approximation, are simple examples for the theory. Their construction can be thought of in terms of the skew product construction in ergodic theory, adding one dimension. Polynomial sequences. The imaginary exponential function formula_12 maps the real numbers to the circle group (see Euler's formula#Topological interpretation). A numerical sequence formula_13 where formula_14 is a polynomial function with real coefficients, and formula_6 is an integer variable, is a type of trigonometric polynomial, called a "polynomial sequence" for the purposes of the nilsequence theory. The generalisation to nilpotent groups that are not abelian relies on the Hall–Petresco identity from group theory for a workable theory of polynomials. In particular the polynomial sequence comes with a definite degree. Möbius function and nilsequences. A family of conjectures formula_15 was made by Ben Green and Terence Tao, concerning the Möbius function of prime number theory and formula_16-step nilsequences. Here the underlying Lie group formula_1 is assumed simply connected and nilpotent with length at most formula_16. The nilsequences considered are of type formula_17 with some fixed formula_18 in formula_1, and the function formula_4 continuous and taking values in [-1,1]. The form of the conjecture, which requires a stated metric on the nilmanifold and Lipschitz bound in the implied constant, is that the average of formula_19 up to formula_20 is smaller asymptotically than any fixed inverse power of formula_21 As a subsequent paper published in 2012 proving the conjectures put it, "The Möbius function is strongly orthogonal to nilsequences". 
Subsequently Green, Tao and Tamar Ziegler also proved a family formula_22 of inverse theorems for the Gowers norm, stated in terms of nilsequences. This completed a program of proving asymptotics for simultaneous prime values of linear forms. Tao has commented in his book "Higher Order Fourier Analysis" on the role of nilsequences in the inverse theorem proof. The issue being to extend IG results from the finite field case to general finite cyclic groups, the "classical phases"—essentially the exponentials of polynomials natural for the circle group—had proved inadequate. There were options other than nilsequences, in particular direct use of bracket polynomials. But Tao writes that he prefers nilsequences for the underlying Lie theory structure. Equivalent form for averaged Chowla and Sarnak conjectures. Tao has proved that a conjecture on nilsequences is an equivalent of an averaged form of a noted conjecture of Sarvadaman Chowla involving only the Möbius function, and the way it self-correlates. Peter Sarnak made a conjecture on the non-correlation of the Möbius function with more general sequences from ergodic theory, which is a consequence of Chowla's conjecture. Tao's result on averaged forms showed all three conjectures are equivalent. The 2018 paper "The logarithmic Sarnak conjecture for ergodic weights" by Frantzikinakis and Host used this approach to prove unconditional results on the Liouville function. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
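The flavour of these non-correlation statements can be seen numerically in a very special case. The sketch below (illustrative only; a quadratic exponential phase is a classical special case handled long before the nilsequence conjectures, not a genuinely higher-step nilsequence, and the chosen coefficients and cutoffs are arbitrary) averages the Möbius function against the exponential formula_12 of a quadratic polynomial and shows the averages shrinking as the cutoff grows.

```python
import cmath, math

def mobius(n):
    """Moebius function by trial division (adequate for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0                # repeated prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def average_correlation(N, alpha, beta):
    """|(1/N) * sum_{n<=N} mu(n) * e(alpha*n^2 + beta*n)| with e(t) = exp(2*pi*i*t)."""
    e = lambda t: cmath.exp(2j * math.pi * t)
    return abs(sum(mobius(n) * e(alpha * n * n + beta * n) for n in range(1, N + 1))) / N

for N in (1000, 10000, 100000):
    print(N, average_correlation(N, math.sqrt(2), math.sqrt(3)))   # decays as N grows
```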
[ { "math_id": 0, "text": "G/ \\Gamma" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "\\Gamma" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "b(n)" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "f(g^n \\Gamma)" }, { "math_id": 8, "text": "s > 1." }, { "math_id": 9, "text": "\\{\\{x\\}\\}" }, { "math_id": 10, "text": "\\{\\{\\alpha n\\}\\}" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "e(x)" }, { "math_id": 13, "text": "e(P(n))" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "MN(s)" }, { "math_id": 16, "text": "s" }, { "math_id": 17, "text": "f(g^n x\\Gamma)" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "\\mu (n) f(g^n x\\Gamma)" }, { "math_id": 20, "text": "N" }, { "math_id": 21, "text": "log N." }, { "math_id": 22, "text": "IG(s)" } ]
https://en.wikipedia.org/wiki?curid=75328204
75329877
Pedal circle
The pedal circle of a triangle formula_5 and a point formula_0 in the plane is a special circle determined by those two entities. More specifically, the three perpendiculars through the point formula_6 onto the three (extended) triangle sides formula_7 yield three points of intersection formula_8, and the circle defined by those three points is the pedal circle. By definition the pedal circle is the circumcircle of the pedal triangle. For the radius formula_9 of the pedal circle the following formula holds, with formula_10 being the radius and formula_2 being the center of the circumcircle: formula_11 Note that the denominator in the formula becomes 0 if the point formula_0 lies on the circumcircle. In this case the three points formula_1 determine a degenerate circle with an infinite radius, that is a line. This is the Simson line. If formula_0 is the incenter of the triangle then the pedal circle is the incircle of the triangle and if formula_0 is the orthocenter of the triangle the pedal circle is the nine-point circle. If formula_0 does not lie on the circumcircle then its isogonal conjugate formula_3 yields the same pedal circle, that is, the six points formula_1 and formula_12 lie on the same circle. Moreover, the midpoint of the line segment formula_4 is the center of that pedal circle. Griffiths' theorem states that all the pedal circles for points located on a line through the center of the triangle's circumcircle share a common (fixed) point. Consider four points with no three of them being on a common line. Then you can build four different subsets of three points. Take the points of such a subset as the vertices of a triangle formula_5 and the fourth point as the point formula_0, then they define a pedal circle. The four pedal circles you get this way intersect in a common point.
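The radius formula is easy to check numerically. The following sketch (illustrative only; the coordinates of the triangle and of the point are arbitrary, with formula_0 chosen inside the circumcircle so that the denominator is positive) computes the feet of the three perpendiculars, the circumcircle of those feet, and compares its radius with the formula above.

```python
import math

def circumcircle(a, b, c):
    """Center and radius of the circle through three non-collinear points."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def foot_of_perpendicular(p, a, b):
    """Foot of the perpendicular from p onto the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    t = ((px - ax) * (bx - ax) + (py - ay) * (by - ay)) / ((bx - ax)**2 + (by - ay)**2)
    return (ax + t * (bx - ax), ay + t * (by - ay))

A, B, C, P = (0.0, 0.0), (6.0, 0.0), (1.0, 5.0), (2.0, 1.0)
feet = [foot_of_perpendicular(P, B, C), foot_of_perpendicular(P, C, A), foot_of_perpendicular(P, A, B)]
_, pedal_radius = circumcircle(*feet)                 # radius of the pedal circle
O, R = circumcircle(A, B, C)                          # circumcenter and circumradius
dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
predicted = dist(P, A) * dist(P, B) * dist(P, C) / (2.0 * (R**2 - dist(P, O)**2))
print(pedal_radius, predicted)                        # the two values agree
```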
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "P_a, P_b, P_c " }, { "math_id": 2, "text": "O" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "PQ" }, { "math_id": 5, "text": "ABC" }, { "math_id": 6, "text": " P" }, { "math_id": 7, "text": " a,b,c" }, { "math_id": 8, "text": "P_a, P_b, P_c" }, { "math_id": 9, "text": "r" }, { "math_id": 10, "text": "R" }, { "math_id": 11, "text": "r=\\frac{|PA| \\cdot |PB| \\cdot |PC|}{2\\cdot (R^2-|PO|^2)} " }, { "math_id": 12, "text": "Q_a, Q_b, Q_c " } ]
https://en.wikipedia.org/wiki?curid=75329877
75329920
Fubini's nightmare
Apparent violation of Fubini's theorem Fubini's nightmare is a seeming violation of Fubini's theorem, where a nice space, such as the square formula_0 is foliated by smooth fibers, but there exists a set of positive measure whose intersection with each fiber is singular (at most a single point in Katok's example). There is no real contradiction to Fubini's theorem because despite smoothness of the fibers, the foliation is not absolutely continuous, and neither are the conditional measures on fibers. Existence of Fubini's nightmare complicates fiber-wise proofs for center foliations of partially hyperbolic dynamical systems: these foliations are typically Hölder but not absolutely continuous. A hands-on example of Fubini's nightmare was suggested by Anatole Katok and published by John Milnor. A dynamical version for center foliation was constructed by Amie Wilkinson and Michael Shub. Katok's construction. Foliation. For a formula_1 consider the coding of points of the interval formula_2 by sequences of zeros and ones, similar to the binary coding, but splitting the intervals in the ratio formula_3. (As for the binary coding, we identify formula_4 with formula_5) The point corresponding to a sequence formula_6 is given explicitly by formula_7 where formula_8 is the length of the interval after the first formula_9 splits. For a fixed sequence formula_10 the map formula_11 is analytic. This follows from the Weierstrass M-test: the series for formula_11 converges uniformly on compact subsets of the intersection formula_12 In particular, formula_13 is an analytic curve. Now, the square formula_14 is foliated by analytic curves formula_15 Set. For a fixed formula_16 and random formula_17 sampled according to the Lebesgue measure, the coding digits formula_18 are independent Bernoulli random variables with parameter formula_16, namely formula_19 and formula_20 By the law of large numbers, for each formula_16 and almost every formula_21 formula_22 By Fubini's theorem, the set formula_23 has full Lebesgue measure in the square formula_14. However, for each fixed sequence formula_24 the limit of its Cesàro averages formula_25 is unique, if it exists. Thus every curve formula_26 either does not intersect formula_27 at all (if there is no limit), or intersects it at the single point formula_28 where formula_29 Therefore, for the above foliation and set formula_27, we observe a Fubini's nightmare. Wilkinson–Shub construction. Wilkinson and Shub considered diffeomorphisms which are small perturbations of the diffeomorphism formula_30 of the three dimensional torus formula_31 where formula_32 is Arnold's cat map. This map and its small perturbations are partially hyperbolic. Moreover, the center fibers of the perturbed maps are smooth circles, close to those for the original map. The Wilkinson and Shub perturbation is designed to preserve the Lebesgue measure and to make the diffeomorphism ergodic with the central Lyapunov exponent formula_33 Suppose that formula_34 is positive (otherwise invert the map). Then the set of points for which the central Lyapunov exponent is positive has full Lebesgue measure in formula_35 On the other hand, the length of the circles of the central foliation is bounded above. Therefore, on each circle, the set of points with positive central Lyapunov exponent has to have zero measure. More delicate arguments show that this set is finite, and we have Fubini's nightmare. References. <templatestyles src="Reflist/styles.css" />
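Katok's coding is simple to experiment with. The sketch below (illustrative only; the names and the number of digits are arbitrary, and floating point limits the attainable precision) extracts digits of a point by repeatedly splitting the current interval in the ratio formula_3, reconstructs the point via the explicit series given above, and checks that the digit frequency of a random point is close to formula_16, as the law of large numbers asserts.

```python
import random

def coding_digits(x, p, n_digits):
    """Digits of x in the coding that splits each interval in the ratio (1-p):p."""
    lo, hi = 0.0, 1.0
    digits = []
    for _ in range(n_digits):
        split = lo + (1.0 - p) * (hi - lo)
        if x < split:
            digits.append(0)
            hi = split
        else:
            digits.append(1)
            lo = split
    return digits

def F(p, digits):
    """Reconstruct the point from its digits, following the explicit series for F_p."""
    value, length = 0.0, 1.0            # length is the interval length after the splits so far
    for a in digits:
        if a == 1:
            value += (1.0 - p) * length
            length *= p
        else:
            length *= 1.0 - p
    return value

p, x = 0.3, random.random()
print(abs(F(p, coding_digits(x, p, 60)) - x))                    # tiny: coding and F_p invert each other
print(sum(coding_digits(random.random(), p, 10000)) / 10000)     # close to p = 0.3
```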
[ { "math_id": 0, "text": "[0,1]\\times[0,1] ," }, { "math_id": 1, "text": "p \\in (0,1)" }, { "math_id": 2, "text": "[0,1]" }, { "math_id": 3, "text": "(1-p):p" }, { "math_id": 4, "text": "0111\\ldots" }, { "math_id": 5, "text": "1000\\ldots" }, { "math_id": 6, "text": "(a_1, a_2, ...) \\in \\{0,1\\}^{\\N}, " }, { "math_id": 7, "text": "\nF_p(a_1,a_2,\\dots)\n= \\sum_{n: a_n=1} a_n (1-p) \\ell_{n-1}\n= \\sum_{n=1}^{\\infty} a_n p^{\\# \\{j\\le n-1: \\, a_j=1\\}} (1-p)^{1+\\,\\# \\{j\\le n-1: \\, a_j=0\\}},\n" }, { "math_id": 8, "text": "\n\\ell_n= p^{\\# \\{j\\le n: a_j=1\\}} (1-p)^{\\# \\{j\\le n: a_j=0\\}}\n" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "a \\in \\{0,1\\}^{\\N}," }, { "math_id": 11, "text": "p\\mapsto F_p(a)" }, { "math_id": 12, "text": " \\{|p| <1 \\} \\cap \\{|1-p| <1 \\} \\subset \\mathbb{C} ." }, { "math_id": 13, "text": "\\gamma_a = \\{ (p, F_p(a)) : p\\in (0,1) \\}" }, { "math_id": 14, "text": "(0,1)\\times [0,1]" }, { "math_id": 15, "text": "\\gamma_a, a \\in \\{0,1\\}^{\\N}." }, { "math_id": 16, "text": "p" }, { "math_id": 17, "text": "x\\in [0,1]," }, { "math_id": 18, "text": "a_1=a_1(x;p), a_2=a_2(x;p), ..." }, { "math_id": 19, "text": "P (a_n = 1) = p " }, { "math_id": 20, "text": "P (a_n = 0) = 1 - p ." }, { "math_id": 21, "text": "x," }, { "math_id": 22, "text": "\n\\frac{1}{n} \\sum_{j=1}^n a_j(x;p) \\to p, \\quad n\\to\\infty.\n" }, { "math_id": 23, "text": "\nM = \\left\\{ (p,x) : \\frac{1}{n} \\sum_{j=1}^n a_j(x;p) \\, \\xrightarrow[n\\to\\infty]{} \\, p \\right\\}\n" }, { "math_id": 24, "text": "(a_n)," }, { "math_id": 25, "text": "(a_1 + \\cdots + a_n) / n" }, { "math_id": 26, "text": "\\gamma_a" }, { "math_id": 27, "text": "M" }, { "math_id": 28, "text": "(p,F_p(a))," }, { "math_id": 29, "text": "\np=\\lim_{n\\to\\infty} \\frac{a_1+\\dots+ a_n}{n}.\n" }, { "math_id": 30, "text": "A\\times id" }, { "math_id": 31, "text": "T^3=T^2\\times S^1 ," }, { "math_id": 32, "text": "A=\\left(\\begin{smallmatrix} 2& 1 \\\\ 1 &1\\end{smallmatrix}\\right):T^2\\to T^2" }, { "math_id": 33, "text": "\\lambda_c \\neq 0 ." }, { "math_id": 34, "text": "\\lambda_c" }, { "math_id": 35, "text": "T^3." } ]
https://en.wikipedia.org/wiki?curid=75329920
753349
Boolean function
Function returning one of only two values In mathematics, a Boolean function is a function whose arguments and result assume values from a two-element set (usually {true, false}, {0,1} or {-1,1}). Alternative names are switching function, used especially in older computer science literature, and truth function (or logical function), used in logic. Boolean functions are the subject of Boolean algebra and switching theory. A Boolean function takes the form formula_0, where formula_1 is known as the Boolean domain and formula_2 is a non-negative integer called the arity of the function. In the case where formula_3, the function is a constant element of formula_1. A Boolean function with multiple outputs, formula_4 with formula_5 is a vectorial or "vector-valued" Boolean function (an S-box in symmetric cryptography). There are formula_6 different Boolean functions with formula_2 arguments; equal to the number of different truth tables with formula_7 entries. Every formula_2-ary Boolean function can be expressed as a propositional formula in formula_2 variables formula_8, and two propositional formulas are logically equivalent if and only if they express the same Boolean function. Examples. The rudimentary symmetric Boolean functions (logical connectives or logic gates) are: An example of a more complicated function is the majority function (of an odd number of inputs). Representation. A Boolean function may be specified in a variety of ways: Algebraically, as a propositional formula using rudimentary Boolean functions: Boolean formulas can also be displayed as a graph: In order to optimize electronic circuits, Boolean formulas can be minimized using the Quine–McCluskey algorithm or Karnaugh map. Analysis. Properties. A Boolean function can have a variety of properties: Circuit complexity attempts to classify Boolean functions with respect to the size or depth of circuits that can compute them. Derived functions. A Boolean function may be decomposed using Boole's expansion theorem in positive and negative "Shannon" "cofactors" (Shannon expansion), which are the (k-1)-ary functions resulting from fixing one of the arguments (to zero or one). The general (k-ary) functions obtained by imposing a linear constraint on a set of inputs (a linear subspace) are known as "subfunctions". The "Boolean derivative" of the function to one of the arguments is a (k-1)-ary function that is true when the output of the function is sensitive to the chosen input variable; it is the XOR of the two corresponding cofactors. A derivative and a cofactor are used in a Reed–Muller expansion. The concept can be generalized as a k-ary derivative in the direction dx, obtained as the difference (XOR) of the function at x and x + dx. The "Möbius transform" (or "Boole-Möbius transform") of a Boolean function is the set of coefficients of its polynomial (algebraic normal form), as a function of the monomial exponent vectors. It is a self-inverse transform. It can be calculated efficiently using a butterfly algorithm ("Fast Möbius Transform"), analogous to the Fast Fourier Transform. "Coincident" Boolean functions are equal to their Möbius transform, i.e. their truth table (minterm) values equal their algebraic (monomial) coefficients. There are 2^2^("k"−1) coincident functions of "k" arguments. Cryptographic analysis. 
The "Walsh transform" of a Boolean function is a k-ary integer-valued function giving the coefficients of a decomposition into linear functions (Walsh functions), analogous to the decomposition of real-valued functions into harmonics by the Fourier transform. Its square is the "power spectrum" or "Walsh spectrum". The Walsh coefficient of a single bit vector is a measure for the correlation of that bit with the output of the Boolean function. The maximum (in absolute value) Walsh coefficient is known as the "linearity" of the function. The highest number of bits (order) for which all Walsh coefficients are 0 (i.e. the subfunctions are balanced) is known as "resiliency", and the function is said to be correlation immune to that order. The Walsh coefficients play a key role in linear cryptanalysis. The "autocorrelation" of a Boolean function is a k-ary integer-valued function giving the correlation between a certain set of changes in the inputs and the function output. For a given bit vector it is related to the Hamming weight of the derivative in that direction. The maximal autocorrelation coefficient (in absolute value) is known as the "absolute indicator". If all autocorrelation coefficients are 0 (i.e. the derivatives are balanced) for a certain number of bits then the function is said to satisfy the "propagation criterion" to that order; if they are all zero then the function is a bent function. The autocorrelation coefficients play a key role in differential cryptanalysis. The Walsh coefficients of a Boolean function and its autocorrelation coefficients are related by the equivalent of the Wiener–Khinchin theorem, which states that the autocorrelation and the power spectrum are a Walsh transform pair. Linear approximation table. These concepts can be extended naturally to "vectorial" Boolean functions by considering their output bits ("coordinates") individually, or more thoroughly, by looking at the set of all linear functions of output bits, known as its "components". The set of Walsh transforms of the components is known as a Linear Approximation Table (LAT) or "correlation matrix"; it describes the correlation between different linear combinations of input and output bits. The set of autocorrelation coefficients of the components is the "autocorrelation table", related by a Walsh transform of the components to the more widely used "Difference Distribution Table" (DDT) which lists the correlations between differences in input and output bits (see also: S-box). Real polynomial form. On the unit hypercube. Any Boolean function formula_9 can be uniquely extended (interpolated) to the real domain by a multilinear polynomial in formula_10, constructed by summing the truth table values multiplied by indicator polynomials: formula_11 For example, the extension of the binary XOR function formula_12 is formula_13 which equals formula_14. Some other examples are negation (formula_15), AND (formula_16) and OR (formula_17). When all operands are independent (share no variables) a function's polynomial form can be found by repeatedly applying the polynomials of the operators in a Boolean formula. When the coefficients are calculated modulo 2 one obtains the algebraic normal form (Zhegalkin polynomial). Direct expressions for the coefficients of the polynomial can be derived by taking an appropriate derivative: formula_18 this generalizes as the Möbius inversion of the partially ordered set of bit vectors: formula_19 where formula_20 denotes the weight of the bit vector formula_21. 
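To make the coefficient formulas concrete, the following is a minimal Python sketch (the function and variable names are illustrative, not from any standard library) that computes the coefficients of the multilinear extension by the Möbius inversion above, using the XOR example from this section.

```python
from itertools import product

def multilinear_coeffs(f, k):
    """Coefficients of the real multilinear extension of a k-ary Boolean function,
    indexed by monomial bit vectors m, computed as the sum over all bit vectors a
    covered by m of (-1)**(|a| + |m|) * f(a).  Reducing each sum modulo 2 instead
    would give the algebraic normal form coefficients."""
    coeffs = {}
    for m in product((0, 1), repeat=k):
        total = 0
        for a in product((0, 1), repeat=k):
            if all(ai <= mi for ai, mi in zip(a, m)):  # one bits of a form a subset of those of m
                total += (-1) ** (sum(a) + sum(m)) * f(a)
        coeffs[m] = total
    return coeffs

xor = lambda x: x[0] ^ x[1]
print(multilinear_coeffs(xor, 2))
# {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): -2}, i.e. the polynomial x + y - 2xy
```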
Taken modulo 2, the same Möbius inversion gives the Boolean "Möbius transform", i.e. the algebraic normal form coefficients: formula_22 In both cases, the sum is taken over all bit-vectors "a" covered by "m", i.e. the "one" bits of "a" form a subset of the one bits of "m". When the domain is restricted to the n-dimensional hypercube formula_23, the polynomial formula_24 gives the probability of a positive outcome when the Boolean function "f" is applied to "n" independent random (Bernoulli) variables, with individual probabilities "x". A special case of this fact is the piling-up lemma for parity functions. The polynomial form of a Boolean function can also be used as its natural extension to fuzzy logic. On the symmetric hypercube. Often, the Boolean domain is taken as formula_25, with false ("0") mapping to 1 and true ("1") to -1 (see Analysis of Boolean functions). The polynomial corresponding to formula_26 is then given by: formula_27 Using the symmetric Boolean domain simplifies certain aspects of the analysis, since negation corresponds to multiplying by -1 and linear functions are monomials (XOR is multiplication). This polynomial form thus corresponds to the "Walsh transform" (in this context also known as "Fourier transform") of the function (see above). The polynomial also has the same statistical interpretation as the one in the standard Boolean domain, except that it now deals with the expected values formula_28 (see piling-up lemma for an example). Applications. Boolean functions play a basic role in questions of complexity theory as well as the design of processors for digital computers, where they are implemented in electronic circuits using logic gates. The properties of Boolean functions are critical in cryptography, particularly in the design of symmetric key algorithms (see substitution box). In cooperative game theory, monotone Boolean functions are called simple games (voting games); this notion is applied to solve problems in social choice theory. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f:\\{0,1\\}^k \\to \\{0,1\\}" }, { "math_id": 1, "text": "\\{0,1\\}" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "k=0" }, { "math_id": 4, "text": "f:\\{0,1\\}^k \\to \\{0,1\\}^m" }, { "math_id": 5, "text": "m>1" }, { "math_id": 6, "text": "2^{2^k}" }, { "math_id": 7, "text": "2^k" }, { "math_id": 8, "text": "x_1,...,x_k" }, { "math_id": 9, "text": "f(x): \\{0,1\\}^n \\rightarrow \\{0,1\\}" }, { "math_id": 10, "text": "\\mathbb{R}^n" }, { "math_id": 11, "text": "f^*(x) = \\sum_{a \\in {\\{0,1\\}}^n} f(a) \\prod_{i:a_i=1} x_i \\prod_{i:a_i=0} (1-x_i)" }, { "math_id": 12, "text": "x \\oplus y" }, { "math_id": 13, "text": "0(1-x)(1-y) + 1x(1-y) + 1(1-x)y + 0xy" }, { "math_id": 14, "text": "x + y -2xy" }, { "math_id": 15, "text": "1-x" }, { "math_id": 16, "text": "xy" }, { "math_id": 17, "text": "x + y - xy" }, { "math_id": 18, "text": "\\begin{array}{lcl} f^*(00) & = & (f^*)(00) & = & f(00) \\\\ \nf^*(01) & = & (\\partial_1f^*)(00) & = & -f(00) + f(01) \\\\\nf^*(10) & = & (\\partial_2f^*)(00) & = & -f(00) + f(10) \\\\\nf^*(11) & = & (\\partial_1\\partial_2f^*)(00) & = & f(00) -f(01)-f(10)+f(11) \\\\\n\\end{array}" }, { "math_id": 19, "text": "f^*(m) = \\sum_{a \\subseteq m} (-1)^{|a|+|m|} f(a)" }, { "math_id": 20, "text": "|a|" }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "\\hat f(m) = \\bigoplus_{a \\subseteq m} f(a)" }, { "math_id": 23, "text": "[0,1]^n" }, { "math_id": 24, "text": "f^*(x): [0,1]^n \\rightarrow [0,1]" }, { "math_id": 25, "text": "\\{-1, 1\\}" }, { "math_id": 26, "text": "g(x): \\{-1,1\\}^n \\rightarrow \\{-1,1\\}" }, { "math_id": 27, "text": "g^*(x) = \\sum_{a \\in {\\{-1,1\\}}^n} g(a) \\prod_{i:a_i=-1} \\frac{1-x_i}{2} \\prod_{i:a_i=1} \\frac{1+x_i}{2}" }, { "math_id": 28, "text": "E(X) = P(X=1) - P(X=-1) \\in [-1, 1]" } ]
https://en.wikipedia.org/wiki?curid=753349
75336162
Expanding approvals rule
The expanding approvals rule (EAR) is a rule for multi-winner elections that guarantees a form of proportional representation called proportionality for solid coalitions. It is a generalization of the highest median rules to include multiwinner elections and participatory budgeting. When working with ranked ballots, it is sometimes called the Bucklin transferable vote. However, the rule can be more effectively implemented using rated ballots, which are easier to use and provide additional cardinal utility information that can be used for better decision-making. Procedure. Say there are "n" voters and "k" seats to be filled. Each voter has one vote. Groups of voters can expend their vote to elect a candidate, where the cost to elect a candidate is given by an electoral quota, most often assumed to be the Hare quota of formula_0. EAR sets an approval threshold that advances grade by grade, starting at the highest grade and lowering the bar at each iteration. As this bar is lowered, the number of approved candidates expands. When advancing to a new rating of formula_1: while there is a candidate rated formula_1 or better by a group of voters whose remaining votes add up to at least the quota, one such candidate is elected (step 1), and the quota price is deducted from the remaining votes of its supporters (step 2). Properties. EAR satisfies generalized proportionality for solid coalitions (GPSC): a property for ordinal weak preferences that generalizes both proportionality for solid coalitions (for strict preferences) and proportional justified representation (for dichotomous preferences). Further, EAR satisfies several weak candidate monotonicity properties. Related rules. The method of equal shares (MES) can be seen as a special case of EAR, in which, in step 1, the elected candidate is a candidate that can be purchased at the smallest price (in general, it is the candidate supported by the largest number of voters with remaining funds), and in step 2, the price is deducted as equally as possible (those who have insufficient budget pay all their remaining budget, and the others pay equally). Single transferable vote (STV) can also be seen as a variant of EAR, in which voters always approve only their top candidate ("r"=1); however, if no candidate can be "purchased" by voters ranking it first, the candidate whose supporters have the fewest leftover votes is removed, bringing a new candidate to the top position of these voters. Like EAR, STV satisfies proportionality for solid coalitions. However, EAR has better candidate monotonicity properties. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n/k" }, { "math_id": 1, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=75336162
7533645
Cone (category theory)
In category theory, a branch of mathematics, the cone of a functor is an abstract notion used to define the limit of that functor. Cones make other appearances in category theory as well. Definition. Let "F" : "J" → "C" be a diagram in "C". Formally, a diagram is nothing more than a functor from "J" to "C". The change in terminology reflects the fact that we think of "F" as indexing a family of objects and morphisms in "C". The category "J" is thought of as an "index category". One should consider this in analogy with the concept of an indexed family of objects in set theory. The primary difference is that here we have morphisms as well. Thus, for example, when "J" is a discrete category, it corresponds most closely to the idea of an indexed family in set theory. Another common and more interesting example takes "J" to be a span. "J" can also be taken to be the empty category, leading to the simplest cones. Let "N" be an object of "C". A cone from "N" to "F" is a family of morphisms formula_0 for each object "X" of "J", such that for every morphism "f" : "X" → "Y" in "J" the following diagram commutes: The (usually infinite) collection of all these triangles can be (partially) depicted in the shape of a cone with the apex "N". The cone ψ is sometimes said to have vertex "N" and base "F". One can also define the dual notion of a cone from "F" to "N" (also called a co-cone) by reversing all the arrows above. Explicitly, a co-cone from "F" to "N" is a family of morphisms formula_1 for each object "X" of "J", such that for every morphism "f" : "X" → "Y" in "J" the following diagram commutes: Equivalent formulations. At first glance cones seem to be slightly abnormal constructions in category theory. They are maps from an "object" to a "functor" (or vice versa). In keeping with the spirit of category theory we would like to define them as morphisms or objects in some suitable category. In fact, we can do both. Let "J" be a small category and let "C""J" be the category of diagrams of type "J" in "C" (this is nothing more than a functor category). Define the diagonal functor Δ : "C" → "C""J" as follows: Δ("N") : "J" → "C" is the constant functor to "N" for all "N" in "C". If "F" is a diagram of type "J" in "C", the following statements are equivalent: ψ is a cone from "N" to "F"; ψ is a natural transformation from Δ("N") to "F"; and ("N", ψ) is an object in the comma category (Δ ↓ "F"). The dual statements are also equivalent: ψ is a co-cone from "F" to "N"; ψ is a natural transformation from "F" to Δ("N"); and ("N", ψ) is an object in the comma category ("F" ↓ Δ). These statements can all be verified by a straightforward application of the definitions. Thinking of cones as natural transformations we see that they are just morphisms in "C""J" with source (or target) a constant functor. Category of cones. By the above, we can define the category of cones to "F" as the comma category (Δ ↓ "F"). Morphisms of cones are then just morphisms in this category. This equivalence is rooted in the observation that a natural map between constant functors Δ("N"), Δ("M") corresponds to a morphism between "N" and "M". In this sense, the diagonal functor acts trivially on arrows. In a similar vein, writing down the definition of a natural map from a constant functor Δ("N") to "F" yields the same diagram as the above. As one might expect, a morphism from a cone ("N", ψ) to a cone ("L", φ) is just a morphism "N" → "L" such that all the "obvious" diagrams commute (see the first diagram in the next section). Likewise, the category of co-cones from "F" is the comma category ("F" ↓ Δ). Universal cones. Limits and colimits are defined as universal cones. That is, cones through which all other cones factor. 
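For example, if "J" is the discrete category with two objects, a cone from "N" to "F" amounts to a pair of morphisms from "N" to the two objects in the image of "F", and a universal such cone (in the sense made precise below) is exactly the product of those two objects together with its projections. Similarly, when "J" is the empty category, a cone over the empty diagram is just an object "N" of "C", and a universal cone over the empty diagram is a terminal object of "C".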
A cone φ from "L" to "F" is a universal cone if for any other cone ψ from "N" to "F" there is a unique morphism from ψ to φ. Equivalently, a universal cone to "F" is a universal morphism from Δ to "F" (thought of as an object in "C""J"), or a terminal object in (Δ ↓ "F"). Dually, a cone φ from "F" to "L" is a universal cone if for any other cone ψ from "F" to "N" there is a unique morphism from φ to ψ. Equivalently, a universal cone from "F" is a universal morphism from "F" to Δ, or an initial object in ("F" ↓ Δ). The limit of "F" is a universal cone to "F", and the colimit is a universal cone from "F". As with all universal constructions, universal cones are not guaranteed to exist for all diagrams "F", but if they do exist they are unique up to a unique isomorphism (in the comma category (Δ ↓ "F")). References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\psi_X\\colon N \\to F(X)\\," }, { "math_id": 1, "text": "\\psi_X\\colon F(X)\\to N\\," } ]
https://en.wikipedia.org/wiki?curid=7533645
7534
Centripetal force
Force directed to the center of rotation &lt;templatestyles src="Hlist/styles.css"/&gt; A centripetal force (from Latin "centrum", "center" and "petere", "to seek") is a force that makes a body follow a curved path. The direction of the centripetal force is always orthogonal to the motion of the body and towards the fixed point of the instantaneous center of curvature of the path. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force causing astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path. The centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. Formula. From the kinematics of curved motion it is known that an object moving at tangential speed "v" along a path with radius of curvature "r" accelerates toward the center of curvature at a rate formula_0 Here, formula_1 is the centripetal acceleration and formula_2 is the difference between the velocity vectors at formula_3 and formula_4. By Newton's second law, the cause of acceleration is a net force acting on the object, which is proportional to its mass "m" and its acceleration. The force, usually referred to as a "centripetal force", has a magnitude formula_5 and is, like centripetal acceleration, directed toward the center of curvature of the object's trajectory. Derivation. The centripetal acceleration can be inferred from the diagram of the velocity vectors at two instances. In the case of uniform circular motion the velocities have constant magnitude. Because each one is perpendicular to its respective position vector, simple vector subtraction implies two similar isosceles triangles with congruent angles – one comprising a base of formula_2 and a leg length of formula_6, and the other a base of formula_7 (position vector difference) and a leg length of formula_8: formula_9 formula_10 Therefore, formula_11 can be substituted with formula_12: formula_13 The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle (the circle that best fits the local path of the object, if the path is not circular). The speed in the formula is squared, so twice the speed needs four times the force, at a given radius. This force is also sometimes written in terms of the angular velocity "ω" of the object about the center of the circle, related to the tangential velocity by the formula formula_14 so that formula_15 Expressed using the orbital period "T" for one revolution of the circle, formula_16 the equation becomes formula_17 In particle accelerators, velocity can be very high (close to the speed of light in vacuum) so the same rest mass now exerts greater inertia (relativistic mass) thereby requiring greater force for the same centripetal acceleration, so the equation becomes: formula_18 where formula_19 is the Lorentz factor. Thus the centripetal force is given by: formula_20 which is the rate of change of relativistic momentum formula_21. Sources. In the case of an object that is swinging around on the end of a rope in a horizontal plane, the centripetal force on the object is supplied by the tension of the rope. The rope example is an example involving a 'pull' force. 
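As a quick numerical illustration of the formulas in the section above, the following is a minimal Python sketch (the function names and sample values are purely illustrative) evaluating the classical magnitude m·v^2/r and its relativistic counterpart.

```python
from math import sqrt

C = 299_792_458.0  # speed of light in m/s

def centripetal_force(m, v, r):
    """Classical magnitude F = m * v**2 / r, in SI units (kg, m/s, m -> N)."""
    return m * v**2 / r

def centripetal_force_relativistic(m, v, r):
    """Relativistic magnitude F = gamma * m * v**2 / r, with gamma the Lorentz factor."""
    gamma = 1.0 / sqrt(1.0 - (v / C) ** 2)
    return gamma * centripetal_force(m, v, r)

# A 1 kg mass moving at 10 m/s around a 2 m radius needs 50 N toward the centre:
print(centripetal_force(1.0, 10.0, 2.0))  # 50.0
# Near light speed the required force grows by the Lorentz factor (about 2.29 at 0.9 c):
print(centripetal_force_relativistic(1.0, 0.9 * C, 1.0e9)
      / centripetal_force(1.0, 0.9 * C, 1.0e9))  # ~2.29
```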
The centripetal force can also be supplied as a 'push' force, such as in the case where the normal reaction of a wall supplies the centripetal force for a wall of death or a Rotor rider. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. When a satellite is in orbit around a planet, gravity is considered to be a centripetal force even though in the case of eccentric orbits, the gravitational force is directed towards the focus, and not towards the instantaneous center of curvature. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. Analysis of several cases. Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion. Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case. Calculus derivation. In two dimensions, the position vector formula_22, which has magnitude (length) formula_8 and is directed at an angle formula_23 above the x-axis, can be expressed in Cartesian coordinates using the unit vectors formula_24 and formula_25: formula_26 The assumption of uniform circular motion requires three things: the object moves only on a circle; the radius of the circle does not change in time; and the object moves with constant angular velocity formula_27 around the circle, so that its angular position is formula_28. The velocity formula_29 and acceleration formula_30 of the motion are the first and second derivatives of position with respect to time: formula_31 formula_32 formula_33 The term in parentheses is the original expression of formula_22 in Cartesian coordinates. Consequently, formula_34 The negative sign shows that the acceleration is pointed towards the center of the circle (opposite the radius), hence it is called "centripetal" (i.e. "center-seeking"). While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. Derivation using vectors. The image at right shows the vector relationships for uniform circular motion. The rotation itself is represented by the angular velocity vector Ω, which is normal to the plane of the orbit (using the right-hand rule) and has magnitude given by: formula_35 with "θ" the angular position at time "t". In this subsection, d"θ"/d"t" is assumed constant, independent of time. The distance traveled dℓ of the particle in time d"t" along the circular path is formula_36 which, by properties of the vector cross product, has magnitude "r"d"θ" and is in the direction tangent to the circular path. Consequently, formula_37 In other words, formula_38 Differentiating with respect to time, formula_39 Lagrange's formula states: formula_40 Applying Lagrange's formula with the observation that Ω • r("t") = 0 at all times, formula_41 In words, the acceleration is pointing directly opposite to the radial displacement r at all times, and has a magnitude: formula_42 where vertical bars |...| denote the vector magnitude, which in the case of r("t") is simply the radius "r" of the path. This result agrees with the previous section, though the notation is slightly different. When the rate of rotation is made constant in the analysis of nonuniform circular motion, that analysis agrees with this one. A merit of the vector approach is that it is manifestly independent of any coordinate system. Example: The banked turn. 
The upper panel in the image at right shows a ball in circular motion on a banked curve. The curve is banked at an angle "θ" from the horizontal, and the surface of the road is considered to be slippery. The objective is to find what angle the bank must have so the ball does not slide off the road. Intuition tells us that, on a flat curve with no banking at all, the ball will simply slide off the road; while with a very steep banking, the ball will slide to the center unless it travels the curve rapidly. Apart from any acceleration that might occur in the direction of the path, the lower panel of the image above indicates the forces on the ball. There are "two" forces; one is the force of gravity vertically downward through the center of mass of the ball, "m"g, where "m" is the mass of the ball and g is the gravitational acceleration; the second is the upward normal force exerted by the road at a right angle to the road surface, "m"a"n". The centripetal force demanded by the curved motion is also shown above. This centripetal force is not a third force applied to the ball, but rather must be provided by the net force on the ball resulting from vector addition of the normal force and the force of gravity. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and vertical force due to gravity must equal the centripetal force dictated by the need to travel a circular path. The curved motion is maintained so long as this net force provides the centripetal force requisite to the motion. The horizontal net force on the ball is the horizontal component of the force from the road, which has magnitude |Fh| = "m"|an| sin "θ". The vertical component of the force from the road must counteract the gravitational force: |Fv| = "m"|an| cos "θ" = "m"|g|, which implies |an| = |g| / cos "θ". Substituting into the above formula for |Fh| yields a horizontal force of: formula_43 On the other hand, at velocity |v| on a circular path of radius "r", kinematics says that the force needed to turn the ball continuously into the turn is the radially inward centripetal force Fc of magnitude: formula_44 Consequently, the ball is in a stable path when the angle of the road is set to satisfy the condition: formula_45 or, formula_46 As the angle of bank "θ" approaches 90°, the tangent function approaches infinity, allowing larger values for |v|^2/"r". In words, this equation states that for greater speeds (bigger |v|) the road must be banked more steeply (a larger value for "θ"), and for sharper turns (smaller "r") the road also must be banked more steeply, which accords with intuition. When the angle "θ" does not satisfy the above condition, the horizontal component of force exerted by the road does not provide the correct centripetal force, and an additional frictional force tangential to the road surface is called upon to provide the difference. If friction cannot do this (that is, the coefficient of friction is exceeded), the ball slides to a different radius where the balance can be realized. These ideas apply to air flight as well. See the FAA pilot's manual. Nonuniform circular motion. As a generalization of the uniform circular motion case, suppose the angular rate of rotation is not constant. The acceleration now has a tangential component, as shown in the image at right. This case is used to demonstrate a derivation strategy based on a polar coordinate system. Let r("t") be a vector that describes the position of a point mass as a function of time. 
Since we are assuming circular motion, let r("t") = "R"·u"r", where "R" is a constant (the radius of the circle) and ur is the unit vector pointing from the origin to the point mass. The direction of u"r" is described by "θ", the angle between the x-axis and the unit vector, measured counterclockwise from the x-axis. The other unit vector for polar coordinates, uθ is perpendicular to u"r" and points in the direction of increasing "θ". These polar unit vectors can be expressed in terms of Cartesian unit vectors in the "x" and "y" directions, denoted formula_47 and formula_48 respectively: formula_49 and formula_50 One can differentiate to find velocity: formula_51 where ω is the angular velocity "dθ"/"dt". This result for the velocity matches expectations that the velocity should be directed tangentially to the circle, and that the magnitude of the velocity should be "rω". Differentiating again, and noting that formula_52 we find that the acceleration, a is: formula_53 Thus, the radial and tangential components of the acceleration are: formula_54 and formula_55 where |v| = "r" "ω" is the magnitude of the velocity (the speed). These equations express mathematically that, in the case of an object that moves along a circular path with a changing speed, the acceleration of the body may be decomposed into a perpendicular component that changes the direction of motion (the centripetal acceleration), and a parallel, or tangential component, that changes the speed. General planar motion. Polar coordinates. The above results can be derived perhaps more simply in polar coordinates, and at the same time extended to general motion within a plane, as shown next. Polar coordinates in the plane employ a radial unit vector uρ and an angular unit vector uθ, as shown above. A particle at position r is described by: formula_56 where the notation "ρ" is used to describe the distance of the path from the origin instead of "R" to emphasize that this distance is not fixed, but varies with time. The unit vector uρ travels with the particle and always points in the same direction as r("t"). Unit vector uθ also travels with the particle and stays orthogonal to uρ. Thus, uρ and uθ form a local Cartesian coordinate system attached to the particle, and tied to the path travelled by the particle. By moving the unit vectors so their tails coincide, as seen in the circle at the left of the image above, it is seen that uρ and uθ form a right-angled pair with tips on the unit circle that trace back and forth on the perimeter of this circle with the same angle "θ"("t") as r("t"). When the particle moves, its velocity is formula_57 To evaluate the velocity, the derivative of the unit vector uρ is needed. Because uρ is a unit vector, its magnitude is fixed, and it can change only in direction, that is, its change duρ has a component only perpendicular to uρ. When the trajectory r("t") rotates an amount d"θ", uρ, which points in the same direction as r("t"), also rotates by d"θ". See image above. Therefore, the change in uρ is formula_58 or formula_59 In a similar fashion, the rate of change of uθ is found. As with uρ, uθ is a unit vector and can only rotate without changing size. To remain orthogonal to uρ while the trajectory r("t") rotates an amount d"θ", uθ, which is orthogonal to r("t"), also rotates by d"θ". See image above. 
Therefore, the change duθ is orthogonal to uθ and proportional to d"θ" (see image above): formula_60 The equation above shows the sign to be negative: to maintain orthogonality, if duρ is positive with d"θ", then duθ must decrease. Substituting the derivative of uρ into the expression for velocity: formula_61 To obtain the acceleration, another time differentiation is done: formula_62 Substituting the derivatives of uρ and uθ, the acceleration of the particle is: formula_63 As a particular example, if the particle moves in a circle of constant radius "R", then d"ρ"/d"t" = 0, v = vθ, and: formula_64 where formula_65 These results agree with those above for nonuniform circular motion. See also the article on non-uniform circular motion. If this acceleration is multiplied by the particle mass, the leading term is the centripetal force and the negative of the second term related to angular acceleration is sometimes called the Euler force. For trajectories other than circular motion, for example, the more general trajectory envisioned in the image above, the instantaneous center of rotation and radius of curvature of the trajectory are related only indirectly to the coordinate system defined by uρ and uθ and to the length |r("t")| = "ρ". Consequently, in the general case, it is not straightforward to disentangle the centripetal and Euler terms from the above general acceleration equation. To deal directly with this issue, local coordinates are preferable, as discussed next. Local coordinates. Local coordinates mean a set of coordinates that travel with the particle, and have orientation determined by the path of the particle. Unit vectors are formed as shown in the image at right, both tangential and normal to the path. This coordinate system sometimes is referred to as "intrinsic" or "path coordinates" or "nt-coordinates", for "normal-tangential", referring to these unit vectors. These coordinates are a very special example of a more general concept of local coordinates from the theory of differential forms. Distance along the path of the particle is the arc length "s", considered to be a known function of time. formula_66 A center of curvature is defined at each position "s" located a distance "ρ" (the radius of curvature) from the curve on a line along the normal un ("s"). The required distance "ρ"("s") at arc length "s" is defined in terms of the rate of rotation of the tangent to the curve, which in turn is determined by the path itself. If the orientation of the tangent relative to some starting position is "θ"("s"), then "ρ"("s") is defined by the derivative d"θ"/d"s": formula_67 The radius of curvature usually is taken as positive (that is, as an absolute value), while the "curvature" "κ" is a signed quantity. A geometric approach to finding the center of curvature and the radius of curvature uses a limiting process leading to the osculating circle. See image above. Using these coordinates, the motion along the path is viewed as a succession of circular paths of ever-changing center, and at each position "s" constitutes non-uniform circular motion at that position with radius "ρ". 
The local value of the angular rate of rotation then is given by: formula_68 with the local speed "v" given by: formula_69 As for the other examples above, because unit vectors cannot change magnitude, their rate of change is always perpendicular to their direction (see the left-hand insert in the image above): formula_70 formula_71 Consequently, the velocity and acceleration are: formula_72 and using the chain-rule of differentiation: formula_73 with the tangential acceleration formula_74 In this local coordinate system, the acceleration resembles the expression for nonuniform circular motion with the local radius "ρ"("s"), and the centripetal acceleration is identified as the second term. Extending this approach to three dimensional space curves leads to the Frenet–Serret formulas. Alternative approach. Looking at the image above, one might wonder whether adequate account has been taken of the difference in curvature between "ρ"("s") and "ρ"("s" + d"s") in computing the arc length as d"s" = "ρ"("s")d"θ". Reassurance on this point can be found using a more formal approach outlined below. This approach also makes connection with the article on curvature. To introduce the unit vectors of the local coordinate system, one approach is to begin in Cartesian coordinates and describe the local coordinates in terms of these Cartesian coordinates. In terms of arc length "s", let the path be described as: formula_75 Then an incremental displacement along the path d"s" is described by: formula_76 where primes are introduced to denote derivatives with respect to "s". The magnitude of this displacement is d"s", showing that: formula_77 (Eq. 1) This displacement is necessarily a tangent to the curve at "s", showing that the unit vector tangent to the curve is: formula_78 while the outward unit vector normal to the curve is formula_79 Orthogonality can be verified by showing that the vector dot product is zero. The unit magnitude of these vectors is a consequence of Eq. 1. Using the tangent vector, the angle "θ" of the tangent to the curve is given by: formula_80 and formula_81 The radius of curvature is introduced completely formally (without need for geometric interpretation) as: formula_82 The derivative of "θ" can be found from that for sin"θ": formula_83 Now: formula_84 in which the denominator is unity. With this formula for the derivative of the sine, the radius of curvature becomes: formula_85 where the equivalence of the forms stems from differentiation of Eq. 1: formula_86 With these results, the acceleration can be found: formula_87 as can be verified by taking the dot product with the unit vectors ut("s") and un("s"). This result for acceleration is the same as that for circular motion based on the radius "ρ". Using this coordinate system in the inertial frame, it is easy to identify the force normal to the trajectory as the centripetal force and that parallel to the trajectory as the tangential force. From a qualitative standpoint, the path can be approximated by an arc of a circle for a limited time, and for the limited time a particular radius of curvature applies, the centrifugal and Euler forces can be analyzed on the basis of circular motion with that radius. This result for acceleration agrees with that found earlier. 
However, in this approach, the question of the change in radius of curvature with "s" is handled completely formally, consistent with a geometric interpretation, but not relying upon it, thereby avoiding any questions the image above might suggest about neglecting the variation in "ρ". Example: circular motion. To illustrate the above formulas, let "x", "y" be given as: formula_88 Then: formula_89 which can be recognized as a circular path around the origin with radius "α". The position "s" = 0 corresponds to ["α", 0], or 3 o'clock. To use the above formalism, the derivatives are needed: formula_90 formula_91 With these results, one can verify that: formula_92 The unit vectors can also be found: formula_93 which serve to show that "s" = 0 is located at position ["ρ", 0] and "s" = "ρ"π/2 at [0, "ρ"], which agrees with the original expressions for "x" and "y". In other words, "s" is measured counterclockwise around the circle from 3 o'clock. Also, the derivatives of these vectors can be found: formula_94 formula_95 To obtain velocity and acceleration, a time-dependence for "s" is necessary. For counterclockwise motion at variable speed "v"("t"): formula_96 where "v"("t") is the speed and "t" is time, and "s"("t" = 0) = 0. Then: formula_97 formula_98 formula_99 where it already is established that α = ρ. This acceleration is the standard result for non-uniform circular motion. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\textbf{a}_c = \\lim_{\\Delta t \\to 0} \\frac{\\Delta \\textbf{v}}{\\Delta t}, \\quad a_c = \\frac{v^2}{r}" }, { "math_id": 1, "text": "a_c" }, { "math_id": 2, "text": "\\Delta \\textbf{v}" }, { "math_id": 3, "text": "t+\\Delta{t}" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "F_c = ma_c = m\\frac{v^2}{r}" }, { "math_id": 6, "text": "v" }, { "math_id": 7, "text": "\\Delta \\textbf{r}" }, { "math_id": 8, "text": "r" }, { "math_id": 9, "text": "\\frac{|\\Delta \\textbf{v}|}{v} = \\frac{|\\Delta \\textbf{r}|}{r}" }, { "math_id": 10, "text": "|\\Delta \\textbf{v}| = \\frac{v}{r}|\\Delta \\textbf{r}|" }, { "math_id": 11, "text": "|\\Delta\\textbf{v}|" }, { "math_id": 12, "text": "\\frac{v}{r} |\\Delta \\textbf{r}|" }, { "math_id": 13, "text": "a_c = \\lim_{\\Delta t \\to 0} \\frac{|\\Delta \\textbf{v}|}{\\Delta t} = \\frac{v}{r} \\lim_{\\Delta t \\to 0} \\frac{|\\Delta \\textbf{r}|}{\\Delta t} = \\omega\\lim_{\\Delta t \\to 0} \\frac{|\\Delta \\textbf{r}|}{\\Delta t} = v\\omega = \\frac{v^2}{r}" }, { "math_id": 14, "text": "v = \\omega r" }, { "math_id": 15, "text": "F_c = m r \\omega^2 \\,." }, { "math_id": 16, "text": "\\omega = \\frac{2\\pi}{T} " }, { "math_id": 17, "text": "F_c = m r \\left(\\frac{2\\pi}{T}\\right)^2." }, { "math_id": 18, "text": "F_c = \\frac{\\gamma m v^2}{r}" }, { "math_id": 19, "text": "\\gamma = \\frac{1}{\\sqrt{1-\\frac{v^2}{c^2}}}" }, { "math_id": 20, "text": "F_c = \\gamma m v \\omega" }, { "math_id": 21, "text": "\\gamma m v" }, { "math_id": 22, "text": "\\textbf{r}" }, { "math_id": 23, "text": "\\theta" }, { "math_id": 24, "text": "\\hat\\mathbf x" }, { "math_id": 25, "text": "\\hat\\mathbf y" }, { "math_id": 26, "text": " \\textbf{r} = r \\cos(\\theta) \\hat\\mathbf x + r \\sin(\\theta) \\hat\\mathbf y. " }, { "math_id": 27, "text": "\\omega" }, { "math_id": 28, "text": "\\theta = \\omega t" }, { "math_id": 29, "text": "\\textbf{v}" }, { "math_id": 30, "text": "\\textbf{a}" }, { "math_id": 31, "text": " \\textbf{r} = r \\cos(\\omega t) \\hat\\mathbf x + r \\sin(\\omega t) \\hat\\mathbf y, " }, { "math_id": 32, "text": " \\textbf{v} = \\dot{\\textbf{r}} = - r \\omega \\sin(\\omega t) \\hat\\mathbf x + r \\omega \\cos(\\omega t) \\hat\\mathbf y, " }, { "math_id": 33, "text": " \\textbf{a} = \\ddot{\\textbf{r}} = - \\omega^2 (r \\cos(\\omega t) \\hat\\mathbf x + r \\sin(\\omega t) \\hat\\mathbf y). " }, { "math_id": 34, "text": " \\textbf{a} = - \\omega^2 \\textbf{r}. " }, { "math_id": 35, "text": " |\\mathbf{\\Omega}| = \\frac {\\mathrm{d} \\theta } {\\mathrm{d}t} = \\omega \\ , " }, { "math_id": 36, "text": " \\mathrm{d}\\boldsymbol{\\ell} = \\mathbf {\\Omega} \\times \\mathbf{r}(t) \\mathrm{d}t \\ , " }, { "math_id": 37, "text": "\\frac {\\mathrm{d} \\mathbf{r}}{\\mathrm{d}t} = \\lim_{{\\Delta}t \\to 0} \\frac {\\mathbf{r}(t + {\\Delta}t)-\\mathbf{r}(t)}{{\\Delta}t} = \\frac{\\mathrm{d} \\boldsymbol{\\ell}}{\\mathrm{d}t} \\ ." }, { "math_id": 38, "text": " \\mathbf{v}\\ \\stackrel{\\mathrm{def}}{ = }\\ \\frac {\\mathrm{d} \\mathbf{r}}{\\mathrm{d}t} = \\frac {\\mathrm{d}\\mathbf{\\boldsymbol{\\ell}}}{\\mathrm{d}t} = \\mathbf {\\Omega} \\times \\mathbf{r}(t)\\ . " }, { "math_id": 39, "text": " \\mathbf{a}\\ \\stackrel{\\mathrm{def}}{ = }\\ \\frac {\\mathrm{d} \\mathbf{v}} {d\\mathrm{t}} = \\mathbf {\\Omega} \\times \\frac{\\mathrm{d} \\mathbf{r}(t)}{\\mathrm{d}t} = \\mathbf{\\Omega} \\times \\left[ \\mathbf {\\Omega} \\times \\mathbf{r}(t)\\right] \\ ." 
}, { "math_id": 40, "text": " \\mathbf{a} \\times \\left ( \\mathbf{b} \\times \\mathbf{c} \\right ) = \\mathbf{b} \\left ( \\mathbf{a} \\cdot \\mathbf{c} \\right ) - \\mathbf{c} \\left ( \\mathbf{a} \\cdot \\mathbf{b} \\right ) \\ ." }, { "math_id": 41, "text": " \\mathbf{a} = - {|\\mathbf{\\Omega|}}^2 \\mathbf{r}(t) \\ ." }, { "math_id": 42, "text": " |\\mathbf{a}| = |\\mathbf{r}(t)| \\left ( \\frac {\\mathrm{d} \\theta}{\\mathrm{d}t} \\right) ^2 = r {\\omega}^2 " }, { "math_id": 43, "text": " |\\mathbf{F}_\\mathrm{h}| = m |\\mathbf{g}| \\frac { \\sin \\theta}{ \\cos \\theta} = m|\\mathbf{g}| \\tan \\theta \\, . " }, { "math_id": 44, "text": "|\\mathbf{F}_\\mathrm{c}| = m |\\mathbf{a}_\\mathrm{c}| = \\frac{m|\\mathbf{v}|^2}{r} \\, . " }, { "math_id": 45, "text": "m |\\mathbf{g}| \\tan \\theta = \\frac{m|\\mathbf{v}|^2}{r} \\, ," }, { "math_id": 46, "text": " \\tan \\theta = \\frac {|\\mathbf{v}|^2} {|\\mathbf{g}|r} \\, ." }, { "math_id": 47, "text": "\\hat\\mathbf i" }, { "math_id": 48, "text": "\\hat\\mathbf j" }, { "math_id": 49, "text": "\\mathbf u_r = \\cos \\theta \\ \\hat\\mathbf i + \\sin \\theta \\ \\hat\\mathbf j" }, { "math_id": 50, "text": "\\mathbf u_\\theta = - \\sin \\theta \\ \\hat\\mathbf i + \\cos \\theta \\ \\hat\\mathbf j." }, { "math_id": 51, "text": "\\begin{align}\n\\mathbf{v} &= r \\frac {d \\mathbf{u}_r}{dt} \\\\\n&= r \\frac {d}{dt} \\left( \\cos \\theta \\ \\hat\\mathbf{i} + \\sin \\theta \\ \\hat\\mathbf{j}\\right) \\\\\n&= r \\frac {d \\theta}{dt} \\frac{d}{d \\theta} \\left( \\cos \\theta \\ \\hat\\mathbf{i} + \\sin \\theta \\ \\hat\\mathbf{j}\\right) \\\\\n& = r \\frac {d \\theta} {dt} \\left( -\\sin \\theta \\ \\hat\\mathbf{i} + \\cos \\theta \\ \\hat\\mathbf{j}\\right)\\\\\n& = r \\frac{d\\theta}{dt} \\mathbf{u}_\\theta \\\\\n& = \\omega r \\mathbf{u}_\\theta\n\\end{align}" }, { "math_id": 52, "text": "\\frac {d\\mathbf{u}_\\theta}{dt} = -\\frac{d\\theta}{dt} \\mathbf{u}_r = - \\omega \\mathbf{u}_r \\ , " }, { "math_id": 53, "text": "\\mathbf{a} = r \\left( \\frac {d\\omega}{dt} \\mathbf{u}_\\theta - \\omega^2 \\mathbf{u}_r \\right) \\ . " }, { "math_id": 54, "text": "\\mathbf{a}_{r} = - \\omega^{2} r \\ \\mathbf{u}_r = - \\frac{|\\mathbf{v}|^2}{r} \\ \\mathbf{u}_r " }, { "math_id": 55, "text": "\\mathbf{a}_\\theta = r \\ \\frac {d\\omega}{dt} \\ \\mathbf{u}_\\theta = \\frac {d | \\mathbf{v} | }{dt} \\ \\mathbf{u}_\\theta \\ , " }, { "math_id": 56, "text": "\\mathbf{r} = \\rho \\mathbf{u}_{\\rho} \\ , " }, { "math_id": 57, "text": " \\mathbf{v} = \\frac {\\mathrm{d} \\rho }{\\mathrm{d}t} \\mathbf{u}_{\\rho} + \\rho \\frac {\\mathrm{d} \\mathbf{u}_{\\rho}}{\\mathrm{d}t} \\, . " }, { "math_id": 58, "text": " \\mathrm{d} \\mathbf{u}_{\\rho} = \\mathbf{u}_{\\theta} \\mathrm{d}\\theta \\, , " }, { "math_id": 59, "text": " \\frac {\\mathrm{d} \\mathbf{u}_{\\rho}}{\\mathrm{d}t} = \\mathbf{u}_{\\theta} \\frac {\\mathrm{d}\\theta}{\\mathrm{d}t} \\, . " }, { "math_id": 60, "text": " \\frac{\\mathrm{d} \\mathbf{u}_{\\theta}}{\\mathrm{d}t} = -\\frac {\\mathrm{d} \\theta} {\\mathrm{d}t} \\mathbf{u}_{\\rho} \\, . " }, { "math_id": 61, "text": " \\mathbf{v} = \\frac {\\mathrm{d} \\rho }{\\mathrm{d}t} \\mathbf{u}_{\\rho} + \\rho \\mathbf{u}_{\\theta} \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t} = v_{\\rho} \\mathbf{u}_{\\rho} + v_{\\theta} \\mathbf{u}_{\\theta} = \\mathbf{v}_{\\rho} + \\mathbf{v}_{\\theta} \\, . 
" }, { "math_id": 62, "text": " \\mathbf{a} = \\frac {\\mathrm{d}^2 \\rho }{\\mathrm{d}t^2} \\mathbf{u}_{\\rho} + \\frac {\\mathrm{d} \\rho }{\\mathrm{d}t} \\frac{\\mathrm{d} \\mathbf{u}_{\\rho}}{\\mathrm{d}t} + \\frac {\\mathrm{d} \\rho}{\\mathrm{d}t} \\mathbf{u}_{\\theta} \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t} + \\rho \\frac{\\mathrm{d} \\mathbf{u}_{\\theta}}{\\mathrm{d}t} \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t} + \\rho \\mathbf{u}_{\\theta} \\frac {\\mathrm{d}^2 \\theta} {\\mathrm{d}t^2} \\, . " }, { "math_id": 63, "text": "\\begin{align}\n\\mathbf{a} & = \\frac {\\mathrm{d}^2 \\rho }{\\mathrm{d}t^2} \\mathbf{u}_{\\rho} + 2\\frac {\\mathrm{d} \\rho}{\\mathrm{d}t} \\mathbf{u}_{\\theta} \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t} - \\rho \\mathbf{u}_{\\rho} \\left( \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t}\\right)^2 + \\rho \\mathbf{u}_{\\theta} \\frac {\\mathrm{d}^2 \\theta} {\\mathrm{d}t^2} \\ , \\\\\n& = \\mathbf{u}_{\\rho} \\left[ \\frac {\\mathrm{d}^2 \\rho }{\\mathrm{d}t^2}-\\rho\\left( \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t}\\right)^2 \\right] + \\mathbf{u}_{\\theta}\\left[ 2\\frac {\\mathrm{d} \\rho}{\\mathrm{d}t} \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t} + \\rho \\frac {\\mathrm{d}^2 \\theta} {\\mathrm{d}t^2}\\right] \\\\\n& = \\mathbf{u}_{\\rho} \\left[ \\frac {\\mathrm{d}v_{\\rho}}{\\mathrm{d}t}-\\frac{v_{\\theta}^2}{\\rho}\\right] + \\mathbf{u}_{\\theta}\\left[ \\frac{2}{\\rho}v_{\\rho} v_{\\theta} + \\rho\\frac{\\mathrm{d}}{\\mathrm{d}t}\\frac{v_{\\theta}}{\\rho}\\right] \\, .\n\\end{align}" }, { "math_id": 64, "text": "\\mathbf{a} = \\mathbf{u}_{\\rho} \\left[ -\\rho\\left( \\frac {\\mathrm{d} \\theta} {\\mathrm{d}t}\\right)^2 \\right] + \\mathbf{u}_{\\theta}\\left[ \\rho \\frac {\\mathrm{d}^2 \\theta} {\\mathrm{d}t^2}\\right] = \\mathbf{u}_{\\rho} \\left[ -\\frac{v^2}{r}\\right] + \\mathbf{u}_{\\theta}\\left[ \\frac {\\mathrm{d} v} {\\mathrm{d}t}\\right] \\ " }, { "math_id": 65, "text": " v = v_{\\theta}. " }, { "math_id": 66, "text": " s = s(t) \\ . " }, { "math_id": 67, "text": "\\frac{1} {\\rho (s)} = \\kappa (s) = \\frac {\\mathrm{d}\\theta}{\\mathrm{d}s}\\ . " }, { "math_id": 68, "text": " \\omega(s) = \\frac{\\mathrm{d}\\theta}{\\mathrm{d}t} = \\frac{\\mathrm{d}\\theta}{\\mathrm{d}s} \\frac {\\mathrm{d}s}{\\mathrm{d}t} = \\frac{1}{\\rho(s)}\\ \\frac {\\mathrm{d}s}{\\mathrm{d}t} = \\frac{v(s)}{\\rho(s)}\\ ," }, { "math_id": 69, "text": " v(s) = \\frac {\\mathrm{d}s}{\\mathrm{d}t}\\ . " }, { "math_id": 70, "text": "\\frac{d\\mathbf{u}_\\mathrm{n}(s)}{ds} = \\mathbf{u}_\\mathrm{t}(s)\\frac{d\\theta}{ds} = \\mathbf{u}_\\mathrm{t}(s)\\frac{1}{\\rho} \\ ; " }, { "math_id": 71, "text": "\\frac{d\\mathbf{u}_\\mathrm{t}(s)}{\\mathrm{d}s} = -\\mathbf{u}_\\mathrm{n}(s)\\frac{\\mathrm{d}\\theta}{\\mathrm{d}s} = - \\mathbf{u}_\\mathrm{n}(s)\\frac{1}{\\rho} \\ . " }, { "math_id": 72, "text": " \\mathbf{v}(t) = v \\mathbf{u}_\\mathrm{t}(s)\\ ; " }, { "math_id": 73, "text": " \\mathbf{a}(t) = \\frac{\\mathrm{d}v}{\\mathrm{d}t} \\mathbf{u}_\\mathrm{t}(s) - \\frac{v^2}{\\rho}\\mathbf{u}_\\mathrm{n}(s) \\ ; " }, { "math_id": 74, "text": "\\frac{\\mathrm{\\mathrm{d}}v}{\\mathrm{\\mathrm{d}}t} = \\frac{\\mathrm{d}v}{\\mathrm{d}s}\\ \\frac{\\mathrm{d}s}{\\mathrm{d}t} = \\frac{\\mathrm{d}v}{\\mathrm{d}s}\\ v \\ . " }, { "math_id": 75, "text": "\\mathbf{r}(s) = \\left[ x(s),\\ y(s) \\right] . 
" }, { "math_id": 76, "text": "\\mathrm{d}\\mathbf{r}(s) = \\left[ \\mathrm{d}x(s),\\ \\mathrm{d}y(s) \\right] = \\left[ x'(s),\\ y'(s) \\right] \\mathrm{d}s \\ , " }, { "math_id": 77, "text": "\\left[ x'(s)^2 + y'(s)^2 \\right] = 1 \\ . " }, { "math_id": 78, "text": "\\mathbf{u}_\\mathrm{t}(s) = \\left[ x'(s), \\ y'(s) \\right] , " }, { "math_id": 79, "text": "\\mathbf{u}_\\mathrm{n}(s) = \\left[ y'(s),\\ -x'(s) \\right] , " }, { "math_id": 80, "text": "\\sin \\theta = \\frac{y'(s)}{\\sqrt{x'(s)^2 + y'(s)^2}} = y'(s) \\ ;" }, { "math_id": 81, "text": "\\cos \\theta = \\frac{x'(s)}{\\sqrt{x'(s)^2 + y'(s)^2}} = x'(s) \\ ." }, { "math_id": 82, "text": "\\frac{1}{\\rho} = \\frac{\\mathrm{d}\\theta}{\\mathrm{d}s}\\ . " }, { "math_id": 83, "text": "\\frac{\\mathrm{d} \\sin\\theta}{\\mathrm{d}s} = \\cos \\theta \\frac {\\mathrm{d}\\theta}{\\mathrm{d}s} = \\frac{1}{\\rho} \\cos \\theta \\ = \\frac{1}{\\rho} x'(s)\\ . " }, { "math_id": 84, "text": "\\frac{\\mathrm{d} \\sin \\theta }{\\mathrm{d}s} = \\frac{\\mathrm{d}}{\\mathrm{d}s} \\frac{y'(s)}{\\sqrt{x'(s)^2 + y'(s)^2}} = \\frac{y''(s)x'(s)^2-y'(s)x'(s)x''(s)} {\\left(x'(s)^2 + y'(s)^2\\right)^{3/2}}\\ , " }, { "math_id": 85, "text": "\\frac {\\mathrm{d}\\theta}{\\mathrm{d}s} = \\frac{1}{\\rho} = y''(s)x'(s) - y'(s)x''(s) = \\frac{y''(s)}{x'(s)} = -\\frac{x''(s)}{y'(s)} \\ ," }, { "math_id": 86, "text": "x'(s)x''(s) + y'(s)y''(s) = 0 \\ . " }, { "math_id": 87, "text": "\\begin{align}\n\\mathbf{a}(s) &= \\frac{\\mathrm{d}}{\\mathrm{d}t}\\mathbf{v}(s) = \\frac{\\mathrm{d}}{\\mathrm{d}t}\\left[\\frac{\\mathrm{d}s}{\\mathrm{d}t} \\left( x'(s), \\ y'(s) \\right) \\right] \\\\\n& = \\left(\\frac{\\mathrm{d}^2s}{\\mathrm{d}t^2}\\right)\\mathbf{u}_\\mathrm{t}(s) + \\left(\\frac{\\mathrm{d}s}{\\mathrm{d}t}\\right) ^2 \\left(x''(s),\\ y''(s) \\right) \\\\\n& = \\left(\\frac{\\mathrm{d}^2s}{\\mathrm{d}t^2}\\right)\\mathbf{u}_\\mathrm{t}(s) - \\left(\\frac{\\mathrm{d}s}{\\mathrm{d}t}\\right) ^2 \\frac{1}{\\rho} \\mathbf{u}_\\mathrm{n}(s)\n\\end{align}" }, { "math_id": 88, "text": "x = \\alpha \\cos \\frac{s}{\\alpha} \\ ; \\ y = \\alpha \\sin\\frac{s}{\\alpha} \\ ." }, { "math_id": 89, "text": "x^2 + y^2 = \\alpha^2 \\ , " }, { "math_id": 90, "text": "y^{\\prime}(s) = \\cos \\frac{s}{\\alpha} \\ ; \\ x^{\\prime}(s) = -\\sin \\frac{s}{\\alpha} \\ , " }, { "math_id": 91, "text": "y^{\\prime\\prime}(s) = -\\frac{1}{\\alpha}\\sin\\frac{s}{\\alpha} \\ ; \\ x^{\\prime\\prime}(s) = -\\frac{1}{\\alpha}\\cos \\frac{s}{\\alpha} \\ . " }, { "math_id": 92, "text": " x^{\\prime}(s)^2 + y^{\\prime}(s)^2 = 1 \\ ; \\ \\frac{1}{\\rho} = y^{\\prime\\prime}(s)x^{\\prime}(s)-y^{\\prime}(s)x^{\\prime\\prime}(s) = \\frac{1}{\\alpha} \\ . " }, { "math_id": 93, "text": "\\mathbf{u}_\\mathrm{t}(s) = \\left[-\\sin\\frac{s}{\\alpha} \\ , \\ \\cos\\frac{s}{\\alpha} \\right] \\ ; \\ \\mathbf{u}_\\mathrm{n}(s) = \\left[\\cos\\frac{s}{\\alpha} \\ , \\ \\sin\\frac{s}{\\alpha} \\right] \\ , " }, { "math_id": 94, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}s}\\mathbf{u}_\\mathrm{t}(s) = -\\frac{1}{\\alpha} \\left[\\cos\\frac{s}{\\alpha} \\ , \\ \\sin\\frac{s}{\\alpha} \\right] = -\\frac{1}{\\alpha}\\mathbf{u}_\\mathrm{n}(s) \\ ; " }, { "math_id": 95, "text": " \\ \\frac{\\mathrm{d}}{\\mathrm{d}s}\\mathbf{u}_\\mathrm{n}(s) = \\frac{1}{\\alpha} \\left[-\\sin\\frac{s}{\\alpha} \\ , \\ \\cos\\frac{s}{\\alpha} \\right] = \\frac{1}{\\alpha}\\mathbf{u}_\\mathrm{t}(s) \\ . 
" }, { "math_id": 96, "text": "s(t) = \\int_0^t \\ dt^{\\prime} \\ v(t^{\\prime}) \\ , " }, { "math_id": 97, "text": "\\mathbf{v} = v(t)\\mathbf{u}_\\mathrm{t}(s) \\ ," }, { "math_id": 98, "text": "\\mathbf{a} = \\frac{\\mathrm{d}v}{\\mathrm{d}t}\\mathbf{u}_\\mathrm{t}(s) + v\\frac{\\mathrm{d}}{\\mathrm{d}t}\\mathbf{u}_\\mathrm{t}(s) = \\frac{\\mathrm{d}v}{\\mathrm{d}t}\\mathbf{u}_\\mathrm{t}(s)-v\\frac{1}{\\alpha}\\mathbf{u}_\\mathrm{n}(s)\\frac{\\mathrm{d}s}{\\mathrm{d}t} " }, { "math_id": 99, "text": "\\mathbf{a} = \\frac{\\mathrm{d}v}{\\mathrm{d}t}\\mathbf{u}_\\mathrm{t}(s)-\\frac{v^2}{\\alpha}\\mathbf{u}_\\mathrm{n}(s) \\ , " } ]
https://en.wikipedia.org/wiki?curid=7534
753436
QT interval
Measurement made on an electrocardiogram The QT interval is a measurement made on an electrocardiogram used to assess some of the electrical properties of the heart. It is calculated as the time from the start of the Q wave to the end of the T wave, and approximates to the time taken from when the cardiac ventricles start to contract to when they finish relaxing. An abnormally long or abnormally short QT interval is associated with an increased risk of developing abnormal heart rhythms and sudden cardiac death. Abnormalities in the QT interval can be caused by genetic conditions such as long QT syndrome, by certain medications such as sotalol or pitolisant, by disturbances in the concentrations of certain salts within the blood such as hypokalaemia, or by hormonal imbalances such as hypothyroidism. Measurement. The QT interval is most commonly measured in lead II for evaluation of serial ECGs, with leads I and V5 being comparable alternatives to lead II. Leads III, aVL and V1 are generally avoided for measurement of QT interval. The accurate measurement of the QT interval is subjective because the end of the T wave is not always clearly defined and usually merges gradually with the baseline. The QT interval in an ECG complex can be measured manually by different methods, such as the threshold method, in which the end of the T wave is determined by the point at which the component of the T wave merges with the isoelectric baseline, or the tangent method, in which the end of the T wave is determined by the intersection of a tangent line extrapolated from the T wave at the point of maximum downslope to the isoelectric baseline. With the increased availability of digital ECGs with simultaneous 12-channel recording, QT measurement may also be done by the 'superimposed median beat' method. In the superimposed median beat method, a median ECG complex is constructed for each of the 12 leads. The 12 median beats are superimposed on each other and the QT interval is measured either from the earliest onset of the Q wave to the latest offset of the T wave or from the point of maximum convergence for the Q wave onset to the T wave offset. Correction for heart rate. The QT interval changes in response to the heart rate: as the heart rate increases, the QT interval shortens. These changes make it harder to compare QT intervals measured at different heart rates. To account for this, and thereby improve the reliability of QT measurement, the QT interval can be corrected for heart rate (QTc) using a variety of mathematical formulae, a process often performed automatically by modern ECG recorders. Bazett's formula. The most commonly used QT correction formula is "Bazett's formula", named after physiologist Henry Cuthbert Bazett (1885–1950), calculating the heart rate-corrected QT interval (QTcB). Bazett's formula is based on observations from a study in 1920. Bazett's formula is often given in a form that returns QTc in dimensionally suspect units, square root of seconds. The mathematically correct form of Bazett's formula is: formula_0 where QTcB is the QT interval corrected for heart rate, and RR is the interval from the onset of one QRS complex to the onset of the next QRS complex. This mathematically correct formula returns the QTc in the same units as QT, generally milliseconds. In some popular forms of this formula, it is assumed that QT is measured in milliseconds and that RR is measured in seconds, often derived from the heart rate (HR) as 60/HR. 
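As a numerical illustration of that convention, the following is a minimal Python sketch (the function name is illustrative) of Bazett's correction with QT in milliseconds and RR in seconds derived from the heart rate.

```python
from math import sqrt

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett's heart-rate-corrected QT, taking QT in milliseconds and RR in seconds."""
    return qt_ms / sqrt(rr_s)

# A measured QT of 360 ms at 75 bpm (RR = 60/75 = 0.8 s) corrects to about 402 ms;
# the same measured QT at 100 bpm (RR = 0.6 s) corrects to about 465 ms.
print(round(qtc_bazett(360, 60 / 75)))   # 402
print(round(qtc_bazett(360, 60 / 100)))  # 465
```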
With QT in milliseconds and RR in seconds, the result will be given in milliseconds per square root of second. However, reporting QTc using this formula creates a "requirement regarding the units in which the original QT and RR are measured." In either form, Bazett's non-linear QT correction formula is generally not considered accurate, as it over-corrects at high heart rates and under-corrects at low heart rates. Bazett's correction formula is one of the most suitable QT correction formulae for neonates. Fridericia's formula. Fridericia proposed an alternative correction formula (QTcF) using the cube root of RR: formula_1 Sagie's formula. The Framingham correction, also called Sagie's formula, is based on the Framingham Heart Study, which used long-term cohort data from over 5,000 subjects, and is considered a better method. formula_2 Again, here "QT" and "QTlc" are in milliseconds and "RR" is measured in seconds. Comparison of corrections. A retrospective study suggests that Fridericia's method and the Framingham method may produce results most useful for stratifying the 30-day and 1-year risks of mortality. Definitions of normal QTc vary from being equal to or less than 0.40 s (≤ 400 ms), 0.41 s (≤ 410 ms), 0.42 s (≤ 420 ms) or 0.44 s (≤ 440 ms). For risk of sudden cardiac death, "borderline QTc" in males is 431–450 ms; and, in females, 451–470 ms. An "abnormal" QTc in males is a QTc above 450 ms; and, in females, above 470 ms. If there is not a very high or low heart rate, the upper limits of QT can roughly be estimated by taking QT = QTc at a heart rate of 60 beats per minute (bpm), and subtracting 0.02 s from QT for every 10 bpm increase in heart rate. For example, taking normal QTc ≤ 0.42 s, QT would be expected to be 0.42 s or less at a heart rate of 60 bpm. For a heart rate of 70 bpm, QT would roughly be expected to be equal to or below 0.40 s. Likewise, for 80 bpm, QT would roughly be expected to be equal to or below 0.38 s. Abnormal intervals. Prolonged QTc causes premature action potentials during the late phases of depolarization. This increases the risk of developing ventricular arrhythmias, including fatal ventricular fibrillation. Higher rates of prolonged QTc are seen in females, older patients, those with high systolic blood pressure or heart rate, and those of short stature. Prolonged QTc is also associated with the ECG finding called Torsades de Pointes, which is known to degenerate into ventricular fibrillation and is associated with higher mortality rates. There are many causes of prolonged QT intervals, acquired causes being more common than genetic. Genetic causes. An abnormally prolonged QT interval could be due to long QT syndrome, whereas an abnormally shortened QT interval could be due to short QT syndrome. The QTc length is associated with variations in the NOS1AP gene. The autosomal recessive syndrome of Jervell and Lange-Nielsen is characterized by a prolonged QTc interval in conjunction with sensorineural hearing loss. Due to adverse drug reactions. Prolongation of the QT interval may be due to an adverse drug reaction; implicated drug classes include antipsychotics (especially first-generation/"typical" agents), DMARDs and antimalarial drugs, antibiotics, and various other drugs. Some second-generation antihistamines, such as astemizole, have this effect. The mechanism of action of certain antiarrhythmic drugs, like amiodarone or sotalol, involves intentional pharmacological QT prolongation. In addition, high blood alcohol concentrations prolong the QT interval. 
A possible interaction between selective serotonin reuptake inhibitors and thiazide diuretics is associated with QT prolongation. Due to pathological conditions. Hypothyroidism, a condition of low function of the thyroid gland, can cause QT prolongation at the electrocardiogram. Acute hypocalcemia causes prolongation of the QT interval, which may lead to ventricular dysrhythmias. A shortened QT can be associated with hypercalcemia. Use in drug approval studies. Since 2005, the FDA and European regulators have required that nearly all new molecular entities be evaluated in a Thorough QT (TQT) or similar study to determine a drug's effect on the QT interval. The TQT study serves to assess the potential arrhythmia liability of a drug. Traditionally, the QT interval had been evaluated by having an individual human reader measure approximately nine cardiac beats per clinical timepoint. However, a substantial portion of drug approvals after 2010 have incorporated a partially automated approach, blending automated software algorithms with expert human readers reviewing a portion of the cardiac beats, to enable the assessment of significantly more beats in order to improve precision and reduce cost. In 2014, an industrywide consortium consisting of the FDA, iCardiac Technologies and other organizations released the results of a seminal study indicating how waivers from TQT studies can be obtained by the assessment of early phase data. As the pharmaceutical industry has gained experience in performing TQT studies, it has also become evident that traditional QT correction formulas such as QTcF, QTcB, and QTcLC may not always be suitable for evaluation of drugs impacting autonomic tone. As a predictor of mortality. Electrocardiography is a safe and noninvasive tool that can be used to identify those with a higher risk of mortality. In the general population, there has been no consistent evidence that prolonged QTc interval in isolation is associated with an increase in mortality from cardiovascular disease. However, several studies have examined prolonged QT interval as a predictor of mortality for diseased subsets of the population. Rheumatoid arthritis. Rheumatoid arthritis is the most common inflammatory arthritis. Studies have linked rheumatoid arthritis with increased death from cardiovascular disease. In a 2014 study, Panoulas et al. found a 50 ms increase in QTc interval increased the odds of all-cause mortality by 2.17 in patients with rheumatoid arthritis. Patients with the highest QTc interval (&gt; 424 ms) had higher mortality than those with a lower QTc interval. The association was lost when calculations were adjusted for C-reactive protein levels. The researchers proposed that inflammation prolonged the QTc interval and created arrhythmias that were associated with higher mortality rates. However, the mechanism by which C-reactive protein is associated with the QTc interval is still not understood. Type 1 diabetes. Compared to the general population, type 1 diabetes may increase the risk of mortality, due largely to an increased risk of cardiovascular disease. Almost half of patients with type 1 diabetes have a prolonged QTc interval (&gt; 440 ms). Diabetes with a prolonged QTc interval was associated with a 29% mortality over 10 years in comparison to 19% with a normal QTc interval. Anti-hypertensive drugs increased the QTc interval, but were not an independent predictor of mortality. Type 2 diabetes. 
QT interval dispersion (QTd) is the difference between the maximum and the minimum QT interval measured across the leads of a standard 12-lead ECG, and is used as a marker of heterogeneity of ventricular repolarization. A QTd over 80 ms is considered abnormally prolonged. Increased QTd is associated with mortality in type 2 diabetes. QTd is a better predictor of cardiovascular death than QTc, which was unassociated with mortality in type 2 diabetes. A QTd higher than 80 ms carried a relative risk of 1.26 of dying from cardiovascular disease compared to a normal QTd. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
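The three heart-rate corrections discussed above are straightforward to compute. The following Python sketch is illustrative only: the function names and the example beat are mine, and the formulas are simply the Bazett, Fridericia and Framingham (Sagie) expressions given earlier, with QT in milliseconds and RR in seconds.

```python
from math import sqrt

def qtc_bazett(qt_ms, rr_s):
    """Bazett: QTc = QT / sqrt(RR), with RR expressed in seconds."""
    return qt_ms / sqrt(rr_s)

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia: QTc = QT / RR**(1/3), with RR expressed in seconds."""
    return qt_ms / rr_s ** (1 / 3)

def qtc_framingham(qt_ms, rr_s):
    """Framingham (Sagie): QTlc = 1000 * (QT/1000 + 0.154 * (1 - RR))."""
    return 1000 * (qt_ms / 1000 + 0.154 * (1 - rr_s))

# Hypothetical beat: QT = 360 ms at a heart rate of 75 bpm, so RR = 60/75 = 0.8 s.
rr = 60 / 75
for name, f in [("Bazett", qtc_bazett), ("Fridericia", qtc_fridericia),
                ("Framingham", qtc_framingham)]:
    print(f"{name}: {f(360, rr):.0f} ms")   # ~403, ~388 and ~391 ms respectively
```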
[ { "math_id": 0, "text": "QTc_B = {QT \\over \\sqrt{RR\\over 1\\text{ s}}}" }, { "math_id": 1, "text": "QTc_F = {QT \\over \\sqrt[3]{RR \\over 1\\text{ s}}}" }, { "math_id": 2, "text": "QTlc = 1000\\left(\\frac{QT}{1000} + 0.154(1 - RR)\\right)" } ]
https://en.wikipedia.org/wiki?curid=753436
7536770
Dorian M. Goldfeld
American mathematician (born 1947) Dorian Morris Goldfeld (born January 21, 1947) is an American mathematician working in analytic number theory and automorphic forms at Columbia University. Professional career. Goldfeld received his B.S. degree in 1967 from Columbia University. His doctoral dissertation, entitled "Some Methods of Averaging in the Analytical Theory of Numbers", was completed under the supervision of Patrick X. Gallagher in 1969, also at Columbia. He has held positions at the University of California at Berkeley (Miller Fellow, 1969–1971), Hebrew University (1971–1972), Tel Aviv University (1972–1973), Institute for Advanced Study (1973–1974), in Italy (1974–1976), at MIT (1976–1982), University of Texas at Austin (1983–1985) and Harvard (1982–1985). Since 1985, he has been a professor at Columbia University. He is a member of the editorial board of "Acta Arithmetica" and of "The Ramanujan Journal". On January 1, 2018 he became the Editor-in-Chief of the Journal of Number Theory. He is a co-founder and board member of Veridify Security, formerly SecureRF, a corporation that has developed the world's first linear-based security solutions. Goldfeld advised several doctoral students including M. Ram Murty. In 1986, he brought Shou-Wu Zhang to the United States to study at Columbia. Research interests. Goldfeld's research interests include various topics in number theory. In his thesis, he proved a version of Artin's conjecture on primitive roots on the average without the use of the Riemann Hypothesis. In 1976, Goldfeld provided an ingredient for the effective solution of Gauss's class number problem for imaginary quadratic fields. Specifically, he proved an effective lower bound for the class number of an imaginary quadratic field assuming the existence of an elliptic curve whose L-function had a zero of order at least 3 at formula_0. (Such a curve was found soon after by Gross and Zagier). This effective lower bound then allows the determination of all imaginary fields with a given class number after a finite number of computations. His work on the Birch and Swinnerton-Dyer conjecture includes the proof of an estimate for a partial Euler product associated to an elliptic curve, bounds for the order of the Tate–Shafarevich group. Together with his collaborators, Dorian Goldfeld has introduced the theory of multiple Dirichlet series, objects that extend the fundamental Dirichlet series in one variable. He has also made contributions to the understanding of Siegel zeroes, to the ABC conjecture, to modular forms on formula_1, and to cryptography (Arithmetica cipher, Anshel–Anshel–Goldfeld key exchange). Together with his wife, Dr. Iris Anshel, and father-in-law, Dr. Michael Anshel, both mathematicians, Dorian Goldfeld founded the field of braid group cryptography. Awards and honors. In 1987 he received the Frank Nelson Cole Prize in Number Theory, one of the prizes in Number Theory, for his solution of Gauss's class number problem for imaginary quadratic fields. He has also held the Sloan Fellowship (1977–1979) and in 1985 he received the Vaughan prize. In 1986 he was an invited speaker at the International Congress of Mathematicians in Berkeley. In April 2009 he was elected a Fellow of the American Academy of Arts and Sciences. In 2012 he became a fellow of the American Mathematical Society. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "s=1/2" }, { "math_id": 1, "text": "\\operatorname{GL}(n)" } ]
https://en.wikipedia.org/wiki?curid=7536770
753756
Lissajous curve
Mathematical curve produced by a specific pair of parametric equations A Lissajous curve, also known as Lissajous figure or Bowditch curve, is the graph of a system of parametric equations formula_0 which describe the superposition of two perpendicular oscillations in x and y directions of different angular frequency ("a" and "b"). The resulting family of curves was investigated by Nathaniel Bowditch in 1815, and later in more detail in 1857 by Jules Antoine Lissajous (for whom it has been named). Such motions may be considered as a particular kind of complex harmonic motion. The appearance of the figure is sensitive to the ratio "a"/"b". For a ratio of 1, when the frequencies match ("a" = "b"), the figure is an ellipse, with special cases including circles ("A" = "B", "δ" = π/2 radians) and lines ("δ" = 0). A small change to one of the frequencies will mean the x oscillation after one cycle will be slightly out of synchronization with the y motion, so the ellipse will fail to close and will trace a slightly adjacent curve during the next orbit, showing as a precession of the ellipse. The pattern closes if the frequencies are in a whole-number ratio, i.e. if "a"/"b" is rational. Another simple Lissajous figure is the parabola ("b"/"a" = 2, "δ" = π/4). Again, a small shift of one frequency away from the ratio 2 will result in the trace not closing but performing multiple successively shifted loops, closing only if the ratio is rational, as before. A complex dense pattern may form (see below). The visual form of such curves is often suggestive of a three-dimensional knot, and indeed many kinds of knots, including those known as Lissajous knots, project to the plane as Lissajous figures. Visually, the ratio "a"/"b" determines the number of "lobes" of the figure. For example, a ratio of 3/1 or 1/3 produces a figure with three major lobes (see image). Similarly, a ratio of 5/4 produces a figure with five horizontal lobes and four vertical lobes. Rational ratios produce closed (connected) or "still" figures, while irrational ratios produce figures that appear to rotate. The ratio "A"/"B" determines the relative width-to-height ratio of the curve. For example, a ratio of 2/1 produces a figure that is twice as wide as it is high. Finally, the value of "δ" determines the apparent "rotation" angle of the figure, viewed as if it were actually a three-dimensional curve. For example, "δ" = 0 produces "x" and "y" components that are exactly in phase, so the resulting figure appears as an apparent three-dimensional figure viewed from straight on (0°). In contrast, any non-zero "δ" produces a figure that appears to be rotated, either as a left–right or an up–down rotation (depending on the ratio "a"/"b"). Lissajous figures where "a" = 1, "b" = "N" ("N" is a natural number) and formula_1 are Chebyshev polynomials of the first kind of degree "N". This property is exploited to produce a set of points, called Padua points, at which a function may be sampled in order to compute either a bivariate interpolation or quadrature of the function over the domain [−1,1] × [−1,1]. The relation of some Lissajous curves to Chebyshev polynomials is easier to see if the Lissajous curve which generates each of them is expressed using cosine functions rather than sine functions. formula_2 Examples. The animation shows the curve adaptation as the ratio "a"/"b" increases continuously from 0 to 1 in steps of 0.01 ("δ" = 0). Below are examples of Lissajous figures with an odd natural number "a", an even natural number "b", and |"a" − "b"| = 1. Generation. 
Prior to modern electronic equipment, Lissajous curves could be generated mechanically by means of a harmonograph. Practical application. Lissajous curves can also be generated using an oscilloscope (as illustrated). An octopus circuit can be used to demonstrate the waveform images on an oscilloscope. Two phase-shifted sinusoid inputs are applied to the oscilloscope in X-Y mode and the phase relationship between the signals is presented as a Lissajous figure. In the professional audio world, this method is used for real-time analysis of the phase relationship between the left and right channels of a stereo audio signal. On larger, more sophisticated audio mixing consoles, an oscilloscope may be built in for this purpose. On an oscilloscope, we suppose "x" is CH1 and "y" is CH2, "A" is the amplitude of CH1 and "B" is the amplitude of CH2, "a" is the frequency of CH1 and "b" is the frequency of CH2, so "a"/"b" is the ratio of frequencies of the two channels, and "δ" is the phase shift of CH1. A purely mechanical application of a Lissajous curve with "a" = 1, "b" = 2 is in the driving mechanism of the Mars Light type of oscillating beam lamps popular with railroads in the mid-1900s. The beam in some versions traces out a lopsided figure-8 pattern on its side. Application for the case of "a" = "b". When the input to an LTI system is sinusoidal, the output is sinusoidal with the same frequency, but it may have a different amplitude and some phase shift. Using an oscilloscope that can plot one signal against another (as opposed to one signal against time) to plot the output of an LTI system against the input to the LTI system produces an ellipse that is a Lissajous figure for the special case of "a" = "b". The aspect ratio of the resulting ellipse is a function of the phase shift between the input and output, with an aspect ratio of 1 (perfect circle) corresponding to a phase shift of ±90° and an aspect ratio of ∞ (a line) corresponding to a phase shift of 0° or 180°. The figure below summarizes how the Lissajous figure changes over different phase shifts. The phase shifts are all negative so that delay semantics can be used with a causal LTI system (note that −270° is equivalent to +90°). The arrows show the direction of rotation of the Lissajous figure. In engineering. A Lissajous curve is used in experimental tests to determine if a device may be properly categorized as a memristor. It is also used to compare two different electrical signals: a known reference signal and a signal to be tested. In popular culture. Company logos. Lissajous figures are sometimes used in graphic design as logos. In music education. Lissajous curves have been used in the past to graphically represent musical intervals through the use of the harmonograph, a device that consists of pendulums oscillating at different frequency ratios. Because different tuning systems employ different frequency ratios to define intervals, these can be compared using Lissajous curves to observe their differences. Therefore, Lissajous curves have applications in music education by graphically representing differences between intervals and among tuning systems. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
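Because the curves are defined directly by the parametric equations above, they are also easy to generate numerically. The following Python sketch is illustrative (the function name and the sample parameters are mine, not drawn from the article's sources); it samples x = A sin(at + δ), y = B sin(bt) over one period and plots a 3:2 figure next to the a = b ellipse case.

```python
import numpy as np
import matplotlib.pyplot as plt

def lissajous(a, b, delta, A=1.0, B=1.0, n=2000):
    """Sample x = A*sin(a*t + delta), y = B*sin(b*t) over one full period."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return A * np.sin(a * t + delta), B * np.sin(b * t)

# Integer frequency ratios give closed figures; a = b with a phase shift gives an ellipse.
for a, b, delta in [(3, 2, np.pi / 2), (1, 1, np.pi / 4)]:
    x, y = lissajous(a, b, delta)
    plt.plot(x, y, label=f"a={a}, b={b}, delta={delta:.2f}")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```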
[ { "math_id": 0, "text": "x=A\\sin(at+\\delta),\\quad y=B\\sin(bt)," }, { "math_id": 1, "text": "\\delta=\\frac{N-1}{N}\\frac{\\pi}{2} " }, { "math_id": 2, "text": "x=\\cos(t),\\quad y=\\cos(Nt)" } ]
https://en.wikipedia.org/wiki?curid=753756
7537723
Tacnode
Point on a curve at which two or more osculating circles are tangent In classical algebraic geometry, a tacnode (also called a point of osculation or double cusp) is a kind of singular point of a curve. It is defined as a point where two (or more) osculating circles to the curve at that point are tangent. This means that two branches of the curve have ordinary tangency at the double point. The canonical example is formula_1 A tacnode of an arbitrary curve may then be defined from this example, as a point of self-tangency locally diffeomorphic to the point at the origin of this curve. Another example of a tacnode is given by the links curve shown in the figure, with equation formula_0 More general background. Consider a smooth real-valued function of two variables, say "f" ("x", "y") where x and y are real numbers. So f is a function from the plane to the line. The space of all such smooth functions is acted upon by the group of diffeomorphisms of the plane and the diffeomorphisms of the line, i.e. diffeomorphic changes of coordinate in both the source and the target. This action splits the whole function space up into equivalence classes, i.e. orbits of the group action. One such family of equivalence classes is denoted by A_k^±, where k is a non-negative integer. This notation was introduced by V. I. Arnold. A function f is said to be of type A_k^± if it lies in the orbit of formula_2 i.e. there exists a diffeomorphic change of coordinate in source and target which takes f into one of these forms. These simple forms formula_3 are said to give normal forms for the type A_k^±-singularities. A curve with equation "f" = 0 will have a tacnode, say at the origin, if and only if f has a type A_3^−-singularity at the origin. Notice that a node formula_4 corresponds to a type A_1^−-singularity. A tacnode corresponds to a type A_3^−-singularity. In fact each type A_{2n+1}^−-singularity, where "n" ≥ 0 is an integer, corresponds to a curve with self-intersection. As n increases, the order of self-intersection increases: transverse crossing, ordinary tangency, etc. The type A_{2n+1}^+-singularities are of no interest over the real numbers: they all give an isolated point. Over the complex numbers, type A_{2n+1}^+-singularities and type A_{2n+1}^−-singularities are equivalent: ("x", "y") → ("x", "iy") gives the required diffeomorphism of the normal forms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
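For the canonical example above, the tacnode can be seen directly by factoring the defining polynomial. The short SymPy sketch below is illustrative (the variable names are mine): it splits y² − x⁴ into its two smooth branches y = x² and y = −x², which meet at the origin with the same tangent line y = 0, i.e. with ordinary tangency rather than a transverse crossing.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = y**2 - x**4   # canonical tacnode at the origin

# Factors (up to sign and ordering) as (y - x**2)*(y + x**2): two smooth branches.
print(sp.factor(f))

# Both branches pass through the origin with slope 0, so they are tangent there.
for branch in (x**2, -x**2):
    print(branch.subs(x, 0), sp.diff(branch, x).subs(x, 0))   # 0 0 for each branch
```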
[ { "math_id": 0, "text": "(x^2+y^2-3x)^2 - 4x^2(2-x) = 0." }, { "math_id": 1, "text": "y^2-x^4= 0." }, { "math_id": 2, "text": "x^2 \\pm y^{k+1}," }, { "math_id": 3, "text": "x^2 \\pm y^{k+1}" }, { "math_id": 4, "text": "(x^2-y^2=0)" } ]
https://en.wikipedia.org/wiki?curid=7537723
75378790
Thiele's voting rules
Multiwinner voting rules Thiele's voting rules are rules for multiwinner voting. They allow voters to vote for individual candidates rather than parties, but still guarantee proportional representation. They were published by Thorvald Thiele in Danish in 1895, and translated to English by Svante Janson in 2016. They were used in Swedish parliamentary elections to distribute seats within parties, and are still used in city council elections. Background. In multiwinner approval voting, each voter can vote for one or more candidates, and the goal is to select a fixed number "k" of winners (where "k" may be, for example, the number of parliament members). The question is how to determine the set of winners? Thiele wanted to keep the vote for individual candidates, so that voters can approve candidates based on their personal merits. However, Thiele's methods can handle more general situations, in which voters may vote for candidates from different parties (in fact, the method ignores the information on which candidate belongs to which party).Sec.1 Thiele's rules for approval ballots. We denote the number of voters by "n", the number of candidates by "m", and the required number of committee members "k". With approval ballots, each voter "i" has an "approval set" "Ai", containing the subset of candidates that "i" approves. The goal is: given the sets "Ai", select a subset "W" of "winning candidates", such that |"W"|="k". This subset represents the elected committee. Thiele's rules are based on the concept of "satisfaction function". It is a function "f" that maps the number of committee-members approved by a voter, to a numeric amount representing the satisfaction of this voter from the committee. So if voter "i" approves a set of candidates "Ai", and the set of elected candidates is "W", then the voter's satisfaction is formula_0. The goal of Thiele's methods is to find a committee "W" that maximizes the total satisfaction (following the utilitarian rule). The results obviously depend on the function "f". Without loss of generality, we can normalize "f" such that f(0)=0 and f(1)=1. Thiele claims that the selection of "f" should depend on the purpose of the elections:Sec.4 For each choice of "f", Thiele suggested three methods. Optimization methods: find the committee that maximizes the total satisfaction. In general, solving the global optimization problem is an NP-hard computational problem, except when f("r")="r". Therefore, Thiele suggested two greedy approximation algorithms: Addition methods: Candidates are elected one by one; at each round, the elected candidate is one that maximizes the increase in the total satisfaction. This is equivalent to weighted voting where each voter "i," with "ri" approved winners so far, has a weight of f("ri"+1)-f("ri"). Elimination methods work in the opposite direction to addition methods: starting with the set of all "m" candidates, candidates are removed one by one, until only "k" remain; at each round, the removed candidate is one that minimizes the decrease in the total satisfaction. Thiele's rules for ranked ballots. There is a ranked ballot version for Thiele's addition method. At each round, each voter "i", with "ri" approved winners so far, has a voting weight of f("ri"+1)-f("ri"). Each voter's weight is counted "only for his top remaining candidate". The candidate with the highest total weight is elected. 
It was proposed in the Swedish parliament in 1912 and rejected, but was later adopted for elections inside city and county councils, and is still used for that purpose. Properties. Homogeneity. For each possible ballot "b", let "vb" be the number of voters who voted exactly "b" (for example: approved exactly the same set of candidates). Let "pb" be the fraction of voters who voted exactly "b" (= "vb" / the total number of votes). A voting method is called "homogeneous" if it depends only on the fractions "pb". So if the numbers of votes are all multiplied by the same constant, the method returns the same outcome. Thiele's methods are homogeneous in that sense. Monotonicity. Thiele's addition method satisfies a property known as house monotonicity: when the number of committee members increases, all the previously elected members are still elected. This follows immediately from the method description. Thiele's elimination method is house-monotone too. But Thiele's optimization method generally violates house monotonicity, as noted by Thiele himself. In fact, Thiele's optimization method satisfies house-monotonicity only for the (normalized) satisfaction function f("r")="r". This also implies that Thiele's optimization method coincides with the addition method if and only if f("r")="r". Proportionality. Lackner and Skowron show that Thiele's voting rules can be used to interpolate between regressive and degressive proportionality: PAV is proportional; rules in which the slope of the score function is above that of PAV satisfy regressive proportionality; and rules in which the slope of the score function is below that of PAV satisfy degressive proportionality. Moreover, if the satisfaction-score of the "i"-th approved candidate is (1/"p")^"i", then for various values of "p" we get the entire spectrum between CC and AV.
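Thiele's addition method is simple to state as code. The sketch below (Python; the function names and the toy ballots are illustrative and not drawn from Thiele's paper or the cited studies) elects a committee greedily for an arbitrary satisfaction function f. With the harmonic satisfaction function it behaves like sequential proportional approval voting; with f(r) = r it reduces to plain approval voting.

```python
def thiele_addition(approval_sets, candidates, k, f):
    """Addition method: in each round, elect the candidate that most increases
    the total satisfaction, i.e. the sum over voters of f(|A_i ∩ W|)."""
    committee = set()
    for _ in range(k):
        def gain(c):
            return sum(f(len(A & (committee | {c}))) - f(len(A & committee))
                       for A in approval_sets)
        best = max((c for c in candidates if c not in committee), key=gain)
        committee.add(best)
    return sorted(committee)

harmonic = lambda r: sum(1.0 / j for j in range(1, r + 1))   # PAV-style satisfaction

ballots = [{"a", "b"}] * 4 + [{"c"}] * 3   # 4 voters approve {a, b}, 3 approve {c}
print(thiele_addition(ballots, ["a", "b", "c"], 2, harmonic))      # ['a', 'c']
print(thiele_addition(ballots, ["a", "b", "c"], 2, lambda r: r))   # ['a', 'b'] (plain AV)
```

With the harmonic function the smaller group still receives a seat, which illustrates the proportionality the rule is designed for.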
[ { "math_id": 0, "text": "f(|A_i \\cap W|)" } ]
https://en.wikipedia.org/wiki?curid=75378790
75381527
Egan conjecture
Conjecture in geometry In geometry, the Egan conjecture gives a necessary and sufficient condition on the radii of two spheres and the distance between their centers for the existence of a simplex that is completely contained inside the larger sphere and completely encloses the smaller sphere. The conjecture generalizes an equality discovered by William Chapple (and later independently by Leonhard Euler), which is a special case of Poncelet's closure theorem, as well as the Grace–Danielsson inequality in one dimension higher. The conjecture was proposed in 2014 by the Australian mathematician and science-fiction author Greg Egan. The "sufficient" part was proved in 2018, and the "necessary" part was proved in 2023. Basics. For an arbitrary triangle (formula_0-simplex), the radius formula_1 of its inscribed circle, the radius formula_2 of its circumcircle and the distance formula_3 of their centers are related through Euler's theorem in geometry: formula_4, which was published by William Chapple in 1746 and by Leonhard Euler in 1765. For two spheres (formula_0-spheres) with respective radii formula_1 and formula_2, fulfilling formula_5, there exists a (non-regular) tetrahedron (formula_6-simplex), which is completely contained inside the larger sphere and completely encloses the smaller sphere, if and only if the distance formula_3 of their centers fulfills the Grace–Danielsson inequality: formula_7. This result was proven independently by John Hilton Grace in 1917 and G. Danielsson in 1949. A connection of the inequality with quantum information theory was described by Anthony Milne. Conjecture. Consider formula_8-dimensional Euclidean space formula_9 for formula_10. For two formula_11-spheres with respective radii formula_1 and formula_2, fulfilling formula_5, there exists a formula_8-simplex, which is completely contained inside the larger sphere and completely encloses the smaller sphere, if and only if the distance formula_3 of their centers fulfills: formula_12. The conjecture was proposed by Greg Egan in 2014. For the case formula_13, where the inequality reduces to formula_14, the conjecture is also true, but trivially so. A formula_15-sphere is just composed of two points and a formula_16-simplex is just a closed interval. The desired formula_16-simplex for two given formula_15-spheres can simply be chosen as the closed interval between the two points of the larger sphere, which contains the smaller sphere if and only if it contains both of its points, at respective distances formula_17 and formula_18 from the center of the larger sphere, hence if and only if the above inequality is satisfied. Status. Greg Egan showed that the condition is sufficient in comments under a blog post by John Baez in 2014. The comments were later lost due to a rearrangement of the website, but the central parts were copied into the original blog post. Further comments by Greg Egan on 16 April 2018 concern the search for a generalized conjecture involving ellipsoids. In October 2023, Sergei Drozdov published a paper on arXiv showing that the condition is also necessary. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
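Since the conjectured criterion is a single inequality in r, R, d and n, it is easy to evaluate numerically. The Python sketch below is purely illustrative (the function name and the sample values are mine); it encodes the condition from the statement above and shows how the n = 2 and n = 3 cases reduce to Euler's relation and to the Grace–Danielsson inequality.

```python
def egan_condition(n, r, R, d):
    """Conjectured criterion (assuming 0 < r < R): a nested n-simplex exists
    iff d**2 <= (R + (n - 2) * r) * (R - n * r)."""
    return d**2 <= (R + (n - 2) * r) * (R - n * r)

# n = 2 reduces to d^2 <= R(R - 2r) (a triangle between two circles),
# n = 3 to the Grace–Danielsson inequality d^2 <= (R + r)(R - 3r).
print(egan_condition(2, r=1.0, R=2.5, d=0.9))   # True:  0.81 <= 2.5 * 0.5 = 1.25
print(egan_condition(3, r=1.0, R=3.5, d=1.6))   # False: 2.56 >  4.5 * 0.5 = 2.25
```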
[ { "math_id": 0, "text": "2" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "d^2=R(R-2r)" }, { "math_id": 5, "text": "r<R" }, { "math_id": 6, "text": "3" }, { "math_id": 7, "text": "d^2\\leq(R+r)(R-3r)" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "\\mathbb R^n" }, { "math_id": 10, "text": "n\\geq 2" }, { "math_id": 11, "text": "n-1" }, { "math_id": 12, "text": "d^2\\leq(R+(n-2)r)(R-nr)" }, { "math_id": 13, "text": "n=1" }, { "math_id": 14, "text": "d\\leq R-r" }, { "math_id": 15, "text": "0" }, { "math_id": 16, "text": "1" }, { "math_id": 17, "text": "|d-r|" }, { "math_id": 18, "text": "d+r" } ]
https://en.wikipedia.org/wiki?curid=75381527
75381757
Abess
Machine learning algorithm abess (Adaptive Best Subset Selection, also ABESS) is a machine learning method designed to address the problem of best subset selection. It aims to determine which features or variables are crucial for optimal model performance when provided with a dataset and a prediction task. abess was introduced by Zhu in 2020 and it dynamically selects the appropriate model size adaptively, eliminating the need for selecting regularization parameters. abess is applicable in various statistical and machine learning tasks, including linear regression, the Single-index model, and other common predictive models. abess can also be applied in biostatistics. Basic Form. The basic form of abess is employed to address the optimal subset selection problem in general linear regression. abess is an formula_0 method, it is characterized by its polynomial time complexity and the property of providing both unbiased and consistent estimates. In the context of linear regression, assuming we have knowledge of formula_1 independent samples formula_2, where formula_3 and formula_4, we define formula_5 and formula_6. The following equation represents the general linear regression model: formula_7 To obtain appropriate parameters formula_8, one can consider the loss function for linear regression: formula_9 In abess, the initial focus is on optimizing the loss function under the formula_0 constraint. That is, we consider the following problem: formula_10 where formula_11 represents the desired size of the support set, and formula_12 is the formula_0 norm of the vector. To address the optimization problem described above, abess iteratively exchanges an equal number of variables between the active set and the inactive set. In each iteration, the concept of sacrifice is introduced as follows: formula_14 formula_16 Here are the key elements in the above equations: The iterative process involves exchanging variables, with the aim of minimizing the sacrifices in the active set while maximizing the sacrifices in the inactive set during each iteration. This approach allows abess to efficiently search for the optimal feature subset. In abess, select an appropriate formula_23 and optimize the above problem for active sets size formula_24 using the information criterion formula_25 to adaptively choose the appropriate active set size formula_11 and obtain its corresponding abess estimator. Generalizations. The splicing algorithm in abess can be employed for subset selection in other models. Distribution-Free Location-Scale Regression. In 2023, Siegfried extends abess to the case of Distribution-Free and Location-Scale. Specifically, it considers the optimization problem formula_26 subject to formula_27 where formula_28 is a loss function, formula_29 is a parameter vector, formula_30 and formula_31 are vectors, and formula_32 is a data vector. This approach, demonstrated across various applications, enables parsimonious regression modeling for arbitrary outcomes while maintaining interpretability through innovative subset selection procedures. Groups Selection. In 2023, Zhang applied the splicing algorithm to group selection, optimizing the following model: formula_33 Here are the symbols involved: Regression with Corrupted Data. Zhang applied the splicing algorithm to handle corrupted data. Corrupted data refers to information that has been disrupted or contains errors during the data collection or recording process. 
This interference may include sensor inaccuracies, recording errors, communication issues, or other external disturbances, leading to inaccurate or distorted observations within the dataset. Single Index Models. In 2023, Tang applied the splicing algorithm to optimal subset selection in the Single-index model. The form of the Single Index Model (SIM) is given by formula_37 where formula_38 is the parameter vector and formula_39 is the error term. The corresponding loss function is defined as formula_40 where formula_41 is the rank vector and formula_42 is the rank of formula_43 in formula_44. The estimation problem addressed by this algorithm is formula_45 Geographically Weighted Regression Model. In 2023, Wu applied the splicing algorithm to geographically weighted regression (GWR). GWR is a spatial analysis method, and Wu's research focuses on improving GWR performance in handling geographical data regression modeling. This is achieved through the application of an l0-norm variable adaptive selection method, which simultaneously performs model selection and coefficient optimization, enhancing the accuracy of regression modeling for geographic spatial data. Distributed Systems. In 2023, Chen introduced an innovative method addressing challenges in high-dimensional distributed systems, proposing an efficient algorithm for abess. A distributed system is a computational model that distributes computing tasks across multiple independent nodes to achieve more efficient, reliable, and scalable data processing. In a distributed system, individual computing nodes can work simultaneously, collaboratively completing the overall tasks, thereby enhancing system performance and processing capabilities. However, within distributed systems, there had been a lack of efficient algorithms for optimal subset selection. To address this gap, Chen introduced a novel communication-efficient approach for handling optimal subset selection in distributed systems. Software Package. The abess library (version 0.4.5) is an R package and Python package based on C++ algorithms. It is open-source on GitHub. The library can be used for optimal subset selection in linear regression, (multi-)classification, and censored-response models. The abess package allows for parameters to be chosen in a grouped format. Information and tutorials are available on the abess homepage. Application. abess can be applied in biostatistics, for example to assess the severity of illness in COVID-19 patients, to study antibiotic resistance in Mycobacterium tuberculosis, to explore prognostic factors in neck pain, and to develop prediction models for severe pain in patients after percutaneous nephrolithotomy. abess can also be applied to gene selection. In the field of data-driven partial differential equation (PDE) discovery, Thanasutives applied abess to automatically identify parsimonious governing PDEs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
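To make the formula_0-constrained objective from the "Basic Form" section concrete, the sketch below solves a tiny instance by brute force. It is purely illustrative: it enumerates all supports of size at most s, which is exponential in p, whereas abess itself reaches this kind of solution with the much faster splicing iterations; the function name and the simulated data are mine and do not reflect the abess package's actual API.

```python
import numpy as np
from itertools import combinations

def best_subset(X, y, s):
    """Exhaustively minimize ||y - X b||^2 / (2n) subject to ||b||_0 <= s."""
    n, p = X.shape
    best_loss, best_beta = np.inf, np.zeros(p)
    for k in range(s + 1):
        for cols in map(list, combinations(range(p), k)):
            beta = np.zeros(p)
            if cols:
                beta[cols], *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            loss = np.sum((y - X @ beta) ** 2) / (2 * n)
            if loss < best_loss:
                best_loss, best_beta = loss, beta
    return best_beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.standard_normal(100)
print(np.flatnonzero(best_subset(X, y, s=2)))   # expected support: [1 4]
```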
[ { "math_id": 0, "text": "l_0" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "(x_i, y_i), i=1, \\ldots, n" }, { "math_id": 3, "text": "x_i \\in \\mathbb{R}^{p \\times 1}" }, { "math_id": 4, "text": " y_i \\in \\mathbb{R} " }, { "math_id": 5, "text": "X = (x_1, \\ldots, x_n)^{\\top}" }, { "math_id": 6, "text": "y = (y_1, \\ldots, y_n)^{\\top}" }, { "math_id": 7, "text": "\n y = X\\beta + \\varepsilon.\n" }, { "math_id": 8, "text": "\\beta" }, { "math_id": 9, "text": "\n \\mathcal{L}_n^{\\text{LR}}(\\beta; X, y) = \\frac{1}{2n}\\|y - X\\beta\\|_2^2.\n" }, { "math_id": 10, "text": "\n \\min_{\\beta \\in \\mathbb{R}^{p \\times 1}} \\mathcal{L}_n^{\\text{LR}}(\\beta; X, y), \\text{ subject to } \\|\\beta\\|_0 \\leq s,\n" }, { "math_id": 11, "text": "s" }, { "math_id": 12, "text": "\\|\\beta\\|_0 = \\sum_{i=1}^p \\mathcal{I}_{(\\beta_i \\neq 0)}" }, { "math_id": 13, "text": " j\\in\\hat{\\mathcal A} " }, { "math_id": 14, "text": "\n\\xi_j = \\mathcal L_n^{\\text{LR}}\\left(\\hat{\\boldsymbol{\\beta}}^{\\mathcal{A} \\backslash\\{j\\}}\\right) - \\mathcal L_n^{\\text{LR}}\\left(\\hat{\\boldsymbol{\\beta}}^{\\mathcal{A}}\\right) = \\frac{\\boldsymbol{X}_j^{\\top} \\boldsymbol{X}_j}{2 n}\\left(\\hat{\\beta}_j\\right)^2\n" }, { "math_id": 15, "text": " j\\notin\\hat{\\mathcal A} " }, { "math_id": 16, "text": "\n\\xi_j = \\mathcal L_n^{\\text{LR}}\\left(\\hat{\\boldsymbol{\\beta}}^{\\mathcal{A}}\\right) - \\mathcal L_n^{\\text{LR}}\\left(\\hat{\\boldsymbol{\\beta}}^{\\mathcal{A}}+\\hat{\\boldsymbol{t}}^{\\{j\\}}\\right) = \\frac{\\boldsymbol{X}_j^{\\top} \\boldsymbol{X}_j}{2 n}\\left(\\frac{\\hat{\\mathrm{d}}_j}{\\boldsymbol{X}_j^{\\top} \\boldsymbol{X}_j / n}\\right)^2\n" }, { "math_id": 17, "text": "\\hat{\\beta}^{\\mathcal A}" }, { "math_id": 18, "text": "\\hat{\\mathcal A}" }, { "math_id": 19, "text": "\\hat{\\boldsymbol{\\beta}}^{\\mathcal{A} \\backslash\\{j\\}}" }, { "math_id": 20, "text": "\\hat{\\boldsymbol{t}}^{\\{j\\}}=\\arg \\min _t \\mathcal L_n^{\\text{LR}}\\left(\\hat{\\boldsymbol{\\beta}}^{\\mathcal{A}}+\\boldsymbol{t}^{\\{j\\}}\\right)" }, { "math_id": 21, "text": "t^{\\{j\\}}" }, { "math_id": 22, "text": "\\hat{d}_j=\\boldsymbol{X}_j^{\\top}(\\boldsymbol{y}-\\boldsymbol{X} \\hat{\\boldsymbol{\\beta}}) / n" }, { "math_id": 23, "text": "s_{\\max}" }, { "math_id": 24, "text": "s = 1, \\ldots, s_{\\max}" }, { "math_id": 25, "text": " \\text{GIC} = n \\log \\mathcal{L}_n^{\\text{LR}} + s \\log p \\log \\log n," }, { "math_id": 26, "text": "\\max_{\\boldsymbol{\\vartheta} \\in \\mathbb{R}^P, \\boldsymbol{\\beta} \\in \\mathbb{R}^J, \\boldsymbol{\\gamma} \\in \\mathbb{R}^J} \\sum_{i=1}^N \\ell_i\\left(\\boldsymbol{\\vartheta}, \\boldsymbol{x}_i^{\\top} \\boldsymbol{\\beta},{\\sqrt{\\exp \\left(\\boldsymbol{x}_i^{\\top} \\boldsymbol{\\gamma}\\right)}}^{-1}\\right)," }, { "math_id": 27, "text": "\\left\\|\\left(\\boldsymbol{\\beta}^{\\top}, \\boldsymbol{\\gamma}^{\\top}\\right)^{\\top}\\right\\|_0 \\leq s," }, { "math_id": 28, "text": "\\ell_i" }, { "math_id": 29, "text": "\\boldsymbol{\\vartheta}" }, { "math_id": 30, "text": "\\boldsymbol{\\beta}" }, { "math_id": 31, "text": "\\boldsymbol{\\gamma}" }, { "math_id": 32, "text": "\\boldsymbol{x}_i" }, { "math_id": 33, "text": "\n\\min_{\\boldsymbol{\\beta} \\in \\mathbb{R}^p} \\mathcal L_n^{\\text{LR}}(\\beta;X,y) \\text{ subject to } \\sum_{j=1}^J I\\left(\\|\\boldsymbol{\\beta}_{G_j}\\|_2 \\neq 0\\right) \\leq s\n" }, { "math_id": 34, "text": "J" }, { "math_id": 35, "text": "G_j" }, { "math_id": 36, "text": "j" }, { "math_id": 37, 
"text": "\ny_i = g(\\boldsymbol{b}^{\\top} \\boldsymbol{x}_i, e_i), \\quad i = 1, \\ldots, n,\n" }, { "math_id": 38, "text": "\\boldsymbol{b}" }, { "math_id": 39, "text": "e_i" }, { "math_id": 40, "text": "\nl_n(\\boldsymbol{\\beta}) = \\sum_{i=1}^n \\left(\\frac{r_i}{n} - \\frac{1}{2} - \\boldsymbol{x}_i^{\\top} \\boldsymbol{\\beta}\\right)^2,\n" }, { "math_id": 41, "text": "\\boldsymbol{r}" }, { "math_id": 42, "text": "r_i" }, { "math_id": 43, "text": "y_i" }, { "math_id": 44, "text": "\\boldsymbol{y}" }, { "math_id": 45, "text": "\n\\min_{\\boldsymbol{\\beta}} l_n(\\boldsymbol{\\beta}), \\text { s.t. } \\|\\boldsymbol{\\beta}\\|_0 \\leq s.\n" } ]
https://en.wikipedia.org/wiki?curid=75381757
75383618
Summation theorems (biochemistry)
In metabolic control analysis, a variety of theorems have been discovered and discussed in the literature. The most well known of these are flux and concentration control coefficient summation relationships. These theorems are the result of the stoichiometric structure and mass conservation properties of biochemical networks. Equivalent theorems have not been found, for example, in electrical or economic systems. The summation of the flux and concentration control coefficients were discovered independently by the Kacser/Burns group and the Heinrich/Rapoport group in the early 1970s and late 1960s. If we define the control coefficients using enzyme concentration, then the summation theorems are written as: formula_0 formula_1 However these theorems depend on the assumption that reaction rates are proportional to enzyme concentration. An alternative way to write the theorems is to use control coefficients that are defined with respect to the local rates which is therefore independent of how rates respond to changes in enzyme concentration: formula_2 formula_3 Although originally derived for simple linear chains of enzyme catalyzed reactions, it became apparent that the theorems applied to pathways of any structure including pathways with complex regulation involving feedback control. Derivation. There are different ways to derive the summation theorems. One is analytical and rigorous using a combination of linear algebra and calculus. The other is less rigorous, but more operational and intuitive. The latter derivation is shown here. Consider the two-step pathway: formula_4 where formula_5 and formula_6 are fixed species so that the system can achieve a steady-state. Let the pathway be at steady-state and imagine increasing the concentration of enzyme, formula_7, catalyzing the first step, formula_8, by an amount, formula_9. The effect of this is to increase the steady-state levels of S and flux, J. Let us now increase the level of formula_10 by formula_11 such that the change in S is restored to the original value it had at steady-state. The net effect of these two changes is by definition, formula_12. There are two ways to look at this thought experiment, from the perspective of the system and from the perspective of local changes. For the system we can compute the overall change in flux or species concentration by adding the two control coefficient terms, thus: formula_13 formula_14 We can also look at what is happening locally at every reaction step for which there will be two: one for formula_15, and another for formula_16. Since the thought experiment guarantees that formula_12, the local equations are quite simple: formula_17 formula_18 where the formula_19 terms are the elasticities. However, because the enzyme elasticity is equal to one, these reduce to: formula_20 formula_21 Because the pathway is linear, at steady-state, formula_22. We can substitute these expressions into the system equations to give: formula_23 formula_24 Note that at steady state the change in formula_15 and formula_16 must be the same, therefore formula_25. Setting formula_26, we can rewrite the above equations as: formula_27 formula_28 We then conclude through cancelation of formula_29 since formula_30, that: formula_31 formula_32 Interpretation. The summation theorems can be interpreted in various ways. The first is that the influence enzymes have over steady-state fluxes and concentrations is not necessarily concentrated at one location. 
In the past, control of a pathway was considered to be located at one point only, called the master reaction or rate-limiting step. The summation theorem suggests this does not necessarily have to be the case. The flux summation theorem also suggests that there is a total amount of flux control in a pathway, such that if one step gains control another step must lose control. Although flux control is shared, this doesn't imply that control is evenly distributed. For a large network, the average flux control will, according to the flux summation theorem, be equal to formula_33, that is, a small number. In order for a biological cell to have any appreciable control over a pathway via changes in gene expression, some concentration of flux control at a small number of sites will be necessary. For example, in mammalian cancer cell lines, it has been shown that flux control is concentrated at four sites: glucose import, hexokinase, phosphofructokinase, and lactate export. Moreover, Kacser and Burns suggested that since the flux–enzyme relationship is somewhat hyperbolic, and since for most enzymes the wild-type diploid level of enzyme activity lies on a part of the curve where changes have little effect, a heterozygote of the wild type with a null mutant, having half the enzyme activity, will not exhibit a noticeably reduced flux. Therefore, the wild type appears dominant and the mutant recessive because of the system characteristics of a metabolic pathway. Although originally suggested by Sewall Wright, the development of metabolic control analysis put the idea on a more sound theoretical footing. This explanation of dominance is in particular consistent with the flux summation theorem for large systems, since with many steps each individual enzyme will typically hold only a small share of the total flux control. Not all dominance properties can be explained in this way, but it does offer an explanation for dominance at least at the metabolic level. Concentration summation theorem. In contrast to the flux summation theorem, the concentration summation theorem sums to zero. The implications of this are that some enzymes will cause a given metabolite to increase while others, in order to satisfy the summation to zero, must cause the same metabolite to decrease. This is particularly noticeable in a linear chain of enzyme reactions where, given a metabolite located in the center of the pathway, an increase in expression of any enzyme upstream of the metabolite will cause the metabolite to increase in concentration. In contrast, an increase in expression of any enzyme downstream of the metabolite will cause the given metabolite to decrease in concentration. 
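The two theorems can be checked numerically on the two-step pathway used in the derivation. The Python sketch below assumes a concrete choice of rate laws (simple mass-action kinetics proportional to enzyme activity, which the text above does not prescribe) and estimates the scaled control coefficients by finite differences; their sums come out as approximately 1 and 0, as the theorems require. All names and parameter values are illustrative.

```python
# Two-step pathway X_o -> S -> X_1 with v1 = e1*(k1*Xo - k1r*S) and v2 = e2*k2*S.
Xo, k1, k1r, k2 = 10.0, 1.0, 0.5, 1.0

def steady_state(e1, e2):
    """Solve v1 = v2 analytically for S, then compute the steady-state flux J."""
    S = e1 * k1 * Xo / (e1 * k1r + e2 * k2)
    return S, e2 * k2 * S

def control_coefficients(e1, e2, h=1e-6):
    """Finite-difference estimates of d ln J / d ln e_i and d ln S / d ln e_i."""
    S0, J0 = steady_state(e1, e2)
    out = []
    for d1, d2 in [(h, 0.0), (0.0, h)]:
        S1, J1 = steady_state(e1 * (1 + d1), e2 * (1 + d2))
        out.append(((J1 - J0) / (J0 * h), (S1 - S0) / (S0 * h)))
    return out

(CJ1, CS1), (CJ2, CS2) = control_coefficients(2.0, 3.0)
print(CJ1 + CJ2)   # ~1  (flux summation theorem)
print(CS1 + CS2)   # ~0  (concentration summation theorem)
```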
[ { "math_id": 0, "text": " \\sum_i C^J_{e_i} = 1 " }, { "math_id": 1, "text": " \\sum_i C^s_{e_i} = 0 " }, { "math_id": 2, "text": " \\sum_i C^J_{v_i} = 1 " }, { "math_id": 3, "text": " \\sum_i C^s_{v_i} = 0 " }, { "math_id": 4, "text": " \\text{X}_o \\stackrel{v_1}{\\longrightarrow} \\text{S} \\stackrel{v_2}{\\longrightarrow} \\text{X}_1 " }, { "math_id": 5, "text": "X_o" }, { "math_id": 6, "text": "X_1" }, { "math_id": 7, "text": "e_1" }, { "math_id": 8, "text": " v_1 " }, { "math_id": 9, "text": "\\delta e_1" }, { "math_id": 10, "text": "e_2" }, { "math_id": 11, "text": "\\delta e_2" }, { "math_id": 12, "text": "\\delta s = 0" }, { "math_id": 13, "text": " \\frac{\\delta J}{J} = C^J_{e_1} \\frac{\\delta e_1}{e_1} + C^J_{e_2} \\frac{\\delta e_2}{e_2} " }, { "math_id": 14, "text": " \\frac{\\delta s}{s} = C^s_{e_1} \\frac{ \\delta e_1}{e_1} + C^s_{e_2} \\frac{\\delta e_2}{e_2} = 0 " }, { "math_id": 15, "text": "v_1" }, { "math_id": 16, "text": "v_2" }, { "math_id": 17, "text": " \\frac{\\delta v_1}{v_1} = \\varepsilon^{v_1}_{e_1} \\frac{\\delta e_1}{e_1} " }, { "math_id": 18, "text": " \\frac{\\delta v_2}{v_2} = \\varepsilon^{v_1}_{e_1} \\frac{\\delta e_2}{e_2} " }, { "math_id": 19, "text": " \\varepsilon " }, { "math_id": 20, "text": " \\frac{\\delta v_1}{v_1} = \\frac{\\delta e_1}{e_1} " }, { "math_id": 21, "text": " \\frac{\\delta v_2}{v_2} = \\frac{\\delta e_2}{e_2} " }, { "math_id": 22, "text": "v_1 = v_2 = J" }, { "math_id": 23, "text": " \\frac{\\delta J}{J} = C^J_{e_1} \\frac{\\delta v_1}{v_1} + C^J_{e_2} \\frac{\\delta v_2}{v_2}" }, { "math_id": 24, "text": " \\frac{\\delta s}{s} = C^s_{e_1} \\frac{\\delta v_1}{v_1} + C^s_{e_2} \\frac{\\delta v_2}{v_2} = 0 " }, { "math_id": 25, "text": "\\delta v_1/v_1 = \\delta v_2/v_2" }, { "math_id": 26, "text": "\\alpha = \\delta J/J = \\delta v_1/v_1 = \\delta v_2/v_2" }, { "math_id": 27, "text": "\n\\alpha = C^J_{e_1} \\alpha + C^J_{e_2} \\alpha = \\alpha (C^J_{e_1} + C^J_{e_2})\n" }, { "math_id": 28, "text": "\n0 = C^s_{e_1} \\alpha + C^s_{e_2} \\alpha = \\alpha (C^s_{e_1} + C^s_{e_2})\n" }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": "\\alpha \\neq 0" }, { "math_id": 31, "text": "\n1 = C^J_{e_1} + C^J_{e_2}\n" }, { "math_id": 32, "text": "\n0 = C^s_{e_1} + C^s_{e_2}\n" }, { "math_id": 33, "text": " 1/n " } ]
https://en.wikipedia.org/wiki?curid=75383618
75388505
Quantum Cheshire cat
Phenomenon in quantum physics In quantum mechanics, the quantum Cheshire cat is a quantum phenomenon that suggests that a particle's physical properties can take a different trajectory from that of the particle itself. The name makes reference to the Cheshire Cat from Lewis Carroll's "Alice's Adventures in Wonderland", a feline character which could disappear, leaving only its grin behind. The effect was originally proposed by Yakir Aharonov, Daniel Rohrlich, Sandu Popescu and Paul Skrzypczyk in 2012. In classical physics, physical properties cannot be detached from the object they are associated with. If a magnet follows a given trajectory in space and time, its magnetic moment follows it through the same trajectory. However, in quantum mechanics, particles can be in a quantum superposition of more than one trajectory prior to measurement. The quantum Cheshire cat experiments suggest that, prior to a measurement, a particle may take two paths, but a property of the particle, like the spin of a massive particle or the polarization of a light beam, travels only through one of the paths, while the particle takes the opposite path. The conclusion is obtained only from an analysis of weak measurements, which consist of interpreting the particle's history prior to measurement by studying quantum systems in the presence of small disturbances. Experimental demonstrations of the quantum Cheshire cat have already been claimed in different systems, including photons and neutrons. The effect has been suggested as a probe to study properties of massive particles by detaching them from their magnetic moments in order to shield them from electromagnetic disturbances. A dynamical quantum Cheshire cat has also been proposed as a counterfactual quantum communication protocol. Example of the experiment. Neutrons are uncharged subatomic particles that have a magnetic moment, with two possible projections on any given axis. A beam of neutrons, all with their magnetic moments aligned to the right, enters a Mach–Zehnder interferometer travelling from left to right. The neutrons can exit the interferometer into a right port, where a detector of neutrons with right-pointing magnetic moment is located, or upwards into a dark port with no detector (see picture). The neutrons enter the interferometer and reach a beam splitter. Each neutron that passes through enters a quantum superposition of two different paths, namely "A" and "B". This initial state is referred to as the preselected state. As the neutrons travel the different paths, their wave functions reunite at a second beam splitter, causing interference. If there is nothing in the path of the neutrons, every neutron exits the interferometer moving to the right and activates the detector. No neutron escapes upwards into the dark port, due to destructive interference. One can add different components and filters in one of the paths. Adding a filter that flips the magnetic moment of the neutron in path "B" (the lower branch) leads to a new superposition state: the neutron taking path "A" with a magnetic moment pointing right, plus the neutron taking path "B" with the magnetic moment flipped, pointing to the left. This state is called the postselected state. As the states can no longer interfere coherently due to this modification, the neutrons can exit through the two ports, either to the right, reaching the detector, or exiting towards the dark port. 
In this configuration, if the detector clicks, it is only because the neutron had a magnetic moment oriented to the right. By means of this postselection, it can be confidently stated that the neutron that reached the detector passed through path "A", which is the only path that contains neutron magnetic moments oriented to the right. This effect can be easily demonstrated by putting a thin absorber of neutrons in the path. By placing the absorber in path "B", the rate of neutrons that are detected remains constant. However, when the absorber is positioned in path "A", the detection rate decreases, providing evidence that detected neutrons in the postselected state travel only through path "A". If a magnetic field is applied perpendicular to the plane of the interferometer and localized in either path "A" or path "B", the number of neutrons that are detected changes, as the magnetic field makes the neutrons precess and alters the probabilities of being measured. Additionally, measuring the magnetism and the trajectory (with an absorber) at the same time is not possible without also disrupting the quantum state. The quantum Cheshire cat appears in the weak limit of the interaction. When a sufficiently small magnetic field is applied to path "A", there is no impact on the measurement. In contrast, if the magnetic field is applied to path "B", the detection rate diminishes, demonstrating that the neutrons' magnetism, perpendicular to the plane of the interferometer, predominantly resided in path "B". The same can be done with a thin absorber, showing that the neutrons that are detected all come from path "A". This experiment effectively separated the "cat", representing the neutron, from its "grin", symbolizing its magnetic moment out of the plane. General description. Consider a particle with a two-level property that can be either formula_0 or formula_1; this can be, for example, the horizontal and vertical polarization of a photon or the spin projection of a spin-1/2 particle, as in the previous example with the neutrons. One of these two polarization states (say formula_0) is chosen and the particle is then prepared to be in the following superposition: formula_2 where formula_3 and formula_4 are two possible orthogonal trajectories of the particle. The state formula_5 is called the preselected state. A filter is added in path formula_4 of the particle in order to flip its polarization from formula_0 to formula_1, such that it ends up in the state formula_6 This state indicates that if the particle is measured to be in state formula_0, the particle took path formula_3; analogously, if the particle is measured to be in state formula_1, the particle took path formula_4. The state formula_7 is called the postselected state. Using postselection techniques, the particle is measured in order to detect the overlap between the preselected state and the postselected state. If there are no disturbances, the preselected and postselected states produce the same results 1/4 of the time. Weak measurements. We define the weak value of an operator formula_8 given by formula_9 where formula_10 is the preselected state and formula_11 the postselected state. This calculation can be thought of as the contribution of a given interaction up to linear order. For the system, one considers two projection operators given by formula_12 and formula_13 which measure whether the particle is on path formula_3 or path formula_4, respectively. 
Additionally, an out-of-the-plane polarization operator is defined as formula_14 This operator can be thought of as a measure of angular momentum in the system. Outside the weak limit, the interaction related to this operator tends to make the polarization precess between formula_0 and formula_1. Performing weak measurements of the path projectors with postselected state formula_15 and preselected state formula_16, one obtains formula_17, formula_18 These weak values indicate that if path formula_3 is slightly perturbed, then the measurement is perturbed, while if instead path formula_4 is perturbed, this does not affect the measurement. We also consider weak measurements of the out-of-the-plane polarization in each of the paths, such that formula_19 formula_20 These values indicate that if the polarization is slightly modified in path formula_4, then the results are slightly modified too. However, if the polarization is perturbed in path formula_3, there is no correction to the intensity measured (in the weak limit). These four weak values lead to the quantum Cheshire cat conclusion. Interpretations and criticism. The proposal of the quantum Cheshire cat has received some criticism. Popescu, one of the authors of the original paper, acknowledged it was not well received by all of the referees who first reviewed the original work. As the quantum Cheshire cat effect rests on an analysis of the particle's trajectory before measurement, its conclusion depends on the interpretation of quantum mechanics, which is still an open problem in physics. Some authors reach different conclusions for this effect or disregard the effect completely. It has been suggested that the quantum Cheshire cat is just an apparent paradox arising from misinterpreting wave interference. Other authors consider that it can be reproduced classically. The experimental results depend on the postselection and analysis of the data. It has been suggested that the weak value cannot be interpreted as a real property of the system, but rather as an optimal estimate of the corresponding observable, given that the postselection is successful. Aephraim M. Steinberg notes that the experiment with neutrons does not prove that any single neutron took a different path than its magnetic moment; it shows only that the measured neutrons behaved this way on average. It has also been argued that even if the weak values were measured in the neutron Cheshire cat experiment, they do not imply that a particle and one of its properties have been disembodied, due to unavoidable quadratic interactions in the experiment. This last point was acknowledged by A. Matzkin, one of the coauthors of the neutron experiment paper.
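The four weak values quoted above can be verified directly with a small linear-algebra computation. The Python/NumPy sketch below is illustrative (the variable names are mine): it builds the preselected and postselected states in the path ⊗ property space and evaluates the weak values of the path projectors and of the path-resolved out-of-the-plane polarization.

```python
import numpy as np

# Path basis |A>, |B> and property basis |0>, |1>; composite vectors are ordered
# as (path) ⊗ (property).
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

pre = np.kron(A + B, zero) / np.sqrt(2)                     # preselected |Psi>
post = (np.kron(A, zero) + np.kron(B, one)) / np.sqrt(2)    # postselected |Phi>

def weak_value(op):
    """<Phi| op |Psi> / <Phi|Psi> for these real state vectors."""
    return (post @ op @ pre) / (post @ pre)

I2 = np.eye(2)
Pi_A, Pi_B = np.outer(A, A), np.outer(B, B)
sigma_perp = np.array([[0.0, 1.0], [1.0, 0.0]])             # |0><1| + |1><0|

print(weak_value(np.kron(Pi_A, I2)))           # 1.0: the particle ("cat") is in path A
print(weak_value(np.kron(Pi_B, I2)))           # 0.0
print(weak_value(np.kron(Pi_A, sigma_perp)))   # 0.0
print(weak_value(np.kron(Pi_B, sigma_perp)))   # 1.0: the polarization ("grin") is in path B
```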
[ { "math_id": 0, "text": "|0\\rangle" }, { "math_id": 1, "text": "|1\\rangle" }, { "math_id": 2, "text": "|\\Psi\\rangle=\\frac{(|A\\rangle+|B \\rangle)}{\\sqrt 2 }|0\\rangle," }, { "math_id": 3, "text": "|A\\rangle" }, { "math_id": 4, "text": "|B\\rangle" }, { "math_id": 5, "text": "|\\Psi\\rangle" }, { "math_id": 6, "text": "|\\Phi\\rangle=\\frac{|A\\rangle|0\\rangle+|B \\rangle|1\\rangle}{\\sqrt 2}," }, { "math_id": 7, "text": "|\\Phi\\rangle" }, { "math_id": 8, "text": "O" }, { "math_id": 9, "text": "\\langle O \\rangle_{\\rm w} =\\frac{\\langle \\phi|O| \\psi\\rangle}{\\langle \\phi |\\psi\\rangle}," }, { "math_id": 10, "text": "|\\psi\\rangle" }, { "math_id": 11, "text": "|\\phi\\rangle" }, { "math_id": 12, "text": "\\Pi_A = |A\\rangle\\langle A|," }, { "math_id": 13, "text": "\\Pi_B = |B\\rangle\\langle B|," }, { "math_id": 14, "text": "\\sigma_\\perp = |0\\rangle\\langle 1|+|1\\rangle\\langle0|," }, { "math_id": 15, "text": "|\\phi\\rangle=|\\Phi\\rangle" }, { "math_id": 16, "text": "|\\psi\\rangle=|\\Psi\\rangle" }, { "math_id": 17, "text": "\\langle \\Pi_A \\rangle_{\\rm w} =1" }, { "math_id": 18, "text": "\\langle \\Pi_B \\rangle_{\\rm w} = 0." }, { "math_id": 19, "text": "\\langle \\Pi_A\\sigma_\\perp \\rangle_{\\rm w} =0," }, { "math_id": 20, "text": "\\langle \\Pi_B\\sigma_\\perp \\rangle_{\\rm w} =1." } ]
https://en.wikipedia.org/wiki?curid=75388505
75389909
Spherical braid group
Generalized Braid group on the Sphere In mathematics, the spherical braid group or Hurwitz braid group is a braid group on n strands. In comparison with the usual braid group, it has an additional group relation that comes from the strands being on the sphere. The group also has relations to the inverse Galois problem. Definition. The spherical braid group on n strands, denoted formula_0 or formula_1, is defined as the fundamental group of the configuration space of the sphere: formula_2 The spherical braid group has a presentation in terms of generators formula_3 with the following relations: formula_4 for formula_5; formula_6 for formula_7; and formula_8. The last relation distinguishes the group from the usual braid group. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
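As an illustrative check (not drawn from the article's references), the presentation can be handed to SymPy's finitely presented group machinery. For n = 3 the spherical braid group is known to be finite of order 12, and coset enumeration recovers this from the two relators; a minimal sketch:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, s1, s2 = free_group("s1, s2")
relators = [
    s1 * s2 * s1 * (s2 * s1 * s2) ** -1,  # braid relation
    s1 * s2 * s2 * s1,                    # sphere relation for n = 3
]
G = FpGroup(F, relators)
print(G.order())  # 12: B_3(S^2) is finite, unlike the ordinary braid group B_3
```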
[ { "math_id": 0, "text": "SB_n" }, { "math_id": 1, "text": "B_n(S^2)" }, { "math_id": 2, "text": "B_n(S^2) = \\pi_1(\\mathrm{Conf}_n(S^2))." }, { "math_id": 3, "text": "\\sigma_1, \\sigma_2, \\cdots, \\sigma_{n - 1} " }, { "math_id": 4, "text": "\\sigma_i \\sigma_j = \\sigma_j \\sigma_i " }, { "math_id": 5, "text": "|i-j| \\geq 2 " }, { "math_id": 6, "text": "\\sigma_i \\sigma_{i+1} \\sigma_i = \\sigma_{i+1} \\sigma_i \\sigma_{i+1}" }, { "math_id": 7, "text": "1 \\leq i \\leq n - 2" }, { "math_id": 8, "text": "\\sigma_1 \\sigma_2 \\cdots \\sigma_{n-1} \\sigma_{n-1} \\sigma_{n-2} \\cdots \\sigma_{1} = 1" } ]
https://en.wikipedia.org/wiki?curid=75389909
753944
Commensurability (mathematics)
When two functions have co-rational periods, i.e. n T1 = m T2 In mathematics, two non-zero real numbers "a" and "b" are said to be commensurable if their ratio "a"/"b" is a rational number; otherwise "a" and "b" are called incommensurable. (Recall that a rational number is one that is equivalent to the ratio of two integers.) There is a more general notion of commensurability in group theory. For example, the numbers 3 and 2 are commensurable because their ratio, 3/2, is a rational number. The numbers formula_0 and formula_1 are also commensurable because their ratio, formula_2, is a rational number. However, the numbers formula_3 and 2 are incommensurable because their ratio, formula_4, is an irrational number. More generally, it is immediate from the definition that if "a" and "b" are any two non-zero rational numbers, then "a" and "b" are commensurable; it is also immediate that if "a" is any irrational number and "b" is any non-zero rational number, then "a" and "b" are incommensurable. On the other hand, if both "a" and "b" are irrational numbers, then "a" and "b" may or may not be commensurable. History of the concept. The Pythagoreans are credited with the proof of the existence of irrational numbers. When the ratio of the "lengths" of two line segments is irrational, the line segments "themselves" (not just their lengths) are also described as being incommensurable. A separate, more general and circuitous ancient Greek theory of proportion for geometric magnitudes was developed in Book V of Euclid's "Elements" in order to allow proofs involving incommensurable lengths, thus avoiding arguments which applied only to a historically restricted definition of number. Euclid's notion of commensurability is anticipated in passing in the discussion between Socrates and the slave boy in Plato's dialogue entitled Meno, in which Socrates uses the boy's own inherent capabilities to solve a complex geometric problem through the Socratic Method. He develops a proof which is, for all intents and purposes, very Euclidean in nature and speaks to the concept of incommensurability. The usage primarily comes from translations of Euclid's "Elements", in which two line segments "a" and "b" are called commensurable precisely if there is some third segment "c" that can be laid end-to-end a whole number of times to produce a segment congruent to "a", and also, with a different whole number, a segment congruent to "b". Euclid did not use any concept of real number, but he used a notion of congruence of line segments, and of one such segment being longer or shorter than another. That "a"/"b" is rational is a necessary and sufficient condition for the existence of some real number "c", and integers "m" and "n", such that "a" = "mc" and "b" = "nc". Assuming for simplicity that "a" and "b" are positive, one can say that a ruler, marked off in units of length "c", could be used to measure out both a line segment of length "a", and one of length "b". That is, there is a common unit of length in terms of which "a" and "b" can both be measured; this is the origin of the term. Otherwise the pair "a" and "b" are incommensurable. In group theory. In group theory, two subgroups Γ1 and Γ2 of a group "G" are said to be commensurable if the intersection Γ1 ∩ Γ2 is of finite index in both Γ1 and Γ2. Example: Let "a" and "b" be nonzero real numbers. Then the subgroup of the real numbers R generated by "a" is commensurable with the subgroup generated by "b" if and only if the real numbers "a" and "b" are commensurable, in the sense that "a"/"b" is rational.
Thus the group-theoretic notion of commensurability generalizes the concept for real numbers. There is a similar notion for two groups which are not given as subgroups of the same group. Two groups "G"1 and "G"2 are (abstractly) commensurable if there are subgroups "H"1 ⊂ "G"1 and "H"2 ⊂ "G"2 of finite index such that "H"1 is isomorphic to "H"2. In topology. Two path-connected topological spaces are sometimes said to be "commensurable" if they have homeomorphic finite-sheeted covering spaces. Depending on the type of space under consideration, one might want to use homotopy equivalences or diffeomorphisms instead of homeomorphisms in the definition. If two spaces are commensurable, then their fundamental groups are commensurable. Example: any two closed surfaces of genus at least 2 are commensurable with each other. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
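A quick way to illustrate the number-theoretic definition (a small sketch using SymPy's exact arithmetic, testing the examples given above) is to ask whether the ratio of two numbers simplifies to a rational:

```python
from sympy import simplify, sqrt, sympify

def commensurable(a, b):
    # True if a/b is an exact rational number; an "unknown" result counts as False
    ratio = simplify(sympify(a) / sympify(b))
    return bool(ratio.is_rational)

print(commensurable(3, 2))                  # True:  ratio is 3/2
print(commensurable(sqrt(3), 2 * sqrt(3)))  # True:  ratio simplifies to 1/2
print(commensurable(sqrt(3), 2))            # False: sqrt(3)/2 is irrational
```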
[ { "math_id": 0, "text": "\\sqrt{3}" }, { "math_id": 1, "text": "2\\sqrt{3}" }, { "math_id": 2, "text": "\\frac{\\sqrt{3}}{2\\sqrt{3}}=\\frac{1}{2}" }, { "math_id": 3, "text": "\\sqrt{3}" }, { "math_id": 4, "text": "\\frac{\\sqrt{3}}{2}" } ]
https://en.wikipedia.org/wiki?curid=753944
753962
Circulation (physics)
Line integral of the fluid velocity around a closed curve In physics, circulation is the line integral of a vector field around a closed curve. In fluid dynamics, the field is the fluid velocity field. In electrodynamics, it can be the electric or the magnetic field. Circulation was first used independently by Frederick Lanchester, Martin Kutta and Nikolay Zhukovsky. It is usually denoted Γ (Greek uppercase gamma). Definition and properties. If V is a vector field and dl is a vector representing the differential length of a small element of a defined curve, the contribution of that differential length to circulation is dΓ: formula_0 Here, "θ" is the angle between the vectors V and dl. The circulation Γ of a vector field V around a closed curve "C" is the line integral: formula_1 In a conservative vector field this integral evaluates to zero for every closed curve. That means that a line integral between any two points in the field is independent of the path taken. It also implies that the vector field can be expressed as the gradient of a scalar function, which is called a potential. Relation to vorticity and curl. Circulation can be related to curl of a vector field V and, more specifically, to vorticity if the field is a fluid velocity field, formula_2 By Stokes' theorem, the flux of curl or vorticity vectors through a surface "S" is equal to the circulation around its perimeter, formula_3 Here, the closed integration path "∂S" is the boundary or perimeter of an open surface "S", whose infinitesimal element normal dS = ndS is oriented according to the right-hand rule. Thus curl and vorticity are the circulation per unit area, taken around a local infinitesimal loop. In potential flow of a fluid with a region of vorticity, all closed curves that enclose the vorticity have the same value for circulation. Uses. Kutta–Joukowski theorem in fluid dynamics. In fluid dynamics, the lift per unit span (L') acting on a body in a two-dimensional flow field is directly proportional to the circulation, i.e. it can be expressed as the product of the circulation Γ about the body, the fluid density formula_4, and the speed of the body relative to the free-stream formula_5: formula_6 This is known as the Kutta–Joukowski theorem. This equation applies around airfoils, where the circulation is generated by "airfoil action"; and around spinning objects experiencing the Magnus effect where the circulation is induced mechanically. In airfoil action, the magnitude of the circulation is determined by the Kutta condition. The circulation on every closed curve around the airfoil has the same value, and is related to the lift generated by each unit length of span. Provided the closed curve encloses the airfoil, the choice of curve is arbitrary. Circulation is often used in computational fluid dynamics as an intermediate variable to calculate forces on an airfoil or other body. Fundamental equations of electromagnetism. 
In electrodynamics, the Maxwell-Faraday law of induction can be stated in two equivalent forms: that the curl of the electric field is equal to the negative rate of change of the magnetic field, formula_7 or that the circulation of the electric field around a loop is equal to the negative rate of change of the magnetic field flux through any surface spanned by the loop, by Stokes' theorem formula_8 Circulation of a static magnetic field is, by Ampère's law, proportional to the total current enclosed by the loop formula_9 For systems with electric fields that change over time, the law must be modified to include a term known as Maxwell's correction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
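The relation between circulation and the curl can be checked numerically. The sketch below (an illustration with an arbitrarily chosen field, not tied to any particular experiment) takes the planar field V = (−y, x), whose curl has z-component 2 everywhere, and compares the circulation around the unit circle with the flux of the curl through the enclosed disk; by Stokes' theorem both equal 2π:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2001)
x, y = np.cos(t), np.sin(t)              # unit circle, counter-clockwise
dxdt, dydt = -np.sin(t), np.cos(t)       # tangent vector dl/dt

# Circulation: closed line integral of V . dl with V = (-y, x)
integrand = (-y) * dxdt + x * dydt
circulation = np.trapz(integrand, t)

# Stokes' theorem: flux of curl through the unit disk = 2 * (disk area)
curl_flux = 2.0 * (np.pi * 1.0 ** 2)

print(circulation, curl_flux)            # both ~ 6.2832
```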
[ { "math_id": 0, "text": "\\mathrm{d}\\Gamma = \\mathbf{V} \\cdot \\mathrm{d}\\mathbf{l} = \\left|\\mathbf{V}\\right| \\left|\\mathrm{d}\\mathbf{l}\\right| \\cos \\theta." }, { "math_id": 1, "text": "\\Gamma = \\oint_{C}\\mathbf{V} \\cdot \\mathrm d \\mathbf{l}." }, { "math_id": 2, "text": "\\boldsymbol{\\omega} = \\nabla\\times\\mathbf{V}." }, { "math_id": 3, "text": "\\Gamma = \\oint_{\\partial S} \\mathbf{V}\\cdot \\mathrm{d}\\mathbf{l} = \\iint_S \\nabla \\times \\mathbf{V} \\cdot \\mathrm{d}\\mathbf{S}=\\iint_S \\boldsymbol{\\omega} \\cdot \\mathrm{d}\\mathbf{S}" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "v_{\\infty}" }, { "math_id": 6, "text": "L' = \\rho v_{\\infty} \\Gamma" }, { "math_id": 7, "text": "\\nabla \\times \\mathbf{E} = -\\frac{\\partial \\mathbf{B}} {\\partial t}" }, { "math_id": 8, "text": "\\oint_{\\partial S} \\mathbf{E} \\cdot \\mathrm{d}\\mathbf{l} = \\iint_S \\nabla\\times\\mathbf{E} \\cdot \\mathrm{d}\\mathbf{S} =\n - \\frac{\\mathrm{d}}{\\mathrm{d}t} \\int_{S} \\mathbf{B} \\cdot \\mathrm{d}\\mathbf{S}." }, { "math_id": 9, "text": "\\oint_{\\partial S} \\mathbf{B} \\cdot \\mathrm{d}\\mathbf{l} = \\mu_0 \\iint_S \\mathbf{J} \\cdot \\mathrm{d}\\mathbf{S} = \\mu_0 I_\\text{enc}." } ]
https://en.wikipedia.org/wiki?curid=753962
75398609
Connectivity theorems
The stoichiometric structure and mass-conservation properties of biochemical pathways give rise to a series of theorems or relationships between the control coefficients, and between the control coefficients and the elasticities. There are a large number of such relationships depending on the pathway configuration (e.g. linear, branched or cyclic) which have been documented and discovered by various authors. The term theorem has been used to describe these relationships because they can be proved in terms of more elementary concepts. The operational proofs in particular are of this nature. The most well known of these theorems are the summation theorems for the control coefficients and the connectivity theorems which relate control coefficients to the elasticities. The focus of this page is the connectivity theorems. When deriving the summation theorems, a thought experiment was conducted that involved manipulating enzyme activities such that concentrations were unaffected but fluxes changed. The connectivity theorems use the opposite thought experiment, that is, enzyme activities are changed such that concentrations change but fluxes are unchanged. This is an important observation that highlights the orthogonal nature of these two sets of theorems. As with the summation theorems, the connectivity theorems can also be proved using more rigorous mathematical approaches involving calculus and linear algebra. Here the more intuitive and operational proofs will be used to prove the connectivity theorems. Statement of the connectivity theorems. Two basic sets of theorems exist, one for flux and another for concentrations. The concentration connectivity theorems are divided again depending on whether the system species formula_0 is different from the local species formula_1. formula_2 formula_3 formula_4 Proof. The operational proof for the flux connectivity theorem relies on making perturbations to enzyme levels such that the pathway flux is unchanged but a single metabolite level is changed. This can be illustrated with the following pathway: formula_5 Let us make a change to the rate through formula_6 by increasing the concentration of enzyme formula_7. Assume formula_7 is increased by an amount, formula_8. This will result in a change to the steady-state of the pathway. The concentrations of formula_9, and the flux, formula_10, through the pathway will increase, and the concentration of formula_11 will decrease because it is upstream of the disturbance. Impose a second change to the pathway such that the flux, formula_10, is restored to what it was before the original change. Since the flux increased when formula_7 was changed, the flux can be decreased by decreasing one of the other enzyme levels. If the concentration of formula_12 is decreased, this will reduce the flux. Decreasing formula_12 will also cause the concentration of formula_13 to further increase. However, formula_11 and formula_14 will change in the opposite direction compared to when formula_7 was increased. When formula_12 is sufficiently changed so that the flux is restored to its original value, the concentrations of formula_11 and formula_14 will also be restored to their original values. It is only formula_13 that will differ. This is true because the flux through formula_15 is now the same as it was originally (since we've restored the flux), and formula_16 has not been manipulated in any way.
This means that the concentration of formula_11 and all species upstream of formula_11 must be the same as they were before the modulations occurred. The same arguments apply to formula_14 and all species downstream of formula_17. The net result is that formula_7 has been increased by formula_8 resulting in a change in flux of formula_18. The concentration of formula_12 was decreased such that the flux was restored to its original value, formula_19. In the process, formula_13 changed by formula_20 but neither formula_11 nor formula_14 did. In fact no other species in the entire system has changed other than formula_13. This thought experiment can be expressed mathematically as follows. The system equations in terms of the flux control coefficients can be written as: formula_21 There are only two terms because only formula_7 and formula_12 were changed. The local change at each step can be written for formula_6 and formula_26 in terms of elasticities: formula_22 formula_23 Note that formula_24 won't necessarily equal formula_25, and by construction both rates, formula_6 and formula_26, showed no change. Also by construction only formula_13 changed. The local equations can be rearranged as: formula_27 formula_28 The right-hand sides can be inserted into the system equation for the change in flux: formula_29 Therefore: formula_30 However, by construction of the perturbations, formula_31 does not equal zero, hence we arrive at the connectivity theorem: formula_32 The operational method can also be used for systems where a given metabolite can influence multiple steps. This would apply to cases such as branched systems or systems with negative feedback loops. The same approach can be used to derive the concentration connectivity theorems, except that one can consider either the case that focuses on a single species or a second case where the system equation is written to consider the effect on a distant species. Interpretation. The flux control coefficient connectivity theorem is the easiest to understand. Starting with a simple two step pathway: formula_33 where formula_34 and formula_35 are fixed species so that the pathway can reach a steady-state. formula_15 and formula_6 are the reaction rates for the first and second steps. We can write the flux connectivity theorem for this simple system as follows: formula_36 where formula_37 is the elasticity of the first step formula_15 with respect to the species formula_38 and formula_39 is the elasticity of the second step formula_6 with respect to the species formula_38. It is easier to interpret the equation with a slight rearrangement to the following form: formula_40 The equation indicates that the ratio of the flux control coefficients is inversely proportional to the elasticities. That is, a high flux control coefficient on step one is associated with a low elasticity formula_37 and vice versa. Likewise a high value for the flux control coefficient on step two is associated with a low elasticity formula_39. This can be explained as follows: if formula_37 is high (in absolute terms, since it is negative) then a change at formula_15 will be resisted by the elasticity, hence the flux control coefficient on step one will be low. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
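The simple two-step pathway used in the interpretation above can also be checked numerically. The sketch below assumes illustrative linear rate laws v1 = e1(k1 Xo − k2 S1) and v2 = e2 k3 S1 (these rate laws and the parameter values are not from the text), solves for the steady state, and estimates the scaled elasticities and flux control coefficients; the summation and connectivity theorems then hold to numerical precision:

```python
import numpy as np

k1, k2, k3, Xo = 2.0, 0.5, 1.0, 10.0          # illustrative constants

def steady_state(e1, e2):
    # Xo -> S1 -> X1 with v1 = e1*(k1*Xo - k2*S1) and v2 = e2*k3*S1
    s1 = e1 * k1 * Xo / (e1 * k2 + e2 * k3)   # from v1 = v2
    J = e2 * k3 * s1                          # steady-state flux
    return s1, J

e1, e2 = 1.0, 1.0
s1, J = steady_state(e1, e2)

# Scaled elasticities d(ln v)/d(ln s1) at the steady state
eps1 = -e1 * k2 * s1 / (e1 * (k1 * Xo - k2 * s1))  # step 1 (negative)
eps2 = 1.0                                         # step 2 is first order in S1

# Flux control coefficients d(ln J)/d(ln e_i) by finite differences
h = 1e-6
C1 = (np.log(steady_state(e1 * (1 + h), e2)[1]) - np.log(J)) / np.log(1 + h)
C2 = (np.log(steady_state(e1, e2 * (1 + h))[1]) - np.log(J)) / np.log(1 + h)

print(C1 + C2)                 # ~1  (summation theorem)
print(C1 * eps1 + C2 * eps2)   # ~0  (flux connectivity theorem)
```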
[ { "math_id": 0, "text": " S_n " }, { "math_id": 1, "text": " S_m " }, { "math_id": 2, "text": " \\sum_i C^J_i \\varepsilon^i_s = 0 " }, { "math_id": 3, "text": " \\sum_i C^{s_n}_i \\varepsilon^i_{s_m} = 0 \\quad n \\neq m " }, { "math_id": 4, "text": " \\sum_i C^{s_n}_i \\varepsilon^i_{s_m} = -1 \\quad n = m " }, { "math_id": 5, "text": " \\stackrel{v_1}{\\longrightarrow} S_1 \\stackrel{v_2}{\\longrightarrow} S_2 \\stackrel{v_3}{\\longrightarrow} S_3 \\stackrel{v_4}{\\longrightarrow} " }, { "math_id": 6, "text": " v_2 " }, { "math_id": 7, "text": " e_2 " }, { "math_id": 8, "text": " \\delta e_2 " }, { "math_id": 9, "text": " s_2, s_3 " }, { "math_id": 10, "text": " J " }, { "math_id": 11, "text": " s_1 " }, { "math_id": 12, "text": " e_3 " }, { "math_id": 13, "text": " s_2 " }, { "math_id": 14, "text": " s_3 " }, { "math_id": 15, "text": " v_1 " }, { "math_id": 16, "text": " e_1 " }, { "math_id": 17, "text": " v_4 " }, { "math_id": 18, "text": " \\delta J " }, { "math_id": 19, "text": " \\delta J = 0 " }, { "math_id": 20, "text": " \\delta s_2 " }, { "math_id": 21, "text": " \\frac{\\delta J}{J} = 0 = C^J_2 \\frac{\\delta e_2}{e_2} + C^J_3 \\frac{\\delta e_3}{e_3} " }, { "math_id": 22, "text": " 0 = \\frac{\\delta v_2}{v_2} = \\frac{\\delta e_2}{e_2} + \\varepsilon^2_2 \\frac{\\delta s_2}{s_2} " }, { "math_id": 23, "text": " 0 = \\frac{\\delta v_3}{v_3} = \\frac{\\delta e_3}{e_3} + \\varepsilon^3_2 \\frac{\\delta s_2}{s_2} " }, { "math_id": 24, "text": " \\delta e_2/e_2 " }, { "math_id": 25, "text": " \\delta e_2/e_3" }, { "math_id": 26, "text": " v_3 " }, { "math_id": 27, "text": " \\frac{\\delta e_2}{e_2}=-\\varepsilon_2^2 \\frac{\\delta s_2}{s_2} " }, { "math_id": 28, "text": "\\frac{\\delta e_3}{e_3}=-\\varepsilon_2^3 \\frac{\\delta s_2}{s_2} " }, { "math_id": 29, "text": " 0=\\frac{\\delta J}{J}=-\\left(C_{e_2}^J \\varepsilon_2^2 \\frac{\\delta s_2}{s_2}+C_{e_3}^J \\varepsilon_2^3 \\frac{\\delta s_2}{s_2}\\right)" }, { "math_id": 30, "text": " 0=\\frac{\\delta s_2}{s_2}\\left(C_{e_2}^J \\varepsilon_2^2+C_{e_3}^J \\varepsilon_2^3\\right) " }, { "math_id": 31, "text": " \\delta s_2/s_2 " }, { "math_id": 32, "text": " 0=C_{e_2}^J \\varepsilon_2^2+C_{e_3}^J \\varepsilon_2^3 " }, { "math_id": 33, "text": " X_o \\stackrel{v_1}{\\longrightarrow} S_1 \\stackrel{v_2}{\\longrightarrow} X_1 " }, { "math_id": 34, "text": " X_o " }, { "math_id": 35, "text": " X_1 " }, { "math_id": 36, "text": " C^J_1 \\varepsilon^1_1 + C^J_2 \\varepsilon^2_1 = 0 " }, { "math_id": 37, "text": " \\varepsilon^1_1 " }, { "math_id": 38, "text": " S_1 " }, { "math_id": 39, "text": " \\varepsilon^2_1 " }, { "math_id": 40, "text": " \\frac{C^J_1}{C^J_2} = -\\frac{\\varepsilon^2_1}{\\varepsilon^1_1} " } ]
https://en.wikipedia.org/wiki?curid=75398609
7540008
Roshko number
In fluid mechanics, the Roshko number (Ro) is a dimensionless number describing oscillating flow mechanisms. It is named after the American Professor of Aeronautics Anatol Roshko. It is defined as formula_0 where the Strouhal number is formula_1 and the Reynolds number is formula_2, with f the frequency of vortex shedding, L the characteristic length, U the flow velocity, and ν the kinematic viscosity of the fluid. Correlations. Roshko determined the correlations below from experiments on the flow of air around circular cylinders over the range Re = 50 to Re = 2000: formula_3 valid over 50 ≤ Re < 200, formula_4 valid over 200 ≤ Re < 2000. Ormières and Provansal investigated vortex shedding in the wake of a sphere and found a relationship between Re and Ro in the range 280 < Re < 360. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
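As a small worked example (illustrative numbers only), the correlation can be inverted to estimate a shedding frequency: choose a Reynolds number inside the second branch, compute Ro, and recover f from Ro = f L²/ν; the same frequency follows from the Strouhal number, since Ro = St·Re:

```python
# Vortex shedding behind a circular cylinder; values chosen for illustration
nu = 1.5e-5      # kinematic viscosity of air, m^2/s (approximate)
L = 0.01         # cylinder diameter, m
Re = 1000.0      # inside the 200 <= Re < 2000 branch of the correlation

Ro = 0.212 * Re - 2.7     # Roshko's correlation
f = Ro * nu / L ** 2      # shedding frequency from Ro = f*L^2/nu

St = Ro / Re              # Strouhal number, since Ro = St*Re
U = Re * nu / L           # free-stream speed implied by Re
print(f, St * U / L)      # both ~31.4 Hz, consistent
```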
[ { "math_id": 0, "text": " \\mathrm{Ro} = {f L^{2}\\over \\nu} =\\mathrm{St}\\,\\mathrm{Re} " }, { "math_id": 1, "text": " \\mathrm{St}= {f L\\over U}, " }, { "math_id": 2, "text": " \\mathrm{Re} = {U L\\over \\nu} " }, { "math_id": 3, "text": " \\mathrm{Ro} = 0.212 \\mathrm{Re} - 4.5" }, { "math_id": 4, "text": " \\mathrm{Ro} = 0.212 \\mathrm{Re} - 2.7" } ]
https://en.wikipedia.org/wiki?curid=7540008
75401253
Dyson Brownian motion
In mathematics, the Dyson Brownian motion is a real-valued continuous-time stochastic process named for Freeman Dyson. Dyson studied this process in the context of random matrix theory. There are several equivalent definitions: (1) Definition by stochastic differential equation: formula_0 where formula_1 are different and independent Wiener processes. (2) Start with a Hermitian matrix with eigenvalues formula_2, then let it perform Brownian motion in the space of Hermitian matrices. Its eigenvalues constitute a Dyson Brownian motion. (3) Start with formula_3 independent Wiener processes started at different locations formula_2, then condition on those processes to be non-intersecting for all time. The resulting process is a Dyson Brownian motion starting at the same formula_2.
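The second definition suggests a direct way to generate an approximate sample path: let a Hermitian matrix perform a random walk and record its eigenvalues at every step. The sketch below (an illustration of that definition; the normalization of the matrix increments is one common convention, not fixed by the text above) uses NumPy, and the resulting eigenvalue paths repel one another and never cross:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 4, 1e-4, 5000

def hermitian_increment(n, dt, rng):
    # Hermitian matrix of Gaussian entries with variance proportional to dt
    a = rng.normal(scale=np.sqrt(dt), size=(n, n))
    b = rng.normal(scale=np.sqrt(dt), size=(n, n))
    m = (a + 1j * b) / np.sqrt(2)
    return (m + m.conj().T) / np.sqrt(2)

H = np.diag([0.0, 1.0, 2.0, 3.0])               # eigenvalues lambda_i(0)
paths = [np.linalg.eigvalsh(H)]
for _ in range(steps):
    H = H + hermitian_increment(n, dt, rng)
    paths.append(np.linalg.eigvalsh(H))

paths = np.array(paths)
# Smallest gap between neighbouring eigenvalues over the whole run:
# it stays strictly positive, i.e. the eigenvalue paths never collide.
print(np.min(np.diff(paths, axis=1)))
```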
[ { "math_id": 0, "text": "d \\lambda_i=d B_i+\\sum_{1 \\leq j \\leq n: j \\neq i} \\frac{d t}{\\lambda_i-\\lambda_j}" }, { "math_id": 1, "text": "B_1, ..., B_n" }, { "math_id": 2, "text": "\\lambda_1(0), \\lambda_2(0), ..., \\lambda_n(0)" }, { "math_id": 3, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=75401253
75407072
Pitch angle of a spiral
Property of spirals In the geometry of spirals, the pitch angle or pitch of a spiral is the angle made by the spiral with a circle through one of its points, centered at the center of the spiral. Equivalently, it is the complementary angle to the angle made by the vector from the origin to a point on the spiral, with the tangent vector of the spiral at the same point. Pitch angles are frequently used in astronomy to characterize the shape of spiral galaxies. Logarithmic spirals are characterized by the property that the pitch angle remains invariant for all points of the spiral. Two logarithmic spirals are congruent when they have the same pitch angle, but otherwise are not congruent. For instance, only the golden spiral has pitch angle formula_0 where formula_1 denotes the golden ratio; logarithmic spirals with other angles are not golden spirals. Spirals that are not logarithmic have pitch angles that vary by distance from the center of the spiral. For an Archimedean spiral the angle decreases with the distance, while for a hyperbolic spiral the angle increases with the distance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
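In polar coordinates r(θ), the tangent of the pitch angle at a point is (dr/dθ)/r, which makes these statements easy to verify numerically. The sketch below assumes the standard parametrizations r = a e^{bθ} (logarithmic), r = aθ (Archimedean) and r = a/θ (hyperbolic):

```python
import numpy as np

def pitch_deg(r, drdtheta):
    # Pitch angle in degrees, from tan(pitch) = (dr/dtheta) / r
    return np.degrees(np.arctan2(drdtheta, r))

# Golden spiral: r = a*exp(b*theta) with b = ln(phi)/(pi/2); pitch is constant
phi = (1 + np.sqrt(5)) / 2
b = np.log(phi) / (np.pi / 2)
print(pitch_deg(1.0, b))                    # ~17.03 degrees, for every theta

theta = np.array([2.0, 4.0, 8.0])
# Archimedean spiral r = theta (a = 1): pitch shrinks as the spiral moves outward
print(pitch_deg(theta, 1.0))
# Hyperbolic spiral r = 1/theta: larger theta lies closer to the center,
# so the pitch grows as the distance from the center increases
print(pitch_deg(1.0 / theta, 1.0 / theta ** 2))
```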
[ { "math_id": 0, "text": "\\arctan\\left(\\frac{\\ln\\varphi}{\\pi/2}\\right)\\approx 17^\\circ," }, { "math_id": 1, "text": "\\varphi" } ]
https://en.wikipedia.org/wiki?curid=75407072
75409524
Betatron oscillations
Basic concept in accelerator physics Betatron oscillations are the fast transverse oscillations of a charged particle in various focusing systems: linear accelerators, storage rings, and transfer channels. The oscillations are usually considered as small deviations from the ideal reference orbit and are determined by the transverse forces of focusing elements whose fields depend on the transverse deviation: quadrupole magnets, electrostatic lenses, and RF fields. This transverse motion is the subject of study of electron optics. Betatron oscillations were first studied by D. W. Kerst and R. Serber in 1941 while commissioning the first betatron. The fundamental study of betatron oscillations was carried out by Ernest Courant, Milton S. Livingston and Hartland Snyder, which led to a revolution in the design of high-energy accelerators through the application of the strong focusing principle. Hill's equations. To hold the particles of the beam inside the vacuum chamber of an accelerator or transfer channel, magnetic or electrostatic elements are used. The guiding field of dipole magnets sets the reference orbit of the beam, while focusing magnets with a field depending linearly on the transverse coordinate return particles with small deviations, forcing them to oscillate stably around the reference orbit. For any orbit one can locally define a Frenet–Serret coordinate system co-propagating with the reference particle. Assuming small deviations of the particle in all directions and linearizing all the fields, one arrives at the linear equations of motion, which are a pair of Hill equations: formula_0 Here formula_1, formula_2 are periodic functions in the case of a cyclic accelerator such as a betatron or synchrotron. formula_3 is the gradient of the magnetic field. The prime denotes the derivative with respect to s, the path length along the beam trajectory. The product of the guiding field and the curvature radius, formula_4, is the magnetic rigidity, which via the Lorentz force is directly related to the momentum formula_5, where formula_6 is the particle charge. As the equations of transverse motion are independent of each other, they can be solved separately. For one-dimensional motion the solution of the Hill equation is a quasi-periodic oscillation. It can be written as formula_7, where formula_8 is the Twiss beta-function, formula_9 is the betatron phase advance and formula_10 is an invariant amplitude known as the Courant–Snyder invariant. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
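For the special case of a constant focusing strength k > 0, the Hill equation reduces to a harmonic oscillator, the beta-function is the constant β = 1/√k, and the Courant–Snyder invariant takes the form x²/β + βx′². The sketch below (a constant-k toy case, not a realistic lattice) integrates the equation and checks that this quantity is conserved:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.4                      # constant focusing strength (toy value), 1/m^2
beta = 1.0 / np.sqrt(k)      # constant Twiss beta-function in this case

def hill(s, y):
    x, xp = y
    return [xp, -k * x]      # x'' + k(s) x = 0 with k(s) = const

sol = solve_ivp(hill, (0.0, 50.0), [1e-3, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
s = np.linspace(0.0, 50.0, 500)
x, xp = sol.sol(s)

invariant = x ** 2 / beta + beta * xp ** 2   # Courant-Snyder invariant
print(invariant.std() / invariant.mean())    # ~0: conserved along the motion
```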
[ { "math_id": 0, "text": "\n\\begin{cases}\n x'' + k_x(s)x = 0 \\\\\n y'' + k_y(s)y = 0 \\\\\n\\end{cases}.\n" }, { "math_id": 1, "text": "k_x(s) = \\frac{1}{r_0^2} + \\frac{G(s)}{B\\rho}" }, { "math_id": 2, "text": "k_y(s)=-\\frac{G(s)}{B\\rho}" }, { "math_id": 3, "text": "G(s)=\\frac{\\partial B_z}{\\partial x}" }, { "math_id": 4, "text": "B\\rho = B\\cdot r_0" }, { "math_id": 5, "text": "pc=eZB\\rho" }, { "math_id": 6, "text": "eZ" }, { "math_id": 7, "text": "x(s)= A\\sqrt{\\beta_x (s)} \\cdot cos(\\Psi_x (s) + \\phi_0)" }, { "math_id": 8, "text": "\\beta(s)" }, { "math_id": 9, "text": "\\Psi (s)" }, { "math_id": 10, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=75409524
75411912
Theta-subsumption
Theta-subsumption (θ-subsumption, or just subsumption) is a decidable relation between two first-order clauses that guarantees that one clause logically entails the other. It was first introduced by John Alan Robinson in 1965 and has become a fundamental notion in inductive logic programming. Deciding whether a given clause θ-subsumes another is an NP-complete problem. Definition. A clause, that is, a disjunction of first-order literals, can be considered as a set containing all its disjuncts. With this convention, a clause formula_0 θ-subsumes a clause formula_1 if there is a substitution formula_2 such that the clause obtained by applying formula_3 to formula_0 is a subset of formula_1. Properties. θ-subsumption is a weaker relation than logical entailment, that is, whenever a clause formula_0 θ-subsumes a clause formula_1, then formula_0 logically entails formula_4. However, the converse is not true: A clause can logically entail another clause, but not θ-subsume it. θ-subsumption is decidable; more precisely, the problem of whether one clause θ-subsumes another is NP-complete in the length of the clauses. This is still true when restricting the setting to pairs of Horn clauses. As a binary relation among Horn clauses, θ-subsumption is reflexive and transitive. It therefore defines a preorder. It is not antisymmetric, since different clauses can be syntactic variants of each other. However, in every equivalence class of clauses that mutually θ-subsume each other, there is a unique shortest clause up to variable renaming, which can be effectively computed. The class of quotients with respect to this equivalence relation is a complete lattice, which has both infinite ascending and infinite descending chains. A subset of this lattice is known as a refinement graph. History. θ-subsumption was first introduced by J. Alan Robinson in 1965 in the context of resolution, and was first applied to inductive logic programming by Gordon Plotkin in 1970 for finding and reducing least general generalisations of sets of clauses. In 1977, Lewis D. Baxter proved that θ-subsumption is NP-complete, and the 1979 seminal work on NP-complete problems, Computers and Intractability, includes it among its list of NP-complete problems. Applications. Theorem provers based on the resolution or superposition calculus use θ-subsumption to prune redundant clauses. In addition, θ-subsumption is the most prominent notion of entailment used in inductive logic programming, where it is the fundamental tool to determine whether one clause is a specialisation or a generalisation of another. It is further used to test whether a clause covers an example, and to determine whether a given pair of clauses is redundant. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
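The definition translates directly into a small brute-force decision procedure: search for a substitution of the first clause's variables that maps every one of its literals onto some literal of the second clause. The sketch below is only an illustration of the definition (function-free clauses, variables written as uppercase strings), not an optimized implementation:

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def subsumes(c1, c2, theta=None):
    """True if some substitution theta maps clause c1 into a subset of c2.

    A clause is a list of literals; a literal is (sign, predicate, args),
    e.g. (True, 'p', ('X', 'a')). Terms are function-free: constants are
    lowercase strings, variables uppercase strings."""
    theta = theta or {}
    if not c1:
        return True
    (sign, pred, args), rest = c1[0], c1[1:]
    for sign2, pred2, args2 in c2:                 # candidate targets for c1[0]
        if (sign2, pred2, len(args2)) != (sign, pred, len(args)):
            continue
        new, ok = dict(theta), True
        for a, b in zip(args, args2):
            if is_var(a):
                if new.setdefault(a, b) != b:      # bind, or respect old binding
                    ok = False
                    break
            elif a != b:                           # constants must match exactly
                ok = False
                break
        if ok and subsumes(rest, c2, new):         # backtrack over the rest
            return True
    return False

# p(X, Y) theta-subsumes p(a, b) v q(a), with theta = {X: a, Y: b}
print(subsumes([(True, 'p', ('X', 'Y'))],
               [(True, 'p', ('a', 'b')), (True, 'q', ('a',))]))    # True
# p(X) v q(X) does not subsume p(a) v q(b): no single substitution works
print(subsumes([(True, 'p', ('X',)), (True, 'q', ('X',))],
               [(True, 'p', ('a',)), (True, 'q', ('b',))]))        # False
```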
[ { "math_id": 0, "text": "c_1" }, { "math_id": 1, "text": "c_2" }, { "math_id": 2, "text": "\\theta" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "c_2\n" } ]
https://en.wikipedia.org/wiki?curid=75411912
75415145
Pnictogen-substituted tetrahedranes
Pnictogen-substituted tetrahedranes are pnictogen-containing analogues of tetrahedranes with the formula RxCxPn4-x (Pn = N, P, As, Sb, Bi). Computational work has indicated that the incorporation of pnictogens into the tetrahedral core alleviates the ring strain of tetrahedrane. Although theoretical work on pnictogen-substituted tetrahedranes has existed for decades, only the phosphorus-containing species have been synthesized. These species exhibit novel reactivities, most often through ring-opening and polymerization pathways. Phosphatetrahedranes are of interest as new retrons for organophosphorus chemistry. Their strain also makes them of interest in the development of energy-dense compounds. History. The first synthetic tetrahedral molecule, tetra-"tert"-butyltetrahedrane ("t"Bu4C4), was reported in 1978 by Maier and coworkers following the synthesis of other Platonic solid species, like cubane and dodecahedrane. The "tert"-butyl substituents were used to encumber the tetrahedral core and quell the radical-mediated ring-opening of an otherwise kinetically stable but thermodynamically strained molecule via the corset effect. As of 2023, the unencumbered tetrahedrane (H4C4) has yet to be synthesized. The substitution of carbons in the tetrahedral core of tetrahedrane with pnictogens to stabilize the highly strained molecule has been suggested due to the known existence of elemental tetrahedral pnictogens. Notably, white phosphorus, the most stable allotrope of phosphorus, is tetrahedral with the molecular formula P4. Arsenic can also exist as a metastable tetrahedral allotrope, As4, known as yellow arsenic. Furthermore, mixed tetrahedral pnictogen molecules have been synthesized, such as AsP3 and, more recently, (PbBi3)-. Elements on the extreme ends of the pnictogen family have not yet been observed in a tetrahedral Pn4 configuration, however. Nitrogen's orbitals lack diffusivity, and bismuth's orbitals undergo minimal hybridization due to relativistic contraction. Decades before their first synthesis, computational studies into mixed pnictogen-tetrel tetrahedranes had already suggested that pnictogen-substituted tetrahedranes are more stable than their all-tetrel counterparts. In 1990, calculations on azatetrahedranes suggested a positive correlation between the number of nitrogens in the tetrahedral core and thermodynamic stability. In the same vein, calculations done in 2010 on pnictacubanes suggested a positive correlation between the number of phosphoruses in the cuboidal core and the thermodynamic stability. In 2019, Wolf and coworkers synthesized the first pnictogen-substituted tetrahedrane: di-"tert"-butyldiphosphatetrahedrane ("t"Bu2C2P2), produced from the reaction of a nickel catalyst with phosphaalkynes. Shortly thereafter, in 2020, Cummins and coworkers announced that they had synthesized tri-"tert"-butylmonophosphatetrahedrane ("t"Bu3C3P) from a phosphorus-containing anthracene derivative. In 2021, Cummins and coworkers published the synthesis of triphosphatetrahedrane (HCP3), completing the set of tetrahedral molecules with carbon- and phosphorus-containing cores. Phosphatetrahedrane Synthesis. Despite the presentation of phosphatetrahedranes as a series of incrementally changing tetrahedrane derivatives, their syntheses are vastly different. Tri-"tert"-butylmonophosphatetrahedrane. The original synthesis of "t"Bu3C3P reported by Cummins and coworkers in 2020 begins with the phosphorus derivative of anthracene.
The addition of sodium hexamethyldisilazide deprotonates the phosphorus; triphenylborane then bonds to the phosphorus. The anionic phosphorus-containing intermediate then forms an ionic interaction with the sodium cation in solution. The addition of the tri-"tert"-butyl cyclopropenium ion produces the thermally stable cyclopropenyl phosphine intermediate. Upon irradiation with 254 nm light in the presence of triflic acid and either tetra-"n"-butyl ammonium chloride or tetramethylammonium fluoride, the anthracene leaving group is forced out, leaving a halogenated phosphine. Upon addition of lithium tetramethylpiperidide with heating, "t"Bu3C3P and a lithium halide salt are generated. An improved version of the tri-"tert"-butylmonophosphatetrahedrane synthesis, where the anthracene is replaced by two trimethylsilyl groups, was reported by Cummins and coworkers a year later. To replace one of the trimethylsilyl groups with a chloride, hexachloroethane is added, generating trimethylsilyl chloride and tetrachloroethylene as byproducts. Tetramethylammonium fluoride is then added to remove the remaining trimethylsilyl group and the chloride, generating the "t"Bu3C3P. This route has a "t"Bu3C3P yield of 33% as compared to the original route's 19%. Di-"tert"-butyldiphosphatetrahedrane. In 2019, Wolf and coworkers reported the synthesis of "t"Bu2C2P2 through the use of a metal catalyst. Ni(IPr)(CO)3, upon addition of 1 equivalent of "tert"-butylphosphaacetylene ("t"BuCP), loses two carbon monoxide ligands. The addition of a second equivalent of "t"BuCP generates the 1,3-diphosphacyclobutadiene ligand, now binding with η4 hapticity. Density functional theory calculations into the catalytic cycle suggest that the 1,3-diphosphacyclobutadiene isomerizes into the desired tetrahedrane. Upon addition of a final "t"BuCP, "t"Bu2C2P2 is released and the catalytic cycle can begin again. Triphosphatetrahedrane. Cummins and coworkers reported the synthesis of HCP3 in 2021. Due to the similarity of HCP3 to AsP3, the [NbII(ODipp)3(P3)]- previously shown to be a retron for AsP3 was used for the synthesis of HCP3. To add a -CH group to [P3]3-, bromodichloromethane undergoes halogen abstraction, leaving a carbon-centered radical. The niobium complex then undergoes P3 transfer to yield HCP3. The use of bromodichloromethyl trimethylsilane instead of bromodichloromethane in this process yields trimethylsilyl triphosphatetrahedrane ((Me3Si)CP3). Reactivity. Tri-"tert"-butylmonophosphatetrahedrane. Lewis Acid-Induced Reactions. Addition of W(CO)5(THF) to "t"Bu3C3P generates a phosphorus-containing housene analogue. The addition of 0.2 equivalents of triphenylborane in benzene can produce several cycloadducts. In the absence of exogenous reagents, "t"Bu3C3P dimerizes into a ladderane-like compound with a P-P bond. In the presence of excess styrene or an atmosphere of ethylene, [4 + 2] cycloadditions occur to give 1-phosphabicyclo[2.2.0]hexenes. Silylene Reaction. The cage opening of "t"Bu3C3P can be induced by PhC(N"t"Bu)2SiN(SiMe3)2 over the course of 24 hours to generate the dark red phosphasilene PhC(N"t"Bu)2Si=P("t"Bu3C3). Ylide Reaction. Reaction of "t"Bu3C3P with the ylide Ph3P=CH2 over 48 hours and with heat induces cage opening in the same manner as the silylene reaction to generate H2C=P("t"BuC)3. Reaction of this product with "t"Bu3C3P generates the symmetric product ("t"BuC)3P(C)P("t"BuC)3. Formation of Phosphirane. "t"Bu3C3P is a retron for phosphirane synthesis.
Upon reaction with Ni(COD)2 (COD = cycloocta-1,5-diene) catalyst in triisopropylphosphine, cage opening occurs. Like the silylene and ylide reactions, the phosphorus bridges the ("t"BuC)3 and the alkene components. The phosphate undergoes cycloaddition with the double bond to form the phosphirane moiety. This reaction pathway has been demonstrated for styrene, ethylene, and neohexene. Furthermore, this reaction pathway is also capable of synthesizing vinyl-substituted phosphirane as evidenced by "t"Bu3C3P and cyclohexa-1,3-diene. Ligand Substitution. "t"Bu3C3P can be used to replace the ethylene ligand of (Ph3P)Pt(C2H4) in melting THF. Di-"tert"-butyldiphosphatetrahedrane. Dimerization Reactions. Above the melting point of "t"Bu2C2P2 (–32 °C), "t"Bu2C2P2 dimerizes into another ladderane-like structure but it is prone to decomposition. This reaction can be hampered by keeping "t"Bu2C2P2 under its melting point and/or by keeping the "t"Bu2C2P2 concentration low. "t"Bu2C2P2 can also be dimerized using nickel complexes to form a variety of exotic structures. "t"Bu2C2P2 reacted with 1 equivalent of Ni(CpR)(IPr) (IPr = 1,3-bis(2,6-diisopropylphenyl)imidazolin-2-ylidene, R = H, CH3, 4-(CH3CH2)-C6H4) generates 0.5 equivalent of a tetracyclo-compound. Upon addition of another equivalent of the same nickel complex, a butterfly-like geometry is adopted, with two nickel atoms coordinated to opposite phosphorus atoms and two coordinated to adjacent phosphorus atoms on different four membered rings. This butterfly-structured compound is a dark red color. The reaction to the butterfly structure is believed to depend on kinetic access to the middle P-P bond. Bulky substituents on CpR kinetically hinder the P-P bond cleavage and transformation into the butterfly-structured product. "t"Bu2C2P2 can also be reacted with Ni(IMes)2 (IMes = 1,3-bis(2,4,6-trimethylphenyl)imidazolin-2-ylidene) in toluene to produce 0.5 equivalents of an asymmetric compound with two hexacoordinate Ni atoms, a Ni-Ni bond, and two weak P-P interactions. This product is an intermediate for further chemistry. Heating the product at 60 °C for 3 hours causes the expulsion of a di-"tert"-butylacetylene and the reformation of P-P bonds. Another reaction pathway involves the addition of 3 equivalents of CO to the product, leading to the production of Ni(IMes)(CO)3 and the ladderane-like compound described at the beginning of this section. A third reaction pathway involves the addition of hexachloroethane. This produces a 1,2-diphosphocyclobutadiene ring ("vide supra") that is coordinated to both nickel atoms. This third reaction pathway also produces the ladderane analogue. Ligand Substitution. In solution with coordination complexes, "t"Bu2C2P2 can cause ligand substitution. Most of these reactions cause cage-opening. The reaction of "t"Bu2C2P2 with [K([18]crown-6)][Fe(anthracene)2] in toluene and THF causes the expulsion of one anthracene and the cage opening of "t"Bu2C2P2 to form the replacement 1,2-diphosphacyclobutadiene ligand. This cage opening is due to P-C bond cleavage. Mono-ligand substitution is also observed in the reaction of "t"Bu2C2P2 with [(DippBIAN)M(COD)] (Dipp = 2,6-diisopropylphenyl, BIAN = bis(arylimino)acenaphthene, M = Fe, Co). The cobalt product, upon reaction with Cy2PCl (Cy = cyclohexyl), forms a 1,2,3-triphospholium ligand. "t"Bu2C2P2 can also displace the toluene in Ni(toluene)(IPr). Toluene is replaced by the aforementioned ladderane analogue in a η2-fashion. 
Double-ligand substitution is seen in the reaction of "t"Bu2C2P2 with [Co(COD)2][K(THF)0.2]. The major product has its ligands doubly substituted by 1,2-diphosphacyclobutadiene. A minor product with double substitution by the 1,3-diphosphacyclobutadiene ligand is also observed. However, the cobalt complex with one 1,2-diphosphacyclobutadiene ligand and one 1,3-diphosphacyclobutadiene ligand is not observed; this is likely due to steric clash between the "tert"-butyl substituents. The preference for 1,2-diphosphacyclobutadiene makes "t"Bu2C2P2 a potentially valuable retron as phosphalkynes are known to produce 1,3-diphosphacyclobutadiene. Only two of "t"Bu2C2P2's ligand substitution reactions are known to preserve the tetrahedral cage. Reacting (pftb)[Ag(CH2Cl2)2] (pftb = Al[PFTB]- = Al[OC(CF3)3]4-) with "t"Bu2C2P2 in lightless conditions leads to the generation of a disilver complex wherein each of the two "t"Bu2C2P2 ligates to one silver atom and each of the two ladderane analogues ("vide infra") ligates to both silver atoms. By reacting Ni(CO)4 in THF at –80 °C and in the absence of light, three "t"Bu2C2P2 molecules coordinate to Ni in an η2-fashion. Calculations by Intrinsic Bond Orbital (IBO) theory suggest that the coordination occurs through a 3-center-2-electron bond. Reactions with "N"-Heterocyclic Carbenes. "t"Bu2C2P2 can be used as a retron to form phosphirenes or phosphaalkenes with the addition of 1 equivalent or 2 equivalents of "N"-heterocyclic carbenes (NHC), respectively. Upon the addition of 1 equivalent of IPr, IMes, or MesDAC (1,3-bis(2,4,6-trimethylphenyl)diamidocarbene), "t"Bu2C2P2 undergoes ring opening at one phosphorus atom's P-C bonds, creating structures with a bridging P-P bond between the NHC and the phosphirene. IBO calculations and crystallographic evidence support the assignment of a double bond to the P=C linkage in the resultant molecule. This reaction is very slow, taking several weeks to reach completion. Upon the addition of 2 equivalents of TMC (2,3,4,5-tetramethylimidazolin-2-ylidene) in benzene, both phosphorus atoms bond to TMC. The P-P bond is broken. A double bond also forms between the two carbons of "t"Bu2C2P2, generating a phosphaalkene. This reaction happens significantly faster, with a reported speed of 1 hour. Selectivity between the two reactions is suggested to be achieved by changing the steric bulk of the NHC used. A bulky NHC should prefer generating a phosphirene, whereas a smaller NHC should prefer generating a phosphaalkene. Triphosphatetrahedrane. Reaction with (dppe)Fe(Cp*)Cl. HCP3, upon addition of [(dppe)Fe(Cp*)]Cl (dppe = 1,2-bis-(diphenylphosphino)ethane) in sodium tetraphenylborate and THF, undergoes salt metathesis and produces [(dppe)Fe(Cp*)(HCP3)][BPh4]. This product is crystallizable, producing purple crystals. This product is prone to decomposition back to HCP3. Theoretical Work. Azatetrahedranes. Bonding Parameters. Ring and cage strain results in poor angular overlap of orbitals, leading to non-linear bonding. Due to the interest in tetrahedrane and its azatetrahedral analogues as highly strained molecules, Politzer and Seminario introduced the "bond deviation index" to determine the deviation between the bond path — defined as the path following maxima between nuclei — and the linear bond between the nuclei. formula_0 The strain experienced by H4C4 is calculated to be partially alleviated upon the substitution of carbon atoms by nitrogen atoms.
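The bond deviation index formula_0 averages the squared deviations r_i of N points sampled along the bond path from the straight internuclear line of length R. The sketch below illustrates the arithmetic only: the points are made up, and r_i is interpreted as the perpendicular distance of each sampled point from the internuclear axis, with R the internuclear distance (both interpretations are assumptions of this illustration, not data from the cited work):

```python
import numpy as np

def bond_deviation_index(path_points, nucleus_a, nucleus_b):
    # lambda = (1/R) * sqrt( (1/N) * sum_i r_i**2 )
    a = np.asarray(nucleus_a, dtype=float)
    b = np.asarray(nucleus_b, dtype=float)
    d = b - a
    R = np.linalg.norm(d)                    # internuclear distance
    u = d / R
    p = np.asarray(path_points, dtype=float) - a
    # r_i: perpendicular distance of each sampled bond-path point from line A-B
    r = np.linalg.norm(p - np.outer(p @ u, u), axis=1)
    return np.sqrt(np.mean(r ** 2)) / R

# Made-up, slightly bowed "bond path" between two nuclei 1.5 units apart
t = np.linspace(0.0, 1.0, 11)
points = np.c_[1.5 * t, 0.1 * np.sin(np.pi * t), np.zeros_like(t)]
print(bond_deviation_index(points, (0, 0, 0), (1.5, 0, 0)))
```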
The bond deviation index of azatetrahedranes decreases from 0.114 to 0.087 as the number of nitrogen atoms in the core goes from 0 to 2. The engendered stability is countered by the propensity of the N-N bond in the strained system to escape as dinitrogen. This is evidenced by the calculated bond length: 1.59 Å, which is much higher than that of aromatic N-N bonds: 1.21-1.36 Å. On the basis of the higher electronegativity of nitrogen relative to carbon, azatetrahedranes have less negative electrostatic potentials at their C-C bonds than H4C4, leading to greater stability against electrophilic attacks. C-C and N-C bonds both contract in azatetrahedranes. This contraction leads to smaller bond deviations and stronger bonding, although the latter is dependent on the distance from the nitrogen atom(s). Azatetrahedranes also seem to distort from an ideal tetrahedral symmetry. These trends directly correlate with the number of nitrogen atoms. Nitration also contracts the tetrahedral core. However, essentially thermodynamically neutral nitro group rotation leads to small amounts of C-N lengthening, weakening the interaction and localizing the electrons. Ionization of all azatetrahedrane cores in the series led to cage-opening at the G4MP2 and G4 levels of theory. Thermodynamics. Early work on azatetrahedranes has also utilized isodesmic reactions — formal, non-physical reactions in which the compounds change but the bond types do not — to understand molecular stability. The isodesmic reaction energy is thus a metric of the stabilization/destabilization relative to the starting reagents. Overall, as the number of nitrogens increases, the stability of the system increases. The isodesmic reaction energy goes from 151.8 kcal/mol to 81.8 kcal/mol for 0 to 4 nitrogen atoms. Nitration destabilizes the system dramatically, as exemplified by 2,4-dinitro-1,3-diazatetrahedrane ((O2NC)2N2) having an isodesmic reaction energy of 152.9 kcal/mol. Due to the instability of many azatetrahedranes, isodesmic comparisons to azacyclobutadiene analogues have been used to determine which core structures are the most synthetically feasible. Alkorta, Elguero, and Rozas reported that every member of the azatetrahedrane core series is always slightly more unstable than its azacyclobutadiene analogue(s). Jursic's calculations suggest that the energetic differential between the azatetrahedrane and the azacyclobutadiene starts off large and decreases as the number of nitrogen atoms in the cage increases. Furthermore, Jursic's calculations suggest that tetraazatetrahedrane may be slightly more stable (difference of 4.4 kcal/mol at 0 K and the CBSQ level of theory) than its azacyclobutadiene analogue. Despite the instability, non-sterically hindered azatetrahedranes may still be detectable in gaseous matrices.
Overall, the extent of charge transfer increases with the number of phosphorus atoms in the tetrahedral core. Of the phosphatetrahedranes, only the triphosphatetrahedrane core did not show evidence of cage-opening upon ionization. Thermodynamics. Ivanov, Bozhenko, and Boldyrev studied the energetic landscape of the phosphatetrahedrane series ((HC)xP4-x). Their calculations suggest that substitution of phosphorus for carbon increasingly favors the tetrahedral structure. The tetrahedral structure is the absolute minimum starting with triphosphatetrahedrane, but diphosphatetrahedrane is only 2.3 kcal/mol higher in energy than the absolute minimum. They attribute the stabilization of the tetrahedral structure to phosphorus' amenability to more acute bond angles. The more diffuse orbitals of phosphorus versus carbon also favor the tetrahedral structure's σ-interactions over the planar phosphacyclobutadiene's π-interactions. Riu, Ye, and Cummins report similar computational findings. Their calculations show decreasing strain energy with the number of phosphorus atoms in the tetrahedral cage. They also attribute the stabilization to the diffusivity of phosphorus orbitals. They also note that the accumulation of p-character on the bond orbitals leads to greater s-character on the lone pairs. The isolability of "t"Bu3C3P was attributed to the controversial hydrogen-hydrogen bonds (HHB), which some chemists have argued may not exist. Each HHB of the "tert"-butyl network was calculated (in the absence of steric repulsion) to contribute 0.7 kcal/mol of stabilization. Calculations in which one of the "tert"-butyl substituents is replaced with a methyl, ethyl, or isopropyl group result in net repulsion due to the loss of HHBs. In total, this forms the basis of the corset effect. Non-Lewis donation of electron density from the tetrahedral core to the "tert"-butyl substituents also stabilizes "t"Bu3C3P according to natural bond orbital theory. This effect was also demonstrated "in silico" for unsubstituted monophosphatetrahedrane. Substitution by Heavier Congeners. Schaefer and coworkers, in light of the synthesis of "t"Bu3C3P, ran calculations on the mono-pnictogen-substituted tetrahedrane series, represented by R3C3Pn (R = H, "t"Bu, Pn = N, P, As, Sb, Bi). These calculations yielded a series of well-correlated trends. Consistent with a perturbation of the pnictogen residing above a C-C-C ring, the C-Pn bonds elongate from 1.493 Å to 2.289 Å, and the C-Pn-C angle decreases from 58.0° to 37.1° as heavier congeners are used. This is due to the larger atomic radius of the heavier pnictogens. The H-[C-C-C plane] angle increases from 9.1° to 31.1°, which is also attributed to the diffusivity of the heavier congener's orbitals. As noted above with the aza- and phosphatetrahedranes, the change in pnictogen electronegativity changes the interaction between the Pn atom and the C-C-C ring. The C-C-C ring becomes increasingly more negatively charged with the heavier pnictogens. Isodesmic reactions show greater stabilization of the cage structure due to the diffusivity of the pnictogen's orbitals, although even with bismuth, the mono-pnictogen-substituted tetrahedrane is unstable. Delocalization plays a large part in the stabilization of the heavier analogues. For example, electron density is increasingly transferred from the Pn-C bonds into the Pn lone pair in the heavier congeners. These lone pairs are also noted to follow Bent's rule. As noted above with "t"Bu3C3P, non-Lewis interactions stabilize the tetrahedral core.
These effects also become more pronounced with the heavier pnictogens. Second-order perturbations suggest that the key non-Lewis interactions are C-C to C-R* and C-Pn to C-H* ("i.e.," cage-opening), as well as interactions to Pn-C*. The former set of interactions stabilizes the tetrahedral core most when the substituent is an electron-withdrawing group ("e.g.," fluoride), although decreased electron density in C-C and C-Pn can facilitate cage-opening as well. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\lambda=\\frac{1}{R}\\bigg[\\frac{1}{N}\\sum_i^N{r_i^2}\\bigg]^{\\frac{1}{2}}" } ]
https://en.wikipedia.org/wiki?curid=75415145
7541904
Caroline era
Period in English and Scottish history The Caroline era is the period in English and Scottish history named for the 24-year reign of Charles I (1625–1649). The term is derived from "Carolus", Latin for Charles. The Caroline era followed the Jacobean era, the reign of Charles's father James I &amp; VI (1603–1625), overlapped with the English Civil War (1642–1651), and was followed by the English Interregnum until The Restoration in 1660. It should not be confused with the Carolean era, which refers to the reign of Charles I's son King Charles II. The Caroline era was dominated by growing religious, political, and social discord between the King and his supporters, termed the Royalist party, and the Parliamentarian opposition that evolved in response to particular aspects of Charles's rule. While the Thirty Years' War was raging in continental Europe, Britain had an uneasy peace, growing more restless as the civil conflict between the King and the supporters of Parliament worsened. Despite the friction between King and Parliament dominating society, there were developments in the arts and sciences. The period also saw the colonisation of North America with the foundation of new colonies between 1629 and 1636 in Carolina, Maryland, Connecticut and Rhode Island. Development of colonies in Virginia, Massachusetts, and Newfoundland also continued. In Massachusetts, the Pequot War of 1637 was the first major armed conflict between the people of New England and the Pequot Tribe. Arts. The highest standards of the arts and architecture all flourished under the patronage of the King, although drama slipped from the previous Shakespearean age. All the arts were greatly impacted by the enormous political and religious controversies, and the degree to which they were themselves influential is a matter of ongoing debate among scholars. Patrick Collinson argues that an emerging Puritan community was highly suspicious of the fine arts. Edward Chaney argues that Catholic patrons and professionals were quite numerous and greatly influenced the direction of the arts. Poetry. The Caroline period saw the flourishing of the cavalier poets (including Thomas Carew, Richard Lovelace, and John Suckling) and the metaphysical poets (including George Herbert, Henry Vaughan, Katherine Philips), movements that produced figures like John Donne, Robert Herrick and John Milton. Cavalier poetry differs from traditional poetry in subject matter. Instead of tackling issues such as religion, philosophy and the arts, cavalier poetry aims to express the joys and celebrations in a much livelier way than did its predecessors. The intent was often to promote the crown, and they often spoke outwardly against the Roundheads. Most cavalier works had allegorical or classical references, drawing on knowledge of Horace, Cicero, and Ovid. By using these resources they were able to produce poetry that impressed King Charles I. The cavalier poets strove to create poetry where both pleasure and virtue thrived. They were rich in reference to the ancients, and most poems "celebrate beauty, love, nature, sensuality, drinking, good fellowship, honor, and social life". Cavalier poets wrote in a way that promoted seizing the day and the opportunities presented to them and their kinsmen. They wanted to revel in society and come to be the best that they possibly could within the bounds of that society. Living life to the fullest, for cavalier writers, often included gaining material wealth and having sex with women. 
These themes contributed to the triumphant and boisterous tone and attitude of the poetry. Platonic love was also another characteristic of cavalier poetry, where the man would show his divine love for a woman, and where she would be worshipped as a creature of perfection. George Wither (1588–1667) was a prolific poet, pamphleteer, satirist and writer of hymns. He is best known for "Britain's Remembrancer" of 1625, with its wide range of contemporary topics including the plague and politics. It reflects on nature of poetry and prophecy, explores the fault lines in politics, and rejects tyranny of the sort the king was denounced for fostering. It warns about the wickedness of the times and prophesizes that disasters are about to befall the kingdom. Theatre. Caroline theatre unquestionably saw a falling-off after the peak achievements of William Shakespeare and Ben Jonson, though some of their successors, especially Philip Massinger, James Shirley, and John Ford, carried on to create interesting, even compelling theatre. In recent years the comedies of Richard Brome have gained in critical recognition. The peculiar artistic form of the court masque was still being written and performed. A masque involved music and dancing, singing and acting, within an elaborate stage design, in which the architectural framing and costumes might be designed by a renowned architect, often Inigo Jones, to present a deferential allegory flattering to the patron. Professional actors and musicians were hired for the speaking and singing parts. Often those acting, who did not speak or sing, were courtiers. In a strong contrast to Jacobean and Elizabethan theatre, seen by a very wide public, these were private performances in houses or palaces for a small court audience. The lavish expenditures on these showpiece masques – the production of a single masque could approach £15,000 – was one of a growing number of grievances that critics in general, and the Parliamentarians in particular, held against the King and his court. The conventional theatre in London also continued the Jacobean trend of moving to smaller, more intimate, but also more expensive venues, performing in front of a much narrower social range. The only new London theatre in the reign seems to have been the Salisbury Court Theatre, open from 1629 until the closing of the theatres in 1642. Sir Henry Herbert as (in theory) deputy Master of the Revels, was a dominant figure, in the 1630s often causing trouble for the two leading companies, the King's Men, whose patronage Charles had inherited from his father, and Queen Henrietta's Men, formed in 1625, partly from earlier companies under the patronage of Charles' mother and sister. The theatres were closed for a long time because of plague in 1638–39, although after the Long Parliament officially closed them for good in 1642, private performances continued, and at some periods public ones. In other forms of literature, and especially in drama, the Caroline period was a diminished continuation of the trends of the previous two reigns. In the specialized domain of literary criticism and theory, Henry Reynolds' "Mythomystes" was published in 1632, in which the author attempts a systematic application of Neoplatonism to poetry. The result has been characterized as "a tropical forest of strange fancies" and "perversities of taste." Painting. 
Charles I can be compared to King Henry VIII and King George III as a highly influential royal collector; he was by far the keenest collector of art of all the Stuart kings. He saw painting as a way of promoting his elevated view of the monarchy. His collection reflected his aesthetic tastes, which contrasted with the systematic acquisition of a wide range of objects that was typical of contemporary German and Habsburg princes. By his death, he had amassed about 1,760 paintings, including works by Titian, Raphael and Correggio among others. Charles commissioned the ceiling of the Banqueting House, Whitehall from Rubens and paintings by artists from the Low Countries such as Gerard van Honthorst and Daniel Mytens. In 1628, he bought the collection that the Duke of Mantua was forced to sell. In 1623, while still Prince of Wales, Charles had visited Spain, where he sat for a portrait by Diego Velázquez, although the picture is now lost. As king he worked to entice leading foreign painters to London for longer or shorter spells. In 1626, he was able to persuade Orazio Gentileschi to settle in England, later to be joined by his daughter Artemisia and some of his sons. Rubens was a particular target: eventually in 1630 he came on a diplomatic mission that included painting, and he later sent Charles more paintings from Antwerp. Rubens was very well treated during his nine-month visit, during which he was knighted. Charles's court portraitist was Daniel Mytens. Van Dyck. Anthony van Dyck (appointed "painter to the king," 1633–1641) was a dominant influence. Often in Antwerp, but closely in touch with the English court, he assisted King Charles's agents in their search for pictures. Van Dyck also sent back some of his own works and had painted Charles's sister, Queen Elizabeth of Bohemia, at The Hague in 1632. Van Dyck was knighted and given a pension of £200 a year, in a grant in which he was described as "principalle Paynter in ordinary to their majesties". He was provided with a house on the River Thames at Blackfriars, and a suite of rooms in Eltham Palace. His Blackfriars studio was frequently visited by the King and Queen, who hardly sat for another painter while van Dyck lived. Van Dyck undertook a large series of portraits of the King and Queen Henrietta Maria, as well as their children and some courtiers. Many were completed in several versions and used as diplomatic gifts or given to supporters of the increasingly embattled king. Van Dyck's subjects appear relaxed and elegant but with an overarching air of authority, a tone that dominated English portrait painting until the end of the 18th century. Many of the portraits have lush landscape backgrounds. His portraits of Charles on horseback updated the grandeur of Titian's Emperor Charles V, but even more effective and original is his portrait in the Louvre of Charles dismounted: "Charles is given a totally natural look of instinctive sovereignty, in a deliberately informal setting where he strolls so negligently that he seems at first glance nature's gentleman rather than England's King". Although he established the classic "Cavalier" style and dress, a majority of his most important patrons took the Parliamentarian side in the English Civil War that broke out soon after his death. Upon his death in 1641, van Dyck's position as portraitist to the royal family was filled, practically if not formally, by William Dobson (c. 1610–1646), who is known to have had access to the Royal Collection and copied works by Titian and van Dyck. 
Dobson was thus the most prominent native-born English artist of the era. Architecture. The Classical architecture popular in Italy and France was introduced to Britain during the Caroline era; until then Renaissance architecture had largely passed Britain by. The style arrived in the form of Palladianism; its most influential pioneer was the Englishman Inigo Jones. Jones travelled throughout Italy with the 'Collector' Earl of Arundel, annotating his copy of Palladio's treatise, in 1613–1614. The "Palladianism" of Jones and his contemporaries and later followers was a style largely of facades, and the mathematical formulae dictating layout were not strictly applied. A handful of great country houses in England built between 1640 and 1680, such as Wilton House, are in this Palladian style. These follow the success of Jones' Palladian designs for the Queen's House at Greenwich and the Banqueting House at Whitehall (part of the Palace of Whitehall, the residence of the English monarchy from 1530 to 1698), and the uncompleted royal palace in London of Charles I. Jones's St Paul's, Covent Garden (1631) was the first completely new English church since the Reformation, and an imposing transcription of the Tuscan order as described by Vitruvius – in effect Early Roman or Etruscan architecture. Possibly "nowhere in Europe had this literal primitivism been attempted", according to Sir John Summerson. Jones was a figure of the court, and most commissions for large houses during the reign were built in a style for which Summerson's name "Artisan Mannerism" has been widely accepted. This was a development of Jacobean architecture led by a group of mostly London-based craftsmen still active in their guilds (called livery companies in London). Often the names of the architects or designers are uncertain, and often the main building contractor played a large part in the design. The most prominent of these, and also the leading native sculptor of the period, was the stonemason Nicholas Stone, who also worked with Inigo Jones. John Jackson (d. 1663) was based in Oxford, and made additions to various colleges there. The owner of Swakeleys House (1638), now on the edge of London, was a merchant who became Lord Mayor of London in 1640, and the house shows "what a gulf there was between the taste of the Court and that of the City." It prominently features the fancy quasi-classical gable ends that were a mark of the style. Other houses from the 1630s in the style are the "Dutch House", as it was known, now Kew Palace, Broome Park in Kent, Barnham Court in West Sussex, West Horsley Place and Slyfield Manor, the last two near Guildford. These are mainly in brick, apart from stone or wood mullions. The interiors often show a riot of decoration, as carpenters and stuccoists were given their head. Raynham Hall in Norfolk (1630s), where the origins of the design have been much discussed, also features large and proud gable ends, but in a far more restrained fashion that reflects Italian influence, by whatever route it came. Following the execution of Charles I, the Palladian designs advocated by Inigo Jones were too closely associated with the court of the late king to survive the turmoil of the Civil War. Following the Stuart restoration, Jones's Palladianism was eclipsed by the Baroque designs of such architects as William Talman and Sir John Vanbrugh, Nicholas Hawksmoor, and even Jones' pupil John Webb. Science. Medicine. 
Medicine saw a major step forward with the 1628 publication by William Harvey of his study of the circulatory system, "Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus" ("An Anatomical Exercise on the Motion of the Heart and Blood in Living Beings"). Its reception was highly critical and hostile, but within a generation his work began to receive the valuation it deserved. Countering medical progress, the occultist Robert Fludd continued his series of enormous and convoluted volumes of esoteric lore, begun during the previous reign. In 1626 appeared his "Philosophia Sacra" (which constituted Portion IV of Section I of Tractate II of Volume II of Fludd's "History of the Macrocosm and Microcosm"), which was followed in 1629 and 1631 by the two-part medical text "Medicina Catholica". Fludd's last major work would be the posthumously published "Philosophia Moysaica". Philosophy. The revolution in thinking that connects Sir Francis Bacon (1561–1626) with the foundation of the Royal Society (1660) was ongoing throughout the Caroline period; Bacon's "New Atlantis" was first printed in 1627, and contributed to the evolving new paradigm among receptive individuals. The men who would begin the Royal Society were for the most part still schoolboys and students in this period—though John Wilkins was already publishing early works of Copernican astronomy and science advocacy, "The Discovery of a World in the Moon" (1638) and "A Discourse Concerning a New Planet" (1640). Lacking formal scientific institutions and organisations, Caroline scientists, proto-scientists, and "natural philosophers" had to cluster in informal groups, often under the social and financial patronage of a sympathetic aristocrat. This again was an old phenomenon: a precedent in the prior reigns of Elizabeth and James can be identified in the circle that revolved around the "Wizard Earl" of Northumberland. Caroline scientists often clustered similarly. These ad hoc associations led to a decline in mystical philosophies popular at the time, such as alchemy and astrology, Neoplatonism and Kabbalah and sympathetic magic. Mathematics. In mathematics, two major works were published in a single year, 1631: Thomas Harriot's "Artis analyticae praxis", published ten years posthumously, and William Oughtred's "Clavis mathematicae". Both contributed to the evolution of modern mathematical language; the latter introduced the formula_0 sign for multiplication and the (::) sign for proportion. In philosophy, Thomas Hobbes (1588–1679) was already writing some of his works and evolving his key concepts, though they were not in print until after the end of the Caroline era. Religion. Regardless of religious doctrine or political belief, the vast majority in all three kingdoms believed a 'well-ordered' monarchy was divinely mandated. They disagreed on what 'well-ordered' meant, and who held ultimate authority in clerical affairs. Episcopalians generally supported a church governed by bishops, appointed by, and answerable to, the king; Puritans believed the king was answerable to the leaders of the church, appointed by their congregations. The Caroline period was one of intense debate over religious practice and liturgy. While the Church of Scotland, or kirk, was overwhelmingly Presbyterian, the position in England was more complex. 'Puritan' was a general term for anyone who wanted to reform, or 'purify', the Church of England, and contained many different sects. 
Presbyterians were the most prominent, particularly in Parliament, but there were many others, such as Congregationalists, often grouped together as Independents. Close links between religion and politics added further complexity; bishops sat in the House of Lords, where they often blocked Parliamentary legislation. Although Charles was firmly Protestant, even among those who supported Episcopalianism, many opposed the High Church rituals he sought to impose in England and Scotland. Often seen as essentially Catholic, these rituals caused widespread suspicion and mistrust. These fears were genuinely felt, and there were a number of reasons for them: first, the close links between 17th century religion and politics meant alterations in one were often viewed as implying alterations in the other. Second, in a period dominated by the Thirty Years' War, they reflected concerns that Charles was failing to support Protestant Europe when it was under threat from Catholic powers. Charles worked closely with Archbishop William Laud (1573–1645) on remodelling the church, including preparation of a new Book of Common Prayer. Historians Kevin Sharpe and Julian Davies suggest Charles was the prime instigator of religious change, with Laud ensuring the appointment of key supporters, such as Roger Maynwaring and Robert Sibthorpe. Scottish resistance to Caroline reforms culminated in the 1639 and 1640 Bishops' Wars, which expelled bishops from the kirk, and established a Covenanter government. Following the 1643 Solemn League and Covenant, the English and Scots set up the Westminster Assembly, intending to create a unified, Presbyterian church of England and Scotland. However, it soon became clear such a proposal would not be approved, even by the Puritan-dominated Long Parliament, and it was abandoned in 1647. Foreign policy. King James I (reigned 1603–1625) was sincerely devoted to peace, not just for his three kingdoms, but for Europe as a whole. Europe was deeply polarised, and on the verge of the massive Thirty Years' War (1618–1648), with the smaller established Protestant states facing the aggression of the larger Catholic empires. The Catholic rulers of Spain, as well as the Emperor Ferdinand II, the Vienna-based leader of the Habsburgs and head of the Holy Roman Empire, were heavily influenced by the Catholic Counter-Reformation. They had the goal of expelling Protestantism from their domains. Charles inherited a weak navy and the early years of the era saw numerous ships lost to Barbary pirates, nominally in the service of the Ottoman Empire, whose prisoners became slaves. This extended to coastal raids, such as the taking of 60 people in August 1625 from Mount's Bay, Cornwall, and it is estimated that by 1626, 4,500 Britons were held in captivity in North Africa. Ships continued to be seized even in British waters, and by the 1640s, Parliament was passing measures to raise money to ransom hostages from the Turks. The Duke of Buckingham (1592–1628), who increasingly was the actual ruler of Britain, wanted an alliance with Spain. Buckingham took Charles with him to Spain to woo the Infanta in 1623. However, Spain's terms were that James must abandon Britain's anti-Catholic intolerance, or there would be no marriage. Buckingham and Charles were humiliated and Buckingham became the leader of the widespread British demand for a war against Spain. Meanwhile, the Protestant princes looked to Britain, since it was the strongest of all the Protestant countries, to give military support for their cause. 
James's son-in-law and daughter became king and queen of Bohemia, which outraged Vienna. The Thirty Years' War began in 1618, as the Habsburg Emperor ousted the new king and queen of Bohemia, and massacred their followers. Catholic Bavaria then invaded the Palatinate, and James's son-in-law begged for James's military intervention. James finally realised his policies had backfired and refused these pleas. He successfully kept Britain out of the Europe-wide war that proved so heavily devastating for three decades. James's backup plan was to marry his son Charles to a French Catholic princess, who would bring a handsome dowry. Parliament and the British people were strongly opposed to any Catholic marriage, were demanding immediate war with Spain, and strongly favoured the Protestant cause in Europe. James had alienated both elite and popular opinion in Britain, and Parliament was cutting back its financing. Historians credit James for pulling back from a major war at the last minute, and keeping Britain at peace. Charles trusted Buckingham, who made himself rich in the process but proved a failure at foreign and military policy. Charles I gave him command of the military expedition against Spain in 1625. It was a total fiasco, with many dying from disease and starvation. He led another disastrous military campaign in 1627. Buckingham was hated and the damage to the king's reputation was irreparable. England rejoiced when he was assassinated in 1628 by John Felton. The eleven years 1629–1640, during which Charles ruled England without a Parliament, are referred to as the Personal Rule. There was no money for war, so peace was essential. Without the means in the foreseeable future to raise funds from Parliament for a European war, or the help of Buckingham, Charles made peace with France and Spain. Lack of funds for war and internal conflict between the king and Parliament led to a redirection of English involvement in European affairs – much to the dismay of Protestant forces on the continent. This involved a continued reliance on the Anglo-Dutch brigade as the main agency of English military participation against the Habsburgs, although regiments also fought for Sweden thereafter. The determination of James I and Charles I to avoid involvement in the continental conflict appears in retrospect as one of the most significant, and most positive, aspects of their reigns. There was a small naval Anglo-French War (1627–1629), in which England supported the French Huguenots against King Louis XIII of France. During 1600–1650 England made repeated efforts to colonise Guiana in South America. They all failed, and the lands (Surinam) were ceded to the Dutch in 1667. Colonial developments. Between 1620 and 1643, religious dissatisfaction, mostly from Puritans and those opposed to the King's purported Catholic leanings, led to large-scale voluntary emigration, which later came to be known as The Great Migration. Of the estimated 80,000 emigrants from England, approximately 20,000 settled in North America, where New England was most often the destination. The colonists to New England were mostly families with some education who were leading relatively prosperous lives in England. Carolina. In 1629, King Charles granted his attorney-general, Sir Robert Heath, the Cape Fear region of what is now the United States. It was incorporated as the Province of Carolana, named in honour of the King. Heath attempted, but failed, to populate the province; he lost interest and eventually sold it to Lord Maltravers. 
The first permanent settlers to Carolina arrived during the reign of Charles II, who issued a new charter. Maryland. In 1632, King Charles I granted a charter for Maryland, a proprietary colony of about twelve million acres (49,000 km2), to the Roman Catholic 2nd Baron Baltimore, who wanted to realise his father's ambition of founding a colony where Catholics could live in harmony alongside Protestants. Unlike the royal charter granted for Carolina to Robert Heath, the Maryland charter contained no stipulations regarding future settlers' religious beliefs. Therefore, it was assumed that Catholics would be able to live unmolested in the new colony. The new colony was named after the devoutly Catholic Henrietta Maria of France, Charles I's wife and Queen Consort. Whatever the King's reason for granting the colony to Baltimore, it suited his strategic policies to have a colony north of the Potomac in 1632. The colony of New Netherland, begun by England's great imperial rival, the Dutch United Provinces, claimed the Delaware River valley and was deliberately vague about its border with Virginia. Charles rejected all the Dutch claims on the Atlantic seaboard and wanted to maintain English claims by formally occupying the territory. Lord Baltimore sought both Catholic and Protestant settlers for Maryland, often enticing them with large grants of land and a promise of religious toleration. The new colony also used the headright system, which originated in Jamestown, whereby settlers were given a grant of land for each person they brought into the colony. However, of the approximately 200 initial settlers who travelled to Maryland on the ships "Ark" and "Dove," the majority were Protestant. The Roman Catholics, already a minority and led by the Jesuit Father Andrew White, worked together with Protestants, under the patronage of Leonard Calvert, the 2nd Lord Baltimore's brother, to create a new settlement, St. Mary's City. This became the first capital of Maryland. Today, the city is considered the birthplace of religious freedom in the United States, as the earliest North American colonial settlement established with the specific mandate of being a haven for both Catholic and Protestant Christian faiths. Roman Catholics were, though, encouraged to be reticent regarding their faith in order not to cause discord with their Protestant neighbours. Religious tolerance continued to be an aspiration, and in the province's legislative assembly the Maryland Toleration Act of 1649 was passed, enshrining religious freedom in law. Later in the century, the Protestant Revolution put an end to Maryland's religious toleration, as Catholicism was outlawed. Religious toleration would not be restored in Maryland until after the American Revolution. Connecticut. The Connecticut Colony was originally a number of small settlements at Windsor, Wethersfield, Saybrook, Hartford, and New Haven. The first English settlers arrived in 1633 and settled at Windsor. John Winthrop the Younger of Massachusetts received a commission to create Saybrook Colony at the mouth of the Connecticut River in 1635. The main body of settlers – Puritans from Massachusetts Bay Colony, led by Thomas Hooker – arrived in 1636 and established the Connecticut Colony at Hartford. The Quinnipiac Colony, later known as the New Haven Colony, was established by John Davenport, Theophilus Eaton, and others in March 1638. This colony had its own constitution, called "The Fundamental Agreement of the New Haven Colony", ratified in 1639. 
The Caroline era settlers held Calvinist religious beliefs and maintained a separation from the Church of England. Mostly they had immigrated to New England during the Great Migration. These individually independent settlements were unsanctioned by the Crown. Official recognition did not come until the Carolean era. Rhode Island (1636). What would become the Colony of Rhode Island and Providence Plantations (commonly shortened to merely Rhode Island) was founded during the Caroline era. Dissenters from the Puritan-dominated Massachusetts Bay Colony moved into the area in two separate waves during the 1630s. The first, led by Roger Williams in 1636, settled the Providence Plantations, today the city of Providence, Rhode Island, as well as neighbouring communities such as Cranston (then Pawtuxet). A year later, a different group, led by Anne Hutchinson, settled on the northern part of Aquidneck Island (then known as "Rhode" Island). This followed her trial and banishment during the Antinomian Controversy, a key politico-religious dispute in New England at the time. Another dissenter who was originally part of Williams's party, Samuel Gorton, later split from that group and founded his own settlement of Shawomet Purchase in 1642; today this is the community of Warwick. After some conflicts between Gorton's settlement and the already established and chartered Massachusetts Bay Colony, Gorton travelled back to England and received orders from Robert Rich, 2nd Earl of Warwick, requiring Massachusetts Bay to allow the settlements to manage their own affairs. While this fell short of a full charter, it did grant the Providence and Rhode Island settlements some degree of autonomy, until the Rhode Island Royal Charter of 1663 officially recognised the colony as fully independent of Massachusetts Bay. Barbados. After visits by Portuguese and Spanish explorers, Barbados was claimed on 14 May 1625 for James I (who had died six weeks earlier) by Captain John Powell. Two years later, a party of 80 settlers and 10 slaves, led by his brother, Captain Henry Powell, occupied the island. In 1639 the colonists established a local democratic assembly. Agriculture, reliant on indentured labour, was developed by the introduction of sugar cane, tobacco and cotton, beginning in the 1630s. End of the era. After Charles' abortive attempt to arrest five members of Parliament on 4 January 1642, the over-confident King declared war on Parliament and the Civil War began, with the King fighting the armies of both the English and Scottish parliaments. A key supporter of Charles was his nephew Prince Rupert (1619–82), third son of Elector Palatine Frederick V and Elizabeth, sister of Charles. He was the most brilliant and dashing of Charles I's generals and the dominant royalist during the Civil War. He was also active in the British navy, a founder-director of the Royal African Company and the Hudson's Bay Company, a scientist, and an artist. Following his defeat at the Battle of Naseby in June 1645, Charles surrendered to the Scottish parliamentary army, which eventually handed him over to the English Parliament. Held under house arrest at Hampton Court Palace, Charles steadfastly refused demands for a constitutional monarchy. In November 1647 he fled from Hampton Court, but was quickly recaptured and imprisoned by Parliament in the more secure Carisbrooke Castle on the Isle of Wight. 
At Carisbrooke, Charles, still intriguing and plotting futile escapes, managed to forge an alliance with Scotland by promising to establish Presbyterianism, and a Scottish invasion of England was planned. However, by the end of 1648 Oliver Cromwell's New Model Army had consolidated its control over England and the invading Scots were defeated at the Battle of Preston, where 2,000 of Charles' troops were killed and a further 9,000 captured. The King, now truly defeated, was charged with the crimes of tyranny and treason. The King was tried, convicted, and executed in January 1649. His execution took place outside a window of Inigo Jones' Banqueting House, with the ceiling Charles had commissioned from Rubens as the first phase of his new royal palace. The palace was never completed and the King's art collection was dispersed. In his lifetime Charles accumulated enemies who mocked his artistic interests as an extravagant expenditure of state funds, and whispered that he fell under the influence of Cardinal Francesco Barberini, the pope's nephew, who was also a distinguished collector. The high points of English culture became a major casualty of the Puritan victory in the Civil War. The victors closed the theatres and impeded poetic drama, but most significantly they ended royal and court patronage of artists and musicians. Following the King's execution, and with the exception of sacred music and, in the latter years of The Protectorate, opera, the arts did not flourish again until The Restoration and the beginning of the Carolean era in 1660 under Charles II. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\times" } ]
https://en.wikipedia.org/wiki?curid=7541904
75425433
Center-of-gravity method
The center-of-gravity method is a theoretical algorithm for convex optimization. It can be seen as a generalization of the bisection method from one-dimensional functions to multi-dimensional functions.Sec.8.2.2 It is theoretically important as it attains the optimal convergence rate. However, it has little practical value as each step is very computationally expensive. Input. Our goal is to solve a convex optimization problem of the form: minimize "f"("x") s.t. "x" in "G", where "f" is a convex function, and "G" is a convex subset of a Euclidean space "Rn". We assume that we have a "subgradient oracle": a routine that can compute a subgradient of "f" at any given point (if "f" is differentiable, then the only subgradient is the gradient formula_0; but we do not assume that "f" is differentiable). Method. The method is iterative. At each iteration "t", we keep a convex region "Gt", which surely contains the desired minimum. Initially we have "G"0 = "G". Then, each iteration "t" proceeds as follows: compute the center of gravity "xt" of "Gt"; query the subgradient oracle for a subgradient "gt" of "f" at "xt"; if "gt" = 0, then "xt" is a minimizer and the method stops; otherwise, let "Gt"+1 be the set of points "x" in "Gt" that satisfy the cutting-plane inequality "gt" · ("x" − "xt") ≤ 0. Note that, by the above inequality and the convexity of "f", every minimum point of "f" must be in "Gt"+1.Sec.8.2.2 Convergence. It can be proved that formula_1. Therefore, formula_2. In other words, the method has linear convergence of the residual objective value, with convergence rate formula_3. To get an ε-approximation to the objective value, the number of required steps is at most formula_4.Sec.8.2.2 Computational complexity. The main problem with the method is that, in each step, we have to compute the center-of-gravity of a polytope. All the methods known so far for this problem require a number of arithmetic operations that is exponential in the dimension "n".Sec.8.2.2 Therefore, the method is not useful in practice when there are 5 or more dimensions. See also. The ellipsoid method can be seen as a tractable approximation to the center-of-gravity method. Instead of maintaining the feasible polytope "Gt", it maintains an ellipsoid that contains it. Computing the center-of-gravity of an ellipsoid is much easier than of a general polytope, and hence each step of the ellipsoid method can usually be carried out in polynomial time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
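To make the iteration described above concrete, the following is a minimal Python sketch of the method, added here as an illustration rather than taken from the cited source. Because computing the exact center of gravity is intractable, the sketch approximates it by Monte Carlo rejection sampling over a bounding box; the function names, parameters and the quadratic test problem are all assumptions made for the example.

import numpy as np

def center_of_gravity_method(f, subgrad, box_lo, box_hi, n_iters=30, n_samples=20000, seed=0):
    # Cutting-plane loop: G_t is represented implicitly as the initial box
    # intersected with the half-spaces g_t.(x - x_t) <= 0 collected so far.
    rng = np.random.default_rng(seed)
    dim = len(box_lo)
    cuts = []
    best_x, best_val = None, float("inf")
    for _ in range(n_iters):
        pts = rng.uniform(box_lo, box_hi, size=(n_samples, dim))
        for g, xt in cuts:                      # keep only points still inside G_t
            pts = pts[(pts - xt) @ g <= 0]
        if len(pts) == 0:                       # the crude sampler can no longer resolve G_t
            break
        x_t = pts.mean(axis=0)                  # approximate center of gravity of G_t
        val = f(x_t)
        if val < best_val:
            best_x, best_val = x_t, val
        g_t = subgrad(x_t)
        if np.linalg.norm(g_t) < 1e-12:         # x_t is (approximately) a minimizer
            break
        cuts.append((g_t, x_t))                 # G_{t+1} = {x in G_t : g_t.(x - x_t) <= 0}
    return best_x, best_val

# Toy usage: minimize a convex quadratic over the box [-1, 1]^2.
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
subgrad = lambda x: np.array([2 * (x[0] - 0.3), 2 * (x[1] + 0.2)])
x_star, val = center_of_gravity_method(f, subgrad, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))

The sampling step is only a stand-in for the exact centroid computation whose cost is discussed above; it does not change the cutting-plane logic of the method.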
[ { "math_id": 0, "text": "\\nabla f" }, { "math_id": 1, "text": "Volume(G_{t+1})\\leq \\left[1-\\left(\\frac{n}{n+1}\\right)^n\\right]\\cdot Volume(G_t)" }, { "math_id": 2, "text": "f(x_t) - \\min_G f \\leq \\left[1-\\left(\\frac{n}{n+1}\\right)^n\\right]^{t/n} [\\max_G f - \\min_G f] " }, { "math_id": 3, "text": "\\left[1-\\left(\\frac{n}{n+1}\\right)^n\\right]^{1/n} \\leq (1-1/e)^{1/n} " }, { "math_id": 4, "text": "2.13 n \\ln(1/\\epsilon) + 1 " } ]
https://en.wikipedia.org/wiki?curid=75425433
7543
Computational complexity theory
Inherent difficulty of computational problems In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computation problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity. Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. Computational problems. Problem instances. A computational problem can be viewed as an infinite collection of "instances" together with a set (possibly empty) of "solutions" for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, the "instance" is a particular input to the problem, and the "solution" is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the travelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. 
For this reason, complexity theory addresses computational problems and not particular problem instances. Representing problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems as formal languages. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is either "yes" or "no" (alternatively, 1 or 0). A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer "yes", the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. Function problems. A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output is not just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples formula_0 such that the relation formula_1 holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers. Measuring the size of an instance. To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. 
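As a small illustration of the recasting described above (added here as an example, not part of the original text), the decision version of multiplication is simply a membership test in the language of valid triples; the function name below is invented for the example.

def in_multiplication_language(a: int, b: int, c: int) -> bool:
    # The language is { (a, b, c) : a * b = c }; deciding membership is
    # equivalent in difficulty to the function problem of multiplying a and b.
    return a * b == c

# Example instances: (6, 7, 42) is a member ("yes"), while (6, 7, 41) is not ("no").
assert in_multiplication_language(6, 7, 42) and not in_multiplication_language(6, 7, 41)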
For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with formula_2 vertices compared to the time taken for a graph with formula_3 vertices? If the input size is formula_3, the time taken can be expressed as a function of formula_3. Since the time taken on different inputs of the same size can be different, the worst-case time complexity formula_4 is defined to be the maximum time taken over all inputs of size formula_3. If formula_4 is a polynomial in formula_3, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm. Machine models and complexity measures. Turing machine. A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory. Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others. A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm. Other machine models. Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. 
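As an illustration of the deterministic Turing machine model described above, here is a minimal Python simulator, offered as a sketch only; the tape encoding, the transition-table format and the example machine are assumptions made for this example rather than a standard construction.

def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    # transitions maps (state, symbol) -> (new_state, write_symbol, move in {-1, +1});
    # a missing entry means the machine halts in the current state.
    cells = dict(enumerate(tape))            # sparse tape indexed by integer positions
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break                            # halt
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return state, "".join(cells[i] for i in sorted(cells))

# Example machine (invented for illustration): overwrite every 0 with 1, moving right,
# and halt in state "accept" on reaching the first blank cell.
transitions = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("accept", "_", +1),
}
final_state, final_tape = run_tm(transitions, "0101")   # -> ("accept", "1111_")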
The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically. However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems. Complexity measures. For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The "time required" by a deterministic Turing machine formula_5 on input formula_6 is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine formula_5 is said to operate within time formula_7 if the time required by formula_5 on each input of length formula_3 is at most formula_7. A decision problem formula_8 can be solved in time formula_7 if there exists a Turing machine operating in time formula_7 that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time formula_7 on a deterministic Turing machine is then denoted by DTIME(formula_7). Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity. The complexity of an algorithm is often expressed using big O notation. Best, worst and average case complexity. The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size formula_3 may be faster to solve than others, we define the following complexities: The order from cheap to costly is: Best, average (of discrete uniform distribution), amortized, worst. For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst-case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O(formula_9). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is formula_10. The best case occurs when each pivoting divides the list in half, also needing formula_10 time. Upper and lower bounds on the complexity of problems. To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. 
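The quicksort example above can be made concrete with a short sketch (an illustration added here, not part of the original text): using the first element as the pivot, an already-sorted input triggers the quadratic worst case, while a random permutation gives the O(n log n) average case.

def quicksort(xs):
    # First-element pivot: on an already sorted list every partition is maximally
    # unbalanced (the list is never divided), which is the worst case described above.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

assert quicksort([3, 1, 2]) == [1, 2, 3]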
Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound formula_4 on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most formula_4. However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of formula_4 for a problem requires showing that no algorithm can have time complexity lower than formula_4. Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if formula_11, in big O notation one would write formula_12. Complexity classes. Defining complexity classes. A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors: Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following: The set of decision problems solvable by a deterministic Turing machine within time formula_7. (This complexity class is known as DTIME(formula_7).) But bounding the computation time above by some concrete function formula_7 often yields complexity classes that depend on the chosen machine model. For instance, the language formula_13 can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, Cobham-Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" . This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP. Important complexity classes. Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following: Logarithmic-space classes do not account for the space required to represent the problem. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem. Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using Interactive proof systems. ALL is the class of all decision problems. Hierarchy theorems. For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(formula_3) is contained in DTIME(formula_9), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. 
They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved. More precisely, the time hierarchy theorem states that formula_14. The space hierarchy theorem states that formula_15. The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE. Reduction. Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem formula_16 can be solved using an algorithm for formula_17, formula_16 is no more difficult than formula_17, and we say that formula_16 "reduces" to formula_17. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions. The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication. This motivates the concept of a problem being hard for a complexity class. A problem formula_16 is "hard" for a class of problems formula_18 if every problem in formula_18 can be reduced to formula_16. Thus no problem in formula_18 is harder than formula_16, since an algorithm for formula_16 allows us to solve any problem in formula_18. The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems. If a problem formula_16 is in formula_18 and hard for formula_18, then formula_16 is said to be "complete" for formula_18. This means that formula_16 is the hardest problem in formula_18. (Since many problems could be equally hard, one might say that formula_16 is one of the hardest problems in formula_18.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, formula_19, to another problem, formula_20, would indicate that there is no known polynomial-time solution for formula_20. This is because a polynomial-time solution to formula_20 would yield a polynomial-time solution to formula_19. 
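A minimal sketch of the reduction just described (added as an illustration, not part of the original text): a squaring routine that only calls a multiplication routine, showing that squaring is no harder than multiplication.

def multiply(a: int, b: int) -> int:
    return a * b                  # stands in for any algorithm that multiplies two integers

def square(n: int) -> int:
    return multiply(n, n)         # the reduction: feed the same input to both arguments

assert square(12) == 144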
Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP. Important open problems. P versus NP problem. The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also member of the class NP. The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem. Problems in NP not known to be in P or NP-complete. It was shown by Ladner that if formula_21 then there exist problems in formula_22 that are neither in formula_23 nor formula_22-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in formula_23 or to be formula_22-complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in formula_23, formula_22-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks has run time formula_24 for graphs with formula_3 vertices, although some recent work by Babai offers some potentially new perspectives on this. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than formula_25. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in formula_22 and in formula_26 (and even in UP and co-UP). If the problem is formula_22-complete, the polynomial time hierarchy will collapse to its first level (i.e., formula_22 will equal formula_26). 
The best known algorithm for integer factorization is the general number field sieve, which takes time formula_27 to factor an odd integer formula_3. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes. Separations between other complexity classes. Many known complexity classes are suspected to be unequal, but this has not been proved. For instance formula_28, but it is possible that formula_29. If formula_23 is not equal to formula_22, then formula_23 is not equal to formula_30 either. Since there are many known complexity classes between formula_23 and formula_30, such as formula_31, formula_32, formula_33, formula_34, formula_35, formula_36, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory. Along the same lines, formula_26 is the class containing the complement problems (i.e. problems with the "yes"/"no" answers reversed) of formula_22 problems. It is believed that formula_22 is not equal to formula_26; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then formula_23 is not equal to formula_22, since formula_37. Thus if formula_38 we would have formula_39 whence formula_40. Similarly, it is not known if formula_41 (the set of all problems that can be solved in logarithmic space) is strictly contained in formula_23 or equal to formula_23. Again, there are many complexity classes between the two, such as formula_42 and formula_43, and it is not known if they are distinct or equal classes. It is suspected that formula_23 and formula_32 are equal. However, it is currently open if formula_44. Intractability. A problem that can theoretically be solved, but requires impractical and finite resources (e.g., time) to do so, is known as an &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;intractable problem. Conversely, a problem that can be solved in practice is called a &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;tractable problem, literally "a problem that can be handled". The term "infeasible" (literally "cannot be done") is sometimes used interchangeably with "intractable", though this risks confusion with a feasible solution in mathematical optimization. Tractable problems are frequently identified with problems that have polynomial-time solutions (formula_23, formula_45); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If formula_22 is not the same as formula_23, then NP-hard problems are also intractable in this sense. However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in formula_23 does not imply that all large cases of the problem are hard or even that most of them are. 
For example, the decision problem in Presburger arithmetic has been shown not to be in formula_23, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem. To see why exponential-time algorithms are generally unusable in practice, consider a program that makes formula_46 operations before halting. For small formula_3, say 100, and assuming for the sake of example that the computer does formula_47 operations each second, the program would run for about formula_48 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes formula_49 operations is practical until formula_3 gets relatively large. Similarly, a polynomial time algorithm is not always practical. If its running time is, say, formula_50, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even formula_51 or formula_9 algorithms are often impractical on realistic sizes of problems. Continuous complexity theory. Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis is information based complexity. Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems. History. An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844. Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer. The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems. In addition, in 1965 Edmonds suggested to consider a "good" algorithm to be one with running time bounded by a polynomial of the input size. Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure. 
As he remembers: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(a, b, c)" }, { "math_id": 1, "text": "a \\times b = c" }, { "math_id": 2, "text": "2n" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "T(n)" }, { "math_id": 5, "text": "M" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "f(n)" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "n^2" }, { "math_id": 10, "text": "O(n \\log n)" }, { "math_id": 11, "text": "T(n) = 7n^2 + 15n + 40" }, { "math_id": 12, "text": "T(n) = O(n^2)" }, { "math_id": 13, "text": "\\{xx \\mid x \\text{ is any binary string}\\}" }, { "math_id": 14, "text": "\\mathsf{DTIME}\\big(o(f(n)) \\big) \\subsetneq \\mathsf{DTIME} \\big(f(n) \\cdot \\log(f(n)) \\big)" }, { "math_id": 15, "text": "\\mathsf{DSPACE}\\big(o(f(n))\\big) \\subsetneq \\mathsf{DSPACE} \\big(f(n) \\big)" }, { "math_id": 16, "text": "X" }, { "math_id": 17, "text": "Y" }, { "math_id": 18, "text": "C" }, { "math_id": 19, "text": "\\Pi_2" }, { "math_id": 20, "text": "\\Pi_1" }, { "math_id": 21, "text": "P \\neq NP" }, { "math_id": 22, "text": "NP" }, { "math_id": 23, "text": "P" }, { "math_id": 24, "text": "O(2^{\\sqrt{n \\log n}})" }, { "math_id": 25, "text": "k" }, { "math_id": 26, "text": "co\\text{-}NP" }, { "math_id": 27, "text": "O(e^{\\left(\\sqrt[3]{\\frac{64}{9}}\\right)\\sqrt[3]{(\\log n)}\\sqrt[3]{(\\log \\log n)^2}})" }, { "math_id": 28, "text": "P \\subseteq NP \\subseteq PP \\subseteq PSPACE" }, { "math_id": 29, "text": "P = PSPACE" }, { "math_id": 30, "text": "PSPACE" }, { "math_id": 31, "text": "RP" }, { "math_id": 32, "text": "BPP" }, { "math_id": 33, "text": "PP" }, { "math_id": 34, "text": "BQP" }, { "math_id": 35, "text": "MA" }, { "math_id": 36, "text": "PH" }, { "math_id": 37, "text": "P = co\\text{-}P" }, { "math_id": 38, "text": "P = NP" }, { "math_id": 39, "text": "co\\text{-}P = co\\text{-}NP" }, { "math_id": 40, "text": "NP = P = co\\text{-}P = co\\text{-}NP" }, { "math_id": 41, "text": "L" }, { "math_id": 42, "text": "NL" }, { "math_id": 43, "text": "NC" }, { "math_id": 44, "text": "BPP = NEXP" }, { "math_id": 45, "text": "PTIME" }, { "math_id": 46, "text": "2^n" }, { "math_id": 47, "text": "10^{12}" }, { "math_id": 48, "text": "4 \\times 10^{10}" }, { "math_id": 49, "text": "1.0001^n" }, { "math_id": 50, "text": "n^{15}" }, { "math_id": 51, "text": "n^3" } ]
https://en.wikipedia.org/wiki?curid=7543
75431503
Bride's Chair
Illustration of the Pythagorean theorem In geometry, a Bride's Chair is an illustration of the Pythagorean theorem. The figure appears in Proposition 47 of Book I of Euclid's Elements. It is also known by several other names, such as the Franciscan's cowl, peacock's tail, windmill, Pythagorean pants, Figure of the Bride, theorem of the married women, and chase of the little married women. According to Swiss-American historian of mathematics Florian Cajori, the ultimate etymology of the term "Bride's Chair" lies in a Greek homonym: "Some Arabic writers [...] call the Pythagorean theorem 'figure of the bride'." The Greek word has two relevant definitions: 'bride', and 'winged insect'. The figure of a right triangle with the three squares has reminded various writers of an insect, so the 'insect' sense of the Greek word came to be applied to right triangles with three squares, and to the Pythagorean theorem. Arabic speakers writing in Greek would often mistakenly assume the other sense of the word was intended, and would translate the phrase back into Arabic using the word for 'bride'. A nice illustration of the Bride's Chair showing a chair upon which, according to ancient tradition, a bride might have been carried to the marriage ceremony can be seen in Sidney J. Kolpas' "The Pythagorean Theorem: Eight Classic Proofs" (page 3). As a proof. &lt;templatestyles src="Block indent/styles.css"/&gt;"The proof presented below is an excerpt from Pythagorean theorem § Euclid's proof" The Bride's chair proof of the Pythagorean theorem, that is, the proof of the Pythagorean theorem based on the Bride's Chair diagram, is given below. The proof has been severely criticized by the German philosopher Arthur Schopenhauer as being unnecessarily complicated, with construction lines drawn here and there and a long line of deductive steps. According to Schopenhauer, the proof is a "brilliant piece of perversity". A different Bride's Chair. The name Bride's Chair is also used to refer to a certain diagram attributed to the twelfth century Indian mathematician Bhaskara II (c. 1114–1185) who used it as an illustration for the proof of the Pythagorean theorem. The description of this diagram appears in verse 129 of "Bijaganita" of Bhaskara II. There is a legend that Bhaskara's proof of the Pythagorean theorem consisted of only just one word, namely, "Behold!". However, using the notations of the diagram, the theorem follows from the following equation: formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
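The one-line algebra behind the equation attributed to Bhaskara above is simply the expansion of the squared difference; written out as a sketch in the article's notation:

```latex
c^2 = (a-b)^2 + 4\left(\tfrac{1}{2}ab\right)
    = a^2 - 2ab + b^2 + 2ab
    = a^2 + b^2 .
```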
[ { "math_id": 0, "text": "c^2=(a-b)^2+4(\\tfrac{1}{2}ab)=a^2+b^2." } ]
https://en.wikipedia.org/wiki?curid=75431503
7543270
Fractional cascading
In computer science, fractional cascading is a technique to speed up a sequence of binary searches for the same value in a sequence of related data structures. The first binary search in the sequence takes a logarithmic amount of time, as is standard for binary searches, but successive searches in the sequence are faster. The original version of fractional cascading, introduced in two papers by Chazelle and Guibas in 1986, combined the idea of cascading, which originated in earlier range searching data structures, with the idea of fractional sampling. Later authors introduced more complex forms of fractional cascading that allow the data structure to be maintained as the data changes by a sequence of discrete insertion and deletion events. Example. As a simple example of fractional cascading, consider the following problem. We are given as input a collection of formula_0 ordered lists formula_1 of numbers, such that the total length formula_2 of all lists is formula_3, and must process them so that we can perform binary searches for a query value formula_4 in each of the formula_0 lists. For instance, with formula_5 and formula_6, formula_7 = 24, 64, 65, 80, 93 formula_8 = 23, 25, 26 formula_9 = 13, 44, 62, 66 formula_10 = 11, 35, 46, 79, 81 The simplest solution to this searching problem is just to store each list separately. If we do so, the space requirement is formula_11, but the time to perform a query is formula_12, as we must perform a separate binary search in each of formula_0 lists. The worst case for querying this structure occurs when each of the formula_0 lists has equal size formula_13, so each of the formula_0 binary searches involved in a query takes time formula_14. A second solution allows faster queries at the expense of more space: we may merge all the formula_0 lists into a single big list formula_15, and associate with each item formula_16 of formula_15 a list of the results of searching for formula_16 in each of the smaller lists formula_1. If we describe an element of this merged list as formula_17 where formula_16 is the numerical value and formula_18, formula_19, formula_20, and formula_21 are the positions (the first number has position 0) of the next element at least as large as formula_16 in each of the original input lists (or the position after the end of the list if no such element exists), then we would have formula_15 = 11[0,0,0,0], 13[0,0,0,1], 23[0,0,1,1], 24[0,1,1,1], 25[1,1,1,1], 26[1,2,1,1], 35[1,3,1,1], 44[1,3,1,2], 46[1,3,2,2], 62[1,3,2,3], 64[1,3,3,3], 65[2,3,3,3], 66[3,3,3,3], 79[3,3,4,3], 80[3,3,4,4], 81[4,3,4,4], 93[4,3,4,5] This merged solution allows a query in time formula_22: simply search for formula_4 in formula_15 and then report the results stored at the item formula_16 found by this search. For instance, if formula_23, searching for formula_4 in formula_15 finds the item 62[1,3,2,3], from which we return the results formula_24, formula_25 (a flag value indicating that formula_4 is past the end of formula_8), formula_26, and formula_27. However, this solution pays a high penalty in space complexity: it uses space formula_28 as each of the formula_3 items in formula_15 must store a list of formula_0 search results. Fractional cascading allows this same searching problem to be solved with time and space bounds meeting the best of both worlds: query time formula_22, and space formula_11. The fractional cascading solution is to store a new sequence of lists formula_29. 
The final list in this sequence, formula_30, is equal to formula_31; each earlier list formula_29 is formed by merging formula_1 with every second item from formula_32. With each item formula_16 in this merged list, we store two numbers: the position resulting from searching for formula_16 in formula_1 and the position resulting from searching for formula_16 in formula_32. For the data above, this would give us the following lists: formula_33 = 24[0, 1], 25[1, 1], 35[1, 3], 64[1, 5], 65[2, 5], 79[3, 5], 80[3, 6], 93[4, 6] formula_34 = 23[0, 1], 25[1, 1], 26[2, 1], 35[3, 1], 62[3, 3], 79[3, 5] formula_35 = 13[0, 1], 35[1, 1], 44[1, 2], 62[2, 3], 66[3, 3], 79[4, 3] formula_36 = 11[0, 0], 35[1, 0], 46[2, 0], 79[3, 0], 81[4, 0] Suppose we wish to perform a query in this structure, for formula_23. We first do a standard binary search for formula_4 in formula_33, finding the value 64[1,5]. The "1" in 64[1,5], tells us that the search for formula_4 in formula_7 should return formula_24. The "5" in 64[1,5] tells us that the approximate location of formula_4 in formula_34 is position 5. More precisely, binary searching for formula_4 in formula_34 would return either the value 79[3,5] at position 5, or the value 62[3,3] one place earlier. By comparing formula_4 to 62, and observing that it is smaller, we determine that the correct search result in formula_34 is 62[3,3]. The first "3" in 62[3,3] tells us that the search for formula_4 in formula_8 should return formula_25, a flag value meaning that formula_4 is past the end of list formula_8. The second "3" in 62[3,3] tells us that the approximate location of formula_4 in formula_35 is position 3. More precisely, binary searching for formula_4 in formula_35 would return either the value 62[2,3] at position 3, or the value 44[1,2] one place earlier. A comparison of formula_4 with the smaller value 44 shows us that the correct search result in formula_35 is 62[2,3]. The "2" in 62[2,3] tells us that the search for formula_4 in formula_9 should return formula_26, and the "3" in 62[2,3] tells us that the result of searching for formula_4 in formula_36 is either 79[3,0] at position 3 or 46[2,0] at position 2; comparing formula_4 with 46 shows that the correct result is 79[3,0] and that the result of searching for formula_4 in formula_10 is formula_27. Thus, we have found formula_4 in each of our four lists, by doing a binary search in the single list formula_33 followed by a single comparison in each of the successive lists. More generally, for any data structure of this type, we perform a query by doing a binary search for formula_4 in formula_33, and determining from the resulting value the position of formula_4 in formula_7. Then, for each formula_37, we use the known position of formula_4 in formula_29 to find its position in formula_32. The value associated with the position of formula_4 in formula_29 points to a position in formula_32 that is either the correct result of the binary search for formula_4 in formula_32 or is a single step away from that correct result, so stepping from formula_38 to formula_39 requires only a single comparison. Thus, the total time for a query is formula_22 In our example, the fractionally cascaded lists have a total of 25 elements, less than twice that of the original input. In general, the size of formula_29 in this data structure is at most formula_40 as may easily be proven by induction. 
Therefore, the total size of the data structure is at most formula_41 as may be seen by regrouping the contributions to the total size coming from the same input list formula_1 together with each other. The general problem. In general, fractional cascading begins with a "catalog graph", a directed graph in which each vertex is labeled with an ordered list. A query in this data structure consists of a path in the graph and a query value "q"; the data structure must determine the position of "q" in each of the ordered lists associated with the vertices of the path. For the simple example above, the catalog graph is itself a path, with just four nodes. It is possible for later vertices in the path to be determined dynamically as part of a query, in response to the results found by the searches in earlier parts of the path. To handle queries of this type, for a graph in which each vertex has at most "d" incoming and at most "d" outgoing edges for some constant "d", the lists associated with each vertex are augmented by a fraction of the items from each outgoing neighbor of the vertex; the fraction must be chosen to be smaller than 1/"d", so that the total amount by which all lists are augmented remains linear in the input size. Each item in each augmented list stores with it the position of that item in the unaugmented list stored at the same vertex, and in each of the outgoing neighboring lists. In the simple example above, "d" = 1, and we augmented each list with a 1/2 fraction of the neighboring items. A query in this data structure consists of a standard binary search in the augmented list associated with the first vertex of the query path, together with simpler searches at each successive vertex of the path. If a 1/"r" fraction of items are used to augment the lists from each neighboring item, then each successive query result may be found within at most "r" steps of the position stored at the query result from the previous path vertex, and therefore may be found in constant time without having to perform a full binary search. Dynamic fractional cascading. In "dynamic fractional cascading", the list stored at each node of the catalog graph may change dynamically, by a sequence of updates in which list items are inserted and deleted. This causes several difficulties for the data structure. First, when an item is inserted or deleted at a node of the catalog graph, it must be placed within the augmented list associated with that node, and may cause changes to propagate to other nodes of the catalog graph. Instead of storing the augmented lists in arrays, they should be stored as binary search trees, so that these changes can be handled efficiently while still allowing binary searches of the augmented lists. Second, an insertion or deletion may cause a change to the subset of the list associated with a node that is passed on to neighboring nodes of the catalog graph. It is no longer feasible, in the dynamic setting, for this subset to be chosen as the items at every "d"th position of the list, for some "d", as this subset would change too drastically after every update. Rather, a technique closely related to B-trees allows the selection of a fraction of data that is guaranteed to be smaller than 1/"d", with the selected items guaranteed to be spaced a constant number of positions apart in the full list, and such that an insertion or deletion into the augmented list associated with a node causes changes to propagate to other nodes for a fraction of the operations that is less than 1/"d". 
In this way, the distribution of the data among the nodes satisfies the properties needed for the query algorithm to be fast, while guaranteeing that the average number of binary search tree operations per data insertion or deletion is constant. Third, and most critically, the static fractional cascading data structure maintains, for each element "x" of the augmented list at each node of the catalog graph, the index of the result that would be obtained when searching for "x" among the input items from that node and among the augmented lists stored at neighboring nodes. However, this information would be too expensive to maintain in the dynamic setting. Inserting or deleting a single value "x" could cause the indexes stored at an unbounded number of other values to change. Instead, dynamic versions of fractional cascading maintain several data structures for each node. These data structures allow dynamic fractional cascading to be performed in time O(log "n") per insertion or deletion, and a sequence of "k" binary searches following a path of length "k" in the catalog graph to be performed in time O(log "n" + "k" log log "n"). Applications. Typical applications of fractional cascading involve range search data structures in computational geometry. For example, consider the problem of "half-plane range reporting": that is, intersecting a fixed set of "n" points with a query half-plane and listing all the points in the intersection. The problem is to structure the points in such a way that a query of this type may be answered efficiently in terms of the intersection size "h". One structure that can be used for this purpose is the convex layers of the input point set, a family of nested convex polygons consisting of the convex hull of the point set and the recursively-constructed convex layers of the remaining points. Within a single layer, the points inside the query half-plane may be found by performing a binary search for the half-plane boundary line's slope among the sorted sequence of convex polygon edge slopes, leading to the polygon vertex that is inside the query half-plane and farthest from its boundary, and then sequentially searching along the polygon edges to find all other vertices inside the query half-plane. The whole half-plane range reporting problem may be solved by repeating this search procedure starting from the outermost layer and continuing inwards until reaching a layer that is disjoint from the query halfspace. Fractional cascading speeds up the successive binary searches among the sequences of polygon edge slopes in each layer, leading to a data structure for this problem with space O("n") and query time O(log "n" + "h"). The data structure may be constructed in time O("n" log "n"). As in our example, this application involves binary searches in a linear sequence of lists (the nested sequence of the convex layers), so the catalog graph is just a path. Another application of fractional cascading in geometric data structures concerns point location in a monotone subdivision, that is, a partition of the plane into polygons such that any vertical line intersects any polygon in at most two points. This problem can be solved by finding a sequence of polygonal paths that stretch from left to right across the subdivision, and binary searching for the lowest of these paths that is above the query point. 
Testing whether the query point is above or below one of the paths can itself be solved as a binary search problem, searching for the x coordinate of the query point among the x coordinates of the path vertices to determine which path edge might be above or below the query point. Thus, each point location query can be solved as an outer layer of binary search among the paths, each step of which itself performs a binary search among x coordinates of vertices. Fractional cascading can be used to speed up the time for the inner binary searches, reducing the total time per query to O(log "n") using a data structure with space O("n"). In this application the catalog graph is a tree representing the possible search sequences of the outer binary search. Beyond computational geometry, fractional cascading has been applied in the design of data structures for fast packet filtering in internet routers, and it has been used as a model for data distribution and retrieval in sensor networks. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
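To make the worked example at the start of this article concrete, here is a minimal Python sketch of the simple static, path-shaped variant described there. It is illustrative only: the lists M_i are built with ordinary binary searches rather than by the incremental merging a real implementation would use, the values are assumed distinct, and all function and variable names are invented for the example.

```python
import bisect

def build(lists):
    """Build the augmented lists M_i of the example.  Each element of M_i is
    stored as (value, pos_in_L_i, pos_in_M_next), where each position points
    at the first element that is >= value, as in the bracketed numbers of
    the example."""
    k = len(lists)
    M = [None] * k
    M[-1] = [(v, j, 0) for j, v in enumerate(lists[-1])]   # M_k = L_k
    for i in range(k - 2, -1, -1):
        promoted = [e[0] for e in M[i + 1][1::2]]          # every second item of M_{i+1}
        next_vals = [e[0] for e in M[i + 1]]
        M[i] = [(v, bisect.bisect_left(lists[i], v), bisect.bisect_left(next_vals, v))
                for v in sorted(lists[i] + promoted)]
    return M

def query(lists, M, q):
    """For each input list, return the index of its first element >= q
    (or len(list) if there is none), using one full binary search in M_1
    and a constant amount of extra work for every later list."""
    answers = []
    j = bisect.bisect_left([e[0] for e in M[0]], q)        # the only O(log n) search
    for i in range(len(lists)):
        answers.append(M[i][j][1] if j < len(M[i]) else len(lists[i]))
        if i + 1 < len(lists):
            j = M[i][j][2] if j < len(M[i]) else len(M[i + 1])
            if j > 0 and M[i + 1][j - 1][0] >= q:          # pointer may overshoot by one
                j -= 1
    return answers

L = [[24, 64, 65, 80, 93], [23, 25, 26], [13, 44, 62, 66], [11, 35, 46, 79, 81]]
print(query(L, build(L), 50))   # [1, 3, 2, 3]
```

Running it on the four lists of the example with the query value 50 prints [1, 3, 2, 3], matching the positions of 64, the past-the-end flag for the second list, 62 and 79 found in the article's walkthrough.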
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "L_i" }, { "math_id": 2, "text": "\\sum_i |L_i|" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "q" }, { "math_id": 5, "text": "k=4" }, { "math_id": 6, "text": "n=17" }, { "math_id": 7, "text": "L_1" }, { "math_id": 8, "text": "L_2" }, { "math_id": 9, "text": "L_3" }, { "math_id": 10, "text": "L_4" }, { "math_id": 11, "text": "O(n)" }, { "math_id": 12, "text": "O\\bigl(k\\log(n/k)\\bigr)" }, { "math_id": 13, "text": "n/k" }, { "math_id": 14, "text": "O\\bigl(\\log(n/k)\\bigr)" }, { "math_id": 15, "text": "L" }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "x[a,b,c,d]" }, { "math_id": 18, "text": "a" }, { "math_id": 19, "text": "b" }, { "math_id": 20, "text": "c" }, { "math_id": 21, "text": "d" }, { "math_id": 22, "text": "O(k+\\log n)" }, { "math_id": 23, "text": "q=50" }, { "math_id": 24, "text": "L_1[1]=64" }, { "math_id": 25, "text": "L_2[3]" }, { "math_id": 26, "text": "L_3[2]=62" }, { "math_id": 27, "text": "L_4[3]=79" }, { "math_id": 28, "text": "O(kn)" }, { "math_id": 29, "text": "M_i" }, { "math_id": 30, "text": "M_k" }, { "math_id": 31, "text": "L_k" }, { "math_id": 32, "text": "M_{i+1}" }, { "math_id": 33, "text": "M_1" }, { "math_id": 34, "text": "M_2" }, { "math_id": 35, "text": "M_3" }, { "math_id": 36, "text": "M_4" }, { "math_id": 37, "text": "i>1" }, { "math_id": 38, "text": "i" }, { "math_id": 39, "text": "i+1" }, { "math_id": 40, "text": "|L_i|+\\frac12|L_{i+1}|+\\frac14|L_{i+2}|+\\cdots+\\frac1{2^j}|L_{i+j}|+\\cdots," }, { "math_id": 41, "text": "\\sum|M_i|=\\sum |L_i| \\left(1+\\frac12+\\frac14+\\cdots\\right) \\leq 2n=O(n)," } ]
https://en.wikipedia.org/wiki?curid=7543270
75434287
TCR World Ranking
TCR Touring Car drivers rankings The TCR World Ranking is a method for calculating race drivers' performance in TCR races. The ranking was established by World Sporting Consulting (WSC) Group, the global rights holder for TCR, in October 2022 and takes into account all races utilizing TCR regulations since 1 January 2021. The rankings are updated every Wednesday. Ranking method. Race points. Ranking points obtained by a driver from a race are calculated as follows: formula_0 where: formula_1 is the ranking points awarded for the race; formula_2 is the importance coefficient; formula_3 is the participation coefficient; and formula_4 is the finishing points. Importance coefficient. The importance coefficient is determined by the series the race is held under. If a race is a part of two or more series at the same time (e.g. TCR World Tour with the host series), it will be considered as part of the series with the highest coefficient for the ranking. Participation coefficient. The participation coefficient is determined by the number of entered TCR cars in the race. Finishing points. The finishing position in the race awards points according to the following scale: A driver must be classified as a finisher of the race to get points. If a car is driven by two or more drivers, all drivers get full points for their position. If a race is a part of a multi-class event, the position is counted relative to the TCR class. TCR World Tour points. Race points scored by TCR World Tour full-time entries in World Tour events are increased by 50%. Driver's total. A driver's total is the sum of the points from their last 20 races. If a driver does not take part in any TCR event for 30 weeks or more, the oldest race result is dropped for every 4 weeks of inactivity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
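As a sketch of the arithmetic described above (the coefficient values and race results below are invented placeholders, since the actual coefficient tables and finishing-points scale are not reproduced in this article):

```python
def race_points(importance, participation, finishing, world_tour_full_time=False):
    """R = (A * B * P) / 100, with the 50% uplift for full-time
    TCR World Tour entries in World Tour events."""
    r = importance * participation * finishing / 100.0
    return r * 1.5 if world_tour_full_time else r

def driver_total(points_by_race_in_date_order):
    """A driver's total is the sum of the points from the last 20 races."""
    return sum(points_by_race_in_date_order[-20:])

# hypothetical example: one race with A = 1000, B = 1.2 and P = 250
print(race_points(1000, 1.2, 250))                              # 3000.0
print(race_points(1000, 1.2, 250, world_tour_full_time=True))   # 4500.0
```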
[ { "math_id": 0, "text": "R = \\frac{A \\times B \\times P}{100}" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "B" }, { "math_id": 4, "text": "P" } ]
https://en.wikipedia.org/wiki?curid=75434287
75439302
Power cone
In linear algebra, a power cone is a kind of convex cone that is particularly important in modeling convex optimization problems. It is a generalization of the quadratic cone: the quadratic cone is defined using a quadratic equation (with the power 2), whereas a power cone can be defined using any power, not necessarily 2. Definition. The "n"-dimensional power cone is parameterized by a real number formula_0. It is defined as: formula_1 An alternative definition is formula_2 Applications. The main application of the power cone is in constraints of convex optimization programs. There are many problems that can be described as minimizing a convex function over a power cone. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
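A small sketch of the membership test implied by the definition above (the function name is our own, and floating-point tolerance issues are ignored):

```python
import math

def in_power_cone(x, r):
    """Is x = (x_1, x_2, x_3, ..., x_n) in the n-dimensional power cone
    with exponent pair (r, 1 - r), where 0 < r < 1?"""
    x1, x2, rest = x[0], x[1], x[2:]
    if x1 < 0 or x2 < 0:
        return False
    return x1 ** r * x2 ** (1 - r) >= math.sqrt(sum(t * t for t in rest))

print(in_power_cone((4.0, 4.0, 3.0), 0.5))   # True:  sqrt(4 * 4) = 4 >= 3
print(in_power_cone((1.0, 1.0, 2.0), 0.5))   # False: 1 < 2
```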
[ { "math_id": 0, "text": "0<r<1" }, { "math_id": 1, "text": "P_{n, r, 1-r} := \\left\\{ \n\\mathbf{x}\\in \\mathbb{R}^n:~~x_1\\geq 0,~~ x_2\\geq 0,~~ x_1^r\\cdot x_2^{1-r} \\geq \\sqrt{x_3^2 + \\cdots + x_n^2}\n\\right\\}" }, { "math_id": 2, "text": "P_{r, 1-r} := \\left\\{ \n\\mathbf{x_1, x_2, x_3}:~~x_1\\geq 0,~~ x_2\\geq 0,~~ x_1^r\\cdot x_2^{1-r} \\geq |x_3|\n\\right\\}" } ]
https://en.wikipedia.org/wiki?curid=75439302
75442022
Garfield's proof of the Pythagorean theorem
1876 mathematical proof by the US president Garfield's proof of the Pythagorean theorem is an original proof of the Pythagorean theorem invented by James A. Garfield (November 19, 1831 – September 19, 1881), the 20th president of the United States. The proof appeared in print in the "New-England Journal of Education" (Vol. 3, No.14, April 1, 1876). At the time of the publication of the proof Garfield was not yet the President; he was a Congressman from Ohio. He assumed the office of President on March 4, 1881, and served in that position only for a brief period up to September 19, 1881. Garfield was the only President of the United States to have contributed anything original to mathematics. The proof is nontrivial and, according to the historian of mathematics William Dunham, "Garfield's is really a very clever proof." The proof appears as the 231st proof in "The Pythagorean Proposition", a compendium of 370 different proofs of the Pythagorean theorem. The proof. In the figure, formula_0 is a right-angled triangle with right angle at formula_1. The side-lengths of the triangle are formula_2. The Pythagorean theorem asserts that formula_3. To prove the theorem, Garfield drew a line through formula_4 perpendicular to formula_5 and on this line chose a point formula_6 such that formula_7. Then, from formula_8 he dropped a perpendicular formula_9 upon the extended line formula_10. From the figure, one can easily see that the triangles formula_0 and formula_11 are congruent. Since formula_12 and formula_9 are both perpendicular to formula_13, they are parallel and so the quadrilateral formula_14 is a trapezoid. The theorem is proved by computing the area of this trapezoid in two different ways. formula_15. formula_16 From these one gets formula_17 which on simplification yields formula_18 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
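The simplification referred to in the final step is just the following algebra (a restatement for clarity, not part of Garfield's published text):

```latex
\tfrac{1}{2}(a+b)^2 = ab + \tfrac{1}{2}c^2
\;\Longrightarrow\; a^2 + 2ab + b^2 = 2ab + c^2
\;\Longrightarrow\; a^2 + b^2 = c^2 .
```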
[ { "math_id": 0, "text": "ABC" }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "a,b,c" }, { "math_id": 3, "text": "c^2=a^2+b^2" }, { "math_id": 4, "text": "B" }, { "math_id": 5, "text": "AB" }, { "math_id": 6, "text": "D" }, { "math_id": 7, "text": "BD=BA" }, { "math_id": 8, "text": " D" }, { "math_id": 9, "text": "DE" }, { "math_id": 10, "text": "CB" }, { "math_id": 11, "text": " BDE" }, { "math_id": 12, "text": "AC" }, { "math_id": 13, "text": "CE" }, { "math_id": 14, "text": "ACED" }, { "math_id": 15, "text": "\\begin{align}\\text{area of trapezoid } ACED & = \\text{height}\\times \\text{average of parallel sides}\\\\ & = CE\\times\\tfrac{1}{2}(AC+DE)=(a+b)\\times \\tfrac{1}{2}(a+b)\\end{align}" }, { "math_id": 16, "text": "\\begin{align} \\text{area of trapezoid } ACED & = \\text{area of }\\Delta ACB + \\text{area of } \\Delta ABD + \\text{area of } \\Delta BDE \\\\& = \\tfrac{1}{2}(a\\times b) + \\tfrac{1}{2}(c\\times c) + \\tfrac{1}{2}(a\\times b)\\end{align}" }, { "math_id": 17, "text": "(a+b)\\times \\tfrac{1}{2}(a+b) = \\tfrac{1}{2}(a\\times b) + \\tfrac{1}{2}(c\\times c) + \\tfrac{1}{2}(a\\times b)" }, { "math_id": 18, "text": "a^2+b^2=c^2" } ]
https://en.wikipedia.org/wiki?curid=75442022
75446479
Collar neighbourhood
In topology, a branch of mathematics, a collar neighbourhood of a manifold with boundary formula_0 is a neighbourhood of its boundary that has the same structure as formula_1. Formally, if formula_0 is a differentiable manifold with boundary, formula_2 is a "collar neighbourhood" of formula_0 whenever there is a diffeomorphism formula_3 such that for every formula_4, formula_5. Every differentiable manifold with boundary has a collar neighbourhood. Formally, if formula_0 is a topological manifold with boundary, formula_2 is a "collar neighbourhood" of formula_0 whenever there is a homeomorphism formula_3 such that for every formula_4, formula_5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "\\partial M \\times [0, 1)" }, { "math_id": 2, "text": "U \\subset M" }, { "math_id": 3, "text": "f : \\partial M \\times [0, 1) \\to U" }, { "math_id": 4, "text": "x \\in \\partial M" }, { "math_id": 5, "text": "f (x, 0) = x" } ]
https://en.wikipedia.org/wiki?curid=75446479
75446963
Wave intensity analysis
Method in the dynamics of blood flow Wave intensity analysis provides a method to calculate the properties of arterial waves that give rise to arterial blood pressure, based on measurements of pressure, P, and velocity, U, waveforms (Figure 1). Wave intensity analysis is applicable to the evaluation of circulatory physiology and to quantifying the pathophysiology of disorders such as coronary artery disease. The method is based on discrete, successive wave fronts (wavelets) and is carried out in the time domain. These wavelets travel forwards and backwards in the arteries with amplitudes formula_0 and formula_1. The wave intensity, formula_2, of a particular wavelet is defined as formula_3 It is related to sound intensity in acoustics and describes the power per unit area carried by the wavelet. From the theory discussed below, there is a relationship between the pressure amplitude and the velocity amplitude of a wavelet, formula_4 where ρ is the density of blood and c is the wave speed of the wavelet. From these equations, generally known as the water hammer equations, it follows that the wave intensity for forward wavelets formula_5 and for backward wavelets formula_6. The ability to determine the direction of travel of a wavelet from the sign of its wave intensity is the basis of the practical utility of wave intensity analysis. Net wave intensity. The pressure amplitude of a wavelet can be positive (compression) or negative (decompression) and the velocity amplitude can be positive (acceleration) or negative (deceleration). The measured changes formula_7 and formula_8 are the sums of the amplitudes of the forward and backward wavelets arriving at the measurement site at the time of the measurement, and so the wave intensity formula_9 is sometimes called net wave intensity. The formula_9 in Figure 2 shows the normal pattern in the aorta and illustrates four important features. Departures from this pattern of wave intensity are usually indicative of pathology. Separation of forward and backward waves. The additivity of the forward and backward wavelets coinciding at the site of measurement at a particular time can be combined algebraically with the water-hammer equations formula_4 to solve for the magnitudes of the two wavelets. This method assumes that the wave speed is constant. In general, the wave speed is a function of the pressure. A more complex method of separation involving integrals along the characteristics is available. The forward and backward waveforms follow from summing the magnitudes of the sequential wavelets: formula_10 The pressure shown in Figure 1 is separated into its forward and backward components in Figure 3. This separation is carried out in the time domain and can be applied to irregular, non-periodic data. For periodic heart beats this separation coincides closely with the separation obtained using Fourier analysis methods. Theoretical basis of wave intensity analysis. Wave intensity analysis is based on the 1-D equations for the conservation of mass and momentum of the blood in the elastic arteries (first published by Euler in 1775) and a relationship between the cross-sectional area of the artery and the pressure within it (generally called a 'tube law'). The resultant equations are hyperbolic in form and can be solved using the method of characteristics. The method is based on the identification of 'characteristic' directions along which the partial differential equations reduce to ordinary differential equations. 
For the arteries these characteristic directions are formula_11, where the wave speed formula_12 is given by formula_13 with formula_14 the distensibility of the artery wall. These characteristic directions describe the path of the wavelets in formula_15. The derivation of an equation for the local wave speed is a significant advantage of the method of characteristics. Practical estimation of the local wave speed. A method for estimating local wave speed from clinically measurable data is important in wave intensity analysis. The most common approach is referred to as the 'P-U loop' method. During the initial compression wave at the start of systole, when the forward wave is dominant, the local wave speed is estimated from the slope of the P-U loop: formula_16 In cases where there is no period during the cardiac cycle when it is obvious that forward waves dominate, another method, generally called the 'sum of squares' method, is applicable: formula_17 Here the sums are taken over a complete cardiac period. This estimation has been used extensively in the wave intensity analysis of measurements in the coronary arteries. It should be remembered that both of these estimates are approximations and should be used with caution. Variants of wave intensity analysis. Wave intensity analysis was developed in an era when intra-arterial pressure and velocity waveforms were most commonly measured in the clinic. Other methods of clinical measurement have emerged (e.g. ultrasound and magnetic resonance imaging) and wave intensity analysis has been recast in terms of the parameters that are measured. No systematic notation has been developed to distinguish the different variants. This is a possible source of confusion. Applications. Although wave intensity has some clinical applications, its main influence has been on the basic understanding of arterial hemodynamics. The first publication pointed out that the deceleration of blood in the aorta during late systole is mainly due to forward deceleration waves rather than backward reflections from the periphery. Backward wavelets are present, as indicated by the negative wave intensity during mid-systole, but most of the deceleration is caused by forward decompression waves generated when the contraction of the ventricle cannot keep up with the flow of blood established by the initial contraction wave. Various studies have looked at the magnitudes of the forward and backward peaks of wave intensity in various pathologies. In the 1990s, Aloka introduced clinical ultrasound scanners that measured the instantaneous wave intensity directly from simultaneous measurements of diameter (by B-mode) and blood velocity (by Doppler). Recent studies suggest that wave intensity calculated from catheter measurements in the pulmonary artery can be used to differentiate between pulmonary arterial hypertension and chronic thromboembolic pulmonary hypertension. Wave intensity analysis is frequently used in the study of coronary artery hemodynamics, where impedance analysis is seldom, if ever, used. It played an important role in the development of the Instantaneous Wave-free Ratio as a measure of the functional effect of stenoses in the coronary arteries. This method informs an interventional cardiologist if stenting is required by measuring the ratio of downstream and upstream pressures during a period in mid-diastole when the net wave intensity is minimal. 
Large-scale clinical trials have shown that this index is "at least as good as" the fractional flow reserve (FFR), which requires the injection of a potent vasodilator. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
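The calculations described above translate directly into a few lines of code. The following is a rough sketch only: the blood density, the uniform-sampling assumption and the separation formulas (the algebraic consequence of additivity combined with the water-hammer relations) reflect common practice rather than anything specific to the sources cited in this article.

```python
import numpy as np

def wave_intensity_analysis(P, U, rho=1050.0):
    """Minimal time-domain wave intensity analysis.

    P   : pressure waveform (Pa), uniformly sampled
    U   : velocity waveform (m/s), sampled at the same instants
    rho : blood density (kg/m^3); 1050 is a typical assumed value

    Returns (dI, c, P_plus, P_minus): the net wave intensity of each wavelet,
    the 'sum of squares' wave-speed estimate, and the separated forward and
    backward pressure waveforms.
    """
    dP, dU = np.diff(P), np.diff(U)
    dI = dP * dU                                           # net wave intensity
    c = np.sqrt(np.sum(dP ** 2) / np.sum(dU ** 2)) / rho   # sum-of-squares estimate
    dP_plus = 0.5 * (dP + rho * c * dU)                    # forward wavelets
    dP_minus = 0.5 * (dP - rho * c * dU)                   # backward wavelets
    # rebuild waveforms by summing the wavelets; the additive constants are a convention
    P_plus = P[0] + np.concatenate(([0.0], np.cumsum(dP_plus)))
    P_minus = np.concatenate(([0.0], np.cumsum(dP_minus)))
    return dI, c, P_plus, P_minus
```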
[ { "math_id": 0, "text": "\\Delta P" }, { "math_id": 1, "text": "\\Delta U" }, { "math_id": 2, "text": "\\Delta I" }, { "math_id": 3, "text": "\\Delta I = \\Delta P\\Delta U" }, { "math_id": 4, "text": "\\Delta P_\\pm = \\frac{1}{2} (\\Delta P_\\pm \\pm \\rho c \\Delta U_\\pm)" }, { "math_id": 5, "text": "\\Delta I_+ > 0" }, { "math_id": 6, "text": "\\Delta I_- < 0" }, { "math_id": 7, "text": "\\Delta P" }, { "math_id": 8, "text": "\\Delta U" }, { "math_id": 9, "text": "\\Delta I" }, { "math_id": 10, "text": "P_\\pm = \\Sigma \\Delta P_\\pm" }, { "math_id": 11, "text": "x \\pm ct" }, { "math_id": 12, "text": "c" }, { "math_id": 13, "text": "c = \\frac{1}{\\sqrt{\\rho \\mathcal{D}} }" }, { "math_id": 14, "text": "\\mathcal{D} = \\frac{1}{A} \\frac{dA}{dP}" }, { "math_id": 15, "text": "(x,t)" }, { "math_id": 16, "text": "c \\approx \\frac{1}{\\rho} \\left| \\frac{dP}{dU} \\right|_{initial \\; systole}" }, { "math_id": 17, "text": "c \\approx \\frac{1}{\\rho} \\sqrt{\\frac{\\Sigma \\Delta P^2}{\\Sigma \\Delta U^2} }" }, { "math_id": 18, "text": "\\Delta I = \\frac{\\Delta P}{\\Delta t} \\frac{\\Delta U}{\\Delta t}" }, { "math_id": 19, "text": "\\Delta t" }, { "math_id": 20, "text": "P = P(A)" }, { "math_id": 21, "text": "P" }, { "math_id": 22, "text": "\\Delta I = \\Delta ln(A) \\Delta U " }, { "math_id": 23, "text": "= 2 \\Delta ln(D) \\Delta U" }, { "math_id": 24, "text": "Q = UA" }, { "math_id": 25, "text": "\\Delta I = \\Delta P \\Delta Q" } ]
https://en.wikipedia.org/wiki?curid=75446963
754487
Permeability (electromagnetism)
Ability of magnetization In electromagnetism, permeability is the measure of magnetization produced in a material in response to an applied magnetic field. Permeability is typically represented by the (italicized) Greek letter "μ". It is the ratio of the magnetic induction formula_0 to the magnetizing field formula_1 as a function of the field formula_1 in a material. The term was coined by William Thomson, 1st Baron Kelvin in 1872, and used alongside permittivity by Oliver Heaviside in 1885. The reciprocal of permeability is magnetic reluctivity. In SI units, permeability is measured in henries per meter (H/m), or equivalently in newtons per ampere squared (N/A2). The permeability constant "μ"0, also known as the magnetic constant or the permeability of free space, is the proportionality between magnetic induction and magnetizing force when forming a magnetic field in a classical vacuum. A closely related property of materials is magnetic susceptibility, which is a dimensionless proportionality factor that indicates the degree of magnetization of a material in response to an applied magnetic field. Explanation. In the macroscopic formulation of electromagnetism, there appear two different kinds of magnetic field: the magnetizing field formula_1 and the magnetic flux density formula_0. The concept of permeability arises since in many materials (and in vacuum), there is a simple relationship between H and B at any location or time, in that the two fields are precisely proportional to each other: formula_2, where the proportionality factor "μ" is the permeability, which depends on the material. The permeability of vacuum (also known as permeability of free space) is a physical constant, denoted "μ"0. The SI units of "μ" are volt-seconds/ampere-meter, equivalently henry/meter. Typically "μ" would be a scalar, but for an anisotropic material, "μ" could be a second rank tensor. However, inside strong magnetic materials (such as iron, or permanent magnets), there is typically no simple relationship between H and B. The concept of permeability is then nonsensical or at least only applicable to special cases such as unsaturated magnetic cores. Not only do these materials have nonlinear magnetic behaviour, but often there is significant magnetic hysteresis, so there is not even a single-valued functional relationship between B and H. However, starting from a given value of B and H and slightly changing the fields, it is still possible to define an "incremental permeability" as formula_3, assuming B and H are parallel. In the microscopic formulation of electromagnetism, where there is no concept of an H field, the vacuum permeability "μ"0 appears directly (in the SI Maxwell's equations) as a factor that relates total electric currents and time-varying electric fields to the B field they generate. In order to represent the magnetic response of a linear material with permeability "μ", this instead appears as a magnetization M that arises in response to the B field: formula_4. The magnetization in turn is a contribution to the total electric current—the magnetization current. Relative permeability and magnetic susceptibility. Relative permeability, denoted by the symbol formula_5, is the ratio of the permeability of a specific medium to the permeability of free space "μ"0: formula_6 where formula_7 4π × 10−7 H/m is the magnetic permeability of free space. 
In terms of relative permeability, the magnetic susceptibility is formula_8 The number "χ"m is a dimensionless quantity, sometimes called "volumetric" or "bulk" susceptibility, to distinguish it from "χ"p ("magnetic mass" or "specific" susceptibility) and "χ"M ("molar" or "molar mass" susceptibility). Diamagnetism. "Diamagnetism" is the property of an object which causes it to create a magnetic field in opposition to an externally applied magnetic field, thus causing a repulsive effect. Specifically, an external magnetic field alters the orbital velocity of electrons around their atom's nuclei, thus changing the magnetic dipole moment in the direction opposing the external field. Diamagnets are materials with a magnetic permeability less than "μ"0 (a relative permeability less than 1). Consequently, diamagnetism is a form of magnetism that a substance exhibits only in the presence of an externally applied magnetic field. It is generally a quite weak effect in most materials, although superconductors exhibit a strong effect. Paramagnetism. "Paramagnetism" is a form of magnetism which occurs only in the presence of an externally applied magnetic field. Paramagnetic materials are attracted to magnetic fields, hence have a relative magnetic permeability greater than one (or, equivalently, a positive magnetic susceptibility). The magnetic moment induced by the applied field is "linear" in the field strength, and it is rather "weak". It typically requires a sensitive analytical balance to detect the effect. Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field, because thermal motion causes the spins to become "randomly oriented" without it. Thus the total magnetization will drop to zero when the applied field is removed. Even in the presence of the field, there is only a small "induced" magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnets is non-linear and much stronger so that it is easily observed, for instance, in magnets on one's refrigerator. Gyromagnetism. For gyromagnetic media (see Faraday rotation) the magnetic permeability response to an alternating electromagnetic field in the microwave frequency domain is treated as a non-diagonal tensor expressed by: formula_9 Values for some common materials. The following table should be used with caution, as the permeability of ferromagnetic materials varies greatly with field strength and with the specific composition and fabrication. For example, 4% electrical steel has an initial relative permeability (at or near 0 T) of 2,000 and a maximum of 38,000 at 1 T, with different ranges of values for different silicon contents and manufacturing processes; indeed, the relative permeability of any material at a sufficiently high field strength trends toward 1 (at magnetic saturation). A good magnetic core material must have high permeability. For "passive" magnetic levitation a relative permeability below 1 is needed (corresponding to a negative susceptibility). Permeability varies with the applied magnetic field. Values shown above are approximate and valid only at the magnetic fields shown. They are given for a zero frequency; in practice, the permeability is generally a function of the frequency. When the frequency is considered, the permeability can be complex, corresponding to the in-phase and out-of-phase response. 
Complex permeability. A useful tool for dealing with high frequency magnetic effects is the complex permeability. While at low frequencies in a linear material the magnetic field and the auxiliary magnetic field are simply proportional to each other through some scalar permeability, at high frequencies these quantities will react to each other with some lag time. These fields can be written as phasors, such that formula_10 where formula_11 is the phase delay of formula_0 from formula_1. Understanding permeability as the ratio of the magnetic flux density to the magnetic field, the ratio of the phasors can be written and simplified as formula_12 so that the permeability becomes a complex number. By Euler's formula, the complex permeability can be translated from polar to rectangular form, formula_13 The ratio of the imaginary to the real part of the complex permeability is called the loss tangent, formula_14 which provides a measure of how much power is lost in material versus how much is stored. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
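As a small illustration of the complex-permeability bookkeeping above (the numerical values are invented, and the function is only a sketch):

```python
import cmath
import math

MU0 = 4 * math.pi * 1e-7        # permeability of free space, H/m

def complex_permeability(B0, H0, delta):
    """mu = (B0/H0) * exp(-j*delta) = mu' - j*mu''; returns (mu', mu'', loss tangent)."""
    mu = (B0 / H0) * cmath.exp(-1j * delta)
    mu_p, mu_pp = mu.real, -mu.imag
    return mu_p, mu_pp, mu_pp / mu_p

# a hypothetical material: |mu_r| = 2000, with B lagging H by 5 degrees
mu_p, mu_pp, tan_d = complex_permeability(B0=2000 * MU0, H0=1.0, delta=math.radians(5))
print(mu_p / MU0, mu_pp / MU0, tan_d)   # ~1992.4, ~174.3, ~0.0875
```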
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "\\mathbf{B}=\\mu \\mathbf{H}" }, { "math_id": 3, "text": "\\Delta\\mathbf{B}=\\mu \\Delta\\mathbf{H}" }, { "math_id": 4, "text": "\\mathbf{M} = \\left(\\mu_0^{-1} - \\mu^{-1}\\right) \\mathbf{B}" }, { "math_id": 5, "text": "\\mu_\\mathrm{r}" }, { "math_id": 6, "text": "\\mu_\\mathrm{r} = \\frac \\mu {\\mu_0}," }, { "math_id": 7, "text": "\\mu_0 \\approx " }, { "math_id": 8, "text": "\\chi_m = \\mu_r - 1." }, { "math_id": 9, "text": "\\begin{align}\n\\mathbf{B}(\\omega) & = \\begin{vmatrix}\n\\mu_1 & -i \\mu_2 & 0\\\\\ni \\mu_2 & \\mu_1 & 0\\\\\n0 & 0 & \\mu_z\n\\end{vmatrix} \\mathbf{H}(\\omega)\n\\end{align}" }, { "math_id": 10, "text": "H = H_0 e^{j \\omega t} \\qquad B = B_0 e^{j\\left(\\omega t - \\delta \\right)}" }, { "math_id": 11, "text": "\\delta" }, { "math_id": 12, "text": "\\mu = \\frac{B}{H} = \\frac{ B_0 e^{j\\left(\\omega t - \\delta \\right) }}{H_0 e^{j \\omega t}} = \\frac{B_0}{H_0}e^{-j\\delta}," }, { "math_id": 13, "text": "\\mu = \\frac{B_0}{H_0}\\cos(\\delta) - j \\frac{B_0}{H_0}\\sin(\\delta) = \\mu' - j \\mu''." }, { "math_id": 14, "text": "\\tan(\\delta) = \\frac{\\mu''}{\\mu'}," } ]
https://en.wikipedia.org/wiki?curid=754487
754488
Permeability (materials science)
Measure of the ability of a porous material to allow fluids to pass through it Permeability in fluid mechanics, materials science and Earth sciences (commonly symbolized as "k") is a measure of the ability of a porous material (often, a rock or an unconsolidated material) to allow fluids to pass through it. Permeability. Permeability is a property of porous materials that is an indication of the ability for fluids (gas or liquid) to flow through them. Fluids can more easily flow through a material with high permeability than one with low permeability. The permeability of a medium is related to the porosity, but also to the shapes of the pores in the medium and their level of connectedness. Fluid flows can also be influenced in different lithological settings by brittle deformation of rocks in fault zones; the mechanisms by which this occurs are the subject of fault zone hydrogeology. Permeability is also affected by the pressure inside a material. Units. The SI unit for permeability is the square metre (m2). A practical unit for permeability is the "darcy" (d), or more commonly the "millidarcy" (md) (1 d formula_0 10−12 m2). The name honors the French engineer Henry Darcy, who first described the flow of water through sand filters for potable water supply. Permeability values for most materials typically range from a fraction to several thousand millidarcies. The unit of square centimetre (cm2) is also sometimes used (1 cm2 = 10−4 m2 formula_0 108 d). Applications. The concept of permeability is of importance in determining the flow characteristics of hydrocarbons in oil and gas reservoirs, and of groundwater in aquifers. For a rock to be considered as an exploitable hydrocarbon reservoir without stimulation, its permeability must be greater than approximately 100 md (depending on the nature of the hydrocarbon – gas reservoirs with lower permeabilities are still exploitable because of the lower viscosity of gas with respect to oil). Rocks with permeabilities significantly lower than 100 md can form efficient "seals" (see petroleum geology). Unconsolidated sands may have permeabilities of over 5000 md. The concept also has many practical applications outside of geology, for example in chemical engineering (e.g., filtration), as well as in civil engineering when determining whether the ground conditions of a site are suitable for construction. Description. Permeability is part of the proportionality constant in Darcy's law, which relates discharge (flow rate) and fluid physical properties (e.g. viscosity) to a pressure gradient applied to the porous medium: formula_1 (for linear flow) Therefore: formula_2 where: formula_3 is the fluid velocity through the porous medium (i.e., the average flow velocity calculated as if the fluid were the only phase present in the porous medium) (m/s); formula_4 is the permeability of a medium (m2); formula_5 is the dynamic viscosity of the fluid (Pa·s); formula_6 is the applied pressure difference (Pa); and formula_7 is the thickness of the bed of the porous medium (m). In naturally occurring materials, the permeability values range over many orders of magnitude (see table below for an example of this range). Relation to hydraulic conductivity. The global proportionality constant for the flow of water through a porous medium is called the hydraulic conductivity (K, unit: m/s). 
Permeability, or intrinsic permeability (k, unit: m2), is a part of this, and is a specific property characteristic of the solid skeleton and the microstructure of the porous medium itself, independently of the nature and properties of the fluid flowing through the pores of the medium. This makes it possible to take into account the effect of temperature on the viscosity of the fluid flowing through the porous medium and to address fluids other than pure water, "e.g.", concentrated brines, petroleum, or organic solvents. Given the value of hydraulic conductivity for a studied system, the permeability can be calculated as follows: formula_8 where formula_9 is the hydraulic conductivity (m/s); formula_5 is the dynamic viscosity of the fluid (Pa·s); formula_10 is the density of the fluid (kg/m3); and formula_11 is the acceleration due to gravity (m/s2). Anisotropic permeability. Tissue such as brain, liver, muscle, etc. can be treated as a heterogeneous porous medium. Describing the flow of biofluids (blood, cerebrospinal fluid, etc.) within such a medium requires a full 3-dimensional anisotropic treatment of the tissue. In this case the scalar hydraulic permeability is replaced with the hydraulic permeability tensor so that Darcy's Law reads formula_12 where formula_13 is the flux vector (units formula_14), formula_5 is the dynamic viscosity (units formula_15), formula_16 is the hydraulic permeability tensor (units formula_17), formula_18 is the gradient operator (units formula_19), and formula_20 is the pressure (units formula_21). Connecting this expression to the isotropic case, formula_22, where k is the scalar hydraulic permeability, and 1 is the identity tensor. Determination. Permeability is typically determined in the lab by application of Darcy's law under steady state conditions or, more generally, by application of various solutions to the diffusion equation for unsteady flow conditions. Permeability needs to be measured, either directly (using Darcy's law), or through estimation using empirically derived formulas. However, for some simple models of porous media, permeability can be calculated (e.g., random close packing of identical spheres). Permeability model based on conduit flow. Based on the Hagen–Poiseuille equation for viscous flow in a pipe, permeability can be expressed as: formula_23 where: formula_24 is the intrinsic permeability [length2]; formula_25 is a dimensionless constant that is related to the configuration of the flow-paths; and formula_26 is the average, or effective, pore diameter [length]. Absolute permeability (aka intrinsic or specific permeability). "Absolute permeability" denotes the permeability in a porous medium that is 100% saturated with a single-phase fluid. This may also be called the "intrinsic permeability" or "specific permeability." These terms refer to the quality that the permeability value in question is an intensive property of the medium, not a spatial average of a heterogeneous block of material; and that it is a function of the material structure only (and not of the fluid). They explicitly distinguish the value from that of relative permeability. Permeability to gases. Sometimes the permeability to gases can be somewhat different from that for liquids in the same media. One difference is attributable to "slippage" of gas at the interface with the solid when the gas mean free path is comparable to the pore size (about 0.01 to 0.1 μm at standard temperature and pressure). See also Knudsen diffusion and constrictivity. For example, measurement of permeability through sandstones and shales yielded values from 9.0×10−19 m2 to 2.4×10−12 m2 for water and between 1.7×10−17 m2 and 2.6×10−12 m2 for nitrogen gas. Gas permeability of reservoir rock and source rock is important in petroleum engineering, when considering the optimal extraction of gas from unconventional sources such as shale gas, tight gas, or coalbed methane. Permeability tensor. To model permeability in anisotropic media, a permeability tensor is needed. 
Pressure can be applied in three directions, and for each direction, permeability can be measured (via Darcy's law in 3D) in three directions, thus leading to a 3 by 3 tensor. The tensor is realised using a 3 by 3 matrix that is both symmetric and positive definite (an SPD matrix). The permeability tensor is always diagonalizable (being both symmetric and positive definite). The eigenvectors will yield the principal directions of flow where flow is parallel to the pressure gradient, and the eigenvalues represent the principal permeabilities. Ranges of common intrinsic permeabilities. These values do not depend on the fluid properties; see the table derived from the same source for values of hydraulic conductivity, which are specific to the material through which the fluid is flowing. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
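A small sketch of the unit relationships given above (the property values are only illustrative, and the function name is our own):

```python
DARCY = 9.869e-13          # 1 darcy in m^2 (approximately 1e-12 m^2, as noted above)

def intrinsic_permeability(K, viscosity=1.0e-3, density=1000.0, g=9.81):
    """k = K * eta / (rho * g); the defaults are rough values for water near 20 C."""
    return K * viscosity / (density * g)

K = 1e-5                   # hydraulic conductivity of a permeable sand, m/s (illustrative)
k = intrinsic_permeability(K)
print(k, "m^2  =", k / DARCY, "darcys  =", 1000 * k / DARCY, "millidarcys")
```

With these illustrative numbers the intrinsic permeability comes out at roughly 1e-12 m^2, i.e. on the order of one darcy, which sits in the range quoted above for exploitable reservoir rocks and permeable sands.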
[ { "math_id": 0, "text": "\\approx" }, { "math_id": 1, "text": "v = \\frac {k}{\\eta} \\frac{\\Delta P}{\\Delta x}" }, { "math_id": 2, "text": "k = v \\frac{\\eta \\Delta x}{\\Delta P}" }, { "math_id": 3, "text": "v" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "\\eta" }, { "math_id": 6, "text": "\\Delta P" }, { "math_id": 7, "text": "\\Delta x" }, { "math_id": 8, "text": " k = K \\frac {\\eta} {\\rho g}" }, { "math_id": 9, "text": "K" }, { "math_id": 10, "text": "\\rho" }, { "math_id": 11, "text": "g" }, { "math_id": 12, "text": "\\boldsymbol q = -\\frac{1}{\\eta}\\boldsymbol \\kappa \\cdot\\nabla P " }, { "math_id": 13, "text": "\\boldsymbol q" }, { "math_id": 14, "text": "[\\text{Length}][\\text{Time}]^{-1}" }, { "math_id": 15, "text": "[\\text{Mass}][\\text{L}]^{-1}[T]^{-1}" }, { "math_id": 16, "text": "\\boldsymbol \\kappa " }, { "math_id": 17, "text": "[\\text{L}]^2" }, { "math_id": 18, "text": "\\nabla " }, { "math_id": 19, "text": "[\\text{L}]^{-1}" }, { "math_id": 20, "text": "P" }, { "math_id": 21, "text": "[\\text{M}][\\text{L}]^{-1}[\\text{T}]^{-2}" }, { "math_id": 22, "text": "\\boldsymbol \\kappa = k\\mathbb 1" }, { "math_id": 23, "text": "k_{I}=C \\cdot d^2" }, { "math_id": 24, "text": "k_{I}" }, { "math_id": 25, "text": "C" }, { "math_id": 26, "text": "d" } ]
https://en.wikipedia.org/wiki?curid=754488
75452565
Brahmagupta polynomials
Brahmagupta polynomials are a class of polynomials associated with the Brahmagupta matrix which in turn is associated with Brahmagupta's identity. The concept and terminology were introduced by E. R. Suryanarayan of the University of Rhode Island, Kingston, in a paper published in 1996. These polynomials have several interesting properties and have found applications in tiling problems and in the problem of finding Heronian triangles in which the lengths of the sides are consecutive integers. Definition. Brahmagupta's identity. In algebra, Brahmagupta's identity says that, for a given integer N, the product of two numbers of the form formula_0 is again a number of the same form. More precisely, we have formula_1 This identity can be used to generate infinitely many solutions to Pell's equation. It can also be used to generate successively better rational approximations to square roots of arbitrary integers. Brahmagupta matrix. If, for an arbitrary real number formula_2, we define the matrix formula_3 then Brahmagupta's identity can be expressed in the following form: formula_4 The matrix formula_5 is called the Brahmagupta matrix. Brahmagupta polynomials. Let formula_6 be as above. Then, it can be seen by induction that the matrix formula_7 can be written in the form formula_8 Here, formula_9 and formula_10 are polynomials in formula_11. These polynomials are called the Brahmagupta polynomials. The first few of the polynomials are listed below: formula_12 Properties. A few elementary properties of the Brahmagupta polynomials are summarized here. More advanced properties are discussed in the paper by Suryanarayan. Recurrence relations. The polynomials formula_9 and formula_10 satisfy the following recurrence relations: formula_13 formula_14 formula_15 formula_16 formula_17 formula_18 Exact expressions. The eigenvalues of formula_5 are formula_19 and the corresponding eigenvectors are formula_20. Hence formula_21. It follows that formula_22. This yields the following exact expressions for formula_9 and formula_10: formula_23 formula_24 Expanding the powers in the above exact expressions using the binomial theorem and simplifying one gets the following expressions for formula_9 and formula_10: formula_25 formula_26 In particular, if formula_27 and formula_28 then, for formula_29, formula_30 is the Fibonacci sequence formula_31, and formula_32 is the Lucas sequence formula_33. If formula_34 and formula_35, then formula_36, which are the numerators of continued fraction convergents to formula_37. This is also the sequence of half Pell-Lucas numbers. Further, formula_38, which is the sequence of Pell numbers. A differential equation. formula_9 and formula_10 are polynomial solutions of the following partial differential equation: formula_39 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
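A short symbolic sketch in Python (using sympy) illustrates how the polynomials arise from powers of the Brahmagupta matrix and checks the Fibonacci–Lucas specialization; the range of n used is an arbitrary choice:

```python
# Sketch (not from the paper): read x_n, y_n off B**n and check the
# specialization x = y = 1/2, t = 5, where 2*y_n gives Fibonacci numbers
# and 2*x_n gives Lucas numbers.
import sympy as sp

x, y, t = sp.symbols('x y t')
B = sp.Matrix([[x, y], [t * y, x]])

def brahmagupta(n):
    """Return (x_n, y_n), the entries of the first row of B**n."""
    Bn = (B ** n).applyfunc(sp.expand)
    return Bn[0, 0], Bn[0, 1]

for n in range(1, 5):
    xn, yn = brahmagupta(n)
    print(f"x_{n} = {xn},  y_{n} = {yn}")

subs = {x: sp.Rational(1, 2), y: sp.Rational(1, 2), t: 5}
for n in range(1, 8):
    xn, yn = brahmagupta(n)
    print(n, 2 * yn.subs(subs), 2 * xn.subs(subs))   # Fibonacci, Lucas
```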
[ { "math_id": 0, "text": "x^2 -Ny^2" }, { "math_id": 1, "text": "(x_1^2 - Ny_1^2)(x_2^2 - Ny_2^2) = (x_1x_2 + Ny_1y_2)^2 - N(x_1y_2 + x_2y_1)^2. " }, { "math_id": 2, "text": "t" }, { "math_id": 3, "text": "B(x,y) = \\begin{bmatrix} x & y \\\\ ty & x \\end{bmatrix}" }, { "math_id": 4, "text": "\\det B(x_1,y_1) \\det B(x_2,y_2) = \\det ( B(x_1,y_1)B(x_2,y_2))" }, { "math_id": 5, "text": "B(x,y)" }, { "math_id": 6, "text": "B=B(x,y)" }, { "math_id": 7, "text": "B^n" }, { "math_id": 8, "text": " B^n = \\begin{bmatrix} x_n & y_n \\\\ ty_n & x_n \\end{bmatrix}" }, { "math_id": 9, "text": "x_n" }, { "math_id": 10, "text": "y_n" }, { "math_id": 11, "text": "x, y, t" }, { "math_id": 12, "text": "\n\\begin{alignat}{2}\nx_1 & = x & y_1 & = y \\\\\nx_2 & = x^2+ty^2 & y_2 & = 2xy \\\\\nx_3 & = x^3+3txy^2 & y_3 & = 3x^2y+ty^3 \\\\\nx_4 & = x^4+6t^2x^2y^2+t^2y^4\\qquad & y_4 & = 4x^3y +4txy^3\n\\end{alignat}\n" }, { "math_id": 13, "text": "x_{n+1} = xx_n+tyy_n" }, { "math_id": 14, "text": "y_{n+1}=xy_n+yx_n" }, { "math_id": 15, "text": "x_{n+1} = 2xx_n - (x^2-ty^2)x_{n-1}" }, { "math_id": 16, "text": "y_{n+1} = 2xy_n - (x^2-ty^2)y_{n-1}" }, { "math_id": 17, "text": "x_{2n}=x_n^2+ty_n^2" }, { "math_id": 18, "text": "y_{2n}=2x_ny_n" }, { "math_id": 19, "text": "x\\pm y\\sqrt{t}" }, { "math_id": 20, "text": "[1, \\pm \\sqrt{t}]^T" }, { "math_id": 21, "text": "B[1, \\pm \\sqrt{t}]^T = (x\\pm y\\sqrt{t})[1, \\pm \\sqrt{t}]^T" }, { "math_id": 22, "text": "B^n[1, \\pm \\sqrt{t}]^T = (x\\pm y\\sqrt{t})^n[1, \\pm \\sqrt{t}]^T" }, { "math_id": 23, "text": "x_n = \\tfrac{1}{2}\\big[ (x + y\\sqrt{t})^n + (x - y\\sqrt{t})^n\\big]" }, { "math_id": 24, "text": "y_n = \\tfrac{1}{2\\sqrt{t}}\\big[ (x + y\\sqrt{t})^n - (x - y\\sqrt{t})^n\\big]" }, { "math_id": 25, "text": "x_n = x^n +t {n \\choose 2} x^{n-2}y^2 + t^2 {n\\choose 4}x^{n-4}y^4+\\cdots " }, { "math_id": 26, "text": " y_n = nx^{n-1}y +t{n\\choose 3}x^{n-3}y^3 + t^2{n \\choose 5}x^{n-5}y^5 +\\cdots " }, { "math_id": 27, "text": "x=y=\\tfrac{1}{2}" }, { "math_id": 28, "text": "t=5" }, { "math_id": 29, "text": "n>0" }, { "math_id": 30, "text": "2y_n=F_n" }, { "math_id": 31, "text": "1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \\ldots" }, { "math_id": 32, "text": "2x_n=L_n" }, { "math_id": 33, "text": "2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, \\ldots" }, { "math_id": 34, "text": "x=y=1" }, { "math_id": 35, "text": "t=2" }, { "math_id": 36, "text": "x_n=1,1,3,7,17,41,99,239,577,\\ldots" }, { "math_id": 37, "text": "\\sqrt{2}" }, { "math_id": 38, "text": "y_n= 0,1,2,5,12,29,70,169,408, \\ldots" }, { "math_id": 39, "text": " \\left( \\frac{\\partial^2}{\\partial x^2} - \\frac{1}{t}\\frac{\\partial^2}{\\partial y^2}\\right)U=0" } ]
https://en.wikipedia.org/wiki?curid=75452565
75460161
Linear biochemical pathway
A linear biochemical pathway is a chain of enzyme-catalyzed reaction steps where the product of one reaction becomes the substrate for the next reaction. The molecules progress through the pathway sequentially from the starting substrate to the final product. Each step in the pathway is usually facilitated by a different specific enzyme that catalyzes the chemical transformation. An example includes DNA replication, which connects the starting substrate and the end product in a straightforward sequence. Biological cells consume nutrients to sustain life. These nutrients are broken down into smaller molecules. Some of the molecules are used in the cells for various biological functions, and others are reassembled into more complex structures required for life. The breakdown and reassembly of nutrients is called metabolism. An individual cell contains thousands of different kinds of small molecules, such as sugars, lipids, and amino acids. The interconversion of these molecules is carried out by catalysts called enzymes. For example, the most widely studied bacterium, "E. coli" strain K-12, is able to produce about 2,338 metabolic enzymes. These enzymes collectively form a complex web of reactions comprising pathways by which substrates (including nutrients and intermediates) are converted to products (other intermediates and end-products). The figure below shows a four-step pathway, with intermediates formula_0 and formula_1. To sustain a steady-state, the boundary species formula_2 and formula_3 are fixed. Each step is catalyzed by an enzyme, formula_4. Linear pathways follow a step-by-step sequence, where each enzymatic reaction results in the transformation of a substrate into an intermediate product. This intermediate is processed by subsequent enzymes until the final product is synthesized. A linear pathway can be studied in various ways. Multiple computer simulations can be run to try to understand the pathway's behavior. Another way to understand the properties of a linear pathway is to take a more analytical approach. Analytical solutions can be derived for the steady-state if simple mass-action kinetics are assumed. Analytical solutions for the steady-state when assuming Michaelis-Menten kinetics can be obtained but are quite often avoided. Instead, such models are linearized. The three approaches that are usually used are therefore computer simulation, analytical solutions based on mass-action kinetics, and linearized models. Computer simulation. It is possible to build a computer simulation of a linear biochemical pathway. This can be done by building a simple model that describes each intermediate through a differential equation. The differential equations can be written by invoking mass conservation. For example, for the linear pathway: formula_5 where formula_2 and formula_3 are fixed boundary species, the non-fixed intermediate formula_6 can be described using the differential equation: formula_7 The rate of change of the non-fixed intermediates formula_8 and formula_1 can be written in the same way: formula_9 formula_10 To run a simulation, the rates formula_11 need to be defined. If mass-action kinetics are assumed for the reaction rates, then the differential equations can be written as: formula_12 If values are assigned to the rate constants, formula_13, and the fixed species formula_2 and formula_3, the differential equations can be solved, as sketched in the example below. Analytical solutions. Computer simulations can only yield so much insight, as one would be required to run simulations on a wide range of parameter values, which can be unwieldy. 
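A minimal sketch of such a simulation in Python, assuming arbitrary example values for the rate constants and the fixed boundary species, is shown below:

```python
# Minimal sketch of the mass-action simulation described above.
# Rate constants and boundary concentration are assumed example values.
import numpy as np
from scipy.integrate import solve_ivp

k = np.array([1.0, 0.5, 0.8, 2.0])   # k1..k4 (assumed)
Xo = 10.0                            # fixed boundary species X_o (assumed)

def rhs(t, s):
    s1, s2, s3 = s
    v1 = k[0] * Xo
    v2 = k[1] * s1
    v3 = k[2] * s2
    v4 = k[3] * s3
    return [v1 - v2, v2 - v3, v3 - v4]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, 0.0])
s1, s2, s3 = sol.y[:, -1]
print("steady-state concentrations:", s1, s2, s3)
print("steady-state flux:", k[3] * s3)   # equals k1 * Xo for this irreversible chain
```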
A generally more powerful way to understand the properties of a model is to solve the differential equations analytically. Analytical solutions are possible if simple mass-action kinetics on each reaction step are assumed: formula_14 where formula_15 and formula_16 are the forward and reverse rate-constants, respectively. formula_17 is the substrate and formula_18 the product. Writing the equilibrium constant for this reaction as formula_19 the mass-action kinetic equation can be modified to be: formula_20 Given the reaction rates, the differential equations describing the rates of change of the species can be written. For example, the rate of change of formula_21 will equal: formula_22 By setting the differential equations to zero, the steady-state concentration for the species can be derived. From here, the pathway flux equation can be determined. For the three-step pathway, the steady-state concentrations of formula_23 and formula_24 are given by: formula_25 Inserting either formula_21 or formula_24 into one of the rate laws will give the steady-state pathway flux, formula_26: formula_27 A pattern can be seen in this equation such that, in general, for a linear pathway of formula_28 steps, the steady-state pathway flux is given by: formula_29 Note that the pathway flux is a function of all the kinetic and thermodynamic parameters. This means there is no single parameter that determines the flux completely. If formula_13 is equated to enzyme activity, then every enzyme in the pathway has some influence over the flux. Linearized model: deriving control coefficients. Given the flux expression, it is possible to derive the flux control coefficients by differentiation and scaling of the flux expression. This can be done for the general case of formula_28 steps: formula_30 This result yields two corollaries: each flux control coefficient is bounded such that formula_31, and the flux control coefficients of the pathway sum to one. For the three-step linear chain, the flux control coefficients are given by: formula_32 where formula_33 is given by: formula_34 Given these results, there are some patterns: if the equilibrium constants are large (formula_35), flux control tends to be concentrated at the front of the pathway, with formula_36 approaching one. With more moderate equilibrium constants, perturbations can travel upstream as well as downstream. For example, a perturbation at the last step, formula_37, is better able to influence the reaction rates upstream, which results in an alteration in the steady-state flux. An important result can be obtained if all formula_13 are set equal to each other. Under these conditions, the flux control coefficient is proportional to the numerator. That is: formula_38 If it is assumed that the equilibrium constants are all greater than 1.0, as earlier steps have more formula_39 terms, it must mean that earlier steps will, in general, have larger flux control coefficients. In a linear chain of reaction steps, flux control will tend to be biased towards the front of the pathway. From a metabolic engineering or drug-targeting perspective, preference should be given to targeting the earlier steps in a pathway since they have the greatest effect on pathway flux. Note that this rule only applies to pathways without negative feedback loops. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
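The closed-form flux and flux control coefficients derived above can be evaluated numerically; the sketch below uses assumed example parameters and checks that the coefficients sum to one:

```python
# Sketch: steady-state flux J and flux control coefficients C_i^J for the
# three-step chain, with assumed example parameter values.
import numpy as np

k = np.array([1.0, 2.0, 0.5])    # rate constants k1..k3 (assumed)
q = np.array([5.0, 4.0, 3.0])    # equilibrium constants q1..q3 (assumed)
xo, x1 = 1.0, 0.0                # fixed boundary concentrations (assumed)

d = q[0]*q[1]*q[2]/k[0] + q[1]*q[2]/k[1] + q[2]/k[2]
J = (xo * q[0]*q[1]*q[2] - x1) / d
C = np.array([q[0]*q[1]*q[2]/k[0], q[1]*q[2]/k[1], q[2]/k[2]]) / d

print("flux J =", J)
print("flux control coefficients:", C, "  sum =", C.sum())   # sum is 1
```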
[ { "math_id": 0, "text": "S_1, S_2, " }, { "math_id": 1, "text": "S_3" }, { "math_id": 2, "text": "X_o" }, { "math_id": 3, "text": "X_1" }, { "math_id": 4, "text": "e_i" }, { "math_id": 5, "text": "X_o \\stackrel{v_1}{\\longrightarrow} S_1 \\stackrel{v_2}{\\longrightarrow} S_2 \\stackrel{v_3}{\\longrightarrow} \nS_3 \\stackrel{v_4}{\\longrightarrow} X_1" }, { "math_id": 6, "text": "S_1" }, { "math_id": 7, "text": "\\frac{dS_1}{dt} = v_1 - v_2" }, { "math_id": 8, "text": "S_2" }, { "math_id": 9, "text": "\\frac{dS_2}{dt} = v_2 - v_3" }, { "math_id": 10, "text": "\\frac{dS_3}{dt} = v_3 - v_4" }, { "math_id": 11, "text": "v_i" }, { "math_id": 12, "text": "\\begin{array}{lcl} \\dfrac{dS_1}{dt} &=& k_1 X_o - k_2 S_1 \\\\[4pt] \n\\dfrac{dS_2}{dt} &=& k_2 S_1 - k_3 S_2 \\\\[4pt]\n\\dfrac{dS_3}{dt} &=& k_3 S_2 - k_4 S_3\n\\end{array}" }, { "math_id": 13, "text": "k_i" }, { "math_id": 14, "text": " v_i = k_i s_{i-1} - k_{-i} s_{i} " }, { "math_id": 15, "text": " k_i" }, { "math_id": 16, "text": "k_{-1}" }, { "math_id": 17, "text": "s_{i-1}" }, { "math_id": 18, "text": "s_i" }, { "math_id": 19, "text": "K_{eq} = q_i = \\frac{k_i}{k_{-i}} = \\frac{s_i}{s_{i-1}} " }, { "math_id": 20, "text": " v_i = k_i \\left( s_{i-1} - \\frac{s_i}{q_i} \\right) " }, { "math_id": 21, "text": " s_1" }, { "math_id": 22, "text": " \\frac{ds_1}{dt} = k_1 \\left( x_0 - \\frac{s_1}{q_1} \\right) - k_2 \\left( s_1 - \\frac{s_2}{q_2} \\right) " }, { "math_id": 23, "text": "s_1" }, { "math_id": 24, "text": "s_2" }, { "math_id": 25, "text": "\n\\begin{aligned}\n&s_1=\\frac{q_1}{q_3} \\frac{k_2 k_3 x_1+k_1 k_2 q_3 x_o+k_1 k_3 q_2 q_3 x_o}{k_1 k_2+k_1 k_3 q_2+k_2 k_3 q_1 q_2} \\\\[6pt]\n&s_2=\\frac{q_2}{q_3} \\frac{k_1 k_3 x_1+k_2 k_3 q_1 x_1+k_1 k_2 q_1 q_3 x_o}{k_1 k_2+k_1 k_3 q_2+k_2 k_3 q_1 q_2}\n\\end{aligned}\n" }, { "math_id": 26, "text": " J" }, { "math_id": 27, "text": "J=\\frac{x_o q_1 q_2 q_3-x_1}{\\frac{1}{k_1} q_1 q_2 q_3+\\frac{1}{k_2} q_2 q_3+\\frac{1}{k_3} q_3} " }, { "math_id": 28, "text": "n" }, { "math_id": 29, "text": "J=\\frac{x_o \\prod_{i=1}^n q_i-x_1}{\\sum_{i=1}^n \\frac{1}{k_i}\\left(\\prod_{j=i}^n q_j\\right)}" }, { "math_id": 30, "text": "C_i^J=\\frac{\\frac{1}{k_i} \\prod_{j=i}^n q_j}{\\sum_{j=1}^n \\frac{1}{k_j} \\prod_{k=j}^n q_k}" }, { "math_id": 31, "text": "0 \\leq C^J_i \\leq 1" }, { "math_id": 32, "text": " C_1^J=\\frac{1}{k_1} \\frac{q_1 q_2 q_3}{d} ; \\quad C_2^J=\\frac{1}{k_2} \\frac{q_2 q_3}{d} ; \\quad C_3^J=\\frac{1}{k_3} \\frac{q_3}{d} " }, { "math_id": 33, "text": "d" }, { "math_id": 34, "text": " d=\\frac{1}{k_1} q_1 q_2 q_3+\\frac{1}{k_2} q_2 q_3+\\frac{1}{k_3} q_3 " }, { "math_id": 35, "text": "q_i \\gg 1" }, { "math_id": 36, "text": " C^J_{1}" }, { "math_id": 37, "text": "k_3" }, { "math_id": 38, "text": "\n\\begin{aligned}\nC^J_1 &\\propto q_1 q_2 q_ 3\\\\\nC^J_2 &\\propto q_2 q_ 3\\\\\nC^J_3 &\\propto q_ 3\\\\\n\\end{aligned}\n" }, { "math_id": 39, "text": "q_i" } ]
https://en.wikipedia.org/wiki?curid=75460161
75460257
Narayana polynomials
Narayana polynomials are a class of polynomials whose coefficients are the Narayana numbers. The Narayana numbers and Narayana polynomials are named after the Canadian mathematician T. V. Narayana (1930–1987). They appear in several combinatorial problems. Definitions. For a positive integer formula_0 and for an integer formula_1, the Narayana number formula_2 is defined by formula_3 The number formula_4 is defined as formula_5 for formula_6 and as formula_7 for formula_8. For a nonnegative integer formula_9, the formula_0-th Narayana polynomial formula_10 is defined by formula_11 The associated Narayana polynomial formula_12 is defined as the reciprocal polynomial of formula_10: formula_13. Examples. The first few Narayana polynomials are formula_14 formula_15 formula_16 formula_17 formula_18 formula_19 Properties. A few of the properties of the Narayana polynomials and the associated Narayana polynomials are collected below. Further information on the properties of these polynomials is available in the references cited. For formula_26, the value formula_21 is the formula_0-th Catalan number formula_22; the first few of these values are formula_23. The values formula_24 form the sequence formula_25 of large Schröder numbers. Alternative form of the Narayana polynomials. The Narayana polynomials can be expressed in the following alternative form: formula_20 For formula_33, the associated Narayana polynomials satisfy the recurrence relation formula_34, and for formula_35 they also satisfy the recurrence formula_36 with formula_37 and formula_38. Generating function. The ordinary generating function of the Narayana polynomials is given by formula_39 Integral representation. The formula_0-th degree Legendre polynomial formula_40 is given by formula_41 Then, for "n" &gt; 0, the Narayana polynomial formula_10 can be expressed in the following form: formula_42 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
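A brief computational sketch in Python constructs the polynomials directly from the Narayana numbers and checks the Catalan specialization at z = 1; the range of n is an arbitrary choice:

```python
# Sketch: Narayana polynomials from Narayana numbers N(n,k) = (1/n) C(n,k) C(n,k-1),
# and the check that N_n(1) gives the Catalan numbers 1, 1, 2, 5, 14, 42, ...
from math import comb
import sympy as sp

z = sp.symbols('z')

def narayana_number(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k < 1 or k > n:
        return 0
    return comb(n, k) * comb(n, k - 1) // n

def narayana_poly(n):
    return sp.expand(sum(narayana_number(n, k) * z**k for k in range(n + 1)))

for n in range(6):
    Nn = narayana_poly(n)
    print(n, Nn, "  N_n(1) =", Nn.subs(z, 1), "  N_n(2) =", Nn.subs(z, 2))
```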
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "k\\geq0" }, { "math_id": 2, "text": "N(n,k)" }, { "math_id": 3, "text": " N(n,k) = \\frac{1}{n}{n \\choose k}{n\\choose k-1}." }, { "math_id": 4, "text": "N(0,k)" }, { "math_id": 5, "text": "1" }, { "math_id": 6, "text": "k=0" }, { "math_id": 7, "text": "0" }, { "math_id": 8, "text": "k\\ne0" }, { "math_id": 9, "text": "n " }, { "math_id": 10, "text": "N_n(z)" }, { "math_id": 11, "text": "N_n(z) = \\sum_{k=0}^n N(n,k)z^k." }, { "math_id": 12, "text": "\\mathcal N_n(z)" }, { "math_id": 13, "text": "\\mathcal N_n(z)=z^nN_n\\left(\\tfrac{1}{z}\\right)" }, { "math_id": 14, "text": "N_0(z)=1" }, { "math_id": 15, "text": "N_1(z)=z" }, { "math_id": 16, "text": "N_2(z)=z^2+z " }, { "math_id": 17, "text": "N_3(z)=z^3+3z^2+z " }, { "math_id": 18, "text": "N_4(z)=z^4+6z^3+6z^2+z " }, { "math_id": 19, "text": "N_5(z)=z^5+10z^4+20z^3+10z^2+z " }, { "math_id": 20, "text": "N_n(z)= \\sum_0^n \\frac{1}{n+1}{n+1 \\choose k}{2n-k \\choose n}(z-1)^k " }, { "math_id": 21, "text": "N_n(1) " }, { "math_id": 22, "text": "C_n=\\frac{1}{n+1}{2n \\choose n} " }, { "math_id": 23, "text": "1, 1, 2, 5, 14, 42, 132, 429, \\ldots" }, { "math_id": 24, "text": "N_n(2) " }, { "math_id": 25, "text": "1, 2, 6, 22, 90, 394, 1806, 8558, \\ldots" }, { "math_id": 26, "text": "n\\ge 0" }, { "math_id": 27, "text": "d_n" }, { "math_id": 28, "text": "(0,0)" }, { "math_id": 29, "text": "(n,n)" }, { "math_id": 30, "text": "n\\times n" }, { "math_id": 31, "text": "S = \\{(k, 0) : k \\in \\mathbb N^+\\} \\cup \\{(0, k) : k \\in \\mathbb N^+\\}" }, { "math_id": 32, "text": "d_n = \\mathcal N(4)" }, { "math_id": 33, "text": "n \\ge 3" }, { "math_id": 34, "text": "\\mathcal N_n(z) = (1+z)N_{n-1}(z) + z \\sum_{k=1}^{n-2}\\mathcal N_k(z)\\mathcal N_{n-k-1}(z)" }, { "math_id": 35, "text": "n\\ge 3" }, { "math_id": 36, "text": "(n+1)\\mathcal N_n(z) = (2n-1)(1+z)\\mathcal N_{n-1}(z) - (n-2)(z-1)^2\\mathcal N_{n-2}(z)" }, { "math_id": 37, "text": "\\mathcal N_1(z)=1" }, { "math_id": 38, "text": "\\mathcal N_2(z)=1+z" }, { "math_id": 39, "text": " \\sum_{n=0}^{\\infty} N_n(z)t^n = \\frac{1+t-t z -\\sqrt{1 - 2(1+z) t + (1-z)^2 t^2 }}{2 t}." }, { "math_id": 40, "text": "P_n(x)" }, { "math_id": 41, "text": " P_n(x) = 2^{-n}\\sum_{k=0}^{\\left\\lfloor \\frac{n}{2}\\right\\rfloor } (-1)^k {n-k \\choose k}{2n-2k \\choose n-k}x^{n-2k}" }, { "math_id": 42, "text": "N_n(z)=(z-1)^{n+1}\\int_0^{\\frac{z}{z-1}} P_n(2x-1)\\,dx" } ]
https://en.wikipedia.org/wiki?curid=75460257
75463818
Quasi-isodynamic stellarator
Class of magnetic confinement fusion reactor A quasi-isodynamic (QI) stellarator is a type of stellarator (a magnetic confinement fusion reactor) that satisfies the property of omnigeneity, avoids the potentially hazardous toroidal bootstrap current, and has minimal neoclassical transport in the collisionless regime. Wendelstein 7-X, the largest stellarator in the world, was designed to be roughly quasi-isodynamic (QI). In contrast to quasi-symmetric fields, exactly QI fields on flux surfaces cannot be expressed analytically. However, it has been shown that nearly-exact QI can be extremely well approximated through mathematical optimization, and that the resulting fields enjoy the aforementioned properties. In a QI field, level curves of the magnetic field strength formula_0 on a flux surface close poloidally (the short way around the torus), and not toroidally (the long way around), causing the stellarator to resemble a series of linked magnetic mirrors. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B" } ]
https://en.wikipedia.org/wiki?curid=75463818
75465176
Ahlswede–Khachatrian theorem
Theorem in extremal set theory In extremal set theory, the Ahlswede–Khachatrian theorem generalizes the Erdős–Ko–Rado theorem to t-intersecting families. Given parameters n, k and t, it describes the maximum size of a t-intersecting family of subsets of formula_0 of size k, as well as the families achieving the maximum size. Statement. Let formula_1 be integer parameters. A "t-intersecting family" is a collection of subsets of formula_0 of size k such that any two members A and B of the collection satisfy formula_2. Frankl constructed the t-intersecting families formula_3 The Ahlswede–Khachatrian theorem states that if a family of k-element subsets of formula_0 is t-intersecting then formula_4 Furthermore, equality is possible only if the family is "equivalent" to a Frankl family, meaning that it coincides with one after permuting the coordinates. More explicitly, if formula_5 then formula_7, with equality if and only if the family is equivalent to formula_8; and if formula_9 then formula_10, with equality if and only if the family is equivalent to formula_8 or to formula_11. History. Erdős, Ko and Rado showed that if formula_12 then the maximum size of a t-intersecting family is formula_13. Frankl proved that when formula_14, the same bound holds for all formula_15, which is tight due to the example formula_16. This was extended to all t (using completely different techniques) by Wilson. As for smaller n, Erdős, Ko and Rado made the "4m conjecture", which states that when formula_17, the maximum size of a t-intersecting family is formula_18 which coincides with the size of the Frankl family formula_19. This conjecture is a special case of the Ahlswede–Khachatrian theorem. Ahlswede and Khachatrian proved their theorem in two different ways: using generating sets and using its dual. Using similar techniques, they later proved the corresponding Hilton–Milner theorem, which determines the maximum size of a t-intersecting family with the additional condition that no element is contained in all sets of the family. Related results. Weighted version. Katona's intersection theorem determines the maximum size of an intersecting family of subsets of formula_0. When &NoBreak;&NoBreak; is odd, the unique optimal family consists of all sets of size at least &NoBreak;&NoBreak; (corresponding to the Majority function), and when &NoBreak;&NoBreak; is odd, the unique optimal families consist of all sets whose intersection with a fixed set of size &NoBreak;&NoBreak; is at least &NoBreak;&NoBreak; (Majority on &NoBreak;&NoBreak; coordinates). Friedgut considered a measure-theoretic generalization of Katona's theorem, in which instead of maximizing the size of the intersecting family, we maximize its measure, where the measure of a set S is given by the formula formula_20 This measure corresponds to the process which chooses a random subset of formula_0 by adding each element with probability p independently. Katona's intersection theorem is the case p = 1/2. Friedgut considered the case &NoBreak;&NoBreak;. The weighted analog of the Erdős–Ko–Rado theorem states that if a family is intersecting then its measure is at most p for all p ≤ 1/2, with equality if and only if the family consists of all sets containing a fixed point. 
Friedgut proved the analog of Wilson's result in this setting: if a family is t-intersecting then its measure is at most p^t for all p up to a corresponding threshold, with equality if and only if the family consists of all sets containing t fixed points. Friedgut's techniques are similar to Wilson's. Dinur and Safra and Ahlswede and Khachatrian observed that the Ahlswede–Khachatrian theorem implies its own weighted version. To state the weighted version, we first define the analogs of the Frankl families: formula_21 The weighted Ahlswede–Khachatrian theorem states that if a family is t-intersecting then formula_22 with equality only if the family is equivalent to a Frankl family. Explicitly, formula_23 is optimal in the range formula_24 The argument of Dinur and Safra proves this result without the characterization of the optimal cases. The main idea is that if we take a uniformly random subset of formula_25 of an appropriate size, then the distribution of its intersection with formula_26 tends to the measure defined above as N tends to infinity. Filmus proved the weighted Ahlswede–Khachatrian theorem for all values of p, using the original arguments of Ahlswede and Khachatrian in one range of p, and a different argument of Ahlswede and Khachatrian, originally used to provide an alternative proof of Katona's theorem, in the remaining range. He also showed that the Frankl families are the unique optimal families. Version for strings. Ahlswede and Khachatrian proved a version of the Ahlswede–Khachatrian theorem for strings over a finite alphabet. Given a finite alphabet, a collection of strings of length n is "t-intersecting" if any two strings in the collection agree in at least t places. The analogs of the Frankl family in this setting are formula_27 where w0 is an arbitrary word, and formula_28 is the number of positions in which w and w0 agree. The Ahlswede–Khachatrian theorem for strings states that if a collection of strings is t-intersecting then formula_29 with equality if and only if the collection is equivalent to a Frankl family. The theorem is proved by a reduction to the weighted Ahlswede–Khachatrian theorem, with formula_30. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Works cited. &lt;templatestyles src="Refbegin/styles.css" /&gt;
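A small computational sketch in Python evaluates the sizes of the Frankl families and the resulting bound of the theorem for assumed example parameters (n, k, t chosen arbitrarily for illustration):

```python
# Sketch: |F_{n,k,t,r}| = number of k-subsets A of {1,...,n} with
# |A ∩ {1,...,t+2r}| >= t+r, and the bound max_r |F_{n,k,t,r}|.
from math import comb

def frankl_size(n, k, t, r):
    m = t + 2 * r
    if m > n:
        return 0
    return sum(comb(m, i) * comb(n - m, k - i)
               for i in range(t + r, min(m, k) + 1))

def ak_bound(n, k, t):
    return max(frankl_size(n, k, t, r) for r in range((n - t) // 2 + 1))

n, k, t = 12, 5, 2   # assumed example parameters
for r in range((n - t) // 2 + 1):
    print("r =", r, " |F_{n,k,t,r}| =", frankl_size(n, k, t, r))
print("Ahlswede–Khachatrian bound:", ak_bound(n, k, t))
```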
[ { "math_id": 0, "text": "\\{1,\\dots,n\\}" }, { "math_id": 1, "text": "n \\ge k \\ge t \\ge 1" }, { "math_id": 2, "text": "|A\\cap B| \\ge t" }, { "math_id": 3, "text": "\n\\mathcal{F}_{n,k,t,r} = \\{ A \\subseteq \\{1,\\dots,n\\} : |A|=k \\text{ and } |A \\cap \\{1,\\dots,t+2r\\}| \\ge t+r \\}.\n" }, { "math_id": 4, "text": "\n|\\cal F| \\leq \\max_{r\\colon t+2r \\leq n} |\\mathcal{F}_{n,k,t,r}|.\n" }, { "math_id": 5, "text": "\n(k-t+1)(2+\\tfrac{t-1}{r+1}) < n < (k-t+1)(2+\\tfrac{t-1}{r})\n" }, { "math_id": 6, "text": "r=0" }, { "math_id": 7, "text": "|\\mathcal{F}| \\leq |\\mathcal{F}_{n,k,t,r}|" }, { "math_id": 8, "text": "\\mathcal{F}_{n,k,t,r}" }, { "math_id": 9, "text": "\n(k-t+1)(2+\\tfrac{t-1}{r+1}) = n\n" }, { "math_id": 10, "text": "|\\mathcal{F}| \\leq |\\mathcal{F}_{n,k,t,r}| = |\\mathcal{F}_{n,k,t,r+1}|" }, { "math_id": 11, "text": "\\mathcal{F}_{n,k,t,r+1}" }, { "math_id": 12, "text": "n \\ge t + (k-t) \\binom{k}{t}^2" }, { "math_id": 13, "text": "|\\mathcal{F}_{n,k,t,0}| = \\binom{n-t}{k-t}" }, { "math_id": 14, "text": "t \\ge 15" }, { "math_id": 15, "text": "n \\ge (t+1)(k-t-1)" }, { "math_id": 16, "text": "\\mathcal{F}_{n,k,t,1}" }, { "math_id": 17, "text": "(n,k,t)=(4m,2m,2)" }, { "math_id": 18, "text": "\n|\\{ A \\subseteq \\{1,\\ldots,4m\\} : |A|=2m \\text{ and } |A \\cap \\{1,\\ldots,2m\\}| \\ge m+1 \\}|,\n" }, { "math_id": 19, "text": "\\mathcal{F}_{4m,2m,2,m-1}" }, { "math_id": 20, "text": "\n\\mu_p(S) = p^{|S|} (1-p)^{n-|S|}.\n" }, { "math_id": 21, "text": "\n\\mathcal{F}_{n,t,r} = \\{ A \\subseteq \\{1,\\dots,n\\} : |A \\cap \\{1,\\dots,t+2r\\}| \\ge t+r \\}.\n" }, { "math_id": 22, "text": "\n\\mu_p(\\mathcal{F}) \\leq \\max_{r\\colon t+2r \\leq n} \\mu_p(\\mathcal{F}_{n,t,r}),\n" }, { "math_id": 23, "text": "\\mathcal{F}_{n,t,r}" }, { "math_id": 24, "text": "\n\\frac{r}{t+2r-1} \\leq p \\leq \\frac{r+1}{t+2r+1}.\n" }, { "math_id": 25, "text": "\\{1,\\dots,N\\}" }, { "math_id": 26, "text": "\\{1,\\ldots,n\\}" }, { "math_id": 27, "text": "\n\\mathcal{F}_{n,t,r} = \\{ w \\in \\Sigma^n : |w \\cap w_0| \\ge t+r \\},\n" }, { "math_id": 28, "text": "|w \\cap w_0|" }, { "math_id": 29, "text": "\n|\\mathcal{F}| \\leq \\max_{r\\colon t+2r \\leq n} |\\mathcal{F}_{n,t,r}|,\n" }, { "math_id": 30, "text": "p=1/|\\Sigma|" } ]
https://en.wikipedia.org/wiki?curid=75465176
75468149
Hsuan thu
Hsuan thu is a diagram given in the ancient Chinese astronomical and mathematical text "Zhoubi Suanjing" indicating a proof of the Pythagorean theorem. "Zhoubi Suanjing" is one of the oldest Chinese texts on mathematics. The exact date of composition of the book has not been determined. Some estimates of the date range as far back as 1100 BCE, while others estimate the date as late as 200 CE. However, from astronomical evidence available in the book it would appear that much of the material in the book is from the time of Confucius, that is, the 6th century BCE. Hsuan thu represents one of the earliest known proofs of the Pythagorean theorem and also one of the simplest. The text in "Zhoubi Suanjing" accompanying the diagram has been translated as follows: "The art of numbering proceeds from the circle and the square. The circle is derived from the square and the square from the rectangle (literally, the T-square or the carpenter's square). The rectangle originates from the fact that 9x9 = 81 (that is, the multiplication table or properties of numbers as such). Thus let us cut a rectangle (diagonally) and make the width 3 (units) wide and the height 4 (units) long. The diagonal between the two corners will then be 5 (units) long. Now after drawing a square on the diagonal, circumscribe it by half-rectangles like that which has been left outside, so as to form a (square) plate. Thus the (four) outer half-rectangles of width 3, length 4 and diagonal 5, together make two rectangles (of area 24); then (when this is subtracted from the square plate of area 49) the remainder is of area 25. This (process) is called 'piling up the rectangles' ("chi chu")." The "hsuan thu" diagram makes use of the 3,4,5 right triangle to demonstrate the Pythagorean theorem. However, the Chinese seem to have generalized its conclusion to all right triangles. The "hsuan thu" diagram, in its generalized form, can be found in the writings of the Indian mathematician Bhaskara II (c. 1114–1185). The description of this diagram appears in verse 129 of "Bijaganita" of Bhaskara II. There is a legend that Bhaskara's proof of the Pythagorean theorem consisted of just one word, namely, "Behold!". However, using the notations of the diagram, the theorem follows from the following equation: formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c^2=(a-b)^2+4(\\tfrac{1}{2}ab)=a^2+b^2." } ]
https://en.wikipedia.org/wiki?curid=75468149
75468396
Gravitational Aharonov-Bohm effect
In physics, the gravitational Aharonov-Bohm effect is a phenomenon involving the behavior of particles acting according to quantum mechanics while under the influence of a classical gravitational field. It is the gravitational analog of the well-known Aharonov–Bohm effect, which is about the quantum mechanical behavior of particles in a classical electromagnetic field. Electric effect. There are many variants of the Aharonov-Bohm effect in electromagnetism. Here we review an electric version of the Aharonov-Bohm effect that is most similar to the gravitational effect which has been experimentally observed. This electric effect is caused by a charged particle (say, an electron) being in a superposition of traveling down two different paths. In both paths, the electric field that the electron sees is zero everywhere along the path, but the scalar electric potential that the electron sees is not the same for both paths. In the above figure, the beamsplitter puts the electron in a superposition of taking the upper path and taking the lower path. In both paths, when the electron gets to the mirror, it is stopped and held there. During that time when the electron is held in place at a mirror, 2 electric charges each with charge formula_0 are brought near the upper mirror in a symmetric manner such that the net electric field caused by the 2 charges at the upper mirror is 0. We assume that the lower mirror is far enough away from the upper mirror such that the electric potential (and electric field) caused by the 2 charges is 0 at the lower mirror. So, this creates an electric potential difference between upper and lower mirrors equal to formula_1, where formula_2 is the distance of the charges from the mirror and formula_3 is the electric constant. The electron is held there for a time formula_4, after which the charges are moved away and the electron is allowed to continue moving along its path. Assuming that the time we take to move the 2 charges to and from the mirror is much smaller than formula_4, this time that the electron spends at the mirror causes a phase shift equal to formula_5 where formula_6 is the elementary charge. When the 2 paths of the interferometer are recombined, we see a different interference pattern depending on whether we brought the charges near the upper mirror to create a potential difference. This is surprising, because no matter whether we brought the charges near the upper mirror to create a potential difference, the electron always remains at a location where the electric field is zero (to be more precise, the wavefunction of the electron is only ever nonzero at locations where the electric field is 0). This electric Aharonov-Bohm effect has not been experimentally observed, unlike the magnetic effect. It is not generally feasible to trap an electron at a "mirror" in the interferometer while the potential is turned on and off, which is necessary in this setup to ensure that the electron stays in a region where the field is 0 while the potential is varied. Proposals for experimentally observing the effect instead involve shielding the electron from any electric field by having it travel through a conducting cylinder while the potential is varied. In contrast, one experiment proposal for the gravitational Aharonov-Bohm effect actually does involve trapping atoms (which play an analogous role to electrons in the experiment proposal) and holding them in a region where the gravitational field is zero using optical lattices. Gravitational effect. 
Just as there are many variants of the Aharonov-Bohm effect in electromagnetism, there are many variants of the gravitational effect. The simplest version of the gravitational effect is analogous to the electric effect above, with the electron replaced by a small test mass such as an atom, and the 2 charges that create an electric potential replaced by 2 masses that create a gravitational potential. In the above figure, an atom passes through an atomic "beamsplitter" that puts the atom in a superposition of taking the upper and lower paths. The atoms are then reflected by atomic "mirrors" that cause them to recombine at the detector on the right, where an interference pattern is detected. When the atom is at a "mirror", it is paused and held there while a potential is introduced. The potential is created by moving 2 massive objects, each with mass formula_7, to the left and right sides of the upper mirror, a distance formula_2 away from the mirror. The masses are brought towards the upper mirror in a symmetric manner such that the gravitational field caused by the masses is 0 at the upper mirror. We assume that the upper mirror is far enough away from the lower mirror such that the masses create zero potential (and zero field) at the lower mirror, which means they create a gravitational potential difference of formula_8 between the upper and lower mirrors. Despite this gravitational potential difference, the gravitational field at the upper and lower mirrors is 0, and the atom is never in any position with a nonzero gravitational field. Still, a time formula_4 spent at the mirrors with that potential difference causes a phase shift, formula_9 where formula_10 is the mass of the atom. This phase shift is detected by observing the interference pattern where the atom paths recombine, which will be different depending on whether the potential difference was applied. Instead of these idealized paths for the atom that involve "mirrors" that pause the atom in its place while a potential is applied, the atom could be moved in those paths by an optical lattice. This would allow precise control over the positions of the atom and the amount of time spent in the gravitational potential. The various electromagnetic versions of the Aharonov-Bohm effect can be described in a way that does not suggest any physical reality to the electromagnetic potentials and does not require any nonlocality, by treating the sources of the electromagnetic field and the electromagnetic field itself quantum mechanically, instead of treating the test charge (electron) quantum mechanically and the electromagnetic field and its sources classically. Without a theory of quantum gravity, we cannot appeal to a fully quantum treatment of the test mass (atom), the sources of the gravitational field, and the gravitational field itself in order to explain the gravitational Aharonov-Bohm effect in a fully local, gauge-independent manner. However, this effect can be explained in a local, gauge-independent manner by considering the gravitational time dilation experienced by the atom in the path with the nonzero potential, and taking into account that matter waves pick up a phase at the Compton frequency of the matter. Experimental observation. In January 2022, a team led by Mark Kasevich announced that they had experimentally observed a gravitational Aharonov-Bohm effect with an experiment broadly similar to the one outlined above. The source of the gravitational potential in their experiment was a single 1.25 kg tungsten mass. 
The test masses were rubidium-87 atoms. The tungsten mass was fixed, so the gravitational field caused by the tungsten mass was not zero everywhere along the paths of the 87Rb atoms. This means that the phase shift of the rubidium atoms between the 2 paths was not caused by a gravitational potential energy difference alone, but also by a difference in the gravitational force felt by the atoms in the 2 paths. By detecting a difference in the phase shift between when the tungsten mass is present and when it is not present, they observed a phase shift consistent with that predicted by the Aharonov-Bohm effect. The "beamsplitters" and "mirrors" used to make the 87Rb atoms interfere are not solid-state components as would be the case with standard interferometers with light. Rather, they consisted of laser pulses that coherently transfer momentum between the atoms and photons.
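As a rough numerical sketch of the idealized formula above (not an analysis of the actual experiment), the phase shift can be estimated in Python; the separation and hold time below are assumed illustrative values:

```python
# Order-of-magnitude sketch of the idealized phase shift -2*G*M*m*T/(r*hbar).
# The separation r and hold time T are assumed values, not experimental parameters.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
hbar = 1.055e-34     # reduced Planck constant [J s]
m = 1.443e-25        # mass of a rubidium-87 atom [kg]
M = 1.25             # source mass [kg]
r = 0.10             # distance from source masses to the mirror [m] (assumed)
T = 1.0              # hold time [s] (assumed)

delta_phi = -2 * G * M * m * T / (r * hbar)
print(f"phase shift = {delta_phi:.2f} rad")   # roughly a few radians for these inputs
```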
[ { "math_id": 0, "text": "Q" }, { "math_id": 1, "text": "\\Delta U=\\frac{2Q}{4\\pi\\epsilon_0r}" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "\\epsilon_0" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\Delta\\phi=-e\\Delta UT/\\hbar=-\\frac{2eQT}{4\\pi\\epsilon_0r\\hbar}" }, { "math_id": 6, "text": "e" }, { "math_id": 7, "text": "M" }, { "math_id": 8, "text": "\\Delta U=-\\frac{2GM}{r}" }, { "math_id": 9, "text": "\\Delta\\phi=\\Delta UmT/\\hbar=-\\frac{2GMmT}{r\\hbar}" }, { "math_id": 10, "text": "m" } ]
https://en.wikipedia.org/wiki?curid=75468396
75468672
Quasi-polynomial growth
Subexponential bound in computational complexity In theoretical computer science, a function formula_0 is said to exhibit quasi-polynomial growth when it has an upper bound of the form formula_1 for some constant formula_2, as expressed using big O notation. That is, it is bounded by an exponential function of a polylogarithmic function. This generalizes the polynomials and the functions of polynomial growth, for which one can take formula_3. A function with quasi-polynomial growth is also said to be quasi-polynomially bounded. Quasi-polynomial growth has been used in the analysis of algorithms to describe certain algorithms whose computational complexity is not polynomial, but is substantially smaller than exponential. In particular, algorithms whose worst-case running times exhibit quasi-polynomial growth are said to take quasi-polynomial time. As well as time complexity, some algorithms require quasi-polynomial space complexity, use a quasi-polynomial number of parallel processors, can be expressed as algebraic formulas of quasi-polynomial size or have a quasi-polynomial competitive ratio. In some other cases, quasi-polynomial growth is used to model restrictions on the inputs to a problem that, when present, lead to good performance from algorithms on those inputs. It can also bound the size of the output for some problems; for instance, for the shortest path problem with linearly varying edge weights, the number of distinct solutions can be quasipolynomial. Beyond theoretical computer science, quasi-polynomial growth bounds have also been used in mathematics, for instance in partial results on the Hirsch conjecture for the diameter of polytopes in polyhedral combinatorics, or relating the sizes of cliques and independent sets in certain classes of graphs. However, in polyhedral combinatorics and enumerative combinatorics, a different meaning of the same word also is used, for the quasi-polynomials, functions that generalize polynomials by having periodic coefficients. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
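A small numerical comparison, with an assumed exponent c = 2, illustrates how a quasi-polynomial bound sits between polynomial and exponential growth:

```python
# Comparison sketch: 2**((log2 n)**c) with c = 2 eventually exceeds any fixed
# polynomial such as n**3, yet stays far below the exponential 2**n.
import math

def quasipoly(n, c=2):
    return 2.0 ** (math.log2(n) ** c)

for n in (2 ** 10, 2 ** 20, 2 ** 30):
    print(f"n = {n:>10}: n^3 = {n ** 3:.2e}, "
          f"2^((log2 n)^2) = {quasipoly(n):.2e}, "
          f"2^n has about {int(n * math.log10(2))} decimal digits")
```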
[ { "math_id": 0, "text": "f(n)" }, { "math_id": 1, "text": "f(n)=2^{O\\bigl((\\log n)^c\\bigr)}" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "c=1" } ]
https://en.wikipedia.org/wiki?curid=75468672
75480859
Petz recovery map
In quantum information theory, a mix of quantum mechanics and information theory, the Petz recovery map can be thought of as a quantum analog of Bayes' theorem. Proposed by Dénes Petz, the Petz recovery map is a quantum channel associated with a given quantum channel and quantum state. This recovery map is designed so that, when applied to the output state resulting from the given quantum channel acting on an input state, it enables the inference of the original input state. In essence, the Petz recovery map serves as a tool for reconstructing information about the initial quantum state from its transformed counterpart under the influence of the specified quantum channel. The Petz recovery map finds applications in various domains, including quantum retrodiction, quantum error correction, and entanglement wedge reconstruction for black hole physics. Definition. Suppose we have a quantum state described by a density operator formula_0 and a quantum channel formula_1. The Petz recovery map is then defined as formula_2 Notice that formula_3 is the Hilbert-Schmidt adjoint of formula_1. The Petz map has been generalized in various ways in the field of quantum information theory. Properties of the Petz recovery map. A crucial property of the Petz recovery map is its ability to function as a quantum channel in certain cases, making it an essential tool in quantum information theory. 1. The Petz recovery map is completely positive, since it is a composition of the completely positive maps formula_4, the adjoint formula_3 (which is completely positive because formula_1 is), and formula_5. 2. The Petz recovery map is trace-non-increasing: for any positive operator X, formula_7 From 1 and 2, when formula_8 is invertible, the Petz recovery map formula_6 is a quantum channel, viz., a completely positive trace-preserving (CPTP) map. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
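A numerical sketch in Python illustrates the definition for an assumed single-qubit example (an amplitude-damping channel and a full-rank reference state), and checks that the Petz map applied to the channel output of the reference state returns the reference state:

```python
# Sketch (assumed example, not from a reference): Petz recovery map for an
# amplitude-damping channel on one qubit. For a full-rank sigma, P(E(sigma)) = sigma.
import numpy as np

def psd_power(A, p):
    """Matrix power of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.power(w, p)) @ V.conj().T

gamma = 0.3
K = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),   # Kraus operators of the
     np.array([[0, np.sqrt(gamma)], [0, 0]])]       # amplitude-damping channel

def channel(rho):        # E(rho) = sum_i K_i rho K_i^dagger
    return sum(Ki @ rho @ Ki.conj().T for Ki in K)

def adjoint(X):          # E^dagger(X) = sum_i K_i^dagger X K_i
    return sum(Ki.conj().T @ X @ Ki for Ki in K)

def petz(rho, sigma):
    Es = channel(sigma)
    mid = adjoint(psd_power(Es, -0.5) @ rho @ psd_power(Es, -0.5))
    return psd_power(sigma, 0.5) @ mid @ psd_power(sigma, 0.5)

sigma = np.array([[0.6, 0.2], [0.2, 0.4]])    # full-rank reference state (assumed)
recovered = petz(channel(sigma), sigma)
print(np.allclose(recovered, sigma))          # True
```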
[ { "math_id": 0, "text": "\\sigma" }, { "math_id": 1, "text": "\\mathcal{E}" }, { "math_id": 2, "text": "\\mathcal{P}_{\\sigma,\\mathcal{E}}(\\rho)=\\sigma^{1/2}\\mathcal{E}^{\\dagger}(\\mathcal{E}(\\sigma)^{-1/2}\\rho \\mathcal{E}(\\sigma)^{-1/2})\\sigma^{1/2}." }, { "math_id": 3, "text": "\\mathcal{E}^{\\dagger}" }, { "math_id": 4, "text": "\\mathcal{E}(\\sigma)^{-1/2}(\\cdot) \\mathcal{E}(\\sigma)^{-1/2}" }, { "math_id": 5, "text": "\\sigma^{1/2}(\\cdot)\\sigma^{1/2}" }, { "math_id": 6, "text": "\\mathcal{P}_{\\sigma,\\mathcal{E}}" }, { "math_id": 7, "text": "\\begin{aligned}\n\\operatorname{Tr}\\left[\\mathcal{P}_{\\sigma, \\mathcal{N}}(X)\\right] & =\\operatorname{Tr}\\left[\\sigma^{\\frac{1}{2}} \\mathcal{E}^{\\dagger}\\left(\\mathcal{E}(\\sigma)^{-\\frac{1}{2}} X \\mathcal{E}(\\sigma)^{-\\frac{1}{2}}\\right) \\sigma^{\\frac{1}{2}}\\right] \\\\\n& =\\operatorname{Tr}\\left[\\sigma \\mathcal{E}^{\\dagger}\\left(\\mathcal{E}(\\sigma)^{-\\frac{1}{2}} X \\mathcal{E}(\\sigma)^{-\\frac{1}{2}}\\right)\\right] \\\\\n& =\\operatorname{Tr}\\left[\\mathcal{E}(\\sigma) \\mathcal{E}(\\sigma)^{-\\frac{1}{2}} X \\mathcal{E}(\\sigma)^{-\\frac{1}{2}}\\right] \\\\\n& =\\operatorname{Tr}\\left[\\mathcal{E}(\\sigma)^{-\\frac{1}{2}} \\mathcal{E}(\\sigma) \\mathcal{E}(\\sigma)^{-\\frac{1}{2}} X\\right] \\\\\n& =\\operatorname{Tr}\\left[\\Pi_{\\mathcal{E}(\\sigma)} X\\right] \\\\\n& \\leq \\operatorname{Tr}[X]\n\\end{aligned}\n" }, { "math_id": 8, "text": "\\mathcal{E}(\\sigma)\n" } ]
https://en.wikipedia.org/wiki?curid=75480859
75484173
Steinitz's theorem (field theory)
In field theory, Steinitz's theorem states that a finite extension of fields formula_0 is simple if and only if there are only finitely many intermediate fields between formula_1 and formula_2. Proof. Suppose first that formula_0 is simple, that is to say formula_3 for some formula_4. Let formula_5 be any intermediate field between formula_2 and formula_1, and let formula_6 be the minimal polynomial of formula_7 over formula_5. Let formula_8 be the field extension of formula_1 generated by all the coefficients of formula_6. Then formula_9 by definition of the minimal polynomial, but the degree of formula_2 over formula_8 is (like that of formula_2 over formula_5) simply the degree of formula_6. Therefore, by multiplicativity of degree, formula_10 and hence formula_11. But if formula_12 is the minimal polynomial of formula_7 over formula_1, then formula_13, and since there are only finitely many divisors of formula_12, the first direction follows. Conversely, if the number of intermediate fields between formula_2 and formula_1 is finite, we distinguish two cases. If formula_1 is finite, then formula_2 is also a finite field, so its multiplicative group is cyclic, and any generator of that group generates formula_2 over formula_1, so the extension is simple. If formula_1 is infinite, take any two elements α and β of formula_2 and consider the intermediate fields generated by α + cβ as c ranges over formula_1; since formula_1 is infinite but there are only finitely many intermediate fields, two of these fields coincide, and from this one deduces that β (and hence also α) already lies in one of them, so that the subfield generated by α and β is generated by a single element. Iterating this argument over a finite generating set of formula_2 shows that formula_0 is simple. History. This theorem was found and proven in 1910 by Ernst Steinitz.
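As a small concrete illustration of a simple extension (a sketch using sympy, with an assumed example field), the biquadratic field generated by the square roots of 2 and 3 is generated by a single element:

```python
# Sketch: Q(sqrt(2), sqrt(3)) is simple, generated by sqrt(2) + sqrt(3).
import sympy as sp

x = sp.symbols('x')
alpha = sp.sqrt(2) + sp.sqrt(3)
p = sp.minimal_polynomial(alpha, x)
print(p)                  # x**4 - 10*x**2 + 1
print(sp.degree(p, x))    # 4 = [Q(sqrt2, sqrt3) : Q], so Q(alpha) is the whole field
```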
[ { "math_id": 0, "text": "L/K" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "L = K(\\alpha)" }, { "math_id": 4, "text": "\\alpha \\in L" }, { "math_id": 5, "text": "M" }, { "math_id": 6, "text": "g" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "M'" }, { "math_id": 9, "text": "M' \\subseteq M" }, { "math_id": 10, "text": "[M:M'] = 1" }, { "math_id": 11, "text": "M = M'" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "g | f" } ]
https://en.wikipedia.org/wiki?curid=75484173
75484752
Nickel(II) selenate
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Nickel(II) selenate is a selenate of nickel with the chemical formula NiSeO4. Preparation. Nickel(II) selenate can be produced by the reaction of nickel(II) carbonate and selenic acid. formula_0 Properties. Nickel(II) selenate hexahydrate is a green solid. It is tetragonal, space group P41212 (No. 92). At 100 °C, nickel(II) selenate hexahydrate slowly loses water to the tetrahydrate, with space group P21/n (No. 14). At 510 °C, nickel(II) selenate decomposes directly into nickel selenite, which on further heating decomposes into nickel oxide and selenium dioxide. formula_1 Crystallization of a hot aqueous solution of nickel(II) selenate and potassium selenate on cooling gives the blue-green double salt K2[Ni(H2O)6](SeO4)2. formula_2 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{ NiCO_3 \\ + H_2SeO_4 \\rightarrow NiSeO_4 + H_2O + CO_2 \\uparrow}" }, { "math_id": 1, "text": "\\mathrm{ NiSeO_4\\cdot 6H_2O \\ \\xrightarrow[-H_2O]{100\\,^\\circ C}\\ NiSeO_4\\cdot 4H_2O \\ \\xrightarrow[-H_2O]{300\\,^\\circ C}\\ NiSeO_4\\cdot H_2O \\ \\xrightarrow[-H_2O]{390\\,^\\circ C}\\ NiSeO_4 \\ \\xrightarrow[-O_2]{510\\,^\\circ C}\\ NiSeO_3 \\ \\xrightarrow[-SeO_2]{690\\,^\\circ C}\\ NiO }" }, { "math_id": 2, "text": "\\mathrm{ K_2SeO_4 \\ + NiSeO_4 + 6H_2O \\rightarrow K_2[Ni(H_2O)_6](SeO_4)_2 }" } ]
https://en.wikipedia.org/wiki?curid=75484752
754851
One-parameter group
Lie group homomorphism from the real numbers In mathematics, a one-parameter group or one-parameter subgroup usually means a continuous group homomorphism formula_0 from the real line formula_1 (as an additive group) to some other topological group formula_2. If formula_3 is injective then formula_4, the image, will be a subgroup of formula_2 that is isomorphic to formula_1 as an additive group. One-parameter groups were introduced by Sophus Lie in 1893 to define infinitesimal transformations. According to Lie, an "infinitesimal transformation" is an infinitely small transformation of the one-parameter group that it generates. It is these infinitesimal transformations that generate a Lie algebra that is used to describe a Lie group of any dimension. The action of a one-parameter group on a set is known as a flow. A smooth vector field on a manifold, at a point, induces a "local flow" - a one-parameter group of local diffeomorphisms, sending points along integral curves of the vector field. The local flow of a vector field is used to define the Lie derivative of tensor fields along the vector field. Definition. A curve formula_5 is called a one-parameter subgroup of formula_6 if it satisfies the condition formula_7. Examples. In Lie theory, one-parameter groups correspond to one-dimensional subspaces of the associated Lie algebra. The Lie group–Lie algebra correspondence is the basis of a science begun by Sophus Lie in the 1890s. Another important case is seen in functional analysis, with formula_2 being the group of unitary operators on a Hilbert space. See Stone's theorem on one-parameter unitary groups. In his monograph "Lie Groups", P. M. Cohn gave the following theorem: Any connected 1-dimensional Lie group is analytically isomorphic either to the additive group of real numbers formula_8, or to formula_9, the additive group of real numbers formula_10. In particular, every 1-dimensional Lie group is locally isomorphic to formula_1. Physics. In physics, one-parameter groups describe dynamical systems. Furthermore, whenever a system of physical laws admits a one-parameter group of differentiable symmetries, then there is a conserved quantity, by Noether's theorem. In the study of spacetime the use of the unit hyperbola to calibrate spatio-temporal measurements has become common since Hermann Minkowski discussed it in 1908. The principle of relativity was reduced to the arbitrariness of which diameter of the unit hyperbola was used to determine a world-line. Using the parametrization of the hyperbola with hyperbolic angle, the theory of special relativity provided a calculus of relative motion with the one-parameter group indexed by rapidity. The "rapidity" replaces the "velocity" in kinematics and dynamics of relativity theory. Since rapidity is unbounded, the one-parameter group it stands upon is non-compact. The rapidity concept was introduced by E.T. Whittaker in 1910, and named by Alfred Robb the next year. The rapidity parameter amounts to the length of a hyperbolic versor, a concept of the nineteenth century. Mathematical physicists James Cockle, William Kingdon Clifford, and Alexander Macfarlane had all employed in their writings an equivalent mapping of the Cartesian plane by operator formula_11, where formula_12 is the hyperbolic angle and formula_13. In GL(n,C). An important example in the theory of Lie groups arises when formula_2 is taken to be formula_14, the group of invertible formula_15 matrices with complex entries. 
In that case, a basic result is the following: Theorem: Suppose formula_16 is a one-parameter group. Then there exists a unique formula_15 matrix formula_17 such that formula_18 for all formula_19. It follows from this result that formula_3 is differentiable, even though this was not an assumption of the theorem. The matrix formula_17 can then be recovered from formula_3 as formula_20. This result can be used, for example, to show that any continuous homomorphism between matrix Lie groups is smooth. Topology. A technical complication is that formula_4 as a subspace of formula_2 may carry a topology that is coarser than that on formula_1; this may happen in cases where formula_3 is injective. Think for example of the case where formula_2 is a torus formula_21, and formula_3 is constructed by winding a straight line round formula_21 at an irrational slope. In that case the induced topology may not be the standard one of the real line. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
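A short numerical sketch in Python (with an arbitrarily chosen generator) illustrates the theorem: the matrix exponential gives a one-parameter group, and the generator is recovered as the derivative at 0:

```python
# Sketch: phi(t) = expm(t X) is a one-parameter group in GL(2, C), and X is
# recovered as the derivative of phi at t = 0. The generator X is an assumed example.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [-1.0, 0.0]])          # generator (rotations of the plane)
phi = lambda t: expm(t * X)

s, t = 0.3, 1.1
print(np.allclose(phi(s) @ phi(t), phi(s + t)))  # True: homomorphism property

h = 1e-6
X_numeric = (phi(h) - phi(-h)) / (2 * h)         # central difference at t = 0
print(np.allclose(X_numeric, X, atol=1e-6))      # True: recovers the generator
```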
[ { "math_id": 0, "text": "\\varphi : \\mathbb{R} \\rightarrow G" }, { "math_id": 1, "text": "\\mathbb{R}" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "\\varphi" }, { "math_id": 4, "text": "\\varphi(\\mathbb{R})" }, { "math_id": 5, "text": " \\phi:\\mathbb{R} \\rightarrow G " }, { "math_id": 6, "text": " G " }, { "math_id": 7, "text": " \\phi(t)\\phi(s) = \\phi(s+t) " }, { "math_id": 8, "text": "\\mathfrak{R}" }, { "math_id": 9, "text": "\\mathfrak{T}" }, { "math_id": 10, "text": "\\mod 1" }, { "math_id": 11, "text": "(\\cosh{a} + r\\sinh{a})" }, { "math_id": 12, "text": "a" }, { "math_id": 13, "text": "r^2 = +1" }, { "math_id": 14, "text": "\\mathrm{GL}(n;\\mathbb C)" }, { "math_id": 15, "text": "n\\times n" }, { "math_id": 16, "text": "\\varphi : \\mathbb{R} \\rightarrow\\mathrm{GL}(n;\\mathbb C)" }, { "math_id": 17, "text": "X" }, { "math_id": 18, "text": "\\varphi(t)=e^{tX}" }, { "math_id": 19, "text": "t\\in\\mathbb R" }, { "math_id": 20, "text": "\\left.\\frac{d\\varphi(t)}{dt}\\right|_{t=0} = \\left.\\frac{d}{dt}\\right|_{t=0}e^{tX}=\\left.(Xe^{tX})\\right|_{t=0} = Xe^0=X" }, { "math_id": 21, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=754851
75492429
Activation strain model
Mathematical model for modelling chemical reactions The activation strain model, also referred to as the distortion/interaction model, is a computational tool for modeling and understanding the potential energy curves of a chemical reaction as a function of reaction coordinate (ζ), as portrayed in reaction coordinate diagrams. The activation strain model decomposes these energy curves into 2 terms: the strain of the reactant molecules as they undergo a distortion and the interaction between these reactant molecules. A particularly important aspect of this type of analysis compared to others is that it describes the energetics of the reaction in terms of the original reactant molecules and describes their distortion and interaction using intuitive models such as molecular orbital theory that can be evaluated using most quantum chemical programs. Such a model allows for the calculation of transition state energies, and hence the activation energy, of a particular reaction mechanism and allows the model to be used as a predictive tool for describing competitive mechanisms and relative preference for certain pathways. In the chemistry literature, the activation strain model has been used for modeling bimolecular reactions like SN2 and E2 reactions, transition metal mediated C-H bond activation, 1,3-dipolar cycloaddition reactions, among others. Theory. The activation strain model was originally proposed and has been extensively developed by Bickelhaupt and coworkers. This model breaks the potential energy curve as a function of reaction coordinate, ζ, of a reaction into 2 components as shown in equation 1: the energy due to straining the original reactant molecules (∆Estrain) and the energy due to interaction between reactant molecules (∆Eint). The strain term ∆Estrain is usually destabilizing as it represents the distortion of a molecule from the equilibrium geometry. The interaction term, ∆Eint, is generally stabilizing as it represents the electronic interactions of reactants that typically drive the reaction. The interaction energy is further decomposed based on an energy decomposition scheme from an approach by Morokuma and the Transition State Method by Ziegler and Rauk. This decomposition breaks the interaction energy into terms that are easily processed within the framework of the Kohn-Sham molecular orbital model. These terms relate to the electrostatic interactions, steric repulsion, orbital interactions, and dispersion forces as shown in equation 2. formula_0 formula_1 The electrostatic interaction, ∆Velst, is the classical repulsion and attraction between the nuclei and electron densities of the approaching reactant molecules. The Pauli repulsion term, ∆Epauli, relates to the interaction between the filled orbitals of reactant molecules. In other words, it describes steric repulsion between approaching reactants. The orbital interaction, ∆Eoi, describes bond formation, HOMO-LUMO interactions, and polarization. Further, this term is well complemented by group theory and MO theory as a way to describe interaction between orbitals of the correct symmetry. The last term, formula_2, relates to dispersion forces between the reactants. The transition states, defined as local maxima of the potential energy curve along the reaction coordinate, are found where equation 3 is satisfied. At this point along the reaction coordinate, as long as the strain and interaction energies at ζ = 0 are set to zero, the transition state energy (formula_3) is the activation energy (formula_4) of the reaction. 
The activation energy can then be defined as the sum of the activation strain (formula_5) and the TS interaction energy (formula_6) as shown in equation 4. formula_7 formula_8 Select applications. The bimolecular elimination (E2) and substitution (SN2) reactions are often in competition with each other because of mechanistic similarities, mainly that both benefit from a good leaving group and that the E2 reaction uses strong bases, which are often good nucleophiles for an SN2 reaction. Bickelhaupt et al. used the activation strain model to analyze this competition between the two reactions in acidic and basic media using the four representative reactions below. Reactions [1] and [2] represent the E2 and SN2 reactions, respectively, in basic conditions, while reactions [3] and [4] represent the E2 and SN2 reactions in acidic conditions. &lt;chem&gt;[1] \ OH^- \ +\ CH_3CH_2OH\ -&gt; H_2O\ {+}\ CH_2CH_2\ {+} \ OH^-&lt;/chem&gt; &lt;chem&gt;[2] \ OH^- \ +\ CH_3CH_2OH\ -&gt; CH_3CH_2OH \ + \ OH^-&lt;/chem&gt; &lt;chem&gt;[3] \ H_2O \ + \ CH3CH2OH_2+ -&gt; H_3O+\ +\ CH2CH2 \ + \ H2O &lt;/chem&gt; &lt;chem&gt;[4] \ H2O \ + \ CH3CH2OH2+-&gt; CH3CH2OH2+ \ + \ H2O&lt;/chem&gt; Initial calculations show that, in basic media, the transition state energy Δ"E"‡ of the E2 pathway is lower, while acidic conditions favor the SN2. Closer observation of the interaction and strain energies shows that, for the E2 mechanism, upon shifting from acidic to basic media, the strain energy becomes more destabilizing, yet the interaction energy becomes even more stabilizing, making it the driving force for the preference of the E2 pathway in basic conditions. To rationalize this increase in stabilizing interaction upon shifting to basic conditions, it is useful to represent the interaction energy in terms of molecular orbital theory. The figure below shows the lowest unoccupied molecular orbitals (LUMOs) of ethanol (basic conditions) and protonated ethanol (acidic conditions), which can be visualized as combinations of the fragment &lt;chem&gt;*CH_3&lt;/chem&gt; radical and either the &lt;chem&gt;*CH2OH&lt;/chem&gt; (basic conditions) or the &lt;chem&gt;*CH2OH2+&lt;/chem&gt; (acidic conditions) radical. Upon protonation of the &lt;chem&gt;*CH2OH&lt;/chem&gt; fragment, these orbitals are lowered in energy, resulting in the overall LUMO for each molecule having different parentage. This change in parentage in the linear combination of atomic orbitals results in the LUMO of &lt;chem&gt;CH3CH2OH2+&lt;/chem&gt; having bonding character between the β-carbon and the hydrogen atom abstracted in the E2 pathway, while the LUMO of &lt;chem&gt;CH3CH2OH&lt;/chem&gt; has antibonding character along this bond. In either the SN2 or the E2 pathway, the HOMO of the nucleophile/base will be donating electron density into this LUMO. As the LUMO for &lt;chem&gt;CH3CH2OH2+&lt;/chem&gt; has bonding character along the C(β)-H bond, putting electrons into this orbital should result in strengthening of this bond, disfavoring the abstraction that is necessary in the E2 reaction. The opposite goes for the LUMO of &lt;chem&gt;CH3CH2OH&lt;/chem&gt;, as donation into the orbital that is antibonding with respect to this bond will weaken the C(β)-H bond and allow its abstraction in the E2 reaction. This relatively intuitive comparison within MO theory shows how the increase in stabilizing interaction for the E2 mechanism arises when switching from acidic to basic conditions. Single point calculations. 
An issue in the interpretation of interaction (∆Eint) and strain (∆Estrain) curves arises when only single points along the reaction coordinate are considered. Such issues become apparent when two model reactions are considered which have identical strain energy ∆Estrain curves that become more destabilizing along the reaction coordinate but have different interaction energy curves. If one of the reactions has a more stabilizing interaction energy curve with greater curvature, the transition state will be reached sooner along the reaction coordinate in order to satisfy the condition in equation 3, while a reaction with a less stabilizing interaction curve will reach the transition state later along the reaction coordinate, with a higher transition state energy. If only the transition states are observed, it would appear that the transition state of the second representative reaction has a higher energy due to the higher strain energy at the respective transition states. However, if one considers the entire curves for both of the reactions, it becomes clear that the higher transition state energy of the second reaction is due to the less stabilizing interaction energy at all points along the reaction coordinate, while the two reactions have identical strain energy curves. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Equation\\ 1 \\colon\\ \\Delta E(\\zeta)=\\Delta E_{strain}+ \\Delta E_{int}\n" }, { "math_id": 1, "text": "Equation\\ 2\\colon\\ \\Delta E_{int}=\\Delta V_{elst.}+\\Delta E_{pauli}+\\Delta E_{oi}+\\Delta E_{disp} " }, { "math_id": 2, "text": "\\Delta E _{disp}" }, { "math_id": 3, "text": " \\Delta E (\\zeta_{TS}) " }, { "math_id": 4, "text": " \\Delta E ^\\ddagger " }, { "math_id": 5, "text": " \\Delta E ^\\ddagger_{strain} " }, { "math_id": 6, "text": " \\Delta E ^\\ddagger _{int} " }, { "math_id": 7, "text": " Equation \\ 3: {d \\Delta E (\\zeta) \\over d \\zeta}={d \\Delta E_{strain} (\\zeta) \\over d \\zeta} + {d \\Delta E_{int} (\\zeta) \\over d \\zeta} = 0 " }, { "math_id": 8, "text": " Equation \\ 4: \\ \\Delta E ^\\ddagger = \\Delta E _{strain} ^\\ddagger + \\Delta E _{int} ^ \\ddagger " } ]
https://en.wikipedia.org/wiki?curid=75492429
75493362
BQP (disambiguation)
BQP is a computational complexity class that represents problems that are easy to solve for quantum computers. BQP or bqp can also refer to: Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This disambiguation page lists articles associated with the title BQP.
[ { "math_id": 0, "text": "BQP" } ]
https://en.wikipedia.org/wiki?curid=75493362
75500914
E-values
Statistical concept In statistical hypothesis testing, e-values quantify the evidence in the data against a null hypothesis (e.g., "the coin is fair", or, in a medical context, "this new treatment has no effect"). They serve as a more robust alternative to p-values, addressing some shortcomings of the latter. In contrast to p-values, e-values can deal with optional continuation: e-values of subsequent experiments (e.g. clinical trials concerning the same treatment) may simply be multiplied to provide a new, "product" e-value that represents the evidence in the joint experiment. This works even if, as often happens in practice, the decision to perform later experiments may depend in vague, unknown ways on the data observed in earlier experiments, and it is not known beforehand how many trials will be conducted: the product e-value remains a meaningful quantity, leading to tests with Type-I error control. For this reason, e-values and their sequential extension, the "e-process", are the fundamental building blocks for anytime-valid statistical methods (e.g. confidence sequences). Another advantage over p-values is that any weighted average of e-values remains an e-value, even if the individual e-values are arbitrarily dependent. This is one of the reasons why e-values have also turned out to be useful tools in multiple testing. E-values can be interpreted in a number of different ways: first, the reciprocal of any e-value is itself a p-value, but a special, conservative one, quite different from p-values used in practice. Second, they are broad generalizations of likelihood ratios and are also related to, yet distinct from, Bayes factors. Third, they have an interpretation as bets. Finally, in a sequential context, they can also be interpreted as increments of nonnegative supermartingales. Interest in e-values has exploded since 2019, when the term 'e-value' was coined and a number of breakthrough results were achieved by several research groups. The first overview article appeared in 2023. Definition and mathematical background. Let the null hypothesis formula_0 be given as a set of distributions for data formula_1. Usually formula_2 with each formula_3 a single outcome and formula_4 a fixed sample size or some stopping time. We shall refer to such formula_1, which represent the full sequence of outcomes of a statistical experiment, as a "sample" or "batch of outcomes." But in some cases formula_1 may also be an unordered bag of outcomes or a single outcome. An e-variable or e-statistic is a "nonnegative" random variable formula_5 such that under all formula_6, its expected value is bounded by 1: formula_7. The value taken by an e-variable formula_8 is called the e-value. In practice, the term "e-value" (a number) is often used when one is really referring to the underlying e-variable (a random variable, that is, a measurable function of the data). Interpretations. As conservative p-values. For any e-variable formula_8 and any formula_9 and all formula_6, it holds that formula_10 In words: formula_11 is a p-value, and the "e-value based test with significance level formula_12," which rejects formula_13 if formula_14, has Type-I error bounded by formula_15. But, whereas with standard p-values the inequality (*) above is usually an equality (with continuous-valued data) or near-equality (with discrete data), this is not the case with e-variables. 
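The conservativeness can be made concrete with a short simulation. The following sketch is illustrative only; the Gaussian null and alternative are arbitrary choices and not prescribed by the theory. It draws data under a simple null, forms the likelihood-ratio e-variable discussed in the next section, and checks that its sample mean is close to 1 while the frequency of the event E ≥ 1/α stays far below α.

```python
# Illustrative simulation (assumed toy setup): null H0: X ~ N(0,1), alternative Q: X ~ N(0.5,1).
# The e-variable is the likelihood ratio q(X)/p0(X) = exp(0.5*X - 0.125).
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05
x = rng.standard_normal(100_000)    # data generated under the null
e = np.exp(0.5 * x - 0.125)         # e-variable values

print(f"sample mean of E under H0 : {e.mean():.3f}  (its true expectation is exactly 1 here)")
print(f"fraction with E >= 1/alpha: {(e >= 1 / alpha).mean():.5f}  (far below alpha = {alpha})")
```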
This makes e-value-based tests more conservative (less power) than those based on standard p-values, and it is the price to pay for safety (i.e., retaining Type-I error guarantees) under optional continuation and averaging. As generalizations of likelihood ratios. Let formula_16 be a simple null hypothesis. Let formula_17 be any other distribution on formula_18, and let formula_19 be their likelihood ratio. Then formula_20 is an e-variable. Conversely, any e-variable relative to a simple null formula_16 can be written as a likelihood ratio with respect to "some" distribution formula_17. Thus, when the null is simple, e-variables coincide with likelihood ratios. E-variables exist for general composite nulls as well though, and they may then be thought of as generalizations of likelihood ratios. The two main ways of constructing e-variables, UI and RIPr (see below) both lead to expressions that are variations of likelihood ratios as well. Two other standard generalizations of the likelihood ratio are (a) the generalized likelihood ratio as used in the standard, classical likelihood ratio test and (b) the Bayes factor. Importantly, neither (a) nor (b) are e-variables in general: generalized likelihood ratios in sense (a) are not e-variables unless the alternative is simple (see below under "universal inference"). Bayes factors "are" e-variables if the null is simple. To see this, note that, if formula_21 represents a statistical model, and formula_22 a prior density on formula_23, then we can set formula_24 as above to be the Bayes marginal distribution with density formula_25 and then formula_26 is also a Bayes factor of formula_27 vs. formula_28. If the null is composite, then some special e-variables can be written as Bayes factors with some very special priors, but most Bayes factors one encounters in practice are not e-variables and many e-variables one encounters in practice are not Bayes factors. As bets. Suppose you can buy a ticket for 1 monetary unit, with nonnegative pay-off formula_29. The statements "formula_8 is an e-variable" and "if the null hypothesis is true, you do not expect to gain any money if you engage in this bet" are logically equivalent. This is because formula_8 being an e-variable means that the expected gain of buying the ticket is the pay-off minus the cost, i.e. formula_30, which has expectation formula_31. Based on this interpretation, the product e-value for a sequence of tests can be interpreted as the amount of money you have gained by sequentially betting with pay-offs given by the individual e-variables and always re-investing all your gains. The betting interpretation becomes particularly visible if we rewrite an e-variable as formula_32 where formula_33 has expectation formula_31 under all formula_34 and formula_35 is chosen so that formula_36 a.s. Any e-variable can be written in the formula_37 form although with parametric nulls, writing it as a likelihood ratio is usually mathematically more convenient. The formula_37 form on the other hand is often more convenient in nonparametric settings. As a prototypical example, consider the case that formula_38 with the formula_3 taking values in the bounded interval formula_39. According to formula_27, the formula_3 are i.i.d. according to a distribution formula_40 with mean formula_41; no other assumptions about formula_40 are made. 
Then we may first construct a family of e-variables for single outcomes, formula_42, for any formula_43 (these are the formula_44 for which formula_45 is guaranteed to be nonnegative). We may then define a new e-variable for the complete data vector formula_1 by taking the product formula_46, where formula_47 is an estimate for formula_48, based only on past data formula_49, and designed to make formula_50 as large as possible in the "e-power" or "GRO" sense (see below). Waudby-Smith and Ramdas use this approach to construct "nonparametric" confidence intervals for the mean that tend to be significantly narrower than those based on more classical methods such as Chernoff, Hoeffding and Bernstein bounds. A fundamental property: optional continuation. E-values are more suitable than p-values when one expects follow-up tests involving the same null hypothesis with different data or experimental set-ups. This includes, for example, combining individual results in a meta-analysis. The advantage of e-values in this setting is that they allow for "optional continuation." Indeed, they have been employed in what may be the world's first fully 'online' meta-analysis with explicit Type-I error control. Informally, optional continuation implies that the product of any number of e-values, formula_51, defined on independent samples formula_52, is itself an e-value, even if the "definition" of each e-value is allowed to depend on all previous outcomes, and no matter what rule is used to decide when to stop gathering new samples (e.g. to perform new trials). It follows that, for any significance level formula_53, if the null is true, then the probability that a product of e-values will "ever" become larger than formula_54 is bounded by formula_15. Thus if we decide to combine the samples observed so far and reject the null if the product e-value is larger than formula_54, then our Type-I error probability remains bounded by formula_12. We say that testing based on e-values "remains safe (Type-I valid) under optional continuation". Mathematically, this is shown by first showing that the product e-variables form a nonnegative discrete-time martingale in the filtration generated by formula_52 (the individual e-variables are then increments of this martingale). The results then follow as a consequence of Doob's optional stopping theorem and Ville's inequality. We already implicitly used product e-variables in the example above, where we defined e-variables on individual outcomes formula_55 and designed a new e-value by taking products. Thus, in the example, the individual outcomes formula_55 play the role of 'batches' (full samples) formula_56 above, and we can therefore even engage in "optional stopping" "within" the original batch formula_57: we may stop the data analysis at any individual "outcome" (not just "batch of outcomes") we like, for whatever reason, and reject if the product so far exceeds formula_58. Not all e-variables defined for batches of outcomes formula_57 can be decomposed as a product of per-outcome e-values in this way though. If this is not possible, we cannot use them for optional stopping (within a sample formula_57) but only for optional continuation (from one sample formula_56 to the next formula_59 and so on). Construction and optimality. If we set formula_60 independently of the data we get a "trivial" e-value: it is an e-variable by definition, but it will never allow us to reject the null hypothesis. 
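The difference between the trivial e-value and a data-driven one can be made concrete with the bounded-outcome betting construction from the example above. The sketch below uses assumed settings (a null mean of 0.5, a deliberately crude plug-in estimate of the betting fraction, and Bernoulli data from an alternative); it is not the particular estimator of Waudby-Smith and Ramdas. Because the betting fraction at each step depends only on past outcomes, the running product is a valid e-value at any stopping point, which is the optional stopping property described above; under the alternative it grows, whereas the trivial choice never does.

```python
# Illustrative sketch (assumed parameters): product e-value of factors 1 + lambda*(X_i - mu)
# for outcomes in [0,1], null mean mu = 0.5, with lambda estimated from past data only.
import numpy as np

rng = np.random.default_rng(1)
mu = 0.5                                   # mean under the null hypothesis
x = rng.binomial(1, 0.7, size=200)         # data from an alternative with mean 0.7

e_product = 1.0
for i, xi in enumerate(x):
    # crude plug-in estimate of the betting fraction, clipped so that
    # 1 + lambda*(x - mu) stays nonnegative for every x in [0, 1]
    lam = (x[:i].mean() - mu) if i > 0 else 0.0
    lam = float(np.clip(lam, -1.0 / (1.0 - mu) + 1e-6, 1.0 / mu - 1e-6))
    e_product *= 1.0 + lam * (xi - mu)

print(f"product e-value after {len(x)} outcomes: {e_product:.1f}")
print("the trivial e-value E := 1 stays at 1 and can never lead to rejection")
```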
This example shows that some e-variables may be better than others, in a sense to be defined below. Intuitively, a good e-variable is one that tends to be large (much larger than 1) if the alternative is true. This is analogous to the situation with p-values: both e-values and p-values can be defined without referring to an alternative, but "if" an alternative is available, we would like them to be small (p-values) or large (e-values) with high probability. In standard hypothesis tests, the quality of a valid test is formalized by the notion of "statistical power", but this notion has to be suitably modified in the context of e-values. The standard notion of quality of an e-variable relative to a given alternative formula_61, used by most authors in the field, is a generalization of the Kelly criterion in economics and (since it does exhibit close relations to classical power) is sometimes called "e-power"; the optimal e-variable in this sense is known as "log-optimal" or "growth-rate optimal" (often abbreviated to GRO). In the case of a simple alternative formula_62, the e-power of a given e-variable formula_63 is simply defined as the expectation formula_64; in case of composite alternatives, there are various versions (e.g. worst-case absolute, worst-case relative) of e-power and GRO. Simple alternative, simple null: likelihood ratio. Let formula_16 and formula_62 both be simple. Then the likelihood ratio e-variable formula_65 has maximal e-power in the sense above, i.e. it is GRO. Simple alternative, composite null: reverse information projection (RIPr). Let formula_62 be simple and formula_66 be composite, such that all elements of formula_67 have densities (denoted by lower-case letters) relative to the same underlying measure. Grünwald et al. show that under weak regularity conditions, the GRO e-variable exists, is essentially unique, and is given by formula_68 where formula_69 is the Reverse Information Projection (RIPr) of formula_24 onto the convex hull of formula_27. Under further regularity conditions (and in all practically relevant cases encountered so far), formula_69 is given by a Bayes marginal density: there exists a specific, unique distribution formula_70 on formula_71 such that formula_72. Simple alternative, composite null: universal inference (UI). In the same setting as above, it can be shown that, under no regularity conditions at all, formula_73 is an e-variable (with the second equality holding if the MLE (maximum likelihood estimator) formula_74 based on data formula_57 is always well-defined). This way of constructing e-variables has been called the universal inference (UI) method, "universal" referring to the fact that no regularity conditions are required. Composite alternative, simple null. Now let formula_75 be simple and formula_76 be composite, such that all elements of formula_67 have densities relative to the same underlying measure. There are now two generic, closely related ways of obtaining e-variables that are close to growth-optimal (appropriately redefined for composite formula_61): Robbins' method of mixtures and the plug-in method, originally due to Wald but, in essence, re-discovered by Philip Dawid as "prequential plug-in" and Jorma Rissanen as "predictive MDL". 
The method of mixtures essentially amounts to "being Bayesian about the numerator" (the reason it is not called "Bayesian method" is that, when both null and alternative are composite, the numerator may often not be a Bayes marginal): we posit any prior distribution formula_70 on formula_77 and set formula_78 and use the e-variable formula_79. To explicate the plug-in method, suppose that formula_80 where formula_81 constitute a stochastic process and let formula_82 be an estimator of formula_83 based on data formula_84 for formula_85. In practice one usually takes a "smoothed" maximum likelihood estimator (such as, for example, the regression coefficients in ridge regression), initially set to some "default value" formula_86. One now recursively constructs a density formula_87 for formula_88 by setting formula_89 . Effectively, both the method of mixtures and the plug-in method can be thought of "learning" a specific instantiation of the alternative that explains the data well. Composite null and alternative. In parametric settings, we can simply combine the main methods for the composite alternative (obtaining formula_90 or formula_91) with the main methods for the composite null (UI or RIPr, using the single distribution formula_90 or formula_91 as an alternative). Note in particular that when using the plug-in method together with the UI method, the resulting e-variable will look like formula_92 which resembles, but is still fundamentally different from, the generalized likelihood ratio as used in the classical likelihood ratio test. The advantage of the UI method compared to RIPr is that (a) it can be applied whenever the MLE can be efficiently computed - in many such cases, it is not known whether/how the reverse information projection can be calculated; and (b) that it 'automatically' gives not just an e-variable but a full e-process (see below): if we replace formula_93 in the formula above by a general stopping time formula_4, the resulting ratio is still an e-variable; for the reverse information projection this automatic e-process generation only holds in special cases. Its main disadvantage compared to RIPr is that it can be substantially sub-optimal in terms of the e-power/GRO criterion, which means that it leads to tests which also have less classical statistical power than RIPr-based methods. Thus, for settings in which the RIPr-method is computationally feasible and leads to e-processes, it is to be preferred. These include the z-test, t-test and corresponding linear regressions, k-sample tests with Bernoulli, Gaussian and Poisson distributions and the logrank test (an R package is available for a subset of these), as well as conditional independence testing under a "model-X assumption". However, in many other statistical testing problems, it is currently (2023) unknown whether fast implementations of the reverse information projection exist, and they may very well not exist (e.g. generalized linear models without the model-X assumption). In nonparametric settings (such as testing a mean as in the example above, or nonparametric 2-sample testing), it is often more natural to consider e-variables of the formula_94 type. However, while these superficially look very different from likelihood ratios, they can often still be interpreted as such and sometimes can even be re-interpreted as implementing a version of the RIPr-construction. Finally, in practice, one sometimes resorts to mathematically or computationally convenient combinations of RIPr, UI and other methods. 
For example, RIPr is applied to get optimal e-variables for small blocks of outcomes and these are then multiplied to obtain e-variables for larger samples - these e-variables work well in practice but cannot be considered optimal anymore. A third construction method: p-to-e (and e-to-p) calibration. There exist functions that convert p-values into e-values. Such functions are called "p-to-e calibrators". Formally, a calibrator is a nonnegative decreasing function formula_95 which, when applied to a p-variable (a random variable whose value is a p-value), yields an e-variable. A calibrator formula_96 is said to dominate another calibrator formula_97 if formula_98, and this domination is strict if the inequality is strict. An admissible calibrator is one that is not strictly dominated by any other calibrator. One can show that for a function to be a calibrator, it must have an integral of at most 1 over the uniform probability measure. One family of admissible calibrators is given by the set of functions formula_99 with formula_100. Another calibrator is given by integrating out formula_101: formula_102 Conversely, an e-to-p calibrator transforms e-values back into p-variables. Interestingly, the following calibrator dominates all other e-to-p calibrators: formula_103. While of theoretical importance, calibration is not much used in the practical design of e-variables since the resulting e-variables are often far from growth-optimal for any given formula_104. E-Processes. Definition. Now consider data formula_105 arriving sequentially, constituting a discrete-time stochastic process. Let formula_106 be another discrete-time process where for each formula_107 can be written as a (measurable) function of the first formula_108 outcomes. We call formula_106 an e-process if for any stopping time formula_109 is an e-variable, i.e. for all formula_110. In basic cases, the stopping time can be defined by any rule that determines, at each sample size formula_111, based only on the data observed so far, whether to stop collecting data or not. For example, this could be "stop when you have seen four consecutive outcomes larger than 1", "stop at formula_112", or the level-formula_15-aggressive rule, "stop as soon as you can reject at level formula_15-level, i.e. at the smallest formula_111 such that formula_113", and so on. With e-processes, we obtain an e-variable with any such rule. Crucially, the data analyst may not know the rule used for stopping. For example, her boss may tell her to stop data collecting and she may not know exactly why - nevertheless, she gets a valid e-variable and Type-I error control. This is in sharp contrast to data analysis based on p-values (which becomes invalid if stopping rules are not determined in advance) or in classical Wald-style sequential analysis (which works with data of varying length but again, with stopping times that need to be determined in advance). In more complex cases, the stopping time has to be defined relative to some slightly reduced filtration, but this is not a big restriction in practice. In particular, the level-formula_15-aggressive rule is always allowed. Because of this validity under optional stopping, e-processes are the fundamental building block of confidence sequences, also known as anytime-valid confidence intervals. Technically, e-processes are generalizations of test supermartingales, which are nonnegative supermartingales with starting value 1: any test supermartingale constitutes an e-process but not vice versa. Construction. 
E-processes can be constructed in a number of ways. Often, one starts with an e-value formula_114 for formula_115 whose definition is allowed to depend on previous data, i.e., for all formula_116 (again, in complex testing problems this definition needs to be modified a bit using reduced filtrations). Then the product process formula_117 with formula_118 is a test supermartingale, and hence also an e-process (note that we already used this construction in the example described under "e-values as bets" above: for fixed formula_119, the e-values formula_120 were not dependent on past data, but by using formula_121 depending on the past, they became dependent on past data). Another way to construct an e-process is to use the universal inference construction described above for sample sizes formula_122 The resulting sequence of e-values formula_106 will then always be an e-process. History. Historically, e-values implicitly appear as building blocks of nonnegative supermartingales in the pioneering work on anytime-valid confidence methods by well-known mathematician Herbert Robbins and some of his students. The first time e-values (or something very much like them) are treated as a quantity of independent interest is by another well-known mathematician, Leonid Levin, in 1976, within the theory of algorithmic randomness. With the exception of contributions by pioneer V. Vovk in various papers with various collaborators, and an independent re-invention of the concept in an entirely different field, the concept did not catch on at all until 2019, when, within just a few months, several pioneering papers by several research groups appeared on arXiv (the corresponding journal publications referenced below sometimes coming years later). In these, the concept was finally given a proper name ("S-Value" and "E-Value"; later versions of the former paper also adopted "E-Value"), their general properties were described, two generic ways to construct them were presented, and their intimate relation to betting was laid out. Since then, interest by researchers around the world has been surging. In 2023 the first overview paper on "safe, anytime-valid methods", in which e-values play a central role, appeared. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H_0\n" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "Y= (X_1, \\ldots, X_\\tau ) " }, { "math_id": 3, "text": "X_i" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "E= E(Y)" }, { "math_id": 6, "text": "P \\in H_0" }, { "math_id": 7, "text": "{\\mathbb E}_P[E] \\leq 1\n" }, { "math_id": 8, "text": "E" }, { "math_id": 9, "text": "0< \\alpha \\leq 1 " }, { "math_id": 10, "text": "P\\left(E \\geq \\frac{1}{\\alpha}\\right)= P(1/E \\leq \\alpha ) \\ \\overset{(*)}{\\leq}\\ \\alpha " }, { "math_id": 11, "text": "1/E" }, { "math_id": 12, "text": "\\alpha " }, { "math_id": 13, "text": "P_0" }, { "math_id": 14, "text": "1/E \\leq \\alpha " }, { "math_id": 15, "text": "\\alpha " }, { "math_id": 16, "text": "H_0 = \\{ P_0 \\} " }, { "math_id": 17, "text": "Q " }, { "math_id": 18, "text": "Y " }, { "math_id": 19, "text": "E:= \\frac{q(Y)}{p_0(Y)} " }, { "math_id": 20, "text": "E " }, { "math_id": 21, "text": "{\\mathcal Q} = \\{Q_{\\theta}: \\theta \\in \\Theta \\}" }, { "math_id": 22, "text": "w" }, { "math_id": 23, "text": "\\Theta" }, { "math_id": 24, "text": "Q" }, { "math_id": 25, "text": "q(Y) = \\int q_{\\theta}(Y) w(\\theta) d \\theta" }, { "math_id": 26, "text": "E= q(Y)/p_0(Y)" }, { "math_id": 27, "text": "H_0" }, { "math_id": 28, "text": "H_1 := {\\mathcal Q}" }, { "math_id": 29, "text": "E=E(Y)" }, { "math_id": 30, "text": "E-1" }, { "math_id": 31, "text": "\\leq 0" }, { "math_id": 32, "text": "E := 1 + \\lambda U" }, { "math_id": 33, "text": "U" }, { "math_id": 34, "text": "P \\in H_0 " }, { "math_id": 35, "text": " \\lambda\\in {\\mathbb R} " }, { "math_id": 36, "text": "E \\geq 0" }, { "math_id": 37, "text": "1 + \\lambda U" }, { "math_id": 38, "text": "Y= (X_1, \\ldots, X_n)" }, { "math_id": 39, "text": "[0,1]" }, { "math_id": 40, "text": "P" }, { "math_id": 41, "text": "\\mu" }, { "math_id": 42, "text": "E_{i,\\lambda} := 1+ \\lambda (X_i - \\mu) " }, { "math_id": 43, "text": "\\lambda \\in [-1/(1-\\mu),1/\\mu] " }, { "math_id": 44, "text": " \\lambda " }, { "math_id": 45, "text": "E_{i,\\lambda}" }, { "math_id": 46, "text": "E:= \\prod_{i=1}^n E_{i,\\breve{\\lambda}|X^{i-1}} " }, { "math_id": 47, "text": "\\breve{\\lambda}|X^{i-1} " }, { "math_id": 48, "text": "{\\lambda} " }, { "math_id": 49, "text": "X^{i-1}= (X_1, \\ldots,X_{i-1}) " }, { "math_id": 50, "text": "E_{i,\\lambda} " }, { "math_id": 51, "text": " E_{(1)}, E_{(2)},\\ldots " }, { "math_id": 52, "text": "Y_{(1)}, Y_{(2)},\\ldots " }, { "math_id": 53, "text": "0 <\\alpha < 1 " }, { "math_id": 54, "text": "1/\\alpha " }, { "math_id": 55, "text": "X_i " }, { "math_id": 56, "text": "Y_{(j)} " }, { "math_id": 57, "text": "Y " }, { "math_id": 58, "text": "1/\\alpha " }, { "math_id": 59, "text": "Y_{(j+1)} " }, { "math_id": 60, "text": "E:=1" }, { "math_id": 61, "text": "H_1 " }, { "math_id": 62, "text": "H_1 = \\{ Q \\} " }, { "math_id": 63, "text": "S " }, { "math_id": 64, "text": "{\\mathbb E}_Q[\\log E] " }, { "math_id": 65, "text": "E = q(Y)/p_0(Y) " }, { "math_id": 66, "text": "H_0 = \\{P_{\\theta}: \\theta \\in \\Theta_0 \\}" }, { "math_id": 67, "text": "H_0 \\cup H_1" }, { "math_id": 68, "text": "E:= \\frac{q(Y)}{p_{\\curvearrowleft Q} (Y) } " }, { "math_id": 69, "text": "p_{\\curvearrowleft Q} " }, { "math_id": 70, "text": "W" }, { "math_id": 71, "text": "\\Theta_0" }, { "math_id": 72, "text": "p_{\\curvearrowleft Q}(Y) = \\int_{\\Theta_0} p_{\\theta}(Y) dW(\\theta) " }, { "math_id": 73, "text": "E= \\frac{q(Y)}{\\sup_{P \\in H_0} p(Y)} \\left( = 
\\frac{q(Y)}{{p}_{\\hat{\\theta} \\mid Y } (Y) } \\right) " }, { "math_id": 74, "text": "\\hat\\theta \\mid Y " }, { "math_id": 75, "text": "H_0 = \\{ P \\} " }, { "math_id": 76, "text": "H_1 = \\{Q_{\\theta}: \\theta \\in \\Theta_1 \\}" }, { "math_id": 77, "text": "\\Theta_1" }, { "math_id": 78, "text": "\\bar{q}_W(Y) := \\int_{\\Theta_1} q_{\\theta} (Y) dW(\\theta) " }, { "math_id": 79, "text": "\\bar{q}_W(Y)/p(Y) " }, { "math_id": 80, "text": "Y= (X_1, \\ldots, X_n) " }, { "math_id": 81, "text": "X_1, X_2, \\ldots " }, { "math_id": 82, "text": "\\breve\\theta \\mid X^{i} " }, { "math_id": 83, "text": "\\theta \\in \\Theta_1 " }, { "math_id": 84, "text": "X^i=(X_1, \\ldots, X_i) " }, { "math_id": 85, "text": "i \\geq 0 " }, { "math_id": 86, "text": "\\breve\\theta \\mid X^{0}:= \\theta_0 " }, { "math_id": 87, "text": "\\bar{q}_{\\breve\\theta} " }, { "math_id": 88, "text": "X^n " }, { "math_id": 89, "text": "\\bar{q}_{\\breve\\theta}(X^n) = \\prod_{i=1}^n q_{\\breve\\theta \\mid X^{i-1}}(X_i \\mid X^{i-1}) " }, { "math_id": 90, "text": "\\bar{q}_{\\breve\\theta} " }, { "math_id": 91, "text": "\\bar{q}_{W} " }, { "math_id": 92, "text": "\\frac{\\prod_{i=1}^n q_{\\breve\\theta \\mid X^{i-1}}(X_i)}{q_{\\hat\\theta \\mid X^n} (X^n)}" }, { "math_id": 93, "text": "n" }, { "math_id": 94, "text": "1+ \\lambda U " }, { "math_id": 95, "text": "f : [0, 1] \\rightarrow [0, \\infty]" }, { "math_id": 96, "text": "f" }, { "math_id": 97, "text": "g" }, { "math_id": 98, "text": "f \\geq g" }, { "math_id": 99, "text": "\\{f_{\\kappa} : 0 < \\kappa < 1 \\} " }, { "math_id": 100, "text": "f_\\kappa(p) := \\kappa p^{\\kappa -1}" }, { "math_id": 101, "text": " \\kappa" }, { "math_id": 102, "text": "\\int_0^1 \\kappa p^{\\kappa -1} d\\kappa = \\frac{1-p+p \\log p}{p(-\\log p)^2} " }, { "math_id": 103, "text": "f(t) := \\min(1, 1/t)" }, { "math_id": 104, "text": "H_1" }, { "math_id": 105, "text": "X_1, X_2, \\ldots " }, { "math_id": 106, "text": "E_1, E_2, \\ldots " }, { "math_id": 107, "text": "n, E_n " }, { "math_id": 108, "text": "(X_1, \\ldots, X_n) " }, { "math_id": 109, "text": "\\tau, E_{\\tau} " }, { "math_id": 110, "text": "P \\in H_0: {\\mathbb E}_P[ E_{\\tau} ] \\leq 1 " }, { "math_id": 111, "text": "n " }, { "math_id": 112, "text": "n=100 " }, { "math_id": 113, "text": "E_n \\geq 1/\\alpha " }, { "math_id": 114, "text": "S_i " }, { "math_id": 115, "text": "X_i " }, { "math_id": 116, "text": "P \\in H_0: {\\mathbb E}_P[ E_{i} | X_1, \\ldots, X_{i-1} ] \\leq 1 " }, { "math_id": 117, "text": "M_1, M_2, \\ldots " }, { "math_id": 118, "text": "M_n = E_1 \\times E_2 \\cdots \\times E_n " }, { "math_id": 119, "text": "\\lambda " }, { "math_id": 120, "text": "E_{i,\\lambda} " }, { "math_id": 121, "text": "\\lambda = \\breve{\\lambda}|X^{i-1} " }, { "math_id": 122, "text": "1, 2, \\ldots " } ]
https://en.wikipedia.org/wiki?curid=75500914
75504247
Constrained Horn clauses
Fragment of first-order logic Constrained Horn clauses (CHCs) are a fragment of first-order logic with applications to program verification and synthesis. Constrained Horn clauses can be seen as a form of constraint logic programming. Definition. A constrained Horn clause is a formula of the form formula_0 where formula_1 is a "constraint" in some first-order theory, formula_2 are predicates, and formula_3 are universally-quantified variables. Decidability. The satisfiability of constrained Horn clauses with constraints from linear integer arithmetic is undecidable. Solvers. There are several automated solvers for CHCs, including the SPACER engine of Z3. CHC-COMP is an annual competition of CHC solvers. CHC-COMP has run every year since 2018. Applications. Constrained Horn clauses are a convenient language in which to specify problems in program verification. The SeaHorn verifier for LLVM represents verification conditions as constrained Horn clauses, as does the JayHorn verifier for Java. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n \\phi\\land P_1(\\mathbf{x}_1)\\land \\ldots\\land P_n(\\mathbf{x_n})\\to P(\\mathbf{x})\n" }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "P_i" }, { "math_id": 3, "text": "\\mathbf{x}_i" } ]
https://en.wikipedia.org/wiki?curid=75504247
75508474
Line-cylinder intersection
Geometry calculation Line-cylinder intersection is the calculation of any points of intersection, given an analytic geometry description of a line and a cylinder in 3d space. An arbitrary line and cylinder may have no intersection at all. Or there may be one or two points of intersection. Or a line may lie along the surface of a cylinder, parallel to its axis, resulting in infinitely many points of intersection. The method described here distinguishes between these cases, and when intersections exist, computes their positions. The term “cylinder” can refer to a three-dimensional solid or, as in this article, only the curved external surface of the solid. This is why a line piercing a cylinder's volume is considered to have two points of intersection: the surface point where it enters and the one where it leaves. See § end caps. A key intuition of this sort of intersection problem is to represent each shape as an equation which is true for all points on the shape. Solving them as a system of two simultaneous equations finds the points which belong to both shapes, which is the intersection. The equations below were solved using Maple. This method has applications in computational geometry, graphics rendering, shape modeling, physics-based modeling, and related types of computational 3d simulations. This has led to various implementations. This method is closely related to Line–sphere intersection. Cylinder equation, end caps excluded. Let formula_0 be the cylinder base (or one endpoint), formula_1 be the cylinder axis unit vector, cylinder radius formula_2, and height (or axis length) formula_3. The cylinder may be in any orientation. The equation for an infinite cylinder can be written as formula_4 where formula_5 is any point on the cylinder surface. The equation simply states that points formula_6 are exactly at Euclidean distance formula_2 from the axis formula_7 starting from point formula_8, where formula_2 is measured in units of formula_9. Note that formula_10 if formula_7 is a unit vector. Because both sides of the equation are always positive or zero, we can square it, and eliminate the square root operation in the Euclidean norm on the left side: formula_11 Point formula_6 is at signed distance formula_12 from the base along the axis. Therefore, the two equations defining the cylinder, excluding the end caps, is formula_11 formula_13 The line. Let formula_14 be a line through origin, formula_15 being the unit vector, and formula_16 the distance from origin. If your line does not pass through origin but point formula_17, i.e. your line is formula_18, replace formula_8 with formula_19 everywhere; distance formula_16 is then the distance from formula_17. The intersection problem. The intersection between the line and the cylinder is formula_20 formula_21 where the signed distance along the axis formula_22 is formula_23 Solution. Rearranging the first equation gives a quadratic equation for formula_16. Solving that for formula_16 gives formula_24 where formula_25 if formula_7 is a unit vector. If formula_26 the line is parallel to the axis, and there is no intersection, or the intersection is a line. If formula_27 the line does not intersect the cylinder. Solving formula_16 only gives you the distance at which the line intersects the "infinite" cylinder. 
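The quadratic solution above translates directly into code. The following is a small illustrative NumPy sketch (the function and variable names are ours, not from any standard library); it assumes the line passes through the origin and that the axis vector is of unit length, and returns the distances d at which the line meets the infinite cylinder. Whether each resulting point lies on the finite cylinder, or on an end cap, is decided by the checks described next.

```python
# Illustrative sketch: line p = n*d through the origin, cylinder with base b,
# unit-length axis a and radius r. Returns the 0, 1 or 2 distances d at which
# the line meets the infinite cylinder.
import numpy as np

def infinite_cylinder_hits(n, b, a, r):
    n, b, a = map(np.asarray, (n, b, a))
    n_x_a = np.cross(n, a)
    denom = np.dot(n_x_a, n_x_a)               # (n x a).(n x a)
    if np.isclose(denom, 0.0):
        return []                               # line parallel to the axis
    disc = denom * r**2 - np.dot(b, n_x_a)**2   # discriminant (a assumed unit length)
    if disc < 0.0:
        return []                               # line misses the infinite cylinder
    mid = np.dot(n_x_a, np.cross(b, a))         # (n x a).(b x a)
    return [(mid - np.sqrt(disc)) / denom, (mid + np.sqrt(disc)) / denom]

# Example: unit-radius cylinder along the z-axis, line along the x-axis.
print(infinite_cylinder_hits([1, 0, 0], [0, 0, -1], [0, 0, 1], 1.0))   # [-1.0, 1.0]
```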
To see whether an intersection lies on the finite cylinder actually under consideration, we need to check whether the signed distance formula_22 from the cylinder base formula_8 along the axis formula_7 to the intersection formula_14 is between zero and the length of the cylinder: formula_28 where formula_22 is still formula_23 End caps. The above assumes that the cylinder does not have end caps; they must be checked for separately. The seam where the end cap meets the cylinder is assumed to belong to the cylinder, and is excluded from the end cap. Hemispherical end caps. Hemispherical end caps are just half-spheres at both ends of the cylinder. This object is sometimes called a capsule, or a fixed-radius linearly-swept sphere. Cylinder height formula_3 does not include the end caps. If formula_29 is the cylinder height including both hemispherical end caps, then formula_30. Check if the line formula_14 intersects either sphere: center formula_31 or formula_32 and radius formula_2: formula_33 If formula_34 the line does not intersect the end cap sphere. If there are solutions formula_16, accept only those that hit the actual end cap hemisphere: formula_35 or formula_36 where, once again, formula_23 Planar end caps. Planar end caps are circular regions, radius formula_2, in planes centered at formula_31 and formula_37, with unit normal vectors formula_38 and formula_7, respectively. The line formula_39 intersects the plane if and only if formula_40 Solving for d is simple: formula_41 Note that if formula_42 the line is parallel to the end cap plane (and also perpendicular to the cylinder axis). Finally, if and only if formula_43 the intersection point formula_39 is within the actual end cap (the circular region in the plane). Unit normal vector at an intersection point. One of the many applications for this algorithm is in ray tracing, where the cylinder unit normal vector formula_44 at the intersection formula_45 is needed for refracted and reflected rays and lighting. The equations below use the signed distance formula_22 to the intersection point formula_45 from base formula_8 along the axis formula_7, which is always formula_23 For the cylinder surface (excluding the end caps, but including the seam), formula_28: formula_46 For a spherical end cap at the base, formula_35: formula_47 for a spherical end cap at the other end, formula_48: formula_49 For a planar end cap at the base, formula_50: formula_51 for a planar end cap at the other end, formula_52: formula_53 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{\\bar b} = ( b_x , b_y , b_z )" }, { "math_id": 1, "text": "{\\hat a} = ( a_x , a_y , a_z )" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "h" }, { "math_id": 4, "text": "\\lVert {\\hat a} \\times ({\\bar p} - {\\bar b}) \\rVert = r" }, { "math_id": 5, "text": "{\\bar p} = (x , \\ y , \\ z )" }, { "math_id": 6, "text": "{\\bar p}" }, { "math_id": 7, "text": "{\\hat a}" }, { "math_id": 8, "text": "{\\bar b}" }, { "math_id": 9, "text": "\\lVert{\\hat a}\\rVert" }, { "math_id": 10, "text": "\\lVert{\\hat a}\\rVert = 1" }, { "math_id": 11, "text": "{\\lVert {\\hat a} \\times ({\\bar p} - {\\bar b}) \\rVert}^2 = r^2" }, { "math_id": 12, "text": "t = {\\hat a} \\cdot ({\\bar p} - {\\bar b})" }, { "math_id": 13, "text": "0 \\leq {\\hat a} \\cdot ({\\bar p} - {\\bar b}) \\leq h" }, { "math_id": 14, "text": "{\\bar p} = {\\hat n} d" }, { "math_id": 15, "text": "{\\hat n}" }, { "math_id": 16, "text": "d" }, { "math_id": 17, "text": "{\\bar o}" }, { "math_id": 18, "text": "{\\bar o} + {\\hat n} d" }, { "math_id": 19, "text": "({\\bar b}-{\\bar o})" }, { "math_id": 20, "text": "{\\lVert {\\hat a} \\times ({\\hat n} d - {\\bar b}) \\rVert}^2 = r^2" }, { "math_id": 21, "text": "0 \\leq t \\leq h" }, { "math_id": 22, "text": "t" }, { "math_id": 23, "text": "t = {\\hat a} \\cdot ({\\hat n} d - {\\bar b})" }, { "math_id": 24, "text": "d = \\frac{({\\hat n}\\times{\\hat a})\\cdot({\\bar b}\\times{\\hat a}) \\pm\\sqrt{ ({\\hat n}\\times{\\hat a})\\cdot({\\hat n}\\times{\\hat a}) r^2 - ({\\hat a}\\cdot{\\hat a})({\\bar b}\\cdot({\\hat n}\\times{\\hat a}))^2}}{({\\hat n}\\times{\\hat a})\\cdot({\\hat n}\\times{\\hat a})}" }, { "math_id": 25, "text": "({\\hat a}\\cdot{\\hat a}) = 1" }, { "math_id": 26, "text": "\\lVert {\\hat n} \\times {\\hat a} \\rVert = 0" }, { "math_id": 27, "text": "({\\hat n}\\times{\\hat a})\\cdot({\\hat n}\\times{\\hat a}) r^2 - ({\\hat a}\\cdot{\\hat a})({\\bar b}\\cdot({\\hat n}\\times{\\hat a}))^2 < 0" }, { "math_id": 28, "text": "0 \\le t \\le h" }, { "math_id": 29, "text": "H" }, { "math_id": 30, "text": "h = H - 2 r" }, { "math_id": 31, "text": "{\\bar c} = {\\bar b}" }, { "math_id": 32, "text": "{\\bar c} = {\\bar b} + {\\hat a} h" }, { "math_id": 33, "text": "d = {\\hat n}\\cdot{\\bar c} \\pm \\sqrt{({\\hat n}\\cdot{\\bar c})^2 + r^2 - ({\\bar c}\\cdot{\\bar c})}" }, { "math_id": 34, "text": "({\\hat n}\\cdot{\\bar c})^2 + r^2 - ({\\bar c}\\cdot{\\bar c}) < 0" }, { "math_id": 35, "text": "-r \\le t < 0" }, { "math_id": 36, "text": "h < t \\le h+r" }, { "math_id": 37, "text": "{\\bar c} = {\\bar b} + {\\hat a}h" }, { "math_id": 38, "text": "-{\\hat a}" }, { "math_id": 39, "text": "{\\hat n}d" }, { "math_id": 40, "text": "({\\hat n} d - {\\bar c}) \\cdot {\\hat a} = 0" }, { "math_id": 41, "text": "d = \\frac{ {\\hat a} \\cdot {\\bar c} }{ {\\hat a} \\cdot {\\hat n} }" }, { "math_id": 42, "text": "{\\hat a} \\cdot {\\hat n} = 0" }, { "math_id": 43, "text": "({\\hat n} d - {\\bar c}) \\cdot ({\\hat n} d - {\\bar c}) < r^2" }, { "math_id": 44, "text": "{\\hat v}" }, { "math_id": 45, "text": "{\\hat n} d" }, { "math_id": 46, "text": "{\\hat v} = \\frac{ {\\hat n} d - {\\hat a} t - {\\bar b} }{ \\lVert {\\hat n} d - {\\hat a} t - {\\bar b} \\rVert }" }, { "math_id": 47, "text": "{\\hat v} = \\frac{ {\\hat n} d - {\\bar b} }{ r }" }, { "math_id": 48, "text": "h < t \\le h + r" }, { "math_id": 49, "text": "{\\hat v} = \\frac{ {\\hat n} d - {\\bar b} - {\\hat a} h }{ r }" }, { "math_id": 50, "text": "t = 0" }, { "math_id": 51, "text": "{\\hat v} = 
-{\\hat a}" }, { "math_id": 52, "text": "t = h" }, { "math_id": 53, "text": "{\\hat v} = {\\hat a}" } ]
https://en.wikipedia.org/wiki?curid=75508474
75521240
GRSI model
Possible explanation by general relativity for dark matter and dark energy &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in physics: What is dark matter? What is dark energy? The GRSI model is an attempt to explain astrophysical and cosmological observations without dark matter, dark energy or modifying the laws of gravity as they are currently established. This model is an alternative to Lambda-CDM, the standard model of cosmology. History and description. The model was proposed in a series of articles, the first dating from 2003. The basic point is that, within General Relativity, gravitational fields couple to each other, and this coupling can effectively increase the gravitational interaction between massive objects. The additional gravitational strength then avoids the need for dark matter. This field coupling is the origin of General Relativity's non-linear behavior. It can be understood, in particle language, as gravitons interacting with each other (despite being massless) because they carry energy-momentum. A natural implication of this model is its explanation of the accelerating expansion of the universe without resorting to dark energy. The increased binding energy within a galaxy requires, by energy conservation, a weakening of gravitational attraction outside said galaxy. This mimics the repulsion of dark energy. The GRSI model is inspired by the Strong Nuclear Force, where a comparable phenomenon occurs. The interaction between gluons emitted by static or nearly static quarks dramatically strengthens the quark-quark interaction, ultimately leading to quark confinement on the one hand (analogous to the need of stronger gravity to explain away dark matter) and the suppression of the Strong Nuclear Force outside hadrons (analogous to the repulsion of dark energy that balances gravitational attraction at large scales). Two other parallel phenomena are the Tully-Fisher relation in galaxy dynamics and the Regge trajectories emerging from the strong force. In both cases, the phenomenological formulas describing these observations are similar, albeit with different numerical factors. These parallels are expected from a theoretical point of view: the General Relativity and Strong Interaction Lagrangians have the same form. The validity of the GRSI model then simply hinges on whether the coupling of the gravitational fields is large enough so that the same effects that occur in hadrons also occur in very massive systems. This coupling is effectively given by formula_0, where formula_1 is the gravitational constant, formula_2 is the mass of the system, and formula_3 is a characteristic length of the system. The claim of the GRSI proponents, based either on lattice calculations, a background-field model, or the coincidental phenomenologies in galactic or hadronic dynamics mentioned in the previous paragraph, is that formula_0 is indeed sufficiently large for large systems such as galaxies. List of topics studied in the Model. The main observations that appear to require dark matter and/or dark energy can be explained within this model. Namely, Additionally, the model explains observations that are currently challenging to understand within Lambda-CDM: Finally, the model made a prediction that the amount of missing mass (i.e., the dark mass in dark matter approaches) in elliptical galaxies correlates with the ellipticity of the galaxies. This was tested and verified. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sqrt{GM/L}" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "L" } ]
https://en.wikipedia.org/wiki?curid=75521240
755268
Working fluid
Pressurized gas or liquid in a heat engine For fluid power, a working fluid is a gas or liquid that primarily transfers force, motion, or mechanical energy. In hydraulics, water or hydraulic fluid transfers force between hydraulic components such as hydraulic pumps, hydraulic cylinders, and hydraulic motors that are assembled into hydraulic machinery, hydraulic drive systems, etc. In pneumatics, the working fluid is air or another gas which transfers force between pneumatic components such as compressors, vacuum pumps, pneumatic cylinders, and pneumatic motors. In pneumatic systems, the working gas also stores energy because it is compressible. (Gases also heat up as they are compressed and cool as they expand; this incidental heat pump is rarely exploited.) (Some gases also condense into liquids as they are compressed and boil as pressure is reduced.) For passive heat transfer, a working fluid is a gas or liquid, usually called a coolant or heat transfer fluid, that primarily transfers heat into or out of a region of interest by conduction, convection, and/or forced convection (pumped liquid cooling, air cooling, etc.). The working fluid of a heat engine or heat pump is a gas or liquid, usually called a refrigerant, coolant, or working gas, that primarily converts thermal energy (temperature change) into mechanical energy (or vice versa) by phase change and/or heat of compression and expansion. Examples using phase change include water↔steam in steam engines, and refrigerants in vapor-compression refrigeration and air conditioning systems. Examples without phase change include air or hydrogen in hot air engines such as the Stirling engine, air or gases in gas-cycle heat pumps, etc. (Some heat pumps and heat engines use "working solids", such as rubber bands, for elastocaloric refrigeration or thermoelastic cooling and nickel titanium in a prototype heat engine.) Working fluids other than air or water are necessarily recirculated in a loop. Some hydraulic and passive heat-transfer systems are open to the water supply and/or atmosphere, sometimes through breather filters. Heat engines, heat pumps, and systems using volatile liquids or special gases are usually sealed behind relief valves. Properties and states. The working fluid's properties are essential for the full description of thermodynamic systems. Although working fluids have many physical properties which can be defined, the thermodynamic properties which are often required in engineering design and analysis are few. Pressure, temperature, enthalpy, entropy, specific volume, and internal energy are the most common. If at least two thermodynamic properties are known, the state of the working fluid can be defined. This is usually done on a property diagram which is simply a plot of one property versus another. When the working fluid passes through engineering components such as turbines and compressors, the point on a property diagram moves due to the possible changes of certain properties. In theory therefore it is possible to draw a line/curve which fully describes the thermodynamic properties of the fluid. In reality however this can only be done if the process is reversible. If not, the changes in property are represented as a dotted line on a property diagram. This issue does not really affect thermodynamic analysis since in most cases it is the end states of a process which are sought after. Work. The working fluid can be used to output useful work if used in a turbine. 
Also, in thermodynamic cycles energy may be input to the working fluid by means of a compressor. The mathematical formulation for this may be quite simple if we consider a cylinder in which a working fluid resides. A piston is used to input useful work to the fluid. From mechanics, the work done from state 1 to state 2 of the process is given by: formula_0 where "ds" is the incremental distance from one state to the next and "F" is the force applied. The negative sign is introduced since in this case a decrease in volume is being considered. The situation is shown in the following figure: The force is given by the product of the pressure in the cylinder and its cross sectional area such that formula_1 Where "A⋅ds = dV" is the elemental change of cylinder volume. If from state 1 to 2 the volume increases then the working fluid actually does work on its surroundings and this is commonly denoted by a negative work. If the volume decreases the work is positive. By the definition given with the above integral the work done is represented by the area under a pressure–volume diagram. If we consider the case where we have a constant pressure process then the work is simply given by formula_2 Selection. Depending on the application, various types of working fluids are used. In a thermodynamic cycle it may be the case that the working fluid changes state from gas to liquid or vice versa. Certain gases such as helium can be treated as ideal gases. This is not generally the case for superheated steam and the ideal gas equation does not really hold. At much higher temperatures however it still yields relatively accurate results. The physical and chemical properties of the working fluid are extremely important when designing thermodynamic systems. For instance, in a refrigeration unit, the working fluid is called the refrigerant. Ammonia is a typical refrigerant and may be used as the primary working fluid. Compared with water (which can also be used as a refrigerant), ammonia makes use of relatively high pressures requiring more robust and expensive equipment. In air standard cycles as in gas turbine cycles, the working fluid is air. In the open cycle gas turbine, air enters a compressor where its pressure is increased. The compressor therefore inputs work to the working fluid (positive work). The fluid is then transferred to a combustion chamber where this time heat energy is input by means of the burning of a fuel. The air then expands in a turbine thus doing work against the surroundings (negative work). Different working fluids have different properties and in choosing one in particular the designer must identify the major requirements. In refrigeration units, high latent heats are required to provide large refrigeration capacities. Applications and examples. The following table gives typical applications of working fluids and examples for each: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " W = -\\int_{1}^{2} \\mathbf{F} \\cdot \\mathrm{d}\\mathbf{s}" }, { "math_id": 1, "text": "\\begin{align}\nW &= -\\int_{1}^{2} PA \\cdot \\mathrm{d}\\mathbf{s} \\\\\n&= -\\int_{1}^{2} P \\cdot \\mathrm{d}V\n\\end{align}" }, { "math_id": 2, "text": "\\begin{align}\nW &= -P \\int_{1}^{2} \\mathrm{d}V \\\\\n&= -P \\cdot \\left(V_2 - V_1\\right)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=755268
75527363
Angular mechanics
&lt;templatestyles src="Hlist/styles.css"/&gt; In physics, angular mechanics is the branch of mechanics that studies rotational motion. It covers quantities such as angular momentum, angular velocity, and torque, as well as more advanced topics such as the Coriolis force and angular aerodynamics. It is applied in many fields, including toy making, aerospace engineering, and aviation. Applications. Aviation. Angular mechanics is used throughout aviation. A spinning propeller carries angular momentum and pushes air backwards, producing the thrust that moves the aircraft forward so that its wings can keep it aloft; this involves angular mechanics, especially torque and angular momentum. Toy making. Many toys are designed with angular mechanics in mind, including gyroscopes, tops, and yo-yos. Spinning a top means applying a push and a pull on opposite sides, which produces a torque and sets it rotating. By the rotational form of Newton's first law of motion, the top keeps spinning until friction and other torques slow it down, and the angular momentum of the spin gives it the gyroscopic stability that keeps it upright. Aerospace engineering. Angular mechanics is also central to aerospace engineering. At the altitude of the ISS, gravity is still about 90% as strong as at the Earth's surface; the station does not fall to the ground because of its orbital motion and the angular momentum it carries, continuously falling around the Earth rather than into it. Equations. Angular mechanics uses many equations, most of which describe the nature of rotational motion. Torque. The equation for torque is central to angular mechanics. Torque is the rotational analogue of force and is defined by a cross product, which makes it a pseudovector: formula_0 where formula_1 is the torque, r is the position (lever-arm) vector, f is the applied force, and formula_2 denotes the cross product. An equivalent expression for the magnitude is formula_3 where formula_1 is the torque, r is the length of the lever arm, F is the magnitude of the force, and formula_4 is the angle between the two vectors. Angular velocity. The equation for angular velocity is widely used in rotational mechanics: formula_5 where formula_6 is the angular velocity, formula_4 is the angle, and t is time. Angular acceleration. formula_7 where formula_8 is the angular acceleration and formula_6 is the angular velocity. Planetary motion. A spinning planet carries angular momentum. Its rotation flattens it slightly into an oblate shape, bulging at the equator, rather than a perfect sphere. Another example of angular mechanics in planetary motion is the orbit around a star: because of its orbital speed, a planet keeps falling around its star instead of plummeting into it. Earth. A point on the Earth's equator moves at roughly 1670 kilometers per hour around the planet's axis. Because of this rotation, the centrifugal effect makes objects weigh slightly less at the equator than at the poles; the same effect deforms the Earth into an oblate shape, so a point on the equator is also farther from the Earth's center than the poles are. The Earth's average orbital speed is about 30 kilometers per second (roughly 29.8 km/s), which keeps it in a stable orbit around the Sun. Moon. The Moon orbits the Earth at roughly one kilometer per second. It is also tidally locked: its rotation period equals its orbital period, so it always shows the same face to the Earth. History. Angular mechanics has a rich history. ~500 BCE-323 BCE. In ancient Greece, people already played with yo-yos and spinning tops. Although the ancient Greeks had no theory of angular momentum, they were fascinated by the ability of spinning objects to stay upright. ~1295-1358. The French philosopher Jean Buridan developed the theory of impetus, an early precursor of the modern concepts of momentum and angular momentum. ~1642-1727.
After Isaac Newton formulated his laws of motion, later scientists built on them to derive the corresponding laws of rotation. 1743. Inspired by this work on rotation, John Serson invented a spinning "whirling speculum" in 1743, an instrument now regarded as a precursor of the gyroscope. Rotational laws. Euler's second law. Euler's second law states that the rate of change of angular momentum about a point fixed in an inertial reference frame is equal to the sum of the external torques acting on the body about that point. Newton's laws of motion. Newton's laws of motion can be translated into rotational laws. First law. A body at rest tends to remain at rest, and a rotating body keeps rotating at a constant angular velocity unless a net torque acts upon it. Second law. The angular acceleration of a body is proportional to the net torque acting on it and inversely proportional to its moment of inertia. Third law. For every torque exerted by one body on another, there is an equal and opposite torque exerted back on the first body.
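As a concrete check of the torque equations given above, the short Python sketch below (an illustrative addition, not part of the original article; the lever arm and force are arbitrary example vectors) computes the torque as a cross product with NumPy and verifies that its magnitude agrees with rF sin(θ).

```python
import numpy as np

# Arbitrary example: lever arm r (meters) and applied force F (newtons).
r = np.array([0.5, 0.2, 0.0])
F = np.array([1.0, 3.0, 0.0])

# Torque as a cross product (a pseudovector).
tau = np.cross(r, F)

# Magnitude check: |tau| should equal |r| |F| sin(theta).
cos_theta = np.dot(r, F) / (np.linalg.norm(r) * np.linalg.norm(F))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
tau_magnitude = np.linalg.norm(r) * np.linalg.norm(F) * np.sin(theta)

print("tau =", tau)                                  # [0, 0, 1.3]
print("|tau| from cross product:", np.linalg.norm(tau))
print("|tau| from r F sin(theta):", tau_magnitude)   # same value, 1.3
```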
[ { "math_id": 0, "text": "\\tau=r\\times f" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\times" }, { "math_id": 3, "text": "\\tau=rF \\sin (\\theta)" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": "\\omega= d\\theta / dt" }, { "math_id": 6, "text": "\\omega" }, { "math_id": 7, "text": "\\alpha = d \\omega / dt" }, { "math_id": 8, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=75527363
755300
Curvilinear coordinates
Coordinate system whose directions vary in space In geometry, curvilinear coordinates are a coordinate system for Euclidean space in which the coordinate lines may be curved. These coordinates may be derived from a set of Cartesian coordinates by using a transformation that is locally invertible (a one-to-one map) at each point. This means that one can convert a point given in a Cartesian coordinate system to its curvilinear coordinates and back. The name "curvilinear coordinates", coined by the French mathematician Lamé, derives from the fact that the coordinate surfaces of the curvilinear systems are curved. Well-known examples of curvilinear coordinate systems in three-dimensional Euclidean space (R3) are cylindrical and spherical coordinates. A Cartesian coordinate surface in this space is a coordinate plane; for example "z" = 0 defines the "x"-"y" plane. In the same space, the coordinate surface "r" = 1 in spherical coordinates is the surface of a unit sphere, which is curved. The formalism of curvilinear coordinates provides a unified and general description of the standard coordinate systems. Curvilinear coordinates are often used to define the location or distribution of physical quantities which may be, for example, scalars, vectors, or tensors. Mathematical expressions involving these quantities in vector calculus and tensor analysis (such as the gradient, divergence, curl, and Laplacian) can be transformed from one coordinate system to another, according to transformation rules for scalars, vectors, and tensors. Such expressions then become valid for any curvilinear coordinate system. A curvilinear coordinate system may be simpler to use than the Cartesian coordinate system for some applications. The motion of particles under the influence of central forces is usually easier to solve in spherical coordinates than in Cartesian coordinates; this is true of many physical problems with spherical symmetry defined in R3. Equations with boundary conditions that follow coordinate surfaces for a particular curvilinear coordinate system may be easier to solve in that system. While one might describe the motion of a particle in a rectangular box using Cartesian coordinates, it is easier to describe the motion in a sphere with spherical coordinates. Spherical coordinates are the most common curvilinear coordinate systems and are used in Earth sciences, cartography, quantum mechanics, relativity, and engineering. Orthogonal curvilinear coordinates in 3 dimensions. Coordinates, basis, and vectors. For now, consider 3-D space. A point "P" in 3-D space (or its position vector r) can be defined using Cartesian coordinates ("x", "y", "z") [equivalently written ("x"1, "x"2, "x"3)], by formula_0, where e"x", e"y", e"z" are the "standard basis vectors". It can also be defined by its curvilinear coordinates ("q"1, "q"2, "q"3) if this triplet of numbers defines a single point in an unambiguous way. The relation between the coordinates is then given by the invertible transformation functions: formula_1 formula_2 The surfaces "q"1 = constant, "q"2 = constant, "q"3 = constant are called the coordinate surfaces; and the space curves formed by their intersection in pairs are called the coordinate curves. The coordinate axes are determined by the tangents to the coordinate curves at the intersection of three surfaces. 
They are not in general fixed directions in space, unlike the basis directions of simple Cartesian coordinates, and thus there is generally no natural global basis for curvilinear coordinates. In the Cartesian system, the standard basis vectors can be derived from the derivative of the location of point "P" with respect to the local coordinate formula_3 Applying the same derivatives to the curvilinear system locally at point "P" defines the natural basis vectors: formula_4 Such a basis, whose vectors change their direction and/or magnitude from point to point, is called a local basis. All bases associated with curvilinear coordinates are necessarily local. Basis vectors that are the same at all points are global bases, and can be associated only with linear or affine coordinate systems. For this article e is reserved for the standard basis (Cartesian) and h or b is for the curvilinear basis. These may not have unit length, and may also not be orthogonal. In the case that they "are" orthogonal at all points where the derivatives are well-defined, we define the Lamé coefficients (after Gabriel Lamé) by formula_5 and the curvilinear orthonormal basis vectors by formula_6 These basis vectors may well depend upon the position of "P"; it is therefore necessary that they are not assumed to be constant over a region. (They technically form a basis for the tangent bundle of formula_7 at "P", and so are local to "P".) In general, curvilinear coordinates allow the natural basis vectors hi not to be mutually perpendicular to each other, and they are not required to be of unit length: they can be of arbitrary magnitude and direction. The use of an orthogonal basis makes vector manipulations simpler than for non-orthogonal bases. However, some areas of physics and engineering, particularly fluid mechanics and continuum mechanics, require non-orthogonal bases to describe deformations and fluid transport and to account for complicated directional dependences of physical quantities. A discussion of the general case appears later on this page. Vector calculus. Differential elements. In orthogonal curvilinear coordinates, the total differential change in r is formula_8 so the scale factors are formula_9 In non-orthogonal coordinates the length of formula_10 is the positive square root of formula_11 (with the Einstein summation convention). The six independent scalar products "gij"=h"i".h"j" of the natural basis vectors generalize the three scale factors defined above for orthogonal coordinates. The nine "gij" are the components of the metric tensor, which has only three nonzero components in orthogonal coordinates: "g"11="h"1"h"1, "g"22="h"2"h"2, "g"33="h"3"h"3. Covariant and contravariant bases. Spatial gradients, distances, time derivatives and scale factors are interrelated within a coordinate system by two groups of basis vectors: the covariant basis vectors formula_12, which are locally tangent to their associated coordinate curve and transform like covariant vectors (denoted by lowered indices), and the contravariant basis vectors formula_13, which are locally normal to the isosurface created by the other coordinates and transform like contravariant vectors (denoted by raised indices). Note that, because of Einstein's summation convention, the position of the indices of the vectors is the opposite of that of the coordinates. Consequently, a general curvilinear coordinate system has two sets of basis vectors for every point: {b1, b2, b3} with lowered indices is the covariant basis, and {b1, b2, b3} with raised indices is the contravariant (a.k.a. reciprocal) basis. The covariant and contravariant types of basis vectors have identical direction for orthogonal curvilinear coordinate systems, but, as usual, have reciprocal units with respect to each other. Note the following important equality: formula_14 wherein formula_15 denotes the generalized Kronecker delta.
&lt;templatestyles src="Math_proof/styles.css" /&gt;Proof In the Cartesian coordinate system formula_16, we can write the dot product as: formula_17 Consider an infinitesimal displacement formula_18. Let dq1, dq2 and dq3 denote the corresponding infinitesimal changes in curvilinear coordinates q1, q2 and q3 respectively. By the chain rule, dq1 can be expressed as: formula_19 If the displacement "dr" is such that "dq"2 = "dq"3 = 0, i.e. the position vector r moves by an infinitesimal amount along the coordinate axis "q"2=const and "q"3=const, then: formula_20 Dividing by "dq"1, and taking the limit "dq"1 → 0: formula_21 or equivalently: formula_22 Now if the displacement dr is such that "dq"1="dq"3=0, i.e. the position vector r moves by an infinitesimal amount along the coordinate axis q1=const and q3=const, then: formula_23 Dividing by dq2, and taking the limit dq2 → 0: formula_24 or equivalently: formula_25 And so forth for the other dot products. Alternative Proof: formula_26 and the Einstein summation convention is implied. A vector v can be specified in terms of either basis, i.e., formula_27 Using the Einstein summation convention, the basis vectors relate to the components by formula_28 formula_29 and formula_30 formula_31 where "g" is the metric tensor (see below). A vector can be specified with covariant coordinates (lowered indices, written "vk") or contravariant coordinates (raised indices, written "vk"). From the above vector sums, it can be seen that contravariant coordinates are associated with covariant basis vectors, and covariant coordinates are associated with contravariant basis vectors. A key feature of the representation of vectors and tensors in terms of indexed components and basis vectors is "invariance" in the sense that vector components which transform in a covariant manner (or contravariant manner) are paired with basis vectors that transform in a contravariant manner (or covariant manner). Integration. Constructing a covariant basis in one dimension. Consider the one-dimensional curve shown in Fig. 3. At point "P", taken as an origin, "x" is one of the Cartesian coordinates, and "q"1 is one of the curvilinear coordinates. The local (non-unit) basis vector is b1 (notated h1 above, with b reserved for unit vectors) and it is built on the "q"1 axis which is a tangent to that coordinate line at the point "P". The axis "q"1 and thus the vector b1 form an angle formula_32 with the Cartesian "x" axis and the Cartesian basis vector e1. It can be seen from triangle "PAB" that formula_33 where |e1|, |b1| are the magnitudes of the two basis vectors, i.e., the scalar intercepts "PB" and "PA". "PA" is also the projection of b1 on the "x" axis. However, this method for basis vector transformations using "directional cosines" is inapplicable to curvilinear coordinates for the following reasons: The angles that the "q"1 line and that axis form with the "x" axis become closer in value the closer one moves towards point "P" and become exactly equal at "P". Let point "E" be located very close to "P", so close that the distance "PE" is infinitesimally small. Then "PE" measured on the "q"1 axis almost coincides with "PE" measured on the "q"1 line. At the same time, the ratio "PD/PE" ("PD" being the projection of "PE" on the "x" axis) becomes almost exactly equal to formula_34. Let the infinitesimally small intercepts "PD" and "PE" be labelled, respectively, as "dx" and d"q"1. Then formula_35. 
Thus, the directional cosines can be substituted in transformations with the more exact ratios between infinitesimally small coordinate intercepts. It follows that the component (projection) of b1 on the "x" axis is formula_36. If "qi" = "qi"("x"1, "x"2, "x"3) and "xi" = "xi"("q"1, "q"2, "q"3) are smooth (continuously differentiable) functions the transformation ratios can be written as formula_37 and formula_38. That is, those ratios are partial derivatives of coordinates belonging to one system with respect to coordinates belonging to the other system. Constructing a covariant basis in three dimensions. Doing the same for the coordinates in the other 2 dimensions, b1 can be expressed as: formula_39 Similar equations hold for b2 and b3 so that the standard basis {e1, e2, e3} is transformed to a local (ordered and normalised) basis {b1, b2, b3} by the following system of equations: formula_40 By analogous reasoning, one can obtain the inverse transformation from local basis to standard basis: formula_41 Jacobian of the transformation. The above systems of linear equations can be written in matrix form using the Einstein summation convention as formula_42. This coefficient matrix of the linear system is the Jacobian matrix (and its inverse) of the transformation. These are the equations that can be used to transform a Cartesian basis into a curvilinear basis, and vice versa. In three dimensions, the expanded forms of these matrices are formula_43 In the inverse transformation (second equation system), the unknowns are the curvilinear basis vectors. For any specific location there can only exist one and only one set of basis vectors (else the basis is not well defined at that point). This condition is satisfied if and only if the equation system has a single solution. In linear algebra, a linear equation system has a single solution (non-trivial) only if the determinant of its system matrix is non-zero: formula_44 which shows the rationale behind the above requirement concerning the inverse Jacobian determinant. Generalization to "n" dimensions. The formalism extends to any finite dimension as follows. Consider the real Euclidean "n"-dimensional space, that is R"n" = R × R × ... × R ("n" times) where R is the set of real numbers and × denotes the Cartesian product, which is a vector space. The coordinates of this space can be denoted by: x = ("x"1, "x"2...,"xn"). Since this is a vector (an element of the vector space), it can be written as: formula_45 where e1 = (1,0,0...,0), e2 = (0,1,0...,0), e3 = (0,0,1...,0)...,e"n" = (0,0,0...,1) is the "standard basis set of vectors" for the space R"n", and "i" = 1, 2..."n" is an index labelling components. Each vector has exactly one component in each dimension (or "axis") and they are mutually orthogonal (perpendicular) and normalized (has unit magnitude). More generally, we can define basis vectors b"i" so that they depend on q = ("q"1, "q"2...,"qn"), i.e. they change from point to point: b"i" = b"i"(q). In which case to define the same point x in terms of this alternative basis: the "coordinates" with respect to this basis "vi" also necessarily depend on x also, that is "vi" = "vi"(x). 
Then a vector v in this space, with respect to these alternative coordinates and basis vectors, can be expanded as a linear combination in this basis (which simply means to multiply each basis vector e"i" by a number "v""i" – scalar multiplication): formula_46 The vector sum that describes v in the new basis is composed of different vectors, although the sum itself remains the same. Transformation of coordinates. From a more general and abstract perspective, a curvilinear coordinate system is simply a coordinate patch on the differentiable manifold En (n-dimensional Euclidean space) that is diffeomorphic to the Cartesian coordinate patch on the manifold. Two diffeomorphic coordinate patches on a differential manifold need not overlap differentiably. With this simple definition of a curvilinear coordinate system, all the results that follow below are simply applications of standard theorems in differential topology. The transformation functions are such that there's a one-to-one relationship between points in the "old" and "new" coordinates, that is, those functions are bijections, and fulfil the following requirements within their domains: Vector and tensor algebra in three-dimensional curvilinear coordinates. Elementary vector and tensor algebra in curvilinear coordinates is used in some of the older scientific literature in mechanics and physics and can be indispensable to understanding work from the early and mid-1900s, for example the text by Green and Zerna. Some useful relations in the algebra of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Naghdi, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet. Tensors in curvilinear coordinates. A second-order tensor can be expressed as formula_47 where formula_48 denotes the tensor product. The components "Sij" are called the contravariant components, "Si j" the mixed right-covariant components, "Si j" the mixed left-covariant components, and "Sij" the covariant components of the second-order tensor. The components of the second-order tensor are related by formula_49 The metric tensor in orthogonal curvilinear coordinates. At each point, one can construct a small line element dx, so the square of the length of the line element is the scalar product dx • dx and is called the metric of the space, given by: formula_50. The following portion of the above equation formula_51 is a "symmetric" tensor called the fundamental (or metric) tensor of the Euclidean space in curvilinear coordinates. Indices can be raised and lowered by the metric: formula_52 Relation to Lamé coefficients. Defining the scale factors "hi" by formula_53 gives a relation between the metric tensor and the Lamé coefficients, and formula_54 where "hij" are the Lamé coefficients. For an orthogonal basis we also have: formula_55 Example: Polar coordinates. If we consider polar coordinates for R2, formula_56 (r, θ) are the curvilinear coordinates, and the Jacobian determinant of the transformation ("r",θ) → ("r" cos θ, "r" sin θ) is "r". The orthogonal basis vectors are b"r" = (cos θ, sin θ), bθ = (−r sin θ, r cos θ). The scale factors are "h""r" = 1 and "h"θ= "r". The fundamental tensor is "g"11 =1, "g"22 ="r"2, "g"12 = "g"21 =0. The alternating tensor. 
In an orthonormal right-handed basis, the third-order alternating tensor is defined as formula_57 In a general curvilinear basis the same tensor may be expressed as formula_58 It can also be shown that formula_59 Christoffel symbols. The Christoffel symbols of the first kind, formula_60, are defined by formula_61 where the comma denotes a partial derivative (see Ricci calculus). To express Γ"kij" in terms of "gij", note that formula_62 Since formula_63 using these to rearrange the above relations gives formula_64 The Christoffel symbols of the second kind, formula_65, are defined by formula_66 This implies that formula_67 since formula_68. Other relations that follow are formula_69 Vector and tensor calculus in three-dimensional curvilinear coordinates. Adjustments need to be made in the calculation of line, surface and volume integrals. For simplicity, the following restricts to three dimensions and orthogonal curvilinear coordinates. However, the same arguments apply for "n"-dimensional spaces. When the coordinate system is not orthogonal, there are some additional terms in the expressions. Simmonds, in his book on tensor analysis, quotes Albert Einstein saying "The magic of this theory will hardly fail to impose itself on anybody who has truly understood it; it represents a genuine triumph of the method of absolute differential calculus, founded by Gauss, Riemann, Ricci, and Levi-Civita." Vector and tensor calculus in general curvilinear coordinates is used in tensor analysis on four-dimensional curvilinear manifolds in general relativity, in the mechanics of curved shells, in examining the invariance properties of Maxwell's equations (which has been of interest in metamaterials), and in many other fields. Some useful relations in the calculus of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet. Let φ = φ(x) be a well-defined scalar field, v = v(x) a well-defined vector field, and "λ"1, "λ"2... be parameters of the coordinates. Differentiation. The expressions for the gradient, divergence, and Laplacian can be directly extended to "n" dimensions; however, the curl is only defined in 3D. The vector field b"i" is tangent to the "qi" coordinate curve and forms a natural basis at each point on the curve. This basis, as discussed at the beginning of this article, is also called the covariant curvilinear basis. We can also define a reciprocal basis, or contravariant curvilinear basis, b"i". All the algebraic relations between the basis vectors, as discussed in the section on tensor algebra, apply for the natural basis and its reciprocal at each point x. Fictitious forces in general curvilinear coordinates. By definition, if a particle with no forces acting on it has its position expressed in an inertial coordinate system, ("x"1, "x"2, "x"3, "t"), then there it will have no acceleration (d2"x""j"/d"t"2 = 0). In this context, a coordinate system can fail to be "inertial" either due to a non-straight time axis or non-straight space axes (or both). In other words, the basis vectors of the coordinates may vary in time at fixed positions, or they may vary with position at fixed times, or both. When equations of motion are expressed in terms of any non-inertial coordinate system (in this sense), extra terms appear, called Christoffel symbols.
Strictly speaking, these terms represent components of the absolute acceleration (in classical mechanics), but we may also choose to continue to regard d2"x""j"/d"t"2 as the acceleration (as if the coordinates were inertial) and treat the extra terms as if they were forces, in which case they are called fictitious forces. The component of any such fictitious force normal to the path of the particle and in the plane of the path's curvature is then called centrifugal force. This more general context makes clear the correspondence between the concepts of centrifugal force in rotating coordinate systems and in stationary curvilinear coordinate systems. (Both of these concepts appear frequently in the literature.) For a simple example, consider a particle of mass "m" moving in a circle of radius "r" with angular speed "w" relative to a system of polar coordinates rotating with angular speed "W". The radial equation of motion is "m" d2"r"/d"t"2 = "F""r" + "mr"("w" + "W")2. Thus the centrifugal force is "mr" times the square of the absolute rotational speed "A" = "w" + "W" of the particle. If we choose a coordinate system rotating at the speed of the particle, then "W" = "A" and "w" = 0, in which case the centrifugal force is "mrA"2, whereas if we choose a stationary coordinate system we have "W" = 0 and "w" = "A", in which case the centrifugal force is again "mrA"2. The reason for this equality of results is that in both cases the basis vectors at the particle's location are changing in time in exactly the same way. Hence these are really just two different ways of describing exactly the same thing, one description being in terms of rotating coordinates and the other being in terms of stationary curvilinear coordinates, both of which are non-inertial according to the more abstract meaning of that term. When describing general motion, the actual forces acting on a particle are often referred to the instantaneous osculating circle tangent to the path of motion, and this circle in the general case is not centered at a fixed location, and so the decomposition into centrifugal and Coriolis components is constantly changing. This is true regardless of whether the motion is described in terms of stationary or rotating coordinates. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
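To make the preceding formulas concrete, the following SymPy sketch (an illustrative addition, not part of the original article) computes the covariant basis vectors, the metric tensor, the scale factors, and a Christoffel symbol of the second kind for ordinary spherical coordinates, using the same relations between the metric and the Christoffel symbols stated above.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
q = [r, theta, phi]

# Position vector of spherical coordinates in Cartesian components.
x = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
               r*sp.sin(theta)*sp.sin(phi),
               r*sp.cos(theta)])

# Covariant (natural) basis vectors b_i = dx/dq^i.
b = [x.diff(qi) for qi in q]

# Metric tensor g_ij = b_i . b_j and scale factors h_i = sqrt(g_ii).
g = sp.Matrix(3, 3, lambda i, j: sp.simplify(b[i].dot(b[j])))
h = [sp.sqrt(g[i, i]) for i in range(3)]
print("g =", g)   # diag(1, r**2, r**2*sin(theta)**2)
print("h =", h)   # the usual 1, r, r*sin(theta) for 0 < theta < pi

# Christoffel symbols of the second kind from the metric:
# Gamma^k_ij = (1/2) g^{kl} (g_il,j + g_jl,i - g_ij,l)
ginv = g.inv()
def Gamma(k, i, j):
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[k, l] *
        (g[i, l].diff(q[j]) + g[j, l].diff(q[i]) - g[i, j].diff(q[l]))
        for l in range(3)))

print("Gamma^r_(theta,theta) =", Gamma(0, 1, 1))   # expected: -r
```

The Christoffel formula used here is the combination of the first-kind expression and the index-raising relation given in the Christoffel symbols section above.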
[ { "math_id": 0, "text": "\\mathbf{r} = x \\mathbf{e}_x + y\\mathbf{e}_y + z\\mathbf{e}_z" }, { "math_id": 1, "text": " x = f^1(q^1, q^2, q^3),\\, y = f^2(q^1, q^2, q^3),\\, z = f^3(q^1, q^2, q^3)" }, { "math_id": 2, "text": " q^1 = g^1(x,y,z),\\, q^2 = g^2(x,y,z),\\, q^3 = g^3(x,y,z)" }, { "math_id": 3, "text": "\\mathbf{e}_x = \\dfrac{\\partial\\mathbf{r}}{\\partial x}; \\;\n\\mathbf{e}_y = \\dfrac{\\partial\\mathbf{r}}{\\partial y}; \\;\n\\mathbf{e}_z = \\dfrac{\\partial\\mathbf{r}}{\\partial z}." }, { "math_id": 4, "text": "\\mathbf{h}_1 = \\dfrac{\\partial\\mathbf{r}}{\\partial q^1}; \\;\n\\mathbf{h}_2 = \\dfrac{\\partial\\mathbf{r}}{\\partial q^2}; \\;\n\\mathbf{h}_3 = \\dfrac{\\partial\\mathbf{r}}{\\partial q^3}." }, { "math_id": 5, "text": "h_1 = |\\mathbf{h}_1|; \\; h_2 = |\\mathbf{h}_2|; \\; h_3 = |\\mathbf{h}_3|" }, { "math_id": 6, "text": "\\mathbf{b}_1 = \\dfrac{\\mathbf{h}_1}{h_1}; \\;\n\\mathbf{b}_2 = \\dfrac{\\mathbf{h}_2}{h_2}; \\;\n\\mathbf{b}_3 = \\dfrac{\\mathbf{h}_3}{h_3}." }, { "math_id": 7, "text": "\\mathbb{R}^3" }, { "math_id": 8, "text": "d\\mathbf{r}=\\dfrac{\\partial\\mathbf{r}}{\\partial q^1}dq^1 + \\dfrac{\\partial\\mathbf{r}}{\\partial q^2}dq^2 + \\dfrac{\\partial\\mathbf{r}}{\\partial q^3}dq^3 = h_1 dq^1 \\mathbf{b}_1 + h_2 dq^2 \\mathbf{b}_2 + h_3 dq^3 \\mathbf{b}_3 " }, { "math_id": 9, "text": "h_i = \\left|\\frac{\\partial\\mathbf{r}}{\\partial q^i}\\right|" }, { "math_id": 10, "text": "d\\mathbf{r}= dq^1 \\mathbf{h}_1 + dq^2 \\mathbf{h}_2 + dq^3 \\mathbf{h}_3 " }, { "math_id": 11, "text": "d\\mathbf{r} \\cdot d\\mathbf{r} = dq^i dq^j \\mathbf{h}_i \\cdot \\mathbf{h}_j " }, { "math_id": 12, "text": "\\mathbf{b}_i=\\dfrac{\\partial\\mathbf{r}}{\\partial q^i}" }, { "math_id": 13, "text": "\\mathbf{b}^i=\\nabla q^i " }, { "math_id": 14, "text": " \\mathbf{b}^i\\cdot\\mathbf{b}_j = \\delta^i_j " }, { "math_id": 15, "text": " \\delta^i_j " }, { "math_id": 16, "text": " ( \\mathbf{e}_x , \\mathbf{e}_y, \\mathbf{e}_z ) " }, { "math_id": 17, "text": " \\mathbf{b}_i\\cdot\\mathbf{b}^j = \\left( \\dfrac {\\partial x} {\\partial q_i} , \\dfrac {\\partial y} {\\partial q_i} , \\dfrac {\\partial z} {\\partial q_i} \\right) \\cdot \\left( \\dfrac {\\partial q_j} {\\partial x} , \\dfrac {\\partial q_j} {\\partial y} , \\dfrac {\\partial q_j} {\\partial z} \\right) = \\dfrac {\\partial x} {\\partial q_i} \\dfrac {\\partial q_j} {\\partial x} + \\dfrac {\\partial y} {\\partial q_i} \\dfrac {\\partial q_j} {\\partial y} + \\dfrac {\\partial z} {\\partial q_i} \\dfrac {\\partial q_j} {\\partial z} " }, { "math_id": 18, "text": " d \\mathbf{r} = dx \\cdot \\mathbf{e}_x + dy \\cdot \\mathbf{e}_y + dz \\cdot \\mathbf{e}_z " }, { "math_id": 19, "text": " dq_1 = \\dfrac {\\partial q_1} {\\partial x} dx + \\dfrac {\\partial q_1} {\\partial y} dy + \\dfrac {\\partial q_1} {\\partial z} dz\n= \\dfrac {\\partial q_1} {\\partial x} dx + \\dfrac {\\partial q_1} {\\partial y} \\left(\\dfrac {\\partial y} {\\partial q_1} dq_1 + \\dfrac {\\partial y} {\\partial q_2} dq_2 + \\dfrac {\\partial y} {\\partial q_3} dq_3\\right) + \\dfrac {\\partial q_1} {\\partial z} \\left(\\dfrac {\\partial z} {\\partial q_1} dq_1 + \\dfrac {\\partial z} {\\partial q_2} dq_2 + \\dfrac {\\partial z} {\\partial q_3} dq_3\\right) " }, { "math_id": 20, "text": " dq_1 = \\dfrac {\\partial q_1} {\\partial x} dx + \\dfrac {\\partial q_1} {\\partial y} \\dfrac {\\partial y} {\\partial q_1} dq_1 + \\dfrac {\\partial q_1} {\\partial z} \\dfrac {\\partial z} {\\partial q_1} dq_1 " }, { "math_id": 21, "text": " 1 = 
\\dfrac {\\partial q_1} {\\partial x} \\dfrac {\\partial x} {\\partial q_1} + \\dfrac {\\partial q_1} {\\partial y} \\dfrac {\\partial y} {\\partial q_1} + \\dfrac {\\partial q_1} {\\partial z} \\dfrac {\\partial z} {\\partial q_1} = \\dfrac {\\partial x} {\\partial q_1} \\dfrac {\\partial q_1} {\\partial x} + \\dfrac {\\partial y} {\\partial q_1} \\dfrac {\\partial q_1} {\\partial y} + \\dfrac {\\partial z} {\\partial q_1} \\dfrac {\\partial q_1} {\\partial z} " }, { "math_id": 22, "text": " \\mathbf{b}_1\\cdot\\mathbf{b}^1 = 1 " }, { "math_id": 23, "text": " 0 = \\dfrac {\\partial q_1} {\\partial x} dx + \\dfrac {\\partial q_1} {\\partial y} \\dfrac {\\partial y} {\\partial q_2} dq_2 + \\dfrac {\\partial q_1} {\\partial z} \\dfrac {\\partial z} {\\partial q_2} dq_2 " }, { "math_id": 24, "text": " 0 = \\dfrac {\\partial q_1} {\\partial x} \\dfrac {\\partial x} {\\partial q_2} + \\dfrac {\\partial q_1} {\\partial y} \\dfrac {\\partial y} {\\partial q_2} + \\dfrac {\\partial q_1} {\\partial z} \\dfrac {\\partial z} {\\partial q_2} = \\dfrac {\\partial x} {\\partial q_2} \\dfrac {\\partial q_1} {\\partial x} + \\dfrac {\\partial y} {\\partial q_2} \\dfrac {\\partial q_1} {\\partial y} + \\dfrac {\\partial z} {\\partial q_2} \\dfrac {\\partial q_1} {\\partial z} " }, { "math_id": 25, "text": " \\mathbf{b}_2 \\cdot \\mathbf{b}^1 = 0 " }, { "math_id": 26, "text": "\\delta^i_j dq^j=dq^i=\\nabla q^i \\cdot d\\mathbf{r}=\\mathbf{b}^i \\cdot \\dfrac{\\partial\\mathbf{r}}{\\partial q^j} dq^j = \\mathbf{b}^i \\cdot \\mathbf{b}_j dq^j" }, { "math_id": 27, "text": " \\mathbf{v} = v^1\\mathbf{b}_1 + v^2\\mathbf{b}_2 + v^3\\mathbf{b}_3 = v_1\\mathbf{b}^1 + v_2\\mathbf{b}^2 + v_3\\mathbf{b}^3 " }, { "math_id": 28, "text": " \\mathbf{v}\\cdot\\mathbf{b}^i = v^k\\mathbf{b}_k\\cdot\\mathbf{b}^i = v^k\\delta^i_k = v^i " }, { "math_id": 29, "text": " \\mathbf{v}\\cdot\\mathbf{b}_i = v_k\\mathbf{b}^k\\cdot\\mathbf{b}_i = v_k\\delta_i^k = v_i " }, { "math_id": 30, "text": " \\mathbf{v}\\cdot\\mathbf{b}_i = v^k\\mathbf{b}_k\\cdot\\mathbf{b}_i = g_{ki}v^k " }, { "math_id": 31, "text": " \\mathbf{v}\\cdot\\mathbf{b}^i = v_k\\mathbf{b}^k\\cdot\\mathbf{b}^i = g^{ki}v_k " }, { "math_id": 32, "text": "\\alpha" }, { "math_id": 33, "text": " \\cos \\alpha = \\cfrac{|\\mathbf{e}_1|}{|\\mathbf{b}_1|} \\quad \\Rightarrow \\quad |\\mathbf{e}_1| = |\\mathbf{b}_1|\\cos \\alpha" }, { "math_id": 34, "text": "\\cos\\alpha" }, { "math_id": 35, "text": "\\cos \\alpha = \\cfrac{dx}{dq^1} = \\frac{|\\mathbf{e}_1|}{|\\mathbf{b}_1|}" }, { "math_id": 36, "text": "p^1 = \\mathbf{b}_1\\cdot\\cfrac{\\mathbf{e}_1}{|\\mathbf{e}_1|} = |\\mathbf{b}_1|\\cfrac{|\\mathbf{e}_1|}{|\\mathbf{e}_1|}\\cos\\alpha = |\\mathbf{b}_1|\\cfrac{dx}{dq^1} \\quad \\Rightarrow \\quad \\cfrac{p^1}{|\\mathbf{b}_1|} = \\cfrac{dx}{dq^1}" }, { "math_id": 37, "text": "\\cfrac{\\partial q^i}{\\partial x_j}" }, { "math_id": 38, "text": "\\cfrac{\\partial x_i}{\\partial q^j}" }, { "math_id": 39, "text": "\n\\mathbf{b}_1 = p^1\\mathbf{e}_1 + p^2\\mathbf{e}_2 + p^3\\mathbf{e}_3 = \\cfrac{\\partial x_1}{\\partial q^1} \\mathbf{e}_1 + \\cfrac{\\partial x_2}{\\partial q^1} \\mathbf{e}_2 + \\cfrac{\\partial x_3}{\\partial q^1} \\mathbf{e}_3\n" }, { "math_id": 40, "text": "\\begin{align}\n \\mathbf{b}_1 & = \\cfrac{\\partial x_1}{\\partial q^1} \\mathbf{e}_1 + \\cfrac{\\partial x_2}{\\partial q^1} \\mathbf{e}_2 + \\cfrac{\\partial x_3}{\\partial q^1} \\mathbf{e}_3 \\\\\n \\mathbf{b}_2 & = \\cfrac{\\partial x_1}{\\partial q^2} \\mathbf{e}_1 + \\cfrac{\\partial x_2}{\\partial q^2} 
\\mathbf{e}_2 + \\cfrac{\\partial x_3}{\\partial q^2} \\mathbf{e}_3 \\\\\n \\mathbf{b}_3 & = \\cfrac{\\partial x_1}{\\partial q^3} \\mathbf{e}_1 + \\cfrac{\\partial x_2}{\\partial q^3} \\mathbf{e}_2 + \\cfrac{\\partial x_3}{\\partial q^3} \\mathbf{e}_3\n\\end{align}" }, { "math_id": 41, "text": "\\begin{align}\n \\mathbf{e}_1 & = \\cfrac{\\partial q^1}{\\partial x_1} \\mathbf{b}_1 + \\cfrac{\\partial q^2}{\\partial x_1} \\mathbf{b}_2 + \\cfrac{\\partial q^3}{\\partial x_1} \\mathbf{b}_3 \\\\\n \\mathbf{e}_2 & = \\cfrac{\\partial q^1}{\\partial x_2} \\mathbf{b}_1 + \\cfrac{\\partial q^2}{\\partial x_2} \\mathbf{b}_2 + \\cfrac{\\partial q^3}{\\partial x_2} \\mathbf{b}_3 \\\\\n \\mathbf{e}_3 & = \\cfrac{\\partial q^1}{\\partial x_3} \\mathbf{b}_1 + \\cfrac{\\partial q^2}{\\partial x_3} \\mathbf{b}_2 + \\cfrac{\\partial q^3}{\\partial x_3} \\mathbf{b}_3\n\\end{align}" }, { "math_id": 42, "text": "\\cfrac{\\partial x_i}{\\partial q^k} \\mathbf{e}_i = \\mathbf{b}_k, \\quad \\cfrac{\\partial q^i}{\\partial x_k} \\mathbf{b}_i = \\mathbf{e}_k" }, { "math_id": 43, "text": "\n\\mathbf{J} = \\begin{bmatrix}\n \\cfrac{\\partial x_1}{\\partial q^1} & \\cfrac{\\partial x_1}{\\partial q^2} & \\cfrac{\\partial x_1}{\\partial q^3} \\\\\n \\cfrac{\\partial x_2}{\\partial q^1} & \\cfrac{\\partial x_2}{\\partial q^2} & \\cfrac{\\partial x_2}{\\partial q^3} \\\\\n \\cfrac{\\partial x_3}{\\partial q^1} & \\cfrac{\\partial x_3}{\\partial q^2} & \\cfrac{\\partial x_3}{\\partial q^3} \\\\\n \\end{bmatrix},\\quad\n\\mathbf{J}^{-1} = \\begin{bmatrix}\n \\cfrac{\\partial q^1}{\\partial x_1} & \\cfrac{\\partial q^1}{\\partial x_2} & \\cfrac{\\partial q^1}{\\partial x_3} \\\\\n \\cfrac{\\partial q^2}{\\partial x_1} & \\cfrac{\\partial q^2}{\\partial x_2} & \\cfrac{\\partial q^2}{\\partial x_3} \\\\\n \\cfrac{\\partial q^3}{\\partial x_1} & \\cfrac{\\partial q^3}{\\partial x_2} & \\cfrac{\\partial q^3}{\\partial x_3} \\\\\n \\end{bmatrix}\n " }, { "math_id": 44, "text": " \\det(\\mathbf{J}^{-1}) \\neq 0" }, { "math_id": 45, "text": " \\mathbf{x} = \\sum_{i=1}^n x_i\\mathbf{e}^i " }, { "math_id": 46, "text": " \\mathbf{v} = \\sum_{j=1}^n \\bar{v}^j\\mathbf{b}_j = \\sum_{j=1}^n \\bar{v}^j(\\mathbf{q})\\mathbf{b}_j(\\mathbf{q}) " }, { "math_id": 47, "text": "\n \\boldsymbol{S} = S^{ij}\\mathbf{b}_i\\otimes\\mathbf{b}_j = S^i{}_j\\mathbf{b}_i\\otimes\\mathbf{b}^j = S_i{}^j\\mathbf{b}^i\\otimes\\mathbf{b}_j = S_{ij}\\mathbf{b}^i\\otimes\\mathbf{b}^j\n " }, { "math_id": 48, "text": "\\scriptstyle\\otimes" }, { "math_id": 49, "text": " S^{ij} = g^{ik}S_k{}^j = g^{jk}S^i{}_k = g^{ik}g^{j\\ell}S_{k\\ell} " }, { "math_id": 50, "text": "d\\mathbf{x}\\cdot d\\mathbf{x} = \\cfrac{\\partial x_i}{\\partial q^j}\\cfrac{\\partial x_i}{\\partial q^k}dq^jdq^k\n " }, { "math_id": 51, "text": " \\cfrac{\\partial x_k}{\\partial q^i}\\cfrac{\\partial x_k}{\\partial q^j} = g_{ij}(q^i,q^j) = \\mathbf{b}_i\\cdot\\mathbf{b}_j " }, { "math_id": 52, "text": " v^i = g^{ik}v_k " }, { "math_id": 53, "text": " h_ih_j = g_{ij} = \\mathbf{b}_i\\cdot\\mathbf{b}_j \\quad \\Rightarrow \\quad h_i =\\sqrt{g_{ii}}= \\left|\\mathbf{b}_i\\right|=\\left|\\cfrac{\\partial\\mathbf{x}}{\\partial q^i}\\right| " }, { "math_id": 54, "text": " g_{ij} = \\cfrac{\\partial\\mathbf{x}}{\\partial q^i}\\cdot\\cfrac{\\partial\\mathbf{x}}{\\partial q^j}\n= \\left( h_{ki}\\mathbf{e}_k\\right)\\cdot\\left( h_{mj}\\mathbf{e}_m\\right)\n= h_{ki}h_{kj} " }, { "math_id": 55, "text": " g = g_{11}g_{22}g_{33} = h_1^2h_2^2h_3^2 \\quad \\Rightarrow \\quad \\sqrt{g} = h_1h_2h_3 = J " }, { 
"math_id": 56, "text": " (x, y)=(r \\cos \\theta, r \\sin \\theta) " }, { "math_id": 57, "text": " \\boldsymbol{\\mathcal{E}} = \\varepsilon_{ijk}\\mathbf{e}^i\\otimes\\mathbf{e}^j\\otimes\\mathbf{e}^k " }, { "math_id": 58, "text": "\n \\boldsymbol{\\mathcal{E}} = \\mathcal{E}_{ijk}\\mathbf{b}^i\\otimes\\mathbf{b}^j\\otimes\\mathbf{b}^k\n = \\mathcal{E}^{ijk}\\mathbf{b}_i\\otimes\\mathbf{b}_j\\otimes\\mathbf{b}_k\n" }, { "math_id": 59, "text": "\n \\mathcal{E}^{ijk} = \\cfrac{1}{J}\\varepsilon_{ijk} = \\cfrac{1}{+\\sqrt{g}}\\varepsilon_{ijk}\n" }, { "math_id": 60, "text": "\\Gamma_{kij}" }, { "math_id": 61, "text": "\n\\mathbf{b}_{i,j} = \\frac{\\partial \\mathbf{b}_i}{\\partial q^j} = \\mathbf{b}^k \\Gamma_{kij} \\quad \\Rightarrow \\quad\n\\mathbf{b}_k \\cdot \\mathbf{b}_{i,j} = \\Gamma_{kij}\n" }, { "math_id": 62, "text": "\n\\begin{align}\ng_{ij,k} & = (\\mathbf{b}_i\\cdot\\mathbf{b}_j)_{,k} = \\mathbf{b}_{i,k}\\cdot\\mathbf{b}_j + \\mathbf{b}_i\\cdot\\mathbf{b}_{j,k}\n= \\Gamma_{jik} + \\Gamma_{ijk}\\\\\ng_{ik,j} & = (\\mathbf{b}_i\\cdot\\mathbf{b}_k)_{,j} = \\mathbf{b}_{i,j}\\cdot\\mathbf{b}_k + \\mathbf{b}_i\\cdot\\mathbf{b}_{k,j}\n= \\Gamma_{kij} + \\Gamma_{ikj}\\\\\ng_{jk,i} & = (\\mathbf{b}_j\\cdot\\mathbf{b}_k)_{,i} = \\mathbf{b}_{j,i}\\cdot\\mathbf{b}_k + \\mathbf{b}_j\\cdot\\mathbf{b}_{k,i}\n= \\Gamma_{kji} + \\Gamma_{jki}\n\\end{align}\n" }, { "math_id": 63, "text": "\\mathbf{b}_{i,j} = \\mathbf{b}_{j,i}\\quad\\Rightarrow\\quad\\Gamma_{kij} = \\Gamma_{kji}" }, { "math_id": 64, "text": "\\Gamma_{kij} = \\frac{1}{2}(g_{ik,j} + g_{jk,i} - g_{ij,k}) = \\frac{1}{2}[(\\mathbf{b}_i\\cdot\\mathbf{b}_k)_{,j} + (\\mathbf{b}_j\\cdot\\mathbf{b}_k)_{,i} - (\\mathbf{b}_i\\cdot\\mathbf{b}_j)_{,k}]\n" }, { "math_id": 65, "text": "\\Gamma^k{}_{ji}" }, { "math_id": 66, "text": "\\Gamma^k{}_{ij} = g^{kl}\\Gamma_{lij} = \\Gamma^k{}_{ji},\\quad \\cfrac{\\partial \\mathbf{b}_i}{\\partial q^j} = \\mathbf{b}_k \\Gamma^k{}_{ij} " }, { "math_id": 67, "text": " \\Gamma^k{}_{ij} = \\cfrac{\\partial \\mathbf{b}_i}{\\partial q^j}\\cdot\\mathbf{b}^k = -\\mathbf{b}_i\\cdot\\cfrac{\\partial \\mathbf{b}^k}{\\partial q^j}\\quad " }, { "math_id": 68, "text": " \\quad\\cfrac{\\partial}{\\partial q^j}(\\mathbf{b}_i\\cdot\\mathbf{b}^k)=0" }, { "math_id": 69, "text": "\n\\cfrac{\\partial \\mathbf{b}^i}{\\partial q^j} = -\\Gamma^i{}_{jk}\\mathbf{b}^k,\\quad\n\\boldsymbol{\\nabla}\\mathbf{b}_i = \\Gamma^k{}_{ij}\\mathbf{b}_k\\otimes\\mathbf{b}^j,\\quad\n\\boldsymbol{\\nabla}\\mathbf{b}^i = -\\Gamma^i{}_{jk}\\mathbf{b}^k\\otimes\\mathbf{b}^j\n" } ]
https://en.wikipedia.org/wiki?curid=755300
75532618
99 Variations on a Proof
2019 book by Philip Ording 99 Variations on a Proof is a mathematics book by Philip Ording, in which he proves the same result in 99 different ways. Ording takes a single cubic equation, formula_0 and shows that its solutions are formula_1 and formula_2 using a different method in each chapter. The structure of the book was inspired by Oulipo co-founder Raymond Queneau's "Exercices de style" (1947). The book was published in 2019 by Princeton University Press. Reception. Writing in "The Mathematical Intelligencer," John J. Watkins described the book as "marvelous" and said that "Ording's inventiveness seems boundless". Watkins praised several of the proofs, particularly the visual proof in Chapter 10, while noting that some of the others left him "cold" by appealing to topics outside his own interests or exhausting his patience. While Watkins found the origami-based proof in Chapter 39 perplexing, Dan Rockmore's review in the "New York Review of Books" called the same proof "a delight". Reviewing the book for the Mathematical Association of America, Geoffrey Dietz also gave a positive evaluation, saying that he "learned something new" from several proofs and found some of them quite comedic. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
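A quick symbolic check of the equation discussed above (an illustrative sketch added here, not taken from the book or its reviews) confirms the two stated solutions; note that x = 1 is in fact a double root.

```python
import sympy as sp

x = sp.symbols('x')

# The cubic treated throughout the book: x^3 - 6x^2 + 11x - 6 = 2x - 2.
equation = sp.Eq(x**3 - 6*x**2 + 11*x - 6, 2*x - 2)

print(sp.solve(equation, x))                  # [1, 4]
print(sp.factor(x**3 - 6*x**2 + 9*x - 4))     # (x - 4)*(x - 1)**2
```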
[ { "math_id": 0, "text": "x^3 - 6x^2 + 11x - 6 = 2x - 2," }, { "math_id": 1, "text": "x = 1" }, { "math_id": 2, "text": "x = 4" } ]
https://en.wikipedia.org/wiki?curid=75532618
755329
Moufang loop
Algebraic structure In mathematics, a Moufang loop is a special kind of algebraic structure. It is similar to a group in many ways but need not be associative. Moufang loops were introduced by Ruth Moufang (1935). Smooth Moufang loops have an associated algebra, the Malcev algebra, similar in some ways to how a Lie group has an associated Lie algebra. Definition. A Moufang loop is a loop formula_0 that satisfies the four following equivalent identities for all formula_1, formula_2, formula_3 in formula_0 (the binary operation in formula_0 is denoted by juxtaposition): These identities are known as Moufang identities. It follows that formula_12 and formula_13. With the above product "M"("G",2) is a Moufang loop. It is associative if and only if "G" is abelian. where |"A"| is the number of elements of the code word "A", and so on. For more details see Conway, J. H.; Curtis, R. T.; Norton, S. P.; Parker, R. A.; and Wilson, R. A.: "Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups." Oxford, England. Properties. Associativity. Moufang loops differ from groups in that they need not be associative. A Moufang loop that is associative is a group. The Moufang identities may be viewed as weaker forms of associativity. By setting various elements to the identity, the Moufang identities imply Moufang's theorem states that when three elements "x", "y", and "z" in a Moufang loop obey the associative law: ("xy")"z" = "x"("yz") then they generate an associative subloop; that is, a group. A corollary of this is that all Moufang loops are "di-associative" (i.e. the subloop generated by any two elements of a Moufang loop is associative and therefore a group). In particular, Moufang loops are power associative, so that powers "x""n" are well-defined. When working with Moufang loops, it is common to drop the parenthesis in expressions with only two distinct elements. For example, the Moufang identities may be written unambiguously as Left and right multiplication. The Moufang identities can be written in terms of the left and right multiplication operators on "Q". The first two identities state that while the third identity says for all formula_17 in formula_0. Here formula_18 is bimultiplication by formula_3. The third Moufang identity is therefore equivalent to the statement that the triple formula_19 is an autotopy of formula_0 for all formula_3 in formula_0. Inverse properties. All Moufang loops have the inverse property, which means that each element "x" has a two-sided inverse "x"−1 that satisfies the identities: formula_20 for all "x" and "y". It follows that formula_21 and formula_22 if and only if formula_23. Moufang loops are universal among inverse property loops; that is, a loop "Q" is a Moufang loop if and only if every loop isotope of "Q" has the inverse property. It follows that every loop isotope of a Moufang loop is a Moufang loop. One can use inverses to rewrite the left and right Moufang identities in a more useful form: Lagrange property. A finite loop "Q" is said to have the "Lagrange property" if the order of every subloop of "Q" divides the order of "Q". Lagrange's theorem in group theory states that every finite group has the Lagrange property. It was an open question for many years whether or not finite Moufang loops had Lagrange property. The question was finally resolved by Alexander Grishkov and Andrei Zavarnitsine, and independently by Stephen Gagola III and Jonathan Hall, in 2003: Every finite Moufang loop does have the Lagrange property. 
More results for the theory of finite groups have been generalized to Moufang loops by Stephen Gagola III in recent years. Moufang quasigroups. Any quasigroup satisfying one of the Moufang identities must, in fact, have an identity element and therefore be a Moufang loop. We give a proof here for the third identity: Let "a" be any element of "Q", and let "e" be the unique element such that "ae" = "a". Then for any "x" in "Q", ("xa")"x" = ("x"("ae"))"x" = ("xa")("ex"). Cancelling "xa" on the left gives "x" = "ex" so that "e" is a left identity element. Now for any "y" in "Q", "ye" = ("ey")("ee") =("e"("ye"))"e" = ("ye")"e". Cancelling "e" on the right gives "y" = "ye", so "e" is also a right identity element. Therefore, "e" is a two-sided identity element. The proofs for the first two identities are somewhat more difficult (Kunen 1996). Open problems. Phillips' problem is an open problem in the theory presented by J. D. Phillips at Loops '03 in Prague. It asks whether there exists a finite Moufang loop of odd order with a trivial nucleus. Recall that the nucleus of a loop (or more generally a quasigroup) is the set of formula_1 such that formula_26, formula_27 and formula_28 hold for all formula_29 in the loop. "See also": Problems in loop theory and quasigroup theory References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
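The M(G,2) construction mentioned above is small enough to verify by brute force. The Python sketch below (an illustration added here, not from the original article) builds M(G,2) for G = S3 using the product rules quoted above, checks the third Moufang identity for all triples, and confirms that the loop is not associative since S3 is non-abelian.

```python
from itertools import permutations, product

# G = S3, the permutations of {0,1,2}; composition (p*q)(i) = p(q(i)).
G = list(permutations(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))  # p^-1

# Elements of M(G,2): (g, 0) stands for g, (g, 1) stands for gu.
def mul(a, b):
    (g, s), (h, t) = a, b
    if s == 0 and t == 0:
        return (comp(g, h), 0)           # g h
    if s == 0 and t == 1:
        return (comp(h, g), 1)           # g(hu) = (hg)u
    if s == 1 and t == 0:
        return (comp(g, inv(h)), 1)      # (gu)h = (g h^-1)u
    return (comp(inv(h), g), 0)          # (gu)(hu) = h^-1 g

M = [(g, s) for g in G for s in (0, 1)]  # 12 elements

moufang = all(mul(mul(z, x), mul(y, z)) == mul(mul(z, mul(x, y)), z)
              for x, y, z in product(M, repeat=3))
associative = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                  for x, y, z in product(M, repeat=3))
print("Moufang identity holds:", moufang)   # True
print("associative:", associative)          # False
```

With only 12 elements, all 1728 triples can be checked directly, which also makes this a convenient test bed for the di-associativity property discussed above.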
[ { "math_id": 0, "text": "Q" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "z" }, { "math_id": 4, "text": "z(x(zy)) = ((zx)z)y" }, { "math_id": 5, "text": "x(z(yz)) = ((xz)y)z" }, { "math_id": 6, "text": "(zx)(yz) = (z(xy))z" }, { "math_id": 7, "text": "(zx)(yz) = z((xy)z)" }, { "math_id": 8, "text": "1\\cdot u=u" }, { "math_id": 9, "text": "(gu)h = (gh^{-1})u" }, { "math_id": 10, "text": "g(hu) = (hg)u" }, { "math_id": 11, "text": "(gu)(hu) = h^{-1}g." }, { "math_id": 12, "text": "u^2 = 1" }, { "math_id": 13, "text": "ug = g^{-1}u" }, { "math_id": 14, "text": "L_zL_xL_z(y) = L_{zxz}(y)" }, { "math_id": 15, "text": "R_zR_yR_z(x) = R_{zyz}(x)" }, { "math_id": 16, "text": "L_z(x)R_z(y) = B_z(xy)" }, { "math_id": 17, "text": "x,y,z" }, { "math_id": 18, "text": "B_z = L_zR_z = R_zL_z" }, { "math_id": 19, "text": "(L_z, R_z, B_z)" }, { "math_id": 20, "text": "x^{-1}(xy) = y = (yx)x^{-1}" }, { "math_id": 21, "text": "(xy)^{-1} = y^{-1}x^{-1}" }, { "math_id": 22, "text": "x(yz) = e" }, { "math_id": 23, "text": "(xy)z = e" }, { "math_id": 24, "text": "(xy)z = (xz^{-1})(zyz)" }, { "math_id": 25, "text": "x(yz) = (xyx)(x^{-1}z)." }, { "math_id": 26, "text": "x(yz)=(xy)z" }, { "math_id": 27, "text": "y(xz)=(yx)z" }, { "math_id": 28, "text": "y(zx)=(yz)x" }, { "math_id": 29, "text": "y,z" } ]
https://en.wikipedia.org/wiki?curid=755329
755400
Orthogonal basis
In mathematics, particularly linear algebra, an orthogonal basis for an inner product space formula_0 is a basis for formula_0 whose vectors are mutually orthogonal. If the vectors of an orthogonal basis are normalized, the resulting basis is an orthonormal basis. As coordinates. Any orthogonal basis can be used to define a system of orthogonal coordinates on formula_1 Orthogonal (not necessarily orthonormal) bases are important due to their appearance from curvilinear orthogonal coordinates in Euclidean spaces, as well as in Riemannian and pseudo-Riemannian manifolds. In functional analysis. In functional analysis, an orthogonal basis is any basis obtained from an orthonormal basis (or Hilbert basis) using multiplication by nonzero scalars. Extensions. Symmetric bilinear form. The concept of an orthogonal basis is applicable to a vector space formula_0 (over any field) equipped with a symmetric bilinear form ⟨·,·⟩, where "orthogonality" of two vectors formula_2 and formula_3 means ⟨"v", "w"⟩ = 0. For an orthogonal basis {"e""k"}: formula_4 where formula_5 is a quadratic form associated with formula_6 formula_7 (in an inner product space, "q"("v") = |"v"|2). Hence for an orthogonal basis {"e""k"}, formula_8 where formula_9 and formula_10 are components of formula_2 and formula_3 in the basis. Quadratic form. The concept of orthogonality may be extended to a vector space over any field of characteristic not 2 equipped with a quadratic form "q": when the characteristic of the underlying field is not 2, the associated symmetric bilinear form formula_11 allows vectors formula_2 and formula_3 to be defined as being orthogonal with respect to formula_5 when ⟨"v", "w"⟩ = 0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
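As a small numerical illustration of the component formula above (added here as an example; the basis and vectors are arbitrary values), the NumPy sketch below uses an orthogonal but non-orthonormal basis of R3 and checks that the ordinary dot product equals the weighted sum of products of components with weights q(e_k).

```python
import numpy as np

# An orthogonal (but not orthonormal) basis of R^3; rows are the basis vectors e_k.
E = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])

v = np.array([1.0, -2.0, 4.0])
w = np.array([0.5,  1.0, -1.0])

# Components of v and w in the basis: v = sum_k v_comp[k] * E[k].
v_comp = np.linalg.solve(E.T, v)
w_comp = np.linalg.solve(E.T, w)

q = np.array([e @ e for e in E])      # q(e_k) = <e_k, e_k>

lhs = v @ w                           # ordinary inner product
rhs = np.sum(q * v_comp * w_comp)     # sum_k q(e_k) v^k w^k
print(lhs, rhs)                       # both equal -5.5 for these values
```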
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "V." }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "\\langle e_j, e_k\\rangle =\n\\begin{cases}\nq(e_k) & j = k \\\\\n0 & j \\neq k,\n\\end{cases}" }, { "math_id": 5, "text": "q" }, { "math_id": 6, "text": "\\langle \\cdot, \\cdot \\rangle:" }, { "math_id": 7, "text": "q(v) = \\langle v, v \\rangle" }, { "math_id": 8, "text": "\\langle v, w \\rangle = \\sum_k q(e_k) v^k w^k," }, { "math_id": 9, "text": "v_k" }, { "math_id": 10, "text": "w_k" }, { "math_id": 11, "text": "\\langle v, w \\rangle = \\tfrac{1}{2}(q(v+w) - q(v) - q(w))" } ]
https://en.wikipedia.org/wiki?curid=755400