12695
Group representation
Group homomorphism into the general linear group over a vector space In the mathematical field of representation theory, group representations describe abstract groups in terms of bijective linear transformations of a vector space to itself (i.e. vector space automorphisms); in particular, they can be used to represent group elements as invertible matrices so that the group operation can be represented by matrix multiplication. In chemistry, a group representation can relate mathematical group elements to symmetric rotations and reflections of molecules. Representations of groups allow many group-theoretic problems to be reduced to problems in linear algebra. In physics, they describe how the symmetry group of a physical system affects the solutions of equations describing that system. The term "representation of a group" is also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means a homomorphism from the group to the automorphism group of an object. If the object is a vector space we have a "linear representation". Some people use "realization" for the general notion and reserve the term "representation" for the special case of linear representations. The bulk of this article describes linear representation theory; see the last section for generalizations. Branches of group representation theory. The representation theory of groups divides into subtheories depending on the kind of group being represented. The various theories are quite different in detail, though some basic definitions and concepts are similar. The most important divisions are: Representation theory also depends heavily on the type of vector space on which the group acts. One distinguishes between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (e.g. whether or not the space is a Hilbert space, Banach space, etc.). One must also consider the type of field over which the vector space is defined. The most important case is the field of complex numbers. The other important cases are the field of real numbers, finite fields, and fields of p-adic numbers. In general, algebraically closed fields are easier to handle than non-algebraically closed ones. The characteristic of the field is also significant; many theorems for finite groups depend on the characteristic of the field not dividing the order of the group. Definitions. A representation of a group "G" on a vector space "V" over a field "K" is a group homomorphism from "G" to GL("V"), the general linear group on "V". That is, a representation is a map formula_0 such that formula_1 Here "V" is called the representation space and the dimension of "V" is called the dimension or degree of the representation. It is common practice to refer to "V" itself as the representation when the homomorphism is clear from the context. In the case where "V" is of finite dimension "n" it is common to choose a basis for "V" and identify GL("V") with GL("n", "K"), the group of formula_2 invertible matrices on the field "K". The kernel of a representation ρ of a group "G" is defined as the normal subgroup of "G" whose image under ρ is the identity transformation: formula_3 A faithful representation is one in which the homomorphism "G" → GL("V") is injective; in other words, one whose kernel is the trivial subgroup {"e"} consisting only of the group's identity element. Given two "K"-vector spaces "V" and "W", two representations ρ : "G" → GL("V") and π : "G" → GL("W") are said to be equivalent or isomorphic if there exists a vector space isomorphism α : "V" → "W" so that for all "g" in "G", formula_4 Examples. Consider the complex number "u" = e2πi / 3 which has the property "u"3 = 1. The set "C"3 = {1, "u", "u"2} forms a cyclic group under multiplication.
This group has a representation ρ on formula_5 given by: formula_6 This representation is faithful because ρ is a one-to-one map. Another representation for "C"3 on formula_5, isomorphic to the previous one, is σ given by: formula_7 The group "C"3 may also be faithfully represented on formula_8 by τ given by: formula_9 where formula_10 Another example: Let formula_11 be the space of homogeneous degree-3 polynomials over the complex numbers in variables formula_12 Then formula_13 acts on formula_11 by permutation of the three variables. For instance, formula_14 sends formula_15 to formula_16. Reducibility. A subspace "W" of "V" that is invariant under the group action is called a "subrepresentation". If "V" has exactly two subrepresentations, namely the zero-dimensional subspace and "V" itself, then the representation is said to be irreducible; if it has a proper subrepresentation of nonzero dimension, the representation is said to be reducible. The representation of dimension zero is considered to be neither reducible nor irreducible, just as the number 1 is considered to be neither composite nor prime. Under the assumption that the characteristic of the field "K" does not divide the size of the group, representations of finite groups can be decomposed into a direct sum of irreducible subrepresentations (see Maschke's theorem). This holds in particular for any representation of a finite group over the complex numbers, since the characteristic of the complex numbers is zero, which never divides the size of a group. In the example above, the first two representations given (ρ and σ) are both decomposable into two 1-dimensional subrepresentations (given by span{(1,0)} and span{(0,1)}), while the third representation (τ) is irreducible. Generalizations. Set-theoretical representations. A "set-theoretic representation" (also known as a group action or "permutation representation") of a group "G" on a set "X" is given by a function ρ : "G" → "X""X", the set of functions from "X" to "X", such that for all "g"1, "g"2 in "G" and all "x" in "X": formula_17 formula_18 where formula_19 is the identity element of "G". This condition and the axioms for a group imply that ρ("g") is a bijection (or permutation) for all "g" in "G". Thus we may equivalently define a permutation representation to be a group homomorphism from G to the symmetric group S"X" of "X". For more information on this topic see the article on group action. Representations in other categories. Every group "G" can be viewed as a category with a single object; morphisms in this category are just the elements of "G". Given an arbitrary category "C", a "representation" of "G" in "C" is a functor from "G" to "C". Such a functor selects an object "X" in "C" and a group homomorphism from "G" to Aut("X"), the automorphism group of "X". In the case where "C" is Vect"K", the category of vector spaces over a field "K", this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation of "G" in the category of sets. When "C" is Ab, the category of abelian groups, the objects obtained are called "G"-modules. For another example consider the category of topological spaces, Top. Representations in Top are homomorphisms from "G" to the homeomorphism group of a topological space "X". Two types of representations closely related to linear representations are: Notes. <templatestyles src="Reflist/styles.css" />
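The C3 example above lends itself to a quick numerical check. The following is a minimal sketch, assuming NumPy is available; the function names rho and tau are illustrative stand-ins for the article's ρ and τ. It verifies the homomorphism property for both representations and exhibits the invariant coordinate axes that make ρ decomposable:

import numpy as np

u = np.exp(2j * np.pi / 3)   # primitive cube root of unity, u**3 == 1

def rho(k):
    # the diagonal (hence reducible) representation of u**k on C^2
    return np.diag([1, u**k])

def tau(k):
    # the real representation of u**k on R^2: rotation by k * 120 degrees
    c, s = np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3)
    return np.array([[c, -s], [s, c]])

for rep in (rho, tau):
    for j in range(3):
        for k in range(3):
            # the group law u**j * u**k = u**((j + k) mod 3) must be respected
            assert np.allclose(rep(j) @ rep(k), rep((j + k) % 3))

# rho maps span{(1,0)} and span{(0,1)} to themselves, so it decomposes
# into two 1-dimensional subrepresentations, as noted above.
assert np.allclose(rho(1) @ np.array([1, 0]), np.array([1, 0]))
print("homomorphism property and invariant subspaces verified")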
[ { "math_id": 0, "text": "\\rho \\colon G \\to \\mathrm{GL}\\left(V \\right)" }, { "math_id": 1, "text": "\\rho(g_1 g_2) = \\rho(g_1) \\rho(g_2) , \\qquad \\text{for all }g_1,g_2 \\in G." }, { "math_id": 2, "text": "n \\times n" }, { "math_id": 3, "text": "\\ker \\rho = \\left\\{g \\in G \\mid \\rho(g) = \\mathrm{id}\\right\\}." }, { "math_id": 4, "text": "\\alpha \\circ \\rho(g) \\circ \\alpha^{-1} = \\pi(g)." }, { "math_id": 5, "text": "\\mathbb{C}^2" }, { "math_id": 6, "text": "\n\\rho \\left( 1 \\right) =\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n\\end{bmatrix}\n\\qquad\n\\rho \\left( u \\right) =\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & u \\\\\n\\end{bmatrix}\n\\qquad\n\\rho \\left( u^2 \\right) =\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & u^2 \\\\\n\\end{bmatrix}.\n" }, { "math_id": 7, "text": "\n\\sigma \\left( 1 \\right) =\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n\\end{bmatrix}\n\\qquad\n\\sigma \\left( u \\right) =\n\\begin{bmatrix}\nu & 0 \\\\\n0 & 1 \\\\\n\\end{bmatrix}\n\\qquad\n\\sigma \\left( u^2 \\right) =\n\\begin{bmatrix}\nu^2 & 0 \\\\\n0 & 1 \\\\\n\\end{bmatrix}.\n" }, { "math_id": 8, "text": "\\mathbb{R}^2" }, { "math_id": 9, "text": "\n\\tau \\left( 1 \\right) =\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & 1 \\\\\n\\end{bmatrix}\n\\qquad\n\\tau \\left( u \\right) =\n\\begin{bmatrix}\na & -b \\\\\nb & a \\\\\n\\end{bmatrix}\n\\qquad\n\\tau \\left( u^2 \\right) =\n\\begin{bmatrix}\na & b \\\\\n-b & a \\\\\n\\end{bmatrix}\n" }, { "math_id": 10, "text": "a=\\text{Re}(u)=-\\tfrac{1}{2}, \\qquad b=\\text{Im}(u)=\\tfrac{\\sqrt{3}}{2}." }, { "math_id": 11, "text": "V" }, { "math_id": 12, "text": "x_1, x_2, x_3. " }, { "math_id": 13, "text": "S_3" }, { "math_id": 14, "text": "(12)" }, { "math_id": 15, "text": "x_{1}^3" }, { "math_id": 16, "text": "x_{2}^3" }, { "math_id": 17, "text": "\\rho(1)[x] = x" }, { "math_id": 18, "text": "\\rho(g_1 g_2)[x]=\\rho(g_1)[\\rho(g_2)[x]]," }, { "math_id": 19, "text": "1" } ]
https://en.wikipedia.org/wiki?curid=12695
1269900
Legendre chi function
Mathematical Function In mathematics, the Legendre chi function is a special function whose Taylor series is also a Dirichlet series, given by formula_0 As such, it resembles the Dirichlet series for the polylogarithm, and, indeed, is trivially expressible in terms of the polylogarithm as formula_1 The Legendre chi function appears as the discrete Fourier transform, with respect to the order ν, of the Hurwitz zeta function, and also of the Euler polynomials, with the explicit relationships given in those articles. The Legendre chi function is a special case of the Lerch transcendent, and is given by formula_2 Identities. formula_3 formula_4 Integral relations. formula_5 formula_6 formula_7 formula_8
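The three expressions above (the defining series, the polylogarithm combination, and the Lerch transcendent) can be compared numerically. A minimal sketch, assuming the mpmath library; the helper name chi_series is illustrative only:

from mpmath import mp, mpf, polylog, lerchphi

mp.dps = 30   # working precision in decimal digits

def chi_series(nu, z, terms=200):
    # truncated Taylor/Dirichlet series: sum of z^(2k+1) / (2k+1)^nu
    return sum(z**(2*k + 1) / mpf(2*k + 1)**nu for k in range(terms))

z, nu = mpf("0.5"), 2
via_series  = chi_series(nu, z)
via_polylog = (polylog(nu, z) - polylog(nu, -z)) / 2
via_lerch   = 2**(-nu) * z * lerchphi(z**2, nu, mpf("0.5"))

# for |z| < 1 all three agree to working precision
print(via_series, via_polylog, via_lerch)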
[ { "math_id": 0, "text": "\\chi_\\nu(z) = \\sum_{k=0}^\\infty \\frac{z^{2k+1}}{(2k+1)^\\nu}." }, { "math_id": 1, "text": "\\chi_\\nu(z) = \\frac{1}{2}\\left[\\operatorname{Li}_\\nu(z) - \\operatorname{Li}_\\nu(-z)\\right]." }, { "math_id": 2, "text": "\\chi_\\nu(z)=2^{-\\nu}z\\,\\Phi (z^2,\\nu,1/2)." }, { "math_id": 3, "text": "\\chi_2(x) + \\chi_2(1/x)= \\frac{\\pi^2}{4}-\\frac{i \\pi}{2}\\ln |x| ." }, { "math_id": 4, "text": "\\frac{d}{dx}\\chi_2(x) = \\frac{{\\rm arctanh\\,} x}{x}." }, { "math_id": 5, "text": "\\int_0^{\\pi/2} \\arcsin (r \\sin \\theta) d\\theta \n= \\chi_2\\left(r\\right)" }, { "math_id": 6, "text": "\\int_0^{\\pi/2} \\arctan (r \\sin \\theta) d\\theta \n= -\\frac{1}{2}\\int_0^{\\pi} \\frac{ r \\theta \\cos \\theta}{1+ r^2 \\sin^2 \\theta} d\\theta \n= 2 \\chi_2\\left(\\frac{\\sqrt{1+r^2}- 1}{r}\\right)" }, { "math_id": 7, "text": "\\int_0^{\\pi/2} \\arctan (p \\sin \\theta)\\arctan (q \\sin \\theta) d\\theta = \\pi \\chi_2\\left(\\frac{\\sqrt{1+p^2}- 1}{p}\\cdot\\frac{\\sqrt{1+q^2}- 1}{q}\\right)" }, { "math_id": 8, "text": "\\int_0^{\\alpha}\\int_0^{\\beta} \\frac{dx dy}{1-x^2 y^2} = \\chi_2(\\alpha\\beta)\\qquad {\\rm if}~~|\\alpha\\beta|\\leq 1" } ]
https://en.wikipedia.org/wiki?curid=1269900
12700262
Toroidal and poloidal coordinates
Coordinate system relative to a torus The terms toroidal and poloidal refer to directions relative to a torus of reference. They describe a three-dimensional coordinate system in which the poloidal direction follows a small circular ring around the surface, while the toroidal direction follows a large circular ring around the torus, encircling the central void. The earliest use of these terms cited by the Oxford English Dictionary is by Walter M. Elsasser (1946) in the context of the generation of the Earth's magnetic field by currents in the core, with "toroidal" being parallel to lines of constant latitude and "poloidal" being in the direction of the magnetic field (i.e. towards the poles). The OED also records the later usage of these terms in the context of toroidally confined plasmas, as encountered in magnetic confinement fusion. In the plasma context, the toroidal direction is the long way around the torus, the corresponding coordinate being denoted by z in the slab approximation or ζ or φ in magnetic coordinates; the poloidal direction is the short way around the torus, the corresponding coordinate being denoted by y in the slab approximation or θ in magnetic coordinates. (The third direction, normal to the magnetic surfaces, is often called the "radial direction", denoted by x in the slab approximation and variously ψ, χ, r, ρ, or s in magnetic coordinates.) Example. As a simple example from the physics of magnetically confined plasmas, consider an axisymmetric system with circular, concentric magnetic flux surfaces of radius formula_0 (a crude approximation to the magnetic field geometry in an early tokamak but topologically equivalent to any toroidal magnetic confinement system with nested flux surfaces) and denote the toroidal angle by formula_1 and the poloidal angle by formula_2. Then the toroidal/poloidal coordinate system relates to standard Cartesian coordinates by these transformation rules: formula_3 formula_4 formula_5 where formula_6. The natural choice geometrically is to take formula_7, giving the toroidal and poloidal directions shown by the arrows in the figure above, but this makes formula_8 a left-handed curvilinear coordinate system. As it is usually assumed in setting up "flux coordinates" for describing magnetically confined plasmas that the set formula_8 forms a "right"-handed coordinate system, formula_9, we must either reverse the poloidal direction by taking formula_10, or reverse the toroidal direction by taking formula_11. Both choices are used in the literature. Kinematics. To study single particle motion in toroidally confined plasma devices, velocity and acceleration vectors must be known. Considering the natural choice formula_7, the unit vectors of toroidal and poloidal coordinates system formula_12 can be expressed as: formula_13 according to Cartesian coordinates. The position vector is expressed as: formula_14 The velocity vector is then given by: formula_15 and the acceleration vector is: formula_16
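The kinematics above can be spot-checked symbolically. The following sketch assumes SymPy and the natural sign choice s_theta = s_zeta = +1: it differentiates the Cartesian position of a particle at (r(t), θ(t), ζ(t)) and projects the result onto the moving unit vectors, recovering the velocity components quoted above.

import sympy as sp

t = sp.symbols("t")
R0 = sp.symbols("R_0", positive=True)
r, th, ze = (sp.Function(name)(t) for name in ("r", "theta", "zeta"))

# Cartesian position from the transformation rules (s_theta = s_zeta = +1)
pos = sp.Matrix([(R0 + r * sp.cos(th)) * sp.cos(ze),
                 (R0 + r * sp.cos(th)) * sp.sin(ze),
                 r * sp.sin(th)])

# unit vectors of the (r, theta, zeta) system, as given above
e_r  = sp.Matrix([sp.cos(th) * sp.cos(ze), sp.cos(th) * sp.sin(ze), sp.sin(th)])
e_th = sp.Matrix([-sp.sin(th) * sp.cos(ze), -sp.sin(th) * sp.sin(ze), sp.cos(th)])
e_ze = sp.Matrix([-sp.sin(ze), sp.cos(ze), 0])

vel = pos.diff(t)
components = [sp.simplify(vel.dot(e)) for e in (e_r, e_th, e_ze)]
print(components)   # expected: [r', r*theta', (R_0 + r*cos(theta))*zeta']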
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "\\zeta" }, { "math_id": 2, "text": "\\theta" }, { "math_id": 3, "text": " x = (R_0 +r \\cos \\theta) \\cos\\zeta " }, { "math_id": 4, "text": " y = s_\\zeta (R_0 + r \\cos \\theta) \\sin\\zeta " }, { "math_id": 5, "text": " z = s_\\theta r \\sin \\theta. " }, { "math_id": 6, "text": "s_\\theta = \\pm 1, s_\\zeta = \\pm 1" }, { "math_id": 7, "text": "s_\\theta = s_\\zeta = +1" }, { "math_id": 8, "text": "r,\\theta,\\zeta" }, { "math_id": 9, "text": " \\nabla r\\cdot\\nabla\\theta\\times\\nabla\\zeta > 0" }, { "math_id": 10, "text": "s_\\theta = -1, s_\\zeta = +1" }, { "math_id": 11, "text": "s_\\theta = +1, s_\\zeta = -1" }, { "math_id": 12, "text": "\\left(r,\\theta,\\zeta\\right)" }, { "math_id": 13, "text": "\\mathbf{e}_r = \\begin{pmatrix}\n \\cos\\theta \\cos\\zeta \\\\\n \\cos\\theta \\sin\\zeta \\\\\n \\sin\\theta\n\\end{pmatrix} \\quad\n\\mathbf{e}_\\theta = \\begin{pmatrix}\n -\\sin\\theta \\cos\\zeta \\\\\n -\\sin\\theta \\sin\\zeta \\\\\n \\cos\\theta\n\\end{pmatrix} \\quad\n\\mathbf{e}_\\zeta = \\begin{pmatrix}\n -\\sin\\zeta \\\\\n \\cos\\zeta \\\\\n 0\n\\end{pmatrix}" }, { "math_id": 14, "text": " \\mathbf{r} = \\left( r + R_0 \\cos\\theta \\right) \\mathbf{e}_r - R_0 \\sin\\theta \\mathbf{e}_\\theta " }, { "math_id": 15, "text": " \\mathbf{\\dot{r}} = \\dot{r} \\mathbf{e}_r + r\\dot{\\theta} \\mathbf{e}_\\theta + \\dot{\\zeta} \\left( R_0 + r \\cos\\theta \\right) \\mathbf{e}_\\zeta " }, { "math_id": 16, "text": "\n\\begin{align}\n\\mathbf{\\ddot{r}} = {} & \\left( \\ddot{r} - r \\dot{\\theta}^2 - r \\dot{\\zeta}^2 \\cos^2\\theta - R_0 \\dot{\\zeta}^2 \\cos\\theta \\right) \\mathbf{e}_r \\\\[5pt]\n& {} + \\left( 2\\dot{r}\\dot{\\theta} + r\\ddot{\\theta} + r\\dot{\\zeta}^2\\cos\\theta\\sin\\theta + R_0 \\dot{\\zeta}^2 \\sin\\theta \\right) \\mathbf{e}_\\theta \\\\[5pt]\n& {} + \\left( 2 \\dot{r}\\dot{\\zeta}\\cos\\theta - 2 r \\dot{\\theta}\\dot{\\zeta} \\sin\\theta + \\ddot{\\zeta} \\left( R_0 + r\\cos\\theta \\right) \\right) \\mathbf{e}_\\zeta\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=12700262
1270068
Pierre-Louis Lions
French mathematician (born 1956) Pierre-Louis Lions (; born 11 August 1956) is a French mathematician. He is known for a number of contributions to the fields of partial differential equations and the calculus of variations. He was a recipient of the 1994 Fields Medal and the 1991 Prize of the Philip Morris tobacco and cigarette company. Biography. Lions entered the École normale supérieure in 1975, and received his doctorate from the University of Pierre and Marie Curie in 1979. He holds the position of Professor of "Partial differential equations and their applications" at the Collège de France in Paris as well as a position at École Polytechnique. Since 2014, he has also been a visiting professor at the University of Chicago. In 1979, Lions married Lila Laurenti, with whom he has one son. Lions' parents were Andrée Olivier and the renowned mathematician Jacques-Louis Lions, at the time a professor at the University of Nancy, and from 1991 through 1994 the President of the International Mathematical Union. Awards and honors. In 1994, while working at the Paris Dauphine University, Lions received the International Mathematical Union's prestigious Fields Medal. He was cited for his contributions to viscosity solutions, the Boltzmann equation, and the calculus of variations. He has also received the French Academy of Science's Prix Paul Doistau–Émile Blutet (in 1986) and Ampère Prize (in 1992). He was an invited professor at the Conservatoire national des arts et métiers (2000). He is a doctor honoris causa of Heriot-Watt University (Edinburgh), EPFL (2010), Narvik University College (2014), and of the City University of Hong-Kong and is listed as an ISI highly cited researcher. Mathematical work. Operator theory. Lions' earliest work dealt with the functional analysis of Hilbert spaces. His first published article, in 1977, was a contribution to the vast literature on convergence of certain iterative algorithms to fixed points of a given nonexpansive self-map of a closed convex subset of Hilbert space.[L77] In collaboration with his thesis advisor Haïm Brézis, Lions gave new results about maximal monotone operators in Hilbert space, proving one of the first convergence results for Bernard Martinet and R. Tyrrell Rockafellar's proximal point algorithm.[BL78] In the time since, there have been a large number of modifications and improvements of such results. With Bertrand Mercier, Lions proposed a "forward-backward splitting algorithm" for finding a zero of the sum of two maximal monotone operators.[LM79] Their algorithm can be viewed as an abstract version of the well-known Douglas−Rachford and Peaceman−Rachford numerical algorithms for computation of solutions to parabolic partial differential equations. The Lions−Mercier algorithms and their proof of convergence have been particularly influential in the literature on operator theory and its applications to numerical analysis. A similar method was studied at the same time by Gregory Passty. Calculus of variations. The mathematical study of the steady-state Schrödinger–Newton equation, also called the "Choquard equation", was initiated in a seminal article of Elliott Lieb. It is inspired by plasma physics via a standard approximation technique in quantum chemistry. 
Lions showed that one could apply standard methods such as the mountain pass theorem, together with some technical work of Walter Strauss, in order to show that a generalized steady-state Schrödinger–Newton equation with a radially symmetric generalization of the gravitational potential is necessarily solvable by a radially symmetric function.[L80] The partial differential equation formula_0 has received a great deal of attention in the mathematical literature. Lions' extensive work on this equation is concerned with the existence of rotationally symmetric solutions as well as estimates and existence for boundary value problems of various type.[L82a] In the interest of studying solutions on all of Euclidean space, where standard compactness theory does not apply, Lions established a number of compactness results for functions with symmetry.[L82b] With Henri Berestycki and Lambertus Peletier, Lions used standard ODE shooting methods to directly study the existence of rotationally symmetric solutions.[BLP81] However, sharper results were obtained two years later by Berestycki and Lions by variational methods. They considered the solutions of the equation as rescalings of minima of a constrained optimization problem, based upon a modified Dirichlet energy. By making use of the Schwarz symmetrization, they obtained a minimizing sequence for the infimization problem consisting of positive and rotationally symmetric functions, and so were able to show that there is a minimum which is also rotationally symmetric and nonnegative.[BL83a] By adapting the critical point methods of Felix Browder, Paul Rabinowitz, and others, Berestycki and Lions also demonstrated the existence of infinitely many (not always positive) radially symmetric solutions to the PDE.[BL83b] Maria Esteban and Lions investigated the nonexistence of solutions in a number of unbounded domains with Dirichlet boundary data.[EL82] Their basic tool is a Pohozaev-type identity, as previously reworked by Berestycki and Lions.[BL83a] They showed that such identities can be effectively used with Nachman Aronszajn's unique continuation theorem to obtain the triviality of solutions under some general conditions. Significant "a priori" estimates for solutions were found by Lions in collaboration with Djairo Guedes de Figueiredo and Roger Nussbaum.[FLN82] In more general settings, Lions introduced the "concentration-compactness principle", which characterizes when minimizing sequences of functionals may fail to subsequentially converge.
His first work dealt with the case of translation-invariance, with applications to several problems of applied mathematics, including the Choquard equation.[L84a] He was also able to extend parts of his work with Berestycki to settings without any rotational symmetry.[L84b] By making use of Abbas Bahri's topological methods and min-max theory, Bahri and Lions were able to establish multiplicity results for these problems.[BL88] Lions also considered the problem of dilation invariance, with natural applications to optimizing functions for dilation-invariant functional inequalities such as the Sobolev inequality.[L85a] He was able to apply his methods to give a new perspective on previous works on geometric problems such as the Yamabe problem and harmonic maps.[L85b] With Thierry Cazenave, Lions applied his concentration-compactness results to establish orbital stability of certain symmetric solutions of nonlinear Schrödinger equations which admit variational interpretations and energy-conserving solutions.[CL82] Transport and Boltzmann equations. In 1988, François Golse, Lions, Benoît Perthame, and Rémi Sentis studied the transport equation, which is a first-order linear partial differential equation.[GLPS88] They showed that if the first-order coefficients are randomly chosen according to some probability distribution, then the corresponding function values are distributed with regularity which is enhanced from the original probability distribution. These results were later extended by DiPerna, Lions, and Meyer.[DLM91] In the physical sense, such results, known as "velocity-averaging lemmas", correspond to the fact that macroscopic observables have greater smoothness than their microscopic rules directly indicate. According to Cédric Villani, it is unknown if it is possible to instead use the explicit representation of solutions of the transport equation to derive these properties. The classical Picard–Lindelöf theorem deals with integral curves of Lipschitz-continuous vector fields. By viewing integral curves as characteristic curves for a transport equation in multiple dimensions, Lions and Ronald DiPerna initiated the broader study of integral curves of Sobolev vector fields.[DL89a] DiPerna and Lions' results on the transport equation were later extended by Luigi Ambrosio to the setting of bounded variation, and by Alessio Figalli to the context of stochastic processes. DiPerna and Lions were able to prove the global existence of solutions to the Boltzmann equation.[DL89b] Later, by applying the methods of Fourier integral operators, Lions established estimates for the Boltzmann collision operator, thereby finding compactness results for solutions of the Boltzmann equation.[L94] As a particular application of his compactness theory, he was able to show that solutions subsequentially converge at infinite time to Maxwell distributions. DiPerna and Lions also established a similar result for the Maxwell−Vlasov equations.[DL89c] Viscosity solutions. Michael Crandall and Lions introduced the notion of viscosity solution, which is a kind of generalized solution of Hamilton–Jacobi equations. 
Their definition is significant since they were able to establish a well-posedness theory in such a generalized context.[CL83] The basic theory of viscosity solutions was further worked out in collaboration with Lawrence Evans.[CEL84] Using a min-max quantity, Lions and Jean-Michel Lasry considered mollification of functions on Hilbert space which preserve analytic phenomena.[LL86] Their approximations are naturally applicable to Hamilton-Jacobi equations, by regularizing sub- or super-solutions. Using such techniques, Crandall and Lions extended their analysis of Hamilton-Jacobi equations to the infinite-dimensional case, proving a comparison principle and a corresponding uniqueness theorem.[CL85] Crandall and Lions investigated the numerical analysis of their viscosity solutions, proving convergence results both for a finite difference scheme and artificial viscosity.[CL84] The comparison principle underlying Crandall and Lions' notion of viscosity solution makes their definition naturally applicable to second-order elliptic partial differential equations, given the maximum principle.[IL90] Crandall, Ishii, and Lions' survey article on viscosity solutions for such equations has become a standard reference work.[CIL92] Mean field games. With Jean-Michel Lasry, Lions has contributed to the development of mean-field game theory.[LL07] Major publications. <templatestyles src="Refbegin/styles.css" /> Articles. <templatestyles src="Refbegin/styles.css" />
Textbooks. <templatestyles src="Refbegin/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
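The forward-backward splitting scheme mentioned in the operator-theory section above can be illustrated on a small convex problem. The sketch below assumes NumPy and uses invented problem data: the smooth term f(x) = 0.5*||Ax - b||^2 is handled by an explicit gradient ("forward") step and the nonsmooth term lam*||x||_1 by its proximal map, soft-thresholding (the "backward", resolvent step). This is a toy instance of the general idea, not Lions and Mercier's abstract monotone-operator formulation.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.5
step = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size at most 1/L, with L the Lipschitz constant of grad f

def soft_threshold(v, tau):
    # proximal map of tau * ||.||_1, i.e. the backward (implicit) step
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(20)
for _ in range(500):
    grad = A.T @ (A @ x - b)                            # forward step on the smooth part
    x = soft_threshold(x - step * grad, step * lam)     # backward step on the nonsmooth part

print("objective:", 0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.abs(x).sum())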
[ { "math_id": 0, "text": "\\frac{\\partial^2u}{\\partial x_1^2}+\\cdots+\\frac{\\partial^2u}{\\partial x_n^2}=f(u)" } ]
https://en.wikipedia.org/wiki?curid=1270068
1270246
Federated database system
A federated database system (FDBS) is a type of meta-database management system (DBMS), which transparently maps multiple autonomous database systems into a single federated database. The constituent databases are interconnected via a computer network and may be geographically decentralized. Since the constituent database systems remain autonomous, a federated database system is a contrastable alternative to the (sometimes daunting) task of merging several disparate databases. A federated database, or virtual database, is a composite of all constituent databases in a federated database system. There is no actual data integration in the constituent disparate databases as a result of data federation. Through data abstraction, federated database systems can provide a uniform user interface, enabling users and clients to store and retrieve data from multiple noncontiguous databases with a single query—even if the constituent databases are heterogeneous. To this end, a federated database system must be able to decompose the query into subqueries for submission to the relevant constituent DBMSs, after which the system must composite the result sets of the subqueries. Because various database management systems employ different query languages, federated database systems can apply wrappers to the subqueries to translate them into the appropriate query languages. Definition. McLeod and Heimbigner were among the first to define a federated database system in the mid-1980s. A FDBS is one which "define[s] the architecture and interconnect[s] databases that minimize central authority yet support partial sharing and coordination among database systems". This description might not accurately reflect the McLeod/Heimbigner definition of a federated database. Rather, this description fits what McLeod/Heimbigner called a "composite" database. McLeod/Heimbigner's federated database is a collection of autonomous components that make their data available to other members of the federation through the publication of an export schema and access operations; there is no unified, central schema that encompasses the information available from the members of the federation. Among other surveys, practitioners define a Federated Database as a collection of cooperating component systems which are autonomous and are possibly heterogeneous. The three important components of an FDBS are autonomy, heterogeneity and distribution. Other dimensions that have also been considered are the networking environment (e.g., many DBSs over a LAN or many DBSs over a WAN) and the update-related functions of participating DBSs (e.g., no updates, nonatomic transitions, atomic updates). FDBS architecture. A DBMS can be classified as either centralized or distributed. A centralized system manages a single database while a distributed system manages multiple databases. A component DBS in a DBMS may be centralized or distributed. A multiple DBS (MDBS) can be classified into two types depending on the autonomy of the component DBS as federated and non federated. A nonfederated database system is an integration of component DBMS that are not autonomous. A federated database system consists of component DBS that are autonomous yet participate in a federation to allow partial and controlled sharing of their data. Federated architectures differ based on levels of integration with the component database systems and the extent of services offered by the federation. A FDBS can be categorized as loosely or tightly coupled systems.
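The query decomposition and wrapper idea described above can be illustrated with two small component databases. This is a toy sketch using Python's built-in sqlite3 module; the table and column names are invented for the example, and a real FDBS would also translate between genuinely different query languages, which sqlite-to-sqlite does not exercise.

import sqlite3

# two autonomous component databases with differently named local schemas
eu = sqlite3.connect(":memory:")
eu.execute("CREATE TABLE clients (cid INTEGER, name TEXT)")
eu.executemany("INSERT INTO clients VALUES (?, ?)", [(1, "Anna"), (2, "Bram")])

us = sqlite3.connect(":memory:")
us.execute("CREATE TABLE customers (id INTEGER, full_name TEXT)")
us.executemany("INSERT INTO customers VALUES (?, ?)", [(7, "Carol")])

# wrappers translate the federated schema (customer_id, customer_name)
# into each component database's local schema
wrappers = {
    eu: "SELECT cid AS customer_id, name AS customer_name FROM clients",
    us: "SELECT id AS customer_id, full_name AS customer_name FROM customers",
}

def federated_all_customers():
    # decompose the logical query into one subquery per component DB,
    # then composite the result sets
    rows = []
    for conn, subquery in wrappers.items():
        rows.extend(conn.execute(subquery).fetchall())
    return rows

print(federated_all_customers())   # rows from both databases in one uniform shape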
Multiple DBS of which FDBS are a specific type can be characterized along three dimensions: Distribution, Heterogeneity and Autonomy. Another characterization could be based on the dimension of networking, for example single databases or multiple databases in a LAN or WAN. Distribution. Distribution of data in an FDBS is due to the existence of a multiple DBS before an FDBS is built. Data can be distributed among multiple databases which could be stored in a single computer or multiple computers. These computers could be geographically located in different places but interconnected by a network. The benefits of data distribution include increased availability and reliability as well as improved access times. Heterogeneity. Heterogeneities in databases arise due to factors such as differences in structures, semantics of data, the constraints supported or query language. Differences in structure occur when two data models provide different primitives such as object oriented (OO) models that support specialization and inheritance and relational models that do not. Differences due to constraints occur when two models support two different constraints. For example, the set type in CODASYL schema may be partially modeled as a referential integrity constraint in a relational schema. CODASYL supports insertion and retention that are not captured by referential integrity alone. The query language supported by one DBMS can also contribute to heterogeneity between other component DBMSs. For example, differences in query languages with the same data models or different versions of query languages could contribute to heterogeneity. Semantic heterogeneities arise when there is a disagreement about meaning, interpretation or intended use of data. At the schema and data level, classification of possible heterogeneities include: In creating a federated schema, one has to resolve such heterogeneities before integrating the component DB schemas. Schema matching, schema mapping. Dealing with incompatible data types or query syntax is not the only obstacle to a concrete implementation of an FDBS. In systems that are not planned top-down, a generic problem lies in matching semantically equivalent, but differently named parts from different schemas (=data models) (tables, attributes). A pairwise mapping between "n" attributes would result in formula_0 mapping rules (given equivalence mappings) - a number that quickly gets too large for practical purposes. A common way out is to provide a global schema that comprises the relevant parts of all member schemas and provide mappings in the form of database views. Two principal approaches depend on the direction of the mapping: global-as-view (GAV), in which the global schema is defined in terms of the underlying source schemas, and local-as-view (LAV), in which each source schema is defined as a view over the global schema. Both are examples of data integration, called the schema matching problem. Autonomy. Fundamental to the difference between an MDBS and an FDBS is the concept of autonomy. It is important to understand the aspects of autonomy for component databases and how they can be addressed when a component DBS participates in an FDBS. There are four kinds of autonomies addressed: design autonomy, communication autonomy, execution autonomy, and association autonomy. Heterogeneities in an FDBS are primarily due to design autonomy. The ANSI/X3/SPARC Study Group outlined a three level data description architecture, the components of which are the conceptual schema, internal schema and external schema of databases. The three level architecture is however inadequate for describing the architectures of an FDBS. It was therefore extended to support the three dimensions of the FDBS namely Distribution, Autonomy and Heterogeneity.
The five level schema architecture is explained below. Concurrency control. The "Heterogeneity" and "Autonomy" requirements pose special challenges concerning concurrency control in an FDBS, which is crucial for the correct execution of its concurrent transactions (see also Global concurrency control). Achieving global serializability, the major correctness criterion, under these requirements has been characterized as very difficult and unsolved. Five level schema architecture for FDBSs. The five level schema architecture includes the following: the local schema (the conceptual schema of a component DBS, expressed in its native data model), the component schema (the local schema translated into a common data model), the export schema (the subset of a component schema that is made available to the federation), the federated schema (the integration of multiple export schemas), and the external schema (a view of the federated schema tailored to a particular user or application). While accurately representing the state of the art in data integration, the Five Level Schema Architecture above does suffer from a major drawback, namely an IT-imposed look and feel. Modern data users demand control over how data is presented; their needs are somewhat in conflict with such bottom-up approaches to data integration.
[ { "math_id": 0, "text": "n (n-1) \\over 2" } ]
https://en.wikipedia.org/wiki?curid=1270246
1270458
Eisenstein series
Series representing modular forms Eisenstein series, named after German mathematician Gotthold Eisenstein, are particular modular forms with infinite series expansions that may be written down directly. Originally defined for the modular group, Eisenstein series can be generalized in the theory of automorphic forms. Eisenstein series for the modular group. Let τ be a complex number with strictly positive imaginary part. Define the holomorphic Eisenstein series "G"2"k"("τ") of weight 2"k", where "k" ≥ 2 is an integer, by the following series: formula_0 This series absolutely converges to a holomorphic function of τ in the upper half-plane and its Fourier expansion given below shows that it extends to a holomorphic function at "τ" = "i"∞. It is a remarkable fact that the Eisenstein series is a modular form. Indeed, the key property is its SL(2, formula_1)-covariance. Explicitly if "a", "b", "c", "d" ∈ formula_1 and "ad" − "bc" = 1 then formula_2 <templatestyles src="Template:Hidden begin/styles.css"/>(Proof) formula_3 If "ad" − "bc" = 1 then formula_4 so that formula_5 is a bijection formula_12 → formula_12, i.e.: formula_6 Overall, if "ad" − "bc" = 1 then formula_7 and "G"2"k" is therefore a modular form of weight 2"k". Note that it is important to assume that "k" ≥ 2, otherwise it would be illegitimate to change the order of summation, and the SL(2, formula_1)-invariance would not hold. In fact, there are no nontrivial modular forms of weight 2. Nevertheless, an analogue of the holomorphic Eisenstein series can be defined even for "k" = 1, although it would only be a quasimodular form. Note that "k" ≥ 2 is necessary for the series to converge absolutely, whereas "k" needs to be even, since otherwise the sum vanishes because the (-"m", -"n") and ("m", "n") terms cancel out. For "k" = 2 the series converges but it is not a modular form. Relation to modular invariants. The modular invariants "g"2 and "g"3 of an elliptic curve are given by the first two Eisenstein series: formula_8 The article on modular invariants provides expressions for these two functions in terms of theta functions. Recurrence relation. Any holomorphic modular form for the modular group can be written as a polynomial in "G"4 and "G"6. Specifically, the higher order "G"2"k" can be written in terms of "G"4 and "G"6 through a recurrence relation. Let "dk" = (2"k" + 3)"k"! "G"2"k" + 4, so for example, "d"0 = 3"G"4 and "d"1 = 5"G"6. Then the dk satisfy the relation formula_9 for all "n" ≥ 0. Here, formula_10 is the binomial coefficient. The "d""k" occur in the series expansion for Weierstrass's elliptic functions: formula_11 Fourier series. Define "q" = "e"2π"iτ". (Some older books define q to be the nome "q" = "e"π"iτ", but "q" = "e"2π"iτ" is now standard in number theory.) Then the Fourier series of the Eisenstein series is formula_12 where the coefficients "c"2"k" are given by formula_13 Here, "B""n" are the Bernoulli numbers, "ζ"("z") is Riemann's zeta function and "σ""p"("n") is the divisor sum function, the sum of the pth powers of the divisors of n. In particular, one has formula_14 The summation over q can be resummed as a Lambert series; that is, one has formula_15 for arbitrary complex "q" with |"q"| < 1 and "a". When working with the q-expansion of the Eisenstein series, this alternate notation is frequently introduced: formula_16 Identities involving Eisenstein series. As theta functions.
Given "q" = "e"2π"iτ", let formula_17 and define the Jacobi theta functions, which normally use the nome "e"π"iτ", formula_18 where "θm" and "ϑij" are alternative notations. Then we have the symmetric relations, formula_19 Basic algebra immediately implies formula_20 an expression related to the modular discriminant, formula_21 The third symmetric relation, on the other hand, is a consequence of "E"8 = "E"42 (that is, "E"4 squared) and "a"4 − "b"4 + "c"4 = 0. Products of Eisenstein series. Eisenstein series form the most explicit examples of modular forms for the full modular group SL(2, formula_1). Since the space of modular forms of weight 2"k" has dimension 1 for 2"k" = 4, 6, 8, 10, 14, different products of Eisenstein series having those weights have to be equal up to a scalar multiple. In fact, we obtain the identities: formula_22 Using the q-expansions of the Eisenstein series given above, they may be restated as identities involving the sums of powers of divisors: formula_23 hence formula_24 and similarly for the others. The theta function of an eight-dimensional even unimodular lattice Γ is a modular form of weight 4 for the full modular group, which gives the following identities: formula_25 for the number "r"Γ("n") of vectors of the squared length 2"n" in the root lattice of the type "E"8. Similar techniques involving holomorphic Eisenstein series twisted by a Dirichlet character produce formulas for the number of representations of a positive integer "n" as a sum of two, four, or eight squares in terms of the divisors of "n". Using the above recurrence relation, all higher "E"2"k" can be expressed as polynomials in "E"4 and "E"6. For example: formula_26 Many relationships between products of Eisenstein series can be written in an elegant way using Hankel determinants, e.g. Garvan's identity formula_27 where formula_28 is the modular discriminant. Ramanujan identities. Srinivasa Ramanujan gave several interesting identities between the first few Eisenstein series involving differentiation. Let formula_29 then formula_30 These identities, like the identities between the series, yield arithmetical convolution identities involving the sum-of-divisor function. Following Ramanujan, to put these identities in the simplest form it is necessary to extend the domain of "σ""p"("n") to include zero, by setting formula_31 Then, for example formula_32 Other identities of this type, but not directly related to the preceding relations between L, M and N functions, have been proved by Ramanujan and Giuseppe Melfi, as for example formula_33 Generalizations. Automorphic forms generalize the idea of modular forms for general Lie groups; and Eisenstein series generalize in a similar fashion. Defining "OK" to be the ring of integers of a totally real algebraic number field K, one then defines the Hilbert–Blumenthal modular group as PSL(2,"OK"). One can then associate an Eisenstein series to every cusp of the Hilbert–Blumenthal modular group. References. <templatestyles src="Reflist/styles.css" />
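The first of the divisor-sum identities above (the coefficient form of the relation between "E"4 squared and "E"8) is easy to confirm numerically. A small sketch in plain Python, checking sigma_7(n) = sigma_3(n) + 120 * sum of sigma_3(m) * sigma_3(n-m) for small n:

def sigma(p, n):
    # divisor function: sum of the p-th powers of the divisors of n
    return sum(d**p for d in range(1, n + 1) if n % d == 0)

for n in range(1, 30):
    lhs = sigma(7, n)
    rhs = sigma(3, n) + 120 * sum(sigma(3, m) * sigma(3, n - m) for m in range(1, n))
    assert lhs == rhs

print("sigma_7(n) = sigma_3(n) + 120 * convolution verified for n = 1..29")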
[ { "math_id": 0, "text": "G_{2k}(\\tau) = \\sum_{ (m,n)\\in\\Z^2\\setminus\\{(0,0)\\}} \\frac{1}{(m+n\\tau )^{2k}}." }, { "math_id": 1, "text": "\\mathbb{Z}" }, { "math_id": 2, "text": "G_{2k} \\left( \\frac{ a\\tau +b}{ c\\tau + d} \\right) = (c\\tau +d)^{2k} G_{2k}(\\tau)" }, { "math_id": 3, "text": "\\begin{align}\nG_{2k}\\left(\\frac{a\\tau+b}{c\\tau+d}\\right) &= \\sum_{(m,n) \\in \\Z^2 \\setminus \\{(0,0)\\}} \\frac{1}{\\left(m+n\\frac{a\\tau+b}{c\\tau+d}\\right)^{2k}} \\\\\n&= \\sum_{(m,n) \\in \\Z^2 \\setminus \\{(0,0)\\}} \\frac{(c\\tau+d)^{2k}}{(md+nb+(mc+na)\\tau)^{2k}} \\\\\n&= \\sum_{\\left(m',n'\\right) = (m,n)\\begin{pmatrix}d \\ \\ c\\\\b \\ \\ a\\end{pmatrix}\\atop (m,n)\\in \\Z^2 \\setminus \\{(0,0)\\}} \\frac{(c\\tau+d)^{2k}}{\\left(m'+n'\\tau\\right)^{2k}}\n\\end{align}" }, { "math_id": 4, "text": "\\begin{pmatrix}d & c\\\\b & a\\end{pmatrix}^{-1} = \\begin{pmatrix}\\ a & -c\\\\-b & \\ d\\end{pmatrix}" }, { "math_id": 5, "text": "(m,n) \\mapsto (m,n)\\begin{pmatrix}d & c\\\\b & a\\end{pmatrix}" }, { "math_id": 6, "text": "\\sum_{\\left(m',n'\\right) = (m,n)\\begin{pmatrix}d \\ \\ c\\\\b \\ \\ a\\end{pmatrix}\\atop (m,n)\\in \\Z^2 \\setminus \\{(0,0)\\}} \\frac{1}{\\left(m'+n'\\tau\\right)^{2k}} = \\sum_{\\left(m',n'\\right)\\in \\mathbb{Z}^2 \\setminus \\{(0,0)\\}} \\frac{1}{(m'+n'\\tau)^{2k}} = G_{2k}(\\tau)" }, { "math_id": 7, "text": "G_{2k}\\left(\\frac{a\\tau+b}{c\\tau+d}\\right) = (c\\tau+d)^{2k} G_{2k}(\\tau)" }, { "math_id": 8, "text": "\\begin{align} g_2 &= 60 G_4 \\\\ g_3 &= 140 G_6 .\\end{align}" }, { "math_id": 9, "text": "\\sum_{k=0}^n {n \\choose k} d_k d_{n-k} = \\frac{2n+9}{3n+6}d_{n+2}" }, { "math_id": 10, "text": "n \\choose k" }, { "math_id": 11, "text": "\\begin{align}\n\\wp(z) &=\\frac{1}{z^2} + z^2 \\sum_{k=0}^\\infty \\frac {d_k z^{2k}}{k!} \\\\\n&=\\frac{1}{z^2} + \\sum_{k=1}^\\infty (2k+1) G_{2k+2} z^{2k}.\n\\end{align}" }, { "math_id": 12, "text": "G_{2k}(\\tau) = 2\\zeta(2k) \\left(1+c_{2k}\\sum_{n=1}^\\infty \\sigma_{2k-1}(n)q^n \\right)" }, { "math_id": 13, "text": "\\begin{align}\nc_{2k} &= \\frac{(2\\pi i)^{2k}}{(2k-1)! \\zeta(2k)} \\\\[4pt]\n&= \\frac {-4k}{B_{2k}} = \\frac 2 {\\zeta(1-2k)}.\n\\end{align}" }, { "math_id": 14, "text": "\\begin{align}\nG_4(\\tau)&=\\frac{\\pi^4}{45} \\left( 1+ 240\\sum_{n=1}^\\infty \\sigma_3(n) q^{n} \\right) \\\\[4pt]\nG_6(\\tau)&=\\frac{2\\pi^6}{945} \\left( 1- 504\\sum_{n=1}^\\infty \\sigma_5(n) q^n \\right).\n\\end{align}" }, { "math_id": 15, "text": "\\sum_{n=1}^{\\infty} q^n \\sigma_a(n) = \\sum_{n=1}^{\\infty} \\frac{n^a q^n}{1-q^n}" }, { "math_id": 16, "text": "\\begin{align}\nE_{2k}(\\tau)&=\\frac{G_{2k}(\\tau)}{2\\zeta (2k)}\\\\\n&= 1+\\frac {2}{\\zeta(1-2k)}\\sum_{n=1}^{\\infty} \\frac{n^{2k-1} q^n}{1-q^n} \\\\\n&= 1- \\frac{4k}{B_{2k}}\\sum_{n=1}^{\\infty} \\sigma_{2k-1}(n)q^n \\\\\n&= 1 - \\frac{4k}{B_{2k}} \\sum_{d,n \\geq 1} n^{2k-1} q^{nd}. 
\\end{align} " }, { "math_id": 17, "text": "\\begin{align}\nE_4(\\tau)&=1+240\\sum_{n=1}^\\infty \\frac {n^3q^n}{1-q^n} \\\\\nE_6(\\tau)&=1-504\\sum_{n=1}^\\infty \\frac {n^5q^n}{1-q^n} \\\\\nE_8(\\tau)&=1+480\\sum_{n=1}^\\infty \\frac {n^7q^n}{1-q^n}\n\\end{align}" }, { "math_id": 18, "text": "\\begin{align}\na&=\\theta_2\\left(0; e^{\\pi i\\tau}\\right)=\\vartheta_{10}(0; \\tau) \\\\\nb&=\\theta_3\\left(0; e^{\\pi i\\tau}\\right)=\\vartheta_{00}(0; \\tau) \\\\\nc&=\\theta_4\\left(0; e^{\\pi i\\tau}\\right)=\\vartheta_{01}(0; \\tau)\n\\end{align}" }, { "math_id": 19, "text": "\\begin{align}\nE_4(\\tau)&= \\tfrac{1}{2}\\left(a^8+b^8+c^8\\right) \\\\[4pt]\nE_6(\\tau)&= \\tfrac{1}{2}\\sqrt{\\frac{\\left(a^8+b^8+c^8\\right)^3-54(abc)^8}{2}} \\\\[4pt]\nE_8(\\tau)&= \\tfrac{1}{2}\\left(a^{16}+b^{16}+c^{16}\\right) = a^8b^8 +a^8c^8 +b^8c^8\n\\end{align}" }, { "math_id": 20, "text": "E_4^3-E_6^2 = \\tfrac{27}{4}(abc)^8 " }, { "math_id": 21, "text": "\\Delta = g_2^3-27g_3^2 = (2\\pi)^{12} \\left(\\tfrac{1}{2}a b c\\right)^8" }, { "math_id": 22, "text": "E_4^2 = E_8, \\quad E_4 E_6 = E_{10}, \\quad E_4 E_{10} = E_{14}, \\quad E_6 E_8 = E_{14}. " }, { "math_id": 23, "text": "\\left(1+240\\sum_{n=1}^\\infty \\sigma_3(n) q^n\\right)^2 = 1+480\\sum_{n=1}^\\infty \\sigma_7(n) q^n," }, { "math_id": 24, "text": "\\sigma_7(n)=\\sigma_3(n)+120\\sum_{m=1}^{n-1}\\sigma_3(m)\\sigma_3(n-m)," }, { "math_id": 25, "text": " \\theta_\\Gamma (\\tau)=1+\\sum_{n=1}^\\infty r_{\\Gamma}(2n) q^{n} = E_4(\\tau), \\qquad r_{\\Gamma}(n) = 240\\sigma_3(n) " }, { "math_id": 26, "text": "\\begin{align}\nE_{8} &= E_4^2 \\\\\nE_{10} &= E_4\\cdot E_6 \\\\\n691 \\cdot E_{12} &= 441\\cdot E_4^3+ 250\\cdot E_6^2 \\\\\nE_{14} &= E_4^2\\cdot E_6 \\\\\n3617\\cdot E_{16} &= 1617\\cdot E_4^4+ 2000\\cdot E_4 \\cdot E_6^2 \\\\\n43867 \\cdot E_{18} &= 38367\\cdot E_4^3\\cdot E_6+5500\\cdot E_6^3 \\\\\n174611 \\cdot E_{20} &= 53361\\cdot E_4^5+ 121250\\cdot E_4^2\\cdot E_6^2 \\\\\n77683 \\cdot E_{22} &= 57183\\cdot E_4^4\\cdot E_6+20500\\cdot E_4\\cdot E_6^3 \\\\\n236364091 \\cdot E_{24} &= 49679091\\cdot E_4^6+ 176400000\\cdot E_4^3\\cdot E_6^2 + 10285000\\cdot E_6^4\n\\end{align}" }, { "math_id": 27, "text": " \\left(\\frac{\\Delta}{(2\\pi)^{12}}\\right)^2=-\\frac{691}{1728^2\\cdot250}\\det \\begin{vmatrix}E_4&E_6&E_8\\\\ E_6&E_8&E_{10}\\\\ E_8&E_{10}&E_{12}\\end{vmatrix}" }, { "math_id": 28, "text": " \\Delta=(2\\pi)^{12}\\frac{E_4^3-E_6^2}{1728}" }, { "math_id": 29, "text": "\\begin{align}\nL(q)&=1-24\\sum_{n=1}^\\infty \\frac {nq^n}{1-q^n}&&=E_2(\\tau) \\\\\nM(q)&=1+240\\sum_{n=1}^\\infty \\frac {n^3q^n}{1-q^n}&&=E_4(\\tau) \\\\\nN(q)&=1-504\\sum_{n=1}^\\infty \\frac {n^5q^n}{1-q^n}&&=E_6(\\tau),\n\\end{align}" }, { "math_id": 30, "text": "\\begin{align}\nq\\frac{dL}{dq} &= \\frac {L^2-M}{12} \\\\\nq\\frac{dM}{dq} &= \\frac {LM-N}{3} \\\\\nq\\frac{dN}{dq} &= \\frac {LN-M^2}{2}.\n\\end{align}" }, { "math_id": 31, "text": "\\begin{align}\\sigma_p(0) = \\tfrac12\\zeta(-p) \\quad\\Longrightarrow\\quad\n\\sigma(0) &= -\\tfrac{1}{24}\\\\\n\\sigma_3(0) &= \\tfrac{1}{240}\\\\\n\\sigma_5(0) &= -\\tfrac{1}{504}.\n\\end{align}" }, { "math_id": 32, "text": "\\sum_{k=0}^n\\sigma(k)\\sigma(n-k)=\\tfrac5{12}\\sigma_3(n)-\\tfrac12n\\sigma(n)." }, { "math_id": 33, "text": "\\begin{align}\n\\sum_{k=0}^n\\sigma_3(k)\\sigma_3(n-k)&=\\tfrac1{120}\\sigma_7(n) \\\\\n\\sum_{k=0}^n\\sigma(2k+1)\\sigma_3(n-k)&=\\tfrac1{240}\\sigma_5(2n+1) \\\\\n\\sum_{k=0}^n\\sigma(3k+1)\\sigma(3n-3k+1)&=\\tfrac19\\sigma_3(3n+2).\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1270458
12704641
Tits alternative
In mathematics, the Tits alternative, named after Jacques Tits, is an important theorem about the structure of finitely generated linear groups. Statement. The theorem, proven by Tits, is stated as follows. <templatestyles src="Math_theorem/styles.css" /> Theorem —  Let formula_0 be a finitely generated linear group over a field. Then one of the following two possibilities occurs: either formula_0 is virtually solvable (i.e., it has a solvable subgroup of finite index), or it contains a nonabelian free subgroup. Consequences. A linear group is not amenable if and only if it contains a non-abelian free group (thus the von Neumann conjecture, while not true in general, holds for linear groups). The Tits alternative is an important ingredient in the proof of Gromov's theorem on groups of polynomial growth. In fact the alternative essentially establishes the result for linear groups (it reduces it to the case of solvable groups, which can be dealt with by elementary means). Generalizations. In geometric group theory, a group "G" is said to satisfy the Tits alternative if for every subgroup "H" of "G" either "H" is virtually solvable or "H" contains a nonabelian free subgroup (in some versions of the definition this condition is only required to be satisfied for all finitely generated subgroups of "G"). Examples of groups satisfying the Tits alternative which are either not linear, or at least not known to be linear, are: Examples of groups not satisfying the Tits alternative are: Proof. The proof of the original Tits alternative is by looking at the Zariski closure of formula_0 in formula_1. If it is solvable then the group is solvable. Otherwise one looks at the image of formula_0 in the Levi component. If it is noncompact then a ping-pong argument finishes the proof. If it is compact then either all eigenvalues of elements in the image of formula_0 are roots of unity and then the image is finite, or one can find an embedding of formula_2 in which one can apply the ping-pong strategy. Note that the proof of all generalisations above also rests on a ping-pong argument. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\mathrm{GL}_n(k)" }, { "math_id": 2, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=12704641
1270691
Kirchhoff's theorem
On the number of spanning trees in a graph In the mathematical field of graph theory, Kirchhoff's theorem or Kirchhoff's matrix tree theorem named after Gustav Kirchhoff is a theorem about the number of spanning trees in a graph, showing that this number can be computed in polynomial time from the determinant of a submatrix of the graph's Laplacian matrix; specifically, the number is equal to "any" cofactor of the Laplacian matrix. Kirchhoff's theorem is a generalization of Cayley's formula which provides the number of spanning trees in a complete graph. Kirchhoff's theorem relies on the notion of the Laplacian matrix of a graph, which is equal to the difference between the graph's degree matrix (the diagonal matrix of vertex degrees) and its adjacency matrix (a (0,1)-matrix with 1's at places corresponding to entries where the vertices are adjacent and 0's otherwise). For a given connected graph "G" with "n" labeled vertices, let "λ"1, "λ"2, ..., "λn"−1 be the non-zero eigenvalues of its Laplacian matrix. Then the number of spanning trees of "G" is formula_0 An English translation of Kirchhoff's original 1847 paper was made by J. B. O'Toole and published in 1958. An example using the matrix-tree theorem. First, construct the Laplacian matrix "Q" for the example diamond graph "G" (see image on the right): formula_1 Next, construct a matrix "Q"* by deleting any row and any column from "Q". For example, deleting row 1 and column 1 yields formula_2 Finally, take the determinant of "Q"* to obtain "t"("G"), which is 8 for the diamond graph. (Notice "t"("G") is the (1,1)-cofactor of "Q" in this example.) Proof outline. First notice that the Laplacian matrix has the property that the sum of its entries across any row and any column is 0. Thus we can transform any minor into any other minor by adding rows and columns, switching them, and multiplying a row or a column by −1. Thus the cofactors are the same up to sign, and it can be verified that, in fact, they have the same sign. We proceed to show that the determinant of the minor "M"11 counts the number of spanning trees. Let "n" be the number of vertices of the graph, and "m" the number of its edges. The incidence matrix "E" is an "n"-by-"m" matrix, which may be defined as follows: suppose that ("i", "j") is the "k"th edge of the graph, and that "i" < "j". Then "Eik" = 1, "Ejk" = −1, and all other entries in column "k" are 0 (see oriented incidence matrix for understanding this modified incidence matrix "E"). For the preceding example (with "n" = 4 and "m" = 5): formula_3 Recall that the Laplacian "L" can be factored into the product of the incidence matrix and its transpose, i.e., "L" = "EE"T. Furthermore, let "F" be the matrix "E" with its first row deleted, so that "FF"T = "M"11. Now the Cauchy–Binet formula allows us to write formula_4 where "S" ranges across subsets of ["m"] of size "n" − 1, and "FS" denotes the ("n" − 1)-by-("n" − 1) matrix whose columns are those of "F" with index in "S". Then every "S" specifies "n" − 1 edges of the original graph, and it can be shown that those edges induce a spanning tree if and only if the determinant of "FS" is +1 or −1, and that they do not induce a spanning tree if and only if the determinant is 0. This completes the proof. Particular cases and generalizations. Cayley's formula. 
Cayley's formula follows from Kirchhoff's theorem as a special case, since every vector with 1 in one place, −1 in another place, and 0 elsewhere is an eigenvector of the Laplacian matrix of the complete graph, with the corresponding eigenvalue being "n". These vectors together span a space of dimension "n" − 1, so there are no other non-zero eigenvalues. Alternatively, note that as Cayley's formula counts the number of distinct labeled trees of a complete graph "Kn" we need to compute any cofactor of the Laplacian matrix of "Kn". The Laplacian matrix in this case is formula_5 Any cofactor of the above matrix is "nn"−2, which is Cayley's formula. Kirchhoff's theorem for multigraphs. Kirchhoff's theorem holds for multigraphs as well; the matrix "Q" is modified as follows: the entry of "Q" in row "i" and column "j" (for "i" ≠ "j") equals −"m", where "m" is the number of edges between "i" and "j", and when counting the degree of a vertex, all loops are excluded. Cayley's formula for a complete multigraph is "m""n"−1("n""n"−1−("n"−1)"n""n"−2) by the same methods as above, since a simple graph is a multigraph with "m" = 1. Explicit enumeration of spanning trees. Kirchhoff's theorem can be strengthened by altering the definition of the Laplacian matrix. Rather than merely counting edges emanating from each vertex or connecting a pair of vertices, label each edge with an indeterminate and let the ("i", "j")-th entry of the modified Laplacian matrix be the sum over the indeterminates corresponding to edges between the "i"-th and "j"-th vertices when "i" does not equal "j", and the negative sum over all indeterminates corresponding to edges emanating from the "i"-th vertex when "i" equals "j". The determinant obtained from the modified Laplacian matrix by deleting any row and column (similar to finding the number of spanning trees from the original Laplacian matrix above) is then a homogeneous polynomial (the Kirchhoff polynomial) in the indeterminates corresponding to the edges of the graph. After collecting terms and performing all possible cancellations, each monomial in the resulting expression represents a spanning tree consisting of the edges corresponding to the indeterminates appearing in that monomial. In this way, one can obtain explicit enumeration of all the spanning trees of the graph simply by computing the determinant. For a proof of this version of the theorem see Bollobás (1998). Matroids. The spanning trees of a graph form the bases of a graphic matroid, so Kirchhoff's theorem provides a formula to count the number of bases in a graphic matroid. The same method may also be used to count the number of bases in regular matroids, a generalization of the graphic matroids. Kirchhoff's theorem for directed multigraphs. Kirchhoff's theorem can be modified to count the number of oriented spanning trees in directed multigraphs. The matrix "Q" is constructed as follows: The number of oriented spanning trees rooted at a vertex "i" is the determinant of the matrix gotten by removing the "i"th row and column of "Q". Counting spanning "k"-component forests. Kirchhoff's theorem can be generalized to count k-component spanning forests in an unweighted graph. A k-component spanning forest is a subgraph with k connected components that contains all vertices and is cycle-free, i.e., there is at most one path between each pair of vertices. Given such a forest "F" with connected components formula_6, define its weight formula_7 to be the product of the number of vertices in each component.
Then formula_8 where the sum is over all k-component spanning forests and formula_9 is the coefficient of formula_10 of the polynomial formula_11 The last factor in the polynomial is due to the zero eigenvalue formula_12. More explicitly, the number formula_9 can be computed as formula_13 where the sum is over all "n"−"k"-element subsets of formula_14. For example formula_15 Since a spanning forest with "n"−1 components corresponds to a single edge, the "k" = "n"−1 case states that the sum of the eigenvalues of "Q" is twice the number of edges. The "k" = 1 case corresponds to the original Kirchhoff theorem since the weight of every spanning tree is "n". The proof can be done analogously to the proof of Kirchhoff's theorem. An invertible formula_16 submatrix of the incidence matrix corresponds bijectively to a "k"-component spanning forest with a choice of vertex for each component. The coefficients formula_9 are up to sign the coefficients of the characteristic polynomial of "Q". References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
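The computations in this article are easy to reproduce numerically. The following short Python sketch (an illustration added here, assuming only NumPy; the edge list is read off the incidence matrix of the example diamond graph above) checks the cofactor form of the theorem, the eigenvalue form, and the "k" = "n"−1 forest-counting identity.

```python
# Numerical check of the matrix-tree theorem for the diamond graph example.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # diamond graph, 4 vertices
n = 4

# Laplacian Q = degree matrix minus adjacency matrix
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] += 1
    Q[j, j] += 1
    Q[i, j] -= 1
    Q[j, i] -= 1

# Any cofactor of Q counts the spanning trees; delete row 0 and column 0.
t = round(np.linalg.det(np.delete(np.delete(Q, 0, axis=0), 0, axis=1)))
print(t)  # 8, as in the worked example above

# Equivalent form: product of the non-zero Laplacian eigenvalues divided by n.
eig = np.sort(np.linalg.eigvalsh(Q))        # eig[0] is the zero eigenvalue
print(round(np.prod(eig[1:]) / n))          # 8 again

# The k = n-1 forest-counting case: the sum of the eigenvalues
# (the trace of Q) is twice the number of edges.
print(round(np.trace(Q)), 2 * len(edges))   # 10 10
```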
[ { "math_id": 0, "text": "t(G) = \\frac{1}{n} \\lambda_1\\lambda_2\\cdots\\lambda_{n-1}\\,." }, { "math_id": 1, "text": "Q = \\left[\\begin{array}{rrrr}\n2 & -1 & -1 & 0 \\\\\n-1 & 3 & -1 & -1 \\\\\n-1 & -1 & 3 & -1 \\\\\n0 & -1 & -1 & 2\n\\end{array}\\right]." }, { "math_id": 2, "text": "Q^\\ast = \\left[\\begin{array}{rrr}\n3 & -1 & -1 \\\\\n-1 & 3 & -1 \\\\\n-1 & -1 & 2\n\\end{array}\\right]." }, { "math_id": 3, "text": "E = \\begin{bmatrix}\n 1 & 1 & 0 & 0 & 0 \\\\\n -1 & 0 & 1 & 1 & 0 \\\\\n 0 & -1 & -1 & 0 & 1 \\\\\n 0 & 0 & 0 & -1 & -1 \\\\\n\\end{bmatrix}." }, { "math_id": 4, "text": "\\det\\left(M_{11}\\right) = \\sum_S \\det\\left(F_S\\right)\\det\\left(F_S^{\\mathrm{T}}\\right) = \\sum_S \\det\\left(F_S\\right)^2" }, { "math_id": 5, "text": "\\begin{bmatrix}\n n-1 & -1 & \\cdots & -1 \\\\\n -1 & n-1 & \\cdots & -1 \\\\\n \\vdots & \\vdots& \\ddots & \\vdots \\\\\n -1 & -1 & \\cdots & n-1 \\\\\n\\end{bmatrix}." }, { "math_id": 6, "text": "F_1, \\dots, F_k" }, { "math_id": 7, "text": "w(F) = |V(F_1)| \\cdot \\dots \\cdot |V(F_k)|" }, { "math_id": 8, "text": "\\sum_F w(F) = q_k," }, { "math_id": 9, "text": "q_k" }, { "math_id": 10, "text": "x^k" }, { "math_id": 11, "text": "(x+\\lambda_1) \\dots (x+\\lambda_{n-1}) x." }, { "math_id": 12, "text": "\\lambda_n=0" }, { "math_id": 13, "text": "q_k = \\sum_{\\{i_1, \\dots, i_{n-k}\\}\\subset\\{1\\dots n-1\\}} \\lambda_{i_1} \\dots \\lambda_{i_{n-k}}." }, { "math_id": 14, "text": "\\{1, \\dots, n\\}" }, { "math_id": 15, "text": "\\begin{align}\nq_{n-1} &= \\lambda_1 + \\dots + \\lambda_{n-1} = \\mathrm{tr} Q = 2|E| \\\\\nq_{n-2} &= \\lambda_1\\lambda_2 + \\lambda_1 \\lambda_3 + \\dots + \\lambda_{n-2} \\lambda_{n-1} \\\\\nq_{2} &= \\lambda_1 \\dots \\lambda_{n-2} + \\lambda_1 \\dots \\lambda_{n-3} \\lambda_{n-1} + \\dots + \\lambda_2 \\dots \\lambda_{n-1}\\\\\nq_{1} &= \\lambda_1 \\dots \\lambda_{n-1} \\\\\n\\end{align}" }, { "math_id": 16, "text": "(n-k) \\times (n-k)" } ]
https://en.wikipedia.org/wiki?curid=1270691
12708106
S5 (modal logic)
One of five systems of modal logic In logic and philosophy, S5 is one of five systems of modal logic proposed by Clarence Irving Lewis and Cooper Harold Langford in their 1932 book "Symbolic Logic". It is a normal modal logic, and one of the oldest systems of modal logic of any kind. It is formed from propositional calculus formulas and tautologies, with an inference apparatus of substitution and modus ponens, and extends the syntax with the modal operator "necessarily" formula_0 and its dual "possibly" formula_1. The axioms of S5. The following makes use of the modal operators formula_0 ("necessarily") and formula_1 ("possibly"). S5 is characterized by the axioms: * K: formula_2, * T: formula_3, and either: * 5: formula_4, or both of the following: * 4: formula_5, and * B: formula_6. The (5) axiom restricts the accessibility relation formula_7 of the Kripke frame to be Euclidean, i.e. formula_8; together with reflexivity this makes every string of modal operators equivalent to its final operator. Kripke semantics. In terms of Kripke semantics, S5 is characterized by frames where the accessibility relation is an equivalence relation: it is reflexive, transitive, and symmetric. Determining the satisfiability of an S5 formula is an NP-complete problem. The hardness proof is trivial, as S5 includes propositional logic. Membership is proved by showing that any satisfiable formula has a Kripke model where the number of worlds is at most linear in the size of the formula. Applications. S5 is useful because it avoids superfluous iteration of qualifiers of different kinds. For example, under S5, if "X" is necessarily, possibly, necessarily, possibly true, then "X" is possibly true. All the qualifiers before the final "possibly" are pruned in S5. While this is useful for keeping propositions reasonably short, it also might appear counter-intuitive in that, under S5, if something is possibly necessary, then it is necessary. Alvin Plantinga has argued that this feature of S5 is not, in fact, counter-intuitive. To justify this, he reasons that if "X" is "possibly necessary", it is necessary in at least one possible world; hence it is necessary in "all" possible worlds and thus is true in all possible worlds. Such reasoning underpins 'modal' formulations of the ontological argument. S5 is equivalent to the adjunction formula_9. Leibniz proposed an ontological argument for the existence of God using this axiom. In his words, "If a necessary being is possible, it follows that it exists actually". S5 is also the modal system for the metaphysics of Saint Thomas Aquinas and in particular for the Five Ways. However, these applications require that each operator be in a serial arrangement of a single modality. Under multimodal logic, e.g., "X is possibly (in epistemic modality, per one's data) necessary (in alethic modality)," it no longer follows that X being necessary in at least one epistemically possible world means it is necessary in all epistemically possible worlds. This aligns with the intuition that proposing a certain necessary entity does not mean it is real. References. <templatestyles src="Reflist/styles.css" />
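The equivalence-relation characterization above is easy to check by brute force. The following minimal Python sketch (an illustration added here; the three-world frame and the valuations are invented for the example) evaluates the axioms T, 4, B and 5 at every world of a small equivalence frame under every valuation of an atomic proposition A.

```python
# Brute-force check that T, 4, B and 5 hold on an equivalence-relation frame.
from itertools import product

worlds = [0, 1, 2]
# Equivalence relation: worlds 0 and 1 see each other, world 2 sees only itself.
classes = {0: {0, 1}, 1: {0, 1}, 2: {2}}

def box(p, w):      # "necessarily p" at world w
    return all(p(v) for v in classes[w])

def diamond(p, w):  # "possibly p" at world w
    return any(p(v) for v in classes[w])

for valuation in product([False, True], repeat=len(worlds)):
    A = lambda w: valuation[w]
    for w in worlds:
        assert (not box(A, w)) or A(w)                                  # T: box A -> A
        assert (not box(A, w)) or box(lambda v: box(A, v), w)           # 4: box A -> box box A
        assert (not A(w)) or box(lambda v: diamond(A, v), w)            # B: A -> box diamond A
        assert (not diamond(A, w)) or box(lambda v: diamond(A, v), w)   # 5: diamond A -> box diamond A
print("T, 4, B and 5 hold at every world of this equivalence frame")
```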
[ { "math_id": 0, "text": "\\Box" }, { "math_id": 1, "text": "\\Diamond" }, { "math_id": 2, "text": "\\Box(A\\to B)\\to(\\Box A\\to\\Box B)" }, { "math_id": 3, "text": "\\Box A \\to A" }, { "math_id": 4, "text": "\\Diamond A\\to \\Box\\Diamond A" }, { "math_id": 5, "text": "\\Box A\\to\\Box\\Box A" }, { "math_id": 6, "text": "A\\to\\Box\\Diamond A" }, { "math_id": 7, "text": "R" }, { "math_id": 8, "text": "(wRv \\land wRu) \\implies vRu " }, { "math_id": 9, "text": "\\Diamond\\dashv\\Box" } ]
https://en.wikipedia.org/wiki?curid=12708106
1271019
Specified complexity
Creationist argument by William Dembski Specified complexity is a creationist argument introduced by William Dembski, used by advocates to promote the pseudoscience of intelligent design. According to Dembski, the concept can formalize a property that singles out patterns that are both "specified" and "complex", where in Dembski's terminology, a "specified" pattern is one that admits short descriptions, whereas a "complex" pattern is one that is unlikely to occur by chance. An example cited by Dembski is a poker hand, where for example the repeated appearance of a royal flush will raise suspicion of cheating. Proponents of intelligent design use specified complexity as one of their two main arguments, along with irreducible complexity. Dembski argues that it is impossible for specified complexity to exist in patterns displayed by configurations formed by unguided processes. Therefore, Dembski argues, the fact that specified complex patterns can be found in living things indicates some kind of guidance in their formation, which is indicative of intelligence. Dembski further argues that one can show by applying no-free-lunch theorems the inability of evolutionary algorithms to select or generate configurations of high specified complexity. Dembski states that specified complexity is a reliable marker of design by an intelligent agent—a central tenet to intelligent design, which Dembski argues for in opposition to modern evolutionary theory. Specified complexity is what Dembski terms an "explanatory filter": one can recognize design by detecting complex specified information (CSI). Dembski argues that the unguided emergence of CSI solely according to known physical laws and chance is highly improbable. The concept of specified complexity is widely regarded as mathematically unsound and has not been the basis for further independent work in information theory, in the theory of complex systems, or in biology. A study by Wesley Elsberry and Jeffrey Shallit states: "Dembski's work is riddled with inconsistencies, equivocation, flawed use of mathematics, poor scholarship, and misrepresentation of others' results." Another objection concerns Dembski's calculation of probabilities. According to Martin Nowak, a Harvard professor of mathematics and evolutionary biology, "We cannot calculate the probability that an eye came about. We don't have the information to make the calculation." Definition. Orgel's terminology. The term "specified complexity" was originally coined by origin of life researcher Leslie Orgel in his 1973 book "The Origins of Life: Molecules and Natural Selection", which proposed that RNA could have evolved through Darwinian natural selection. Orgel used the phrase in discussing the differences between life and non-living structures: In brief, living organisms are distinguished by their "specified" complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. 
The phrase was taken up by the creationists Charles Thaxton and Walter L Bradley in a chapter they contributed to the 1994 book "The Creation Hypothesis" where they discussed "design detection" and redefined "specified complexity" as a way of measuring information. Another contribution to the book was written by William A. Dembski, who took this up as the basis of his subsequent work. The term was later employed by physicist Paul Davies to qualify the complexity of living organisms: Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity Dembski's definition. Whereas Orgel used the term for biological features which are considered in science to have arisen through a process of evolution, Dembski says that it describes features which cannot form through "undirected" evolution—and concludes that it allows one to infer intelligent design. While Orgel employed the concept in a qualitative way, Dembski's use is intended to be quantitative. Dembski's use of the concept dates to his 1998 monograph "The Design Inference". Specified complexity is fundamental to his approach to intelligent design, and each of his subsequent books has also dealt significantly with the concept. He has stated that, in his opinion, "if there is a way to detect design, specified complexity is it". Dembski asserts that specified complexity is present in a configuration when it can be described by a pattern that displays a large amount of independently specified information and is also complex, which he defines as having a low probability of occurrence. He provides the following examples to demonstrate the concept: "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." In his earlier papers Dembski defined "complex specified information" (CSI) as being present in a specified event whose probability did not exceed 1 in 10150, which he calls the universal probability bound. In that context, "specified" meant what in later work he called "pre-specified", that is specified by the unnamed designer before any information about the outcome is known. The value of the universal probability bound corresponds to the inverse of the upper limit of "the total number of [possible] specified events throughout cosmic history", as calculated by Dembski. Anything below this bound has CSI. The terms "specified complexity" and "complex specified information" are used interchangeably. In more recent papers Dembski has redefined the universal probability bound, with reference to another number, corresponding to the total number of bit operations that could possibly have been performed in the entire history of the universe. Dembski asserts that CSI exists in numerous features of living things, such as in DNA and in other functional biological molecules, and argues that it cannot be generated by the only known natural mechanisms of physical law and chance, or by their combination. He argues that this is so because laws can only shift around or lose information, but do not produce it, and because chance can produce complex unspecified information, or simple specified information, but not CSI; he provides a mathematical analysis that he claims demonstrates that law and chance working together cannot generate CSI, either. 
Moreover, he claims that CSI is holistic, with the whole being greater than the sum of the parts, and that this decisively eliminates Darwinian evolution as a possible means of its "creation". Dembski maintains that by process of elimination, CSI is best explained as being due to intelligence, and is therefore a reliable indicator of design. Law of conservation of information. Dembski formulates and proposes a law of conservation of information as follows: This strong proscriptive claim, that natural causes can only transmit CSI but never originate it, I call the Law of Conservation of Information. Immediate corollaries of the proposed law are the following: Dembski notes that the term "Law of Conservation of Information" was previously used by Peter Medawar in his book The Limits of Science (1984) "to describe the weaker claim that deterministic laws cannot produce novel information." The actual validity and utility of Dembski's proposed law are uncertain; it is neither widely used by the scientific community nor cited in mainstream scientific literature. A 2002 essay by Erik Tellgren provided a mathematical rebuttal of Dembski's law and concludes that it is "mathematically unsubstantiated." Specificity. In a more recent paper, Dembski provides an account which he claims is simpler and adheres more closely to the theory of statistical hypothesis testing as formulated by Ronald Fisher. In general terms, Dembski proposes to view design inference as a statistical test to reject a chance hypothesis P on a space of outcomes Ω. Dembski's proposed test is based on the Kolmogorov complexity of a pattern "T" that is exhibited by an event "E" that has occurred. Mathematically, "E" is a subset of Ω, the pattern "T" specifies a set of outcomes in Ω and "E" is a subset of "T". Quoting Dembski Thus, the event "E" might be a die toss that lands six and "T" might be the composite event consisting of all die tosses that land on an even face. Kolmogorov complexity provides a measure of the computational resources needed to specify a pattern (such as a DNA sequence or a sequence of alphabetic characters). Given a pattern "T", the number of other patterns may have Kolmogorov complexity no larger than that of "T" is denoted by φ("T"). The number φ("T") thus provides a ranking of patterns from the simplest to the most complex. For example, for a pattern "T" which describes the bacterial flagellum, Dembski claims to obtain the upper bound φ("T") ≤ 1020. Dembski defines specified complexity of the pattern "T" under the chance hypothesis P as formula_0 where P("T") is the probability of observing the pattern "T", "R" is the number of "replicational resources" available "to witnessing agents". "R" corresponds roughly to repeated attempts to create and discern a pattern. Dembski then asserts that "R" can be bounded by 10120. This number is supposedly justified by a result of Seth Lloyd in which he determines that the number of elementary logic operations that can have been performed in the universe over its entire history cannot exceed 10120 operations on 1090 bits. Dembski's main claim is that the following test can be used to infer design for a configuration: There is a target pattern "T" that applies to the configuration and whose specified complexity exceeds 1. This condition can be restated as the inequality formula_1 Dembski's explanation of specified complexity. 
Dembski's expression σ is unrelated to any known concept in information theory, though he claims he can justify its relevance as follows: An intelligent agent "S" witnesses an event "E" and assigns it to some reference class of events Ω and within this reference class considers it as satisfying a specification "T". Now consider the quantity φ("T") × P("T") (where P is the "chance" hypothesis): Think of S as trying to determine whether an archer, who has just shot an arrow at a large wall, happened to hit a tiny target on that wall by chance. The arrow, let us say, is indeed sticking squarely in this tiny target. The problem, however, is that there are lots of other tiny targets on the wall. Once all those other targets are factored in, is it still unlikely that the archer could have hit any of them by chance? In addition, we need to factor in what I call the replicational resources associated with "T", that is, all the opportunities to bring about an event of "T"'s descriptive complexity and improbability by multiple agents witnessing multiple events. According to Dembski, the number of such "replicational resources" can be bounded by "the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history", which according to Lloyd is 10120. However, according to Elsberry and Shallit, "[specified complexity] has not been defined formally in any reputable peer-reviewed mathematical journal, nor (to the best of our knowledge) adopted by any researcher in information theory." Calculation of specified complexity. Thus far, Dembski's only attempt at calculating the specified complexity of a naturally occurring biological structure is in his book "No Free Lunch", for the bacterial flagellum of E. coli. This structure can be described by the pattern "bidirectional rotary motor-driven propeller". Dembski estimates that there are at most 1020 patterns described by four basic concepts or fewer, and so his test for design will apply if formula_2 However, Dembski says that the precise calculation of the relevant probability "has yet to be done", although he also claims that some methods for calculating these probabilities "are now in place". These methods assume that all of the constituent parts of the flagellum must have been generated completely at random, a scenario that biologists do not seriously consider. He justifies this approach by appealing to Michael Behe's concept of "irreducible complexity" (IC), which leads him to assume that the flagellum could not come about by any gradual or step-wise process. The validity of Dembski's particular calculation is thus wholly dependent on Behe's IC concept, and therefore susceptible to its criticisms, of which there are many. To arrive at the ranking upper bound of 1020 patterns, Dembski considers a specification pattern for the flagellum defined by the (natural language) predicate "bidirectional rotary motor-driven propeller", which he regards as being determined by four independently chosen basic concepts. He furthermore assumes that English has the capability to express at most 105 basic concepts (an upper bound on the size of a dictionary). Dembski then claims that we can obtain the rough upper bound of formula_3 for the set of patterns described by four basic concepts or fewer. From the standpoint of Kolmogorov complexity theory, this calculation is problematic. 
Quoting Ellsberry and Shallit: "Natural language specification without restriction, as Dembski tacitly permits, seems problematic. For one thing, it results in the Berry paradox". These authors add: "We have no objection to natural language specifications per se, provided there is some evident way to translate them to Dembski's formal framework. But what, precisely, is the space of events Ω here?" Criticism. The soundness of Dembski's concept of specified complexity and the validity of arguments based on this concept are widely disputed. A frequent criticism (see Elsberry and Shallit) is that Dembski has used the terms "complexity", "information" and "improbability" interchangeably. These numbers measure properties of things of different types: complexity measures how hard it is to describe an object (such as a bitstring), information is how much the uncertainty about the state of an object is reduced by knowing the state of another object or system, and improbability measures how unlikely an event is given a probability distribution. On page 150 of "No Free Lunch" Dembski claims he can demonstrate his thesis mathematically: "In this section I will present an in-principle mathematical argument for why natural causes are incapable of generating complex specified information." When Tellgren investigated Dembski's "Law of Conservation of Information" using a more formal approach, he concluded it is mathematically unsubstantiated. Dembski responded in part that he is not "in the business of offering a strict mathematical proof for the inability of material mechanisms to generate specified complexity". Jeffrey Shallit states that Dembski's mathematical argument has multiple problems; for example, a crucial calculation on page 297 of "No Free Lunch" is off by a factor of approximately 1065. Dembski's calculations show how a simple smooth function cannot gain information. He therefore concludes that there must be a designer to obtain CSI. However, natural selection has a branching mapping from one to many (replication) followed by a pruning mapping of the many back down to a few (selection). When information is replicated, some copies can be differently modified while others remain the same, allowing information to increase. These increasing and reductional mappings were not modeled by Dembski. In other words, Dembski's calculations do not model birth and death. This basic flaw in his modeling renders all of Dembski's subsequent calculations and reasoning in "No Free Lunch" irrelevant because his basic model does not reflect reality. Since the basis of "No Free Lunch" relies on this flawed argument, the entire thesis of the book collapses. According to Martin Nowak, a Harvard professor of mathematics and evolutionary biology, "We cannot calculate the probability that an eye came about. We don't have the information to make the calculation". Dembski's critics note that specified complexity, as originally defined by Leslie Orgel, is precisely what Darwinian evolution is supposed to create. Critics maintain that Dembski uses "complex" as most people would use "absurdly improbable". They also claim that his argument is circular: CSI cannot occur naturally because Dembski has defined it thus. They argue that to successfully demonstrate the existence of CSI, it would be necessary to show that some biological feature undoubtedly has an extremely low probability of occurring by any natural means whatsoever, something which Dembski and others have almost never attempted to do. 
Such calculations depend on the accurate assessment of numerous contributing probabilities, the determination of which is often necessarily subjective. Hence, CSI can at most provide a "very high probability", but not absolute certainty. Another criticism refers to the problem of "arbitrary but specific outcomes". For example, if a coin is tossed randomly 1000 times, the probability of any particular outcome occurring is roughly one in 10300. For any particular specific outcome of the coin-tossing process, the "a priori" probability (probability measured before the event happens) that this pattern occurred is thus one in 10300, which is astronomically smaller than Dembski's universal probability bound of one in 10150. Yet we know that the "post hoc" probability (probability as assessed after the event occurs) of its happening is exactly one, since we observed it happening. This is similar to the observation that it is unlikely that any given person will win a lottery, but, eventually, a lottery will have a winner; to argue that it is very unlikely that any one player will win is not the same as proving that it is unlikely that anyone will win. Similarly, it has been argued that "a space of possibilities is merely being explored, and we, as pattern-seeking animals, are merely imposing patterns, and therefore targets, after the fact." Apart from such theoretical considerations, critics cite reports of evidence of the kind of evolutionary "spontaneous generation" that Dembski claims is too improbable to occur naturally. For example, in 1982, B.G. Hall published research demonstrating that after removing a gene that allows sugar digestion in certain bacteria, those bacteria, when grown in media rich in sugar, rapidly evolve new sugar-digesting enzymes to replace those removed. Another widely cited example is the discovery of nylon-eating bacteria that produce enzymes only useful for digesting synthetic materials that did not exist prior to the invention of nylon in 1935. Other commentators have noted that evolution through selection is frequently used to design certain electronic, aeronautic and automotive systems which are considered problems too complex for human "intelligent designers". This contradicts the argument that an intelligent designer is required for the most complex systems. Such evolutionary techniques can lead to designs that are difficult to understand or evaluate since no human understands which trade-offs were made in the evolutionary process, something which mimics our poor understanding of biological systems. Dembski's book "No Free Lunch" was criticised for not addressing the work of researchers who use computer simulations to investigate artificial life. According to Shallit: The field of artificial life evidently poses a significant challenge to Dembski's claims about the failure of evolutionary algorithms to generate complexity. Indeed, artificial life researchers regularly find their simulations of evolution producing the sorts of novelties and increased complexity that Dembski claims are impossible. Notes and references. <templatestyles src="Reflist/styles.css" />
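The magnitudes involved in the numbers quoted above are easy to reproduce. The following short Python sketch is purely illustrative arithmetic on those figures (it is not an implementation of Dembski's measure):

```python
# Illustrative arithmetic only, using the bounds quoted in this article.
from math import log10

# A specific sequence of 1000 fair coin tosses has probability 2**-1000,
# roughly one in 10**301 -- far below the universal probability bound of
# one in 10**150.
print(1000 * log10(2))          # about 301

# Dembski's design test for the flagellum with the quoted bounds:
# 10**120 * phi_T * P_T < 1/2 requires P_T below roughly 0.5 * 10**-140.
phi_T = 1e20                    # claimed bound on the number of simpler patterns
R = 1e120                       # claimed bound on replicational resources
print(0.5 / (R * phi_T))        # 5e-141
```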
[ { "math_id": 0, "text": " \\sigma= - \\log_2 [R \\times \\varphi(T) \\times \\operatorname{P}(T)], " }, { "math_id": 1, "text": " 10^{120} \\times \\varphi(T) \\times \\operatorname{P}(T) < \\frac{1}{2}. " }, { "math_id": 2, "text": " \\operatorname{P}(T) < \\frac{1}{2} \\times 10^{-140}. " }, { "math_id": 3, "text": " 10^{20}= 10^5 \\times 10^5 \\times 10^5 \\times 10^5 " } ]
https://en.wikipedia.org/wiki?curid=1271019
12711268
Anti-symmetric operator
Raising and lowering operators In quantum mechanics, a raising or lowering operator (collectively known as ladder operators) is an operator that increases or decreases the eigenvalue of another operator. In quantum mechanics, the raising operator is sometimes called the creation operator, and the lowering operator the annihilation operator. Well-known applications of ladder operators in quantum mechanics are in the formalisms of the quantum harmonic oscillator and angular momentum. Introduction. Another type of operator in quantum field theory, discovered in the early 1970s, is known as the anti-symmetric operator. This operator, similar to spin in non-relativistic quantum mechanics, is a ladder operator that can create two fermions of opposite spin out of a boson or a boson from two fermions. A fermion, named after Enrico Fermi, is a particle with half-integer spin, such as electrons and protons. This is a matter particle. A boson, named after S. N. Bose, is a particle with integer spin, such as photons and W bosons. This is a force-carrying particle. Spin. First, we review spin in non-relativistic quantum mechanics. Spin, an intrinsic property similar to angular momentum, is defined by a spin operator S that plays a role for a system similar to that of the operator L for orbital angular momentum. The relevant operators are formula_0 and formula_1, whose eigenvalue equations are formula_2 and formula_3 respectively. These operators also obey the usual commutation relations for angular momentum: formula_4, formula_5, and formula_6. The raising and lowering operators, formula_7 and formula_8, are defined as formula_9 and formula_10 respectively. These ladder operators act on a state as follows: formula_11 and formula_12 respectively. The operators S_x and S_y can be determined using the ladder method. In the spin-1/2 case (a fermion), the operator formula_13 acting on a state produces formula_14 and formula_15. Likewise, the operator formula_16 acting on a state produces formula_17 and formula_18. The matrix representations of these operators are constructed as follows: formula_19 formula_20 Therefore, formula_21 and formula_22 have the matrix representations: formula_23 formula_24 Recalling the generalized uncertainty relation for two operators A and B, formula_25, we can immediately see that the uncertainty relation of the operators formula_21 and formula_22 is as follows: formula_26 Therefore, as with orbital angular momentum, we can only specify one component at a time. We specify the operators formula_0 and formula_1. Application in quantum field theory. The creation of a particle and anti-particle from a boson is defined similarly but for infinite dimensions. Therefore, the Levi-Civita symbol for infinite dimensions is introduced. formula_27 The commutation relations are simply carried over to infinite dimensions: formula_28. formula_0 is now equal to formula_29 where n=∞. Its eigenvalue is formula_30. Defining the magnetic quantum number, the angular momentum projected in the z direction, is more challenging than in the simple case of spin. The problem becomes analogous to moment of inertia in classical mechanics and is generalizable to n dimensions. It is this property that allows for the creation and annihilation of bosons. Bosons. Characterized by their spin, bosonic fields can be scalar fields, vector fields or even tensor fields. 
To illustrate, the quantized electromagnetic field is the photon field, which can be obtained using conventional methods of canonical or path integral quantization. This has led to the theory of quantum electrodynamics, arguably the most successful theory in physics. The graviton field would be the quantized gravitational field. There is as yet no established theory that quantizes the gravitational field, but theories such as string theory can be thought of as quantizing it. An example of a non-relativistic bosonic field is that describing cold bosonic atoms, such as helium-4. Free bosonic fields obey commutation relations: formula_31 formula_32. To illustrate, suppose we have a system of N bosons that occupy mutually orthogonal single-particle states formula_33, etc. Using the usual representation, we describe the system by assigning a state to each particle and then imposing exchange symmetry: formula_34 This wave function can also be represented in the formalism of second quantization, in which the number of particles in each single-particle state is listed: formula_35 Creation and annihilation operators add and subtract particles from multi-particle states. These creation and annihilation operators are very similar to those defined for the quantum harmonic oscillator, which added and subtracted energy quanta. However, these operators literally create and annihilate particles with a given quantum state. The bosonic annihilation operator formula_36 and creation operator formula_37 have the following effects: formula_38 formula_39 Like the creation and annihilation operators formula_40 and formula_41 also found in quantum field theory, the creation and annihilation operators formula_42 and formula_43 act on bosons in multi-particle states. While formula_40 and formula_41 allow us to determine whether a particle was created or destroyed in a system, the spin operators formula_42 and formula_43 allow us to determine how. A photon can become an electron–positron pair and vice versa. Because of its antisymmetric statistics, a particle of spin formula_44 obeys the Pauli exclusion principle: two such particles can occupy the same state only if their spins are opposite. Returning to our example, the spin state of the particle is spin-1. Symmetric particles, or bosons, need not obey the Pauli exclusion principle, so we can represent the spin state of the particle as follows: formula_45 and formula_46 The annihilation spin operator, as its name implies, annihilates a photon into an electron and a positron. Likewise, the creation spin operator creates a photon. The photon can be in either the first state or the second state in this example. If we apply the linear momentum operator Fermions. Therefore, we define the operators formula_47 and formula_48. In the case of the non-relativistic particle, if formula_13 is applied to a fermion twice, the result is 0. Similarly, the result is 0 when formula_16 is applied to a fermion twice. This relation reflects the Pauli exclusion principle. However, bosons are symmetric particles, which do not obey the Pauli exclusion principle.
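The spin-1/2 matrices written out in the Spin section are easy to verify numerically. The following minimal NumPy sketch (an illustration added here, with ħ set to 1) checks the commutation relation, the statement that applying the raising operator twice gives zero, and the action of the raising operator on the "down" state:

```python
# Numerical check (hbar = 1) of the spin-1/2 ladder and component operators.
import numpy as np

Sp = np.array([[0, 1], [0, 0]], dtype=complex)   # raising operator S_+
Sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator S_-
Sx = (Sp + Sm) / 2
Sy = (Sp - Sm) / (2j)
Sz = np.diag([0.5, -0.5]).astype(complex)

# Commutation relation [S_x, S_y] = i S_z
comm = Sx @ Sy - Sy @ Sx
print(np.allclose(comm, 1j * Sz))   # True

# Applying the raising operator twice to a spin-1/2 state gives zero,
# the statement used in the Fermions section.
print(np.allclose(Sp @ Sp, 0))      # True

# S_+ acting on the "down" state yields the "up" state.
down = np.array([0, 1], dtype=complex)
print(Sp @ down)                    # [1.+0.j 0.+0.j]
```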
[ { "math_id": 0, "text": "S^2" }, { "math_id": 1, "text": "S_z" }, { "math_id": 2, "text": "S^2|s,m\\rangle=s(s+1)\\hbar^2|s,m\\rangle " }, { "math_id": 3, "text": "S_z|s,m\\rangle=m\\hbar|s,m\\rangle " }, { "math_id": 4, "text": "[S_x,S_y]=i\\hbar S_z" }, { "math_id": 5, "text": "[S_y,S_z]=i\\hbar S_x" }, { "math_id": 6, "text": "[S_z,S_x]=i\\hbar S_y" }, { "math_id": 7, "text": "S_{+}" }, { "math_id": 8, "text": "S_{-}" }, { "math_id": 9, "text": "S_{+}=S_x+i\\cdot S_y" }, { "math_id": 10, "text": "S_{-}=S_x - i\\cdot S_y" }, { "math_id": 11, "text": "S_+|s,m\\rangle=\\hbar \\sqrt{s(s+1)-m(m+1)}|s,m+1\\rangle" }, { "math_id": 12, "text": "S_-|s,m\\rangle=\\hbar \\sqrt{s(s+1)-m(m-1)}|s,m-1\\rangle" }, { "math_id": 13, "text": "S_+" }, { "math_id": 14, "text": "S_+|+\\rangle=0" }, { "math_id": 15, "text": "S_+|-\\rangle=\\hbar|+\\rangle" }, { "math_id": 16, "text": "S_-" }, { "math_id": 17, "text": "S_-|-\\rangle=0" }, { "math_id": 18, "text": "S_-|+\\rangle=\\hbar|-\\rangle" }, { "math_id": 19, "text": "[S_+] = \\begin{bmatrix}\n\\langle+|S_+|+\\rangle & \\langle+|S_+|-\\rangle \\\\\n\\langle-|S_+|+\\rangle & \\langle-|S_+|-\\rangle \\end{bmatrix}\n=\n\\hbar \\cdot\n\\begin{bmatrix}\n0 & 1 \\\\\n0 & 0 \\end{bmatrix}\n" }, { "math_id": 20, "text": "[S_-] = \\begin{bmatrix}\n\\langle+|S_-|+\\rangle & \\langle+|S_-|-\\rangle \\\\\n\\langle-|S_-|+\\rangle & \\langle-|S_-|-\\rangle \\end{bmatrix}\n=\n\\hbar \\cdot\n\\begin{bmatrix}\n0 & 0 \\\\\n1 & 0 \\end{bmatrix}\n" }, { "math_id": 21, "text": "S_x" }, { "math_id": 22, "text": "S_y" }, { "math_id": 23, "text": "[S_x] = \\frac{ \\hbar}{2} \\cdot\n\\begin{bmatrix}\n0 & 1 \\\\\n1 & 0 \\end{bmatrix}\n" }, { "math_id": 24, "text": "[S_y] = \\frac{ \\hbar}{2} \\cdot\n\\begin{bmatrix}\n0 & -i \\\\\ni & 0 \\end{bmatrix}\n" }, { "math_id": 25, "text": "\n\\Delta_{\\psi} A \\, \\Delta_{\\psi} B \\ge \\frac{1}{2} \\left|\\left\\langle\\left[{A},{B}\\right]\\right\\rangle_\\psi\\right|\n" }, { "math_id": 26, "text": "\n\\Delta_{\\psi} S_x \\, \\Delta_{\\psi} S_y \\ge \\frac{1}{2} \\left|\\left\\langle\\left[{S_x},{S_y}\\right]\\right\\rangle_\\psi\\right|\n=\n\\frac{1}{2} (i \\hbar S_z)\n=\n\\frac{ \\hbar}{2} S_z\n" }, { "math_id": 27, "text": "\\varepsilon_{ijk\\ell\\dots} =\n\\left\\{\n\\begin{matrix}\n+1 & \\mbox{if }(i,j,k,\\ell,\\dots) \\mbox{ is an even permutation of } (1,2,3,4,\\dots) \\\\\n-1 & \\mbox{if }(i,j,k,\\ell,\\dots) \\mbox{ is an odd permutation of } (1,2,3,4,\\dots) \\\\\n0 & \\mbox{if any two labels are the same}\n\\end{matrix}\n\\right.\n" }, { "math_id": 28, "text": "[S_i,S_j]=i\\hbar S_k\\varepsilon_{ijk}" }, { "math_id": 29, "text": "S^2=\\sum_{m=1}^n S_m^2" }, { "math_id": 30, "text": "S^2|s,m>=s(s+1)\\hbar^2|s,m> " }, { "math_id": 31, "text": "[a_i,a_j]=[a^\\dagger_i,a^\\dagger_j]=0" }, { "math_id": 32, "text": "[a_i,a^\\dagger_i]=\\langle f|g \\rangle" }, { "math_id": 33, "text": " |\\phi_1\\rang, |\\phi_2\\rang, |\\phi_3\\rang " }, { "math_id": 34, "text": " \\frac{1}{\\sqrt{3}} \\left[ |\\phi_1\\rang |\\phi_2\\rang\n|\\phi_3\\rang + |\\phi_2\\rang |\\phi_1\\rang |\\phi_3\\rang + |\\phi_3\\rang\n|\\phi_2\\rang |\\phi_1\\rang \\right]. 
" }, { "math_id": 35, "text": " |1, 2, 0, 0, 0, \\cdots \\rangle," }, { "math_id": 36, "text": "a_2" }, { "math_id": 37, "text": "a_2^\\dagger" }, { "math_id": 38, "text": " a_2 | N_1, N_2, N_3, \\cdots \\rangle = \\sqrt{N_2} \\mid N_1, (N_2 - 1), N_3, \\cdots \\rangle," }, { "math_id": 39, "text": " a_2^\\dagger | N_1, N_2, N_3, \\cdots \\rangle = \\sqrt{N_2 + 1} \\mid N_1, (N_2 + 1), N_3, \\cdots \\rangle." }, { "math_id": 40, "text": " a_i " }, { "math_id": 41, "text": " a_i^\\dagger" }, { "math_id": 42, "text": " S_i^+ " }, { "math_id": 43, "text": " S_i^-" }, { "math_id": 44, "text": " \\frac{1}{2} " }, { "math_id": 45, "text": " |1, ix, 0, 0, 0, \\cdots \\rangle," }, { "math_id": 46, "text": " |1, -ix, 0, 0, 0, \\cdots \\rangle," }, { "math_id": 47, "text": "S_i +" }, { "math_id": 48, "text": "S_i -" } ]
https://en.wikipedia.org/wiki?curid=12711268
1271415
Shape optimization
Problem of finding the optimal shape under given conditions Shape optimization is part of the field of optimal control theory. The typical problem is to find the shape which is optimal in that it minimizes a certain cost functional while satisfying given constraints. In many cases, the functional being minimized depends on the solution of a given partial differential equation defined on the variable domain. Topology optimization is, in addition, concerned with the number of connected components/boundaries belonging to the domain. Such methods are needed since typically shape optimization methods work in a subset of allowable shapes which have fixed topological properties, such as having a fixed number of holes in them. Topological optimization techniques can then help work around the limitations of pure shape optimization. Definition. Mathematically, shape optimization can be posed as the problem of finding a bounded set formula_0, minimizing a functional formula_1, possibly subject to a constraint of the form formula_2 Usually we are interested in sets formula_0 which have Lipschitz or C1 boundary and consist of finitely many components, which is a way of saying that we would like to find a rather pleasing shape as a solution, not some jumble of rough bits and pieces. Sometimes additional constraints need to be imposed to that end to ensure well-posedness of the problem and uniqueness of the solution. Shape optimization is an infinite-dimensional optimization problem. Furthermore, the space of allowable shapes over which the optimization is performed does not admit a vector space structure, making application of traditional optimization methods more difficult. Techniques. Shape optimization problems are usually solved numerically, by using iterative methods. That is, one starts with an initial guess for a shape, and then gradually evolves it, until it morphs into the optimal shape. Keeping track of the shape. To solve a shape optimization problem, one needs to find a way to represent a shape in the computer memory, and follow its evolution. Several approaches are usually used. One approach is to follow the boundary of the shape. For that, one can sample the shape boundary in a relatively dense and uniform manner, that is, to consider enough points to get a sufficiently accurate outline of the shape. Then, one can evolve the shape by gradually moving the boundary points. This is called the "Lagrangian approach". Another approach is to consider a function defined on a rectangular box around the shape, which is positive inside of the shape, zero on the boundary of the shape, and negative outside of the shape. One can then evolve this function instead of the shape itself. One can consider a rectangular grid on the box and sample the function at the grid points. As the shape evolves, the grid points do not change; only the function values at the grid points change. This approach, of using a fixed grid, is called the "Eulerian approach". The idea of using a function to represent the shape is at the basis of the level-set method. A third approach is to think of the shape evolution as a flow problem. That is, one can imagine that the shape is made of a plastic material gradually deforming such that any point inside or on the boundary of the shape can always be traced back to a point of the original shape in a one-to-one fashion. 
Mathematically, if formula_3 is the initial shape, and formula_4 is the shape at time "t", one considers the diffeomorphisms formula_5 The idea is again that shapes are difficult entities to be dealt with directly, so one manipulates them by means of a function. Iterative methods using shape gradients. Consider a smooth velocity field formula_6 and the family of transformations formula_7 of the initial domain formula_3 under the velocity field formula_6: formula_8, and denote formula_9 Then the Gâteaux or shape derivative of formula_1 at formula_3 with respect to the shape is the limit of formula_10 if this limit exists. If in addition the derivative is linear with respect to formula_6, there is a unique element formula_11 such that formula_12 where formula_13 is called the shape gradient. This gives a natural idea of gradient descent, where the boundary formula_14 is evolved in the direction of negative shape gradient in order to reduce the value of the cost functional. Higher order derivatives can be similarly defined, leading to Newton-like methods. Typically, gradient descent is preferred, even if it requires a large number of iterations, because it can be hard to compute the second-order derivative (that is, the Hessian) of the objective functional formula_15. If the shape optimization problem has constraints, that is, the functional formula_16 is present, one has to find ways to convert the constrained problem into an unconstrained one. Sometimes ideas based on Lagrange multipliers, like the adjoint state method, can work. Geometry parametrization. Shape optimization can be tackled using standard optimization methods if a parametrization of the geometry is defined. Such parametrization is very important in the CAE field, where goal functions are usually complex functions evaluated using numerical models (CFD, FEA...). A convenient approach, suitable for a wide class of problems, consists in the parametrization of the CAD model coupled with full automation of the entire process required for function evaluation (meshing, solving and result processing). Mesh morphing is a valid choice for complex problems that resolves typical issues associated with re-meshing, such as discontinuities in the computed objective and constraint functions. In this case the parametrization is defined after the meshing stage, acting directly on the numerical model used for calculation, which is changed using mesh updating methods. There are several algorithms available for mesh morphing (deforming volumes, pseudosolids, radial basis functions). The selection of the parametrization approach depends mainly on the size of the problem: the CAD approach is preferred for small-to-medium sized models whilst the mesh morphing approach is the best (and sometimes the only feasible one) for large and very large models. The multi-objective Pareto optimization (NSGA II) can be utilized as a powerful approach for shape optimization. In this regard, the Pareto approach has useful advantages for design, such as accounting for the effect of an area constraint, which other multi-objective formulations cannot express. The approach of using a penalty function is an effective technique which can be used in the first stage of optimization. In this method the constrained shape design problem is converted into an unconstrained problem by incorporating the constraints into the objective function as a penalty term. Most of the time the penalty factor depends on the amount of constraint violation rather than the number of constraints. 
A real-coded genetic algorithm (GA) can then be applied to the resulting unconstrained problem, so that the calculations are based on the real values of the design variables. References. <templatestyles src="Reflist/styles.css" />
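As a toy illustration of the parametrized, penalty-based approach described above, the following Python sketch is a hypothetical example (not taken from the cited works): a two-parameter "shape", a closed cylinder of radius r and height h, whose surface area is minimized while a unit-volume constraint is folded into the objective as a quadratic penalty, using plain gradient descent with finite-difference gradients.

```python
# Toy penalty-function shape optimization: minimize cylinder surface area
# with the volume constrained (approximately) to 1 by a quadratic penalty.
from math import pi

def area(r, h):
    return 2 * pi * r * r + 2 * pi * r * h

def volume(r, h):
    return pi * r * r * h

def objective(r, h, mu=100.0):
    # penalized (unconstrained) objective
    return area(r, h) + mu * (volume(r, h) - 1.0) ** 2

def grad(f, x, eps=1e-6):
    # central finite-difference gradient
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(*xp) - f(*xm)) / (2 * eps))
    return g

x = [0.6, 1.0]                       # initial guess for (r, h)
step = 2e-4
for _ in range(50000):
    x = [xi - step * gi for xi, gi in zip(x, grad(objective, x))]

r, h = x
# At the optimum of the true constrained problem h = 2r; the penalty
# enforces the volume constraint only approximately.
print(r, h, volume(r, h))
```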
[ { "math_id": 0, "text": "\\Omega" }, { "math_id": 1, "text": "\\mathcal{F}(\\Omega)" }, { "math_id": 2, "text": "\\mathcal{G}(\\Omega)=0." }, { "math_id": 3, "text": "\\Omega_0" }, { "math_id": 4, "text": "\\Omega_t" }, { "math_id": 5, "text": "f_t:\\Omega_0\\to \\Omega_t, \\mbox{ for } 0\\le t\\le t_0." }, { "math_id": 6, "text": "V" }, { "math_id": 7, "text": "T_s" }, { "math_id": 8, "text": "x(0) = x_0 \\in \\Omega_0, \\quad x'(s) = V(x(s)), \\quad T_s(x_0) = x(s), \\quad s \\geq 0 " }, { "math_id": 9, "text": "\\Omega_0 \\mapsto T_s(\\Omega_0) = \\Omega_s." }, { "math_id": 10, "text": "d\\mathcal{F}(\\Omega_0;V) = \\lim_{s \\to 0}\\frac{\\mathcal{F}(\\Omega_s) - \\mathcal{F}(\\Omega_0)}{s}" }, { "math_id": 11, "text": "\\nabla \\mathcal{F} \\in L^2(\\partial \\Omega_0)" }, { "math_id": 12, "text": "d\\mathcal{F}(\\Omega_0;V) = \\langle \\nabla \\mathcal{F}, V \\rangle_{\\partial \\Omega_0}" }, { "math_id": 13, "text": "\\nabla \\mathcal{F}" }, { "math_id": 14, "text": "\\partial \\Omega" }, { "math_id": 15, "text": "\\mathcal{F}" }, { "math_id": 16, "text": "\\mathcal{G}" } ]
https://en.wikipedia.org/wiki?curid=1271415
12718563
Gravity turn
Spacecraft launch or descent maneuver A gravity turn or zero-lift turn is a maneuver used in launching a spacecraft into, or descending from, an orbit around a celestial body such as a planet or a moon. It is a trajectory optimization that uses gravity to steer the vehicle onto its desired trajectory. It offers two main advantages over a trajectory controlled solely through the vehicle's own thrust. First, the thrust is not used to change the spacecraft's direction, so more of it is used to accelerate the vehicle into orbit. Second, and more importantly, during the initial ascent phase the vehicle can maintain low or even zero angle of attack. This minimizes transverse aerodynamic stress on the launch vehicle, allowing for a lighter launch vehicle. The term gravity turn can also refer to the use of a planet's gravity to change a spacecraft's direction in situations other than entering or leaving the orbit. When used in this context, it is similar to a gravitational slingshot; the difference is that a gravitational slingshot often increases or decreases spacecraft velocity and changes direction, while the gravity turn only changes direction. Launch procedure. Vertical climb. A gravity turn is commonly used with rocket powered vehicles that launch vertically, like the Space Shuttle. The rocket begins by flying straight up, gaining both vertical speed and altitude. During this portion of the launch, gravity acts directly against the thrust of the rocket, lowering its vertical acceleration. Losses associated with this slowing are known as gravity drag, and can be minimized by executing the next phase of the launch, the pitchover maneuver or roll program, as soon as possible. The pitchover should also be carried out while the vertical velocity is small to avoid large aerodynamic loads on the vehicle during the maneuver. The pitchover maneuver consists of the rocket gimbaling its engine slightly to direct some of its thrust to one side. This force creates a net torque on the ship, turning it so that it no longer points vertically. The pitchover angle varies with the launch vehicle and is included in the rocket's inertial guidance system. For some vehicles it is only a few degrees, while other vehicles use relatively large angles (a few tens of degrees). After the pitchover is complete, the engines are reset to point straight down the axis of the rocket again. This small steering maneuver is the only time during an ideal gravity turn ascent that thrust must be used for purposes of steering. The pitchover maneuver serves two purposes. First, it turns the rocket slightly so that its flight path is no longer vertical, and second, it places the rocket on the correct heading for its ascent to orbit. After the pitchover, the rocket's angle of attack is adjusted to zero for the remainder of its climb to orbit. This zeroing of the angle of attack reduces lateral aerodynamic loads and produces negligible lift force during the ascent. Downrange acceleration. After the pitchover, the rocket's flight path is no longer completely vertical, so gravity acts to turn the flight path back towards the ground. If the rocket were not producing thrust, the flight path would be a simple ellipse like a thrown ball (it is a common mistake to think it is a parabola: this is only true if it is assumed that the Earth is flat, and gravity always points in the same direction, which is a good approximation for short distances), leveling off and then falling back to the ground. 
The rocket is producing thrust though, and rather than leveling off and then descending again, by the time the rocket levels off, it has gained sufficient altitude and velocity to place it in a stable orbit. If the rocket is a multi-stage system where stages fire sequentially, the rocket's ascent burn may not be continuous. Some time must be allowed for stage separation and engine ignition between each successive stage, but some rocket designs call for extra free-flight time between stages. This is particularly useful in very high thrust rockets, where if the engines were fired continuously, the rocket would run out of fuel before leveling off and reaching a stable orbit above the atmosphere. The technique is also useful when launching from a planet with a thick atmosphere, such as the Earth. Because gravity turns the flight path during free flight, the rocket can use a smaller initial pitchover angle, giving it higher vertical velocity, and taking it out of the atmosphere more quickly. This reduces both aerodynamic drag as well as aerodynamic stress during launch. Then later during the flight the rocket coasts between stage firings, allowing it to level off above the atmosphere, so when the engine fires again, at zero angle of attack, the thrust accelerates the ship horizontally, inserting it into orbit. Descent and landing procedure. Because heat shields and parachutes cannot be used to land on an airless body such as the Moon, a powered descent with a gravity turn is a good alternative. The Apollo Lunar Module used a slightly modified gravity turn to land from lunar orbit. This was essentially a launch in reverse except that a landing spacecraft is lightest at the surface while a spacecraft being launched is heaviest at the surface. A computer program called Lander that simulated gravity turn landings applied this concept by simulating a gravity turn launch with a negative mass flow rate, i.e. the propellant tanks filled during the rocket burn. The idea of using a gravity turn maneuver to land a vehicle was originally developed for the Lunar Surveyor landings, although Surveyor made a direct approach to the surface without first going into lunar orbit. Deorbit and entry. The vehicle begins by orienting for a retrograde burn to reduce its orbital velocity, lowering its point of periapsis to near the surface of the body to be landed on. If the craft is landing on a planet with an atmosphere such as Mars the deorbit burn will only lower periapsis into the upper layers of the atmosphere, rather than just above the surface as on an airless body. After the deorbit burn is complete the vehicle can either coast until it is nearer to its landing site or continue firing its engine while maintaining zero angle of attack. For a planet with an atmosphere the coast portion of the trip includes entry through the atmosphere as well. After the coast and possible entry, the vehicle jettisons any no longer necessary heat shields and/or parachutes in preparation for the final landing burn. If the atmosphere is thick enough it can be used to slow the vehicle a considerable amount, thus saving on fuel. In this case a gravity turn is not the optimal entry trajectory but it does allow for approximation of the true delta-v required. In the case where there is no atmosphere however, the landing vehicle must provide the full delta-v necessary to land safely on the surface. Landing. 
If it is not already properly oriented, the vehicle lines up its engines to fire directly opposite its current surface velocity vector, which at this point is either parallel to the ground or only slightly vertical, as shown to the left. The vehicle then fires its landing engine to slow down for landing. As the vehicle loses horizontal velocity the gravity of the body to be landed on will begin pulling the trajectory closer and closer to a vertical descent. In an ideal maneuver on a perfectly spherical body the vehicle could reach zero horizontal velocity, zero vertical velocity, and zero altitude all at the same moment, landing safely on the surface (if the body is not rotating; else the horizontal velocity shall be made equal to the one of the body at the considered latitude). However, due to rocks and uneven surface terrain the vehicle usually picks up a few degrees of angle of attack near the end of the maneuver to zero its horizontal velocity just above the surface. This process is the mirror image of the pitch over maneuver used in the launch procedure and allows the vehicle to hover straight down, landing gently on the surface. Guidance and control. The steering of a rocket's course during its flight is divided into two separate components; control, the ability to point the rocket in a desired direction, and guidance, the determination of what direction a rocket should be pointed to reach a given target. The desired target can either be a location on the ground, as in the case of a ballistic missile, or a particular orbit, as in the case of a launch vehicle. Launch. The gravity turn trajectory is most commonly used during early ascent. The guidance program is a precalculated lookup table of pitch vs time. Control is done with engine gimballing and/or aerodynamic control surfaces. The pitch program maintains a zero angle of attack (the definition of a gravity turn) until the vacuum of space is reached, thus minimizing lateral aerodynamic loads on the vehicle. (Excessive aerodynamic loads can quickly destroy the vehicle.) Although the preprogrammed pitch schedule is adequate for some applications, an adaptive inertial guidance system that determines location, orientation and velocity with accelerometers and gyroscopes, is almost always employed on modern rockets. The British satellite launcher Black Arrow was an example of a rocket that flew a preprogrammed pitch schedule, making no attempt to correct for errors in its trajectory, while the Apollo-Saturn rockets used "closed loop" inertial guidance after the gravity turn through the atmosphere. The initial pitch program is an open-loop system subject to errors from winds, thrust variations, etc. To maintain zero angle of attack during atmospheric flight, these errors are not corrected until reaching space. Then a more sophisticated program can take over to correct trajectory deviations and attain the desired orbit. In the Apollo missions, the transition to closed-loop guidance took place early in second stage flight after maintaining a fixed inertial attitude while jettisoning the first stage and interstage ring. Because the upper stages of a rocket operate in a near vacuum, fins are ineffective. Steering relies entirely on engine gimballing and a reaction control system. Landing. To serve as an example of how the gravity turn can be used for a powered landing, an Apollo type lander on an airless body will be assumed. The lander begins in a circular orbit docked to the command module. 
After separation from the command module the lander performs a retrograde burn to lower its periapsis to just above the surface. It then coasts to periapsis where the engine is restarted to perform the gravity turn descent. It has been shown that in this situation guidance can be achieved by maintaining a constant angle between the thrust vector and the line of sight to the orbiting command module. This simple guidance algorithm builds on a previous study which investigated the use of various visual guidance cues including the uprange horizon, the downrange horizon, the desired landing site, and the orbiting command module. The study concluded that using the command module provides the best visual reference, as it maintains a near constant visual separation from an ideal gravity turn until the landing is almost complete. Because the vehicle is landing in a vacuum, aerodynamic control surfaces are useless. Therefore, a system such as a gimballing main engine, a reaction control system, or possibly a control moment gyroscope must be used for attitude control. Limitations. Although gravity turn trajectories use minimal steering thrust they are not always the most efficient possible launch or landing procedure. Several things can affect the gravity turn procedure making it less efficient or even impossible due to the design limitations of the launch vehicle. A brief summary of factors affecting the turn is given below. Use in orbital redirection. For spacecraft missions where large changes in the direction of flight are necessary, direct propulsion by the spacecraft may not be feasible due to the large delta-v requirement. In these cases it may be possible to perform a flyby of a nearby planet or moon, using its gravitational attraction to alter the ship's direction of flight. Although this maneuver is very similar to the gravitational slingshot it differs in that a slingshot often implies a change in both speed and direction whereas the gravity turn only changes the direction of flight. A variant of this maneuver, the free return trajectory allows the spacecraft to depart from a planet, circle another planet once, and return to the starting planet using propulsion only during the initial departure burn. Although in theory it is possible to execute a perfect free return trajectory, in practice small correction burns are often necessary during the flight. Even though it does not require a burn for the return trip, other return trajectory types, such as an aerodynamic turn, can result in a lower total delta-v for the mission. Use in spaceflight. Many spaceflight missions have utilized the gravity turn, either directly or in a modified form, to carry out their missions. What follows is a short list of various mission that have used this procedure. Mathematical description. The simplest case of the gravity turn trajectory is that which describes a point mass vehicle, in a uniform gravitational field, neglecting air resistance. The thrust force formula_0 is a vector whose magnitude is a function of time and whose direction can be varied at will. Under these assumptions the differential equation of motion is given by: formula_1 Here formula_2 is a unit vector in the vertical direction and formula_3 is the instantaneous vehicle mass. 
By constraining the thrust vector to point parallel to the velocity and separating the equation of motion into components parallel to formula_4 and those perpendicular to formula_4, we arrive at the following system: formula_5 Here the current thrust-to-weight ratio has been denoted by formula_6 and the current angle between the velocity vector and the vertical by formula_7. This results in a coupled system of equations which can be integrated to obtain the trajectory. However, for all but the simplest case of constant formula_8 over the entire flight, the equations cannot be solved analytically and must be integrated numerically.
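As a rough illustration of that last point, a minimal numerical sketch in Python is given below; it integrates the two equations above with simple Euler steps for a constant formula_6. The chosen values (a thrust-to-weight ratio of 2, an initial speed of 50 m/s and an initial pitch-over of one degree) are purely illustrative assumptions rather than data for any particular vehicle, and the downrange and altitude rates follow from measuring formula_7 off the vertical.

import math

def gravity_turn(n=2.0, g=9.81, v0=50.0, beta0=math.radians(1.0), dt=0.01, t_end=60.0):
    """Integrate the planar gravity-turn equations for an ascending vehicle (illustrative values)."""
    v, beta, x, h, t = v0, beta0, 0.0, 0.0, 0.0
    states = []
    while t < t_end:
        dv = g * (n - math.cos(beta))        # along-track: v' = g(n - cos beta)
        dbeta = g * math.sin(beta) / v       # turning:     v beta' = g sin beta
        x += v * math.sin(beta) * dt         # downrange distance (beta measured from vertical)
        h += v * math.cos(beta) * dt         # altitude
        v += dv * dt
        beta += dbeta * dt
        t += dt
        states.append((t, v, math.degrees(beta), x, h))
    return states

if __name__ == "__main__":
    for t, v, b, x, h in gravity_turn()[::1000]:   # print roughly every 10 s
        print(f"t={t:5.1f} s  v={v:7.1f} m/s  beta={b:5.1f} deg  x={x/1000:6.2f} km  h={h/1000:6.2f} km")

Replacing the Euler steps with a higher-order integrator changes only the accuracy of the sketch, not its structure; the essential point is that for a time-varying formula_6 no closed-form solution is available.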
[ { "math_id": 0, "text": "\\vec{F}" }, { "math_id": 1, "text": "m \\frac{d \\vec{v}}{dt} = \\vec{F} - mg \\hat{k}\\;." }, { "math_id": 2, "text": "\\hat{k}" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "\\vec{v}" }, { "math_id": 5, "text": "\\begin{align}\n\\dot{v} &= g(n - \\cos{\\beta}) \\;,\\\\\nv \\dot{\\beta} &= g \\sin{\\beta}\\;. \\\\\n\\end{align}" }, { "math_id": 6, "text": "n = F/mg" }, { "math_id": 7, "text": "\\beta = \\arccos{(\\vec{\\tau_1} \\cdot \\hat{k})}" }, { "math_id": 8, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=12718563
1271901
Projective line over a ring
Projective construction in ring theory In mathematics, the projective line over a ring is an extension of the concept of projective line over a field. Given a ring "A" (with 1), the projective line P1("A") over "A" consists of points identified by projective coordinates. Let "A"× be the group of units of "A"; pairs ("a", "b") and ("c", "d") from "A" × "A" are related when there is a "u" in "A"× such that "ua" = "c" and "ub" = "d". This relation is an equivalence relation. A typical equivalence class is written "U"["a", "b"]. P1("A") = {"U"["a", "b"] | "aA" + "bA" = "A" }, that is, "U"["a", "b"] is in the projective line if the one-sided ideal generated by "a" and "b" is all of "A". The projective line P1("A") is equipped with a group of homographies. The homographies are expressed through use of the matrix ring over "A" and its group of units "V" as follows: If "c" is in Z("A"×), the center of "A"×, then the group action of matrix formula_0 on P1("A") is the same as the action of the identity matrix. Such matrices represent a normal subgroup "N" of "V". The homographies of P1("A") correspond to elements of the quotient group "V" / "N". P1("A") is considered an extension of the ring "A" since it contains a copy of "A" due to the embedding "E" : "a" → "U"["a", 1]. The multiplicative inverse mapping "u" → 1/"u", ordinarily restricted to "A"×, is expressed by a homography on P1("A"): formula_1 Furthermore, for "u","v" ∈ "A"×, the mapping "a" → "uav" can be extended to a homography: formula_2 formula_3 Since "u" is arbitrary, it may be substituted for "u"−1. Homographies on P1("A") are called linear-fractional transformations since formula_4 Instances. Rings that are fields are most familiar: The projective line over GF(2) has three elements: "U"[0, 1], "U"[1, 0], and "U"[1, 1]. Its homography group is the permutation group on these three. The ring Z / 3Z, or GF(3), has the elements 1, 0, and −1; its projective line has the four elements "U"[1, 0], "U"[1, 1], "U"[0, 1], "U"[1, −1] since both 1 and −1 are units. The homography group on this projective line has 12 elements, also described with matrices or as permutations. For a finite field GF("q"), the projective line is the Galois geometry PG(1, "q"). J. W. P. Hirschfeld has described the harmonic tetrads in the projective lines for "q" = 4, 5, 7, 8, 9. Over discrete rings. Consider P1(Z / "n"Z) when "n" is a composite number. If "p" and "q" are distinct primes dividing "n", then ⟨"p"⟩ and ⟨"q"⟩ are maximal ideals in Z / "n"Z and by Bézout's identity there are "a" and "b" in Z such that "ap" + "bq" = "1", so that "U"["p", "q"] is in P1(Z / "n"Z) but it is not an image of an element under the canonical embedding. The whole of P1(Z / "n"Z) is filled out by elements "U"["up", "vq"], where "u" ≠ "v" and "u", "v" ∈ "A"×, "A"× being the units of Z / "n"Z. The instances Z / "n"Z are given here for "n" = 6, 10, and 12, where according to modular arithmetic the group of units of the ring is (Z / 6Z)× = {1, 5}, (Z / 10Z)× = {1, 3, 7, 9}, and (Z / 12Z)× = {1, 5, 7, 11} respectively. Modular arithmetic will confirm that, in each table, a given letter represents multiple points. In these tables a point "U"["m", "n"] is labeled by "m" in the row at the table bottom and "n" in the column at the left of the table. For instance, the point at infinity A = "U"["v", 0], where "v" is a unit of the ring. The extra points can be associated with Q ⊂ R ⊂ C, the rationals in the extended complex upper-half plane.
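The finite instances above are easy to generate by machine. The following short Python sketch enumerates representatives of the points of P1(Z/"n"Z) directly from the definition, using the fact that for "A" = Z/"n"Z the condition "aA" + "bA" = "A" amounts to gcd("a", "b", "n") = 1.

from math import gcd

def projective_line(n):
    """Return one representative pair (a, b) for each point U[a, b] of P1(Z/nZ)."""
    units = [u for u in range(n) if gcd(u, n) == 1]
    seen, points = set(), []
    for a in range(n):
        for b in range(n):
            if gcd(gcd(a, b), n) != 1:
                continue                      # (a, b) does not generate all of Z/nZ
            orbit = frozenset(((u * a) % n, (u * b) % n) for u in units)
            if orbit not in seen:             # new equivalence class U[a, b]
                seen.add(orbit)
                points.append((a, b))
    return points

for n in (2, 3, 6):
    pts = projective_line(n)
    print(f"P1(Z/{n}Z): {len(pts)} points", pts)

For "n" = 2 and "n" = 3 this reproduces the three and four points listed above; for the composite modulus "n" = 6 it returns twelve points, among them "U"[2, 3], which is not in the image of the canonical embedding.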
The group of homographies on P1(Z / "n"Z) is called a principal congruence subgroup. For the rational numbers Q, homogeneity of coordinates means that every element of P1(Q) may be represented by an element of P1(Z). Similarly, a homography of P1(Q) corresponds to an element of the modular group, the automorphisms of P1(Z). Over continuous rings. The projective line over a division ring results in a single auxiliary point ∞ = "U"[1, 0]. Examples include the real projective line, the complex projective line, and the projective line over quaternions. These examples of topological rings have the projective line as their one-point compactifications. The case of the complex number field C has the Möbius group as its homography group. The projective line over the dual numbers was described by Josef Grünwald in 1906. This ring includes a nonzero nilpotent "n" satisfying "nn" = 0. The plane { "z" = "x" + "yn" | "x", "y" ∈ R} of dual numbers has a projective line including a line of points "U"[1, "xn"], "x" ∈ R. Isaak Yaglom has described it as an "inversive Galilean plane" that has the topology of a cylinder when the supplementary line is included. Similarly, if "A" is a local ring, then P1("A") is formed by adjoining points corresponding to the elements of the maximal ideal of "A". The projective line over the ring "M" of split-complex numbers introduces auxiliary lines {"U"[1, "x"(1 + j)] | "x" ∈ R} and {"U"[1, "x"(1 − j)] | "x" ∈ R}. Using stereographic projection the plane of split-complex numbers is closed up with these lines to a hyperboloid of one sheet. The projective line over "M" may be called the Minkowski plane when characterized by behaviour of hyperbolas under homographic mapping. Modules. The projective line P1("A") over a ring "A" can also be identified as the space of projective modules in the module "A" ⊕ "A". An element of P1("A") is then a direct summand of "A" ⊕ "A". This more abstract approach follows the view of projective geometry as the geometry of subspaces of a vector space, sometimes associated with the lattice theory of Garrett Birkhoff or the book "Linear Algebra and Projective Geometry" by Reinhold Baer. In the case of the ring of rational integers Z, the module summand definition of P1(Z) narrows attention to the "U"["m", "n"], "m" coprime to "n", and sheds the embeddings that are a principal feature of P1("A") when "A" is topological. The 1981 article by W. Benz, Hans-Joachim Samaga, and Helmut Scheaffer mentions the direct summand definition. In an article "Projective representations: projective lines over rings" the group of units of a matrix ring M2("R") and the concepts of module and bimodule are used to define a projective line over a ring. The group of units is denoted by GL(2, "R"), adopting notation from the general linear group, where "R" is usually taken to be a field. The projective line is the set of orbits under GL(2, "R") of the free cyclic submodule "R"(1, 0) of "R" × "R". Extending the commutative theory of Benz, the existence of a right or left multiplicative inverse of a ring element is related to P1("R") and GL(2, "R"). The Dedekind-finite property is characterized. Most significantly, representation of P1("R") in a projective space over a division ring "K" is accomplished with a ("K", "R")-bimodule "U" that is a left "K"-vector space and a right "R"-module. The points of P1("R") are subspaces of P1("K", "U" × "U") isomorphic to their complements. Cross-ratio.
A homography "h" that takes three particular ring elements "a", "b", "c" to the projective line points "U"[0, 1], "U"[1, 1], "U"[1, 0] is called the cross-ratio homography. Sometimes the cross-ratio is taken as the value of "h" on a fourth point "x" : ("x", "a", "b", "c") = "h"("x"). To build "h" from "a", "b", "c" the generator homographies formula_5 are used, with attention to fixed points: +1 and −1 are fixed under inversion, "U"[1, 0] is fixed under translation, and the "rotation" with "u" leaves "U"[0, 1] and "U"[1, 0] fixed. The instructions are to place "c" first, then bring "a" to "U"[0, 1] with translation, and finally to use rotation to move "b" to "U"[1, 1]. Lemma: If "A" is a commutative ring and "b" − "a", "c" − "b", "c" − "a" are all units, then ("b" − "c")−1 + ("c" − "a")−1 is a unit. Proof: Evidently formula_6 is a unit, as required. Theorem: If ("b" − "c")−1 + ("c" − "a")−1 is a unit, then there is a homography "h" in G("A") such that "h"("a") = "U"[0, 1], "h"("b") = "U"[1, 1], and "h"("c") = "U"[1, 0]. Proof: The point "p" = ("b" − "c")−1 + ("c" − "a")−1 is the image of "b" after "a" was put to 0 and then inverted to "U"[1, 0], and the image of "c" is brought to "U"[0, 1]. As "p" is a unit, its inverse used in a rotation will move "p" to "U"[1, 1], resulting in "a", "b", "c" being all properly placed. The lemma refers to sufficient conditions for the existence of "h". One application of cross ratio defines the projective harmonic conjugate of a triple "a", "b", "c", as the element "x" satisfying ("x", "a", "b", "c") = −1. Such a quadruple is a harmonic tetrad. Harmonic tetrads on the projective line over a finite field GF("q") were used in 1954 to delimit the projective linear groups PGL(2, "q") for "q" = 5, 7, and 9, and demonstrate accidental isomorphisms. Chains. The real line in the complex plane gets permuted with circles and other real lines under Möbius transformations, which actually permute the canonical embedding of the real projective line in the complex projective line. Suppose "A" is an algebra over a field "F", generalizing the case where "F" is the real number field and "A" is the field of complex numbers. The canonical embedding of P1("F") into P1("A") is formula_7 A chain is the image of P1("F") under a homography on P1("A"). Four points lie on a chain if and only if their cross-ratio is in "F". Karl von Staudt exploited this property in his theory of "real strokes" [reeler Zug]. Point-parallelism. Two points of P1("A") are parallel if there is "no" chain connecting them. The convention has been adopted that points are parallel to themselves. This relation is invariant under the action of a homography on the projective line. Given three pair-wise non-parallel points, there is a unique chain that connects the three. History. August Ferdinand Möbius investigated the Möbius transformations between his book "Barycentric Calculus" (1827) and his 1855 paper "Theorie der Kreisverwandtschaft in rein geometrischer Darstellung". Karl Wilhelm Feuerbach and Julius Plücker are also credited with originating the use of homogeneous coordinates. Eduard Study in 1898, and Élie Cartan in 1908, wrote articles on hypercomplex numbers for German and French "Encyclopedias of Mathematics", respectively, where they use these arithmetics with linear fractional transformations in imitation of those of Möbius. In 1902 Theodore Vahlen contributed a short but well-referenced paper exploring some linear fractional transformations of a Clifford algebra. 
The ring of dual numbers "D" gave Josef Grünwald the opportunity to exhibit P1("D") in 1906. Corrado Segre (1912) continued the development with that ring. Arthur Conway, one of the early adopters of relativity via biquaternion transformations, considered the quaternion-multiplicative-inverse transformation in his 1911 relativity study. In 1947 some elements of inversive quaternion geometry were described by P.G. Gormley in Ireland. In 1968 Isaak Yaglom's "Complex Numbers in Geometry" appeared in English, translated from Russian. There he uses P1("D") to describe line geometry in the Euclidean plane and P1("M") to describe it for Lobachevski's plane. Yaglom's text "A Simple Non-Euclidean Geometry" appeared in English in 1979. There, on pages 174 to 200, he develops "Minkowskian geometry" and describes P1("M") as the "inversive Minkowski plane". The Russian original of Yaglom's text was published in 1969. Between the two editions, Walter Benz (1973) published his book, which included the homogeneous coordinates taken from "M".
[ { "math_id": 0, "text": "\\left(\\begin{smallmatrix}c & 0 \\\\ 0 & c \\end{smallmatrix}\\right)" }, { "math_id": 1, "text": "U[a,1]\\begin{pmatrix}0&1\\\\1&0\\end{pmatrix} = U[1, a] \\thicksim U[a^{-1}, 1]." }, { "math_id": 2, "text": "\\begin{pmatrix}u & 0 \\\\0 & 1 \\end{pmatrix}\\begin{pmatrix}0 & 1 \\\\ 1 & 0 \\end{pmatrix}\\begin{pmatrix} v & 0 \\\\ 0 & 1 \\end{pmatrix}\\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix} = \\begin{pmatrix} u & 0 \\\\ 0 & v \\end{pmatrix}. " }, { "math_id": 3, "text": "U[a,1]\\begin{pmatrix}v&0\\\\0&u\\end{pmatrix} = U[av,u] \\thicksim U[u^{-1}av,1]." }, { "math_id": 4, "text": "U[z,1] \\begin{pmatrix}a&c\\\\b&d\\end{pmatrix} = U[za+b,zc+d] \\thicksim U[(zc+d)^{-1}(za+b),1]." }, { "math_id": 5, "text": "\\begin{pmatrix}0 & 1\\\\1 & 0 \\end{pmatrix}, \\begin{pmatrix}1 & 0\\\\t & 1 \\end{pmatrix}, \\begin{pmatrix}u & 0\\\\0 & 1 \\end{pmatrix}" }, { "math_id": 6, "text": "\\frac{b-a}{(b-c)(c-a)} = \\frac{(b-c)+(c-a)}{(b-c)(c-a)}" }, { "math_id": 7, "text": "U_F[x, 1] \\mapsto U_A[x, 1] , \\quad U_F[1, 0] \\mapsto U_A[1, 0]." } ]
https://en.wikipedia.org/wiki?curid=1271901
12720370
Normalisation by evaluation
In programming language semantics, normalisation by evaluation (NBE) is a method of obtaining the normal form of terms in the λ-calculus by appealing to their denotational semantics. A term is first "interpreted" into a denotational model of the λ-term structure, and then a canonical (β-normal and η-long) representative is extracted by "reifying" the denotation. Such an essentially semantic, reduction-free, approach differs from the more traditional syntactic, reduction-based, description of normalisation as reductions in a term rewrite system where β-reductions are allowed deep inside λ-terms. NBE was first described for the simply typed lambda calculus. It has since been extended both to weaker type systems such as the untyped lambda calculus using a domain theoretic approach, and to richer type systems such as several variants of Martin-Löf type theory. Outline. Consider the simply typed lambda calculus, where types τ can be basic types (α), function types (→), or products (×), given by the following Backus–Naur form grammar (→ associating to the right, as usual): (Types) τ ::= α | τ1 → τ2 | τ1 × τ2 These can be implemented as a datatype in the meta-language; for example, for Standard ML, we might use: datatype ty = Basic of string | Arrow of ty * ty | Prod of ty * ty Terms are defined at two levels. The lower "syntactic" level (sometimes called the "dynamic" level) is the representation that one intends to normalise. (Syntax Terms) "s","t",… ::= var "x" | lam ("x", "t") | app ("s", "t") | pair ("s", "t") | fst "t" | snd "t" Here lam/app (resp. pair/fst,snd) are the intro/elim forms for → (resp. ×), and "x" are variables. These terms are intended to be implemented as a first-order datatype in the meta-language: datatype tm = var of string | lam of string * tm | app of tm * tm | pair of tm * tm | fst of tm | snd of tm The denotational semantics of (closed) terms in the meta-language interprets the constructs of the syntax in terms of features of the meta-language; thus, lam is interpreted as abstraction, app as application, etc. The semantic objects constructed are as follows: (Semantic Terms) "S","T",… ::= LAM (λ"x". "S" "x") | PAIR ("S", "T") | SYN "t" Note that there are no variables or elimination forms in the semantics; they are represented simply as syntax. These semantic objects are represented by the following datatype: datatype sem = LAM of (sem -> sem) | PAIR of sem * sem | SYN of tm There is a pair of type-indexed functions that move back and forth between the syntactic and semantic layer. The first function, usually written ↑τ, "reflects" the term syntax into the semantics, while the second "reifies" the semantics as a syntactic term (written as ↓τ).
Their definitions are mutually recursive as follows: formula_0 These definitions are easily implemented in the meta-language: (* fresh_var : unit -> string *) val variable_ctr = ref ~1 fun fresh_var () = (variable_ctr := 1 + !variable_ctr; "v" ^ Int.toString (!variable_ctr)) (* reflect : ty -> tm -> sem *) fun reflect (Arrow (a, b)) t = LAM (fn S => reflect b (app (t, (reify a S)))) | reflect (Prod (a, b)) t = PAIR (reflect a (fst t), reflect b (snd t)) | reflect (Basic _) t = SYN t (* reify : ty -> sem -> tm *) and reify (Arrow (a, b)) (LAM S) = let val x = fresh_var () in lam (x, reify b (S (reflect a (var x)))) end | reify (Prod (a, b)) (PAIR (S, T)) = pair (reify a S, reify b T) | reify (Basic _) (SYN t) = t By induction on the structure of types, it follows that if the semantic object "S" denotes a well-typed term "s" of type τ, then reifying the object (i.e., ↓τ S) produces the β-normal η-long form of "s". All that remains is, therefore, to construct the initial semantic interpretation "S" from a syntactic term "s". This operation, written ∥"s"∥Γ, where Γ is a context of bindings, proceeds by induction solely on the term structure: formula_1 In the implementation: datatype ctx = empty | add of ctx * (string * sem) (* lookup : ctx -> string -> sem *) fun lookup (add (remdr, (y, value))) x = if x = y then value else lookup remdr x (* meaning : ctx -> tm -> sem *) fun meaning G t = case t of var x => lookup G x | lam (x, s) => LAM (fn S => meaning (add (G, (x, S))) s) | app (s, t) => (case meaning G s of LAM S => S (meaning G t)) | pair (s, t) => PAIR (meaning G s, meaning G t) | fst s => (case meaning G s of PAIR (S, T) => S) | snd t => (case meaning G t of PAIR (S, T) => T) Note that there are many non-exhaustive cases; however, if applied to a "closed" well-typed term, none of these missing cases are ever encountered. The NBE operation on closed terms is then: (* nbe : ty -> tm -> tm *) fun nbe a t = reify a (meaning empty t) As an example of its use, consider the syntactic term codice_0 defined below: val K = lam ("x", lam ("y", var "x")) val S = lam ("x", lam ("y", lam ("z", app (app (var "x", var "z"), app (var "y", var "z"))))) val SKK = app (app (S, K), K) This is the well-known encoding of the identity function in combinatory logic. Normalising it at an identity type produces: - nbe (Arrow (Basic "a", Basic "a")) SKK; val it = lam ("v0",var "v0") : tm The result is actually in η-long form, as can be easily seen by normalizing it at a different identity type: - nbe (Arrow (Arrow (Basic "a", Basic "b"), Arrow (Basic "a", Basic "b"))) SKK; val it = lam ("v1",lam ("v2",app (var "v1",var "v2"))) : tm Variants. Using de Bruijn levels instead of names in the residual syntax makes codice_1 a pure function in that there is no need for codice_2. The datatype of residual terms can also be the datatype of residual terms "in normal form". The type of codice_1 (and therefore of codice_4) then makes it clear that the result is normalized. And if the datatype of normal forms is typed, the type of codice_1 (and therefore of codice_4) then makes it clear that normalization is type preserving. Normalization by evaluation also scales to the simply typed lambda calculus with sums (codice_7), using the delimited control operators codice_8 and codice_9.
[ { "math_id": 0, "text": "\n\\begin{align}\n \\uparrow_{\\alpha} t &= \\mathbf{SYN}\\ t \\\\\n \\uparrow_{\\tau_1 \\to \\tau_2} v &= \n \\mathbf{LAM} (\\lambda S.\\ \\uparrow_{\\tau_2} (\\mathbf{app}\\ (v, \\downarrow^{\\tau_1} S))) \\\\\n \\uparrow_{\\tau_1 \\times \\tau_2} v &=\n \\mathbf{PAIR} (\\uparrow_{\\tau_1} (\\mathbf{fst}\\ v), \\uparrow_{\\tau_2} (\\mathbf{snd}\\ v)) \\\\[1ex]\n \\downarrow^{\\alpha} (\\mathbf{SYN}\\ t) &= t \\\\\n \\downarrow^{\\tau_1 \\to \\tau_2} (\\mathbf{LAM}\\ S) &=\n \\mathbf{lam}\\ (x, \\downarrow^{\\tau_2} (S\\ (\\uparrow_{\\tau_1} (\\mathbf{var}\\ x)))) \n \\text{ where } x \\text{ is fresh} \\\\\n \\downarrow^{\\tau_1 \\times \\tau_2} (\\mathbf{PAIR}\\ (S, T)) &=\n \\mathbf{pair}\\ (\\downarrow^{\\tau_1} S, \\downarrow^{\\tau_2} T)\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\begin{align}\n \\| \\mathbf{var}\\ x \\|_\\Gamma &= \\Gamma(x) \\\\\n \\| \\mathbf{lam}\\ (x, s) \\|_\\Gamma &= \n \\mathbf{LAM}\\ (\\lambda S.\\ \\| s \\|_{\\Gamma, x \\mapsto S}) \\\\\n \\| \\mathbf{app}\\ (s, t) \\|_\\Gamma &=\n S\\ (\\|t\\|_\\Gamma) \\text{ where } \\|s\\|_\\Gamma = \\mathbf{LAM}\\ S \\\\\n \\| \\mathbf{pair}\\ (s, t) \\|_\\Gamma &=\n \\mathbf{PAIR}\\ (\\|s\\|_\\Gamma, \\|t\\|_\\Gamma) \\\\\n \\| \\mathbf{fst}\\ s \\|_\\Gamma &=\n S \\text{ where } \\|s\\|_\\Gamma = \\mathbf{PAIR}\\ (S, T) \\\\\n \\| \\mathbf{snd}\\ t \\|_\\Gamma &=\n T \\text{ where } \\|t\\|_\\Gamma = \\mathbf{PAIR}\\ (S, T)\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=12720370
12724582
Modulus and characteristic of convexity
In mathematics, the modulus of convexity and the characteristic of convexity are measures of "how convex" the unit ball in a Banach space is. In some sense, the modulus of convexity has the same relationship to the "ε"-"δ" definition of uniform convexity as the modulus of continuity does to the "ε"-"δ" definition of continuity. Definitions. The modulus of convexity of a Banach space ("X", ||⋅||) is the function "δ" : [0, 2] → [0, 1] defined by formula_0 where "S" denotes the unit sphere of ("X", || ||). In the definition of "δ"("ε"), one can just as well take the infimum over all vectors "x", "y" in "X" such that ǁ"x"ǁ, ǁ"y"ǁ ≤ 1 and ǁ"x" − "y"ǁ ≥ "ε". The characteristic of convexity of the space ("X", || ||) is the number "ε"0 defined by formula_1 These notions are implicit in the general study of uniform convexity by J. A. Clarkson (this is the same paper containing the statements of Clarkson's inequalities). The term "modulus of convexity" appears to be due to M. M. Day. formula_2 formula_3 Modulus of convexity of the "L""p" spaces. The modulus of convexity is known for the "L""p" spaces. If formula_4, then the modulus of convexity of "L""p" satisfies the following implicit equation: formula_5 Knowing that formula_6 one can suppose that formula_7. Substituting this into the above, and expanding the left-hand side as a Taylor series around formula_8, one can calculate the formula_9 coefficients: formula_10 For formula_11, one has the explicit expression formula_12 Therefore, formula_13.
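For 1 < "p" ≤ 2 and small "ε" the implicit equation above is easy to solve numerically. The following Python sketch (with purely illustrative values) recovers "δ""p"("ε") by bisection and compares it with the leading term of the series expansion; for "p" = 2 the result agrees with the explicit expression formula_12 evaluated at "p" = 2.

def delta_p(p, eps):
    """Solve (1 - d + eps/2)**p + (1 - d - eps/2)**p = 2 for d by bisection (1 < p <= 2, eps <= 1)."""
    f = lambda d: (1 - d + eps / 2) ** p + (1 - d - eps / 2) ** p - 2
    lo, hi = 0.0, 1 - eps / 2        # f decreases from f(0) >= 0 to f(hi) = eps**p - 2 < 0
    for _ in range(100):             # plain bisection, more than enough iterations
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for p in (1.5, 2.0):
    for eps in (0.1, 0.5, 1.0):
        print(f"p={p}, eps={eps}: delta={delta_p(p, eps):.6f}, "
              f"(p-1)*eps^2/8={(p - 1) * eps ** 2 / 8:.6f}")

# For p = 2 the same equation has the closed-form (Hilbert-space) solution:
print(1 - (1 - (1.0 / 2) ** 2) ** 0.5)   # delta_2(1.0) = 1 - sqrt(3)/2, about 0.1340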
[ { "math_id": 0, "text": "\\delta (\\varepsilon) = \\inf \\left\\{ 1 - \\left\\| \\frac{x + y}{2} \\right\\| \\,:\\, x, y \\in S, \\| x - y \\| \\geq \\varepsilon \\right\\}," }, { "math_id": 1, "text": "\\varepsilon_{0} = \\sup \\{ \\varepsilon \\,:\\, \\delta(\\varepsilon) = 0 \\}." }, { "math_id": 2, "text": "\\delta(\\varepsilon / 2) \\le \\delta_1(\\varepsilon) \\le \\delta(\\varepsilon), \\quad \\varepsilon \\in [0, 2]." }, { "math_id": 3, "text": "\\delta(\\varepsilon) \\ge c \\, \\varepsilon^q, \\quad \\varepsilon \\in [0, 2]." }, { "math_id": 4, "text": "1<p\\le2" }, { "math_id": 5, "text": "\\left(1-\\delta_p(\\varepsilon)+\\frac{\\varepsilon}{2}\\right)^p+\\left(1-\\delta_p(\\varepsilon)-\\frac{\\varepsilon}{2}\\right)^p=2.\n" }, { "math_id": 6, "text": "\\delta_p(\\varepsilon+)=0," }, { "math_id": 7, "text": "\\delta_p(\\varepsilon)=a_0\\varepsilon+a_1\\varepsilon^2+\\cdots" }, { "math_id": 8, "text": "\\varepsilon=0" }, { "math_id": 9, "text": "a_i" }, { "math_id": 10, "text": "\\delta_p(\\varepsilon)=\\frac{p-1}{8}\\varepsilon^2+\\frac{1}{384}(3-10p+9p^2-2p^3)\\varepsilon^4+\\cdots.\n" }, { "math_id": 11, "text": "2<p<\\infty" }, { "math_id": 12, "text": "\\delta_p(\\varepsilon)=1-\\left(1-\\left(\\frac{\\varepsilon}{2}\\right)^p\\right)^{\\frac1p}.\n" }, { "math_id": 13, "text": "\\delta_p(\\varepsilon)=\\frac{1}{p2^p}\\varepsilon^p+\\cdots" } ]
https://en.wikipedia.org/wiki?curid=12724582
12727
Original proof of Gödel's completeness theorem
The proof of Gödel's completeness theorem given by Kurt Gödel in his doctoral dissertation of 1929 (and a shorter version of the proof, published as an article in 1930, titled "The completeness of the axioms of the functional calculus of logic" (in German)) is not easy to read today; it uses concepts and formalisms that are no longer used and terminology that is often obscure. The version given below attempts to represent all the steps in the proof and all the important ideas faithfully, while restating the proof in the modern language of mathematical logic. This outline should not be considered a rigorous proof of the theorem. Assumptions. We work with first-order predicate calculus. Our languages allow constant, function and relation symbols. Structures consist of (non-empty) domains and interpretations of the relevant symbols as constant members, functions or relations over that domain. We assume classical logic (as opposed to intuitionistic logic for example). We fix some axiomatization (i.e. a syntax-based, machine-manageable proof system) of the predicate calculus: logical axioms and rules of inference. Any of the several well-known equivalent axiomatizations will do. Gödel's original proof assumed the Hilbert-Ackermann proof system. We assume without proof all the basic well-known results about our formalism that we need, such as the normal form theorem or the soundness theorem. We axiomatize predicate calculus "without equality" (sometimes confusingly called "without identity"), i.e. there are no special axioms expressing the properties of (object) equality as a special relation symbol. After the basic form of the theorem has been proved, it will be easy to extend it to the case of predicate calculus "with equality". Statement of the theorem and its proof. In the following, we state two equivalent forms of the theorem, and show their equivalence. Later, we prove the theorem. This is done in the following steps: ∀x1∀x2...∀xk ∃y1∃y2...∃ym φ(x1...xk, y1...ym) is either refutable (its negation is always true) or satisfiable, i.e. there is some model in which it holds (it might even be always true, i.e. a tautology); this model is simply assigning truth values to the subpropositions from which B is built. The reason for that is the completeness of propositional logic, with the existential quantifiers playing no role. Theorem 1. Every valid formula (true in all structures) is provable. This is the most basic form of the completeness theorem. We immediately restate it in a form more convenient for our purposes: When we say "all structures", it is important to specify that the structures involved are classical (Tarskian) interpretations I, where I = ⟨U, F⟩ (U is a non-empty (possibly infinite) set of objects, whereas F is a set of functions from expressions of the interpreted symbolism into U). [By contrast, so-called "free logics" allow possibly empty sets for U. For more regarding free logics, see the work of Karel Lambert.] Theorem 2. Every formula φ is either refutable or satisfiable in some structure. ""φ" is refutable" means, by definition, that ¬"φ" is provable. Equivalence of both theorems. If Theorem 1 holds, and φ is not satisfiable in any structure, then ¬φ is valid in all structures and therefore provable, thus φ is refutable and Theorem 2 holds. If on the other hand Theorem 2 holds and φ is valid in all structures, then ¬φ is not satisfiable in any structure and therefore refutable; then ¬¬φ is provable and then so is φ, thus Theorem 1 holds. Proof of theorem 2: first step.
We approach the proof of Theorem 2 by successively restricting the class of all formulas φ for which we need to prove "φ is either refutable or satisfiable". At the beginning we need to prove this for all possible formulas φ in our language. However, suppose that for every formula φ there is some formula ψ taken from a more restricted class of formulas C, such that "ψ is either refutable or satisfiable" → "φ is either refutable or satisfiable". Then, once this claim (expressed in the previous sentence) is proved, it will suffice to prove "φ is either refutable or satisfiable" only for φ's belonging to the class C. If φ is provably equivalent to ψ ("i.e.", ("φ" ≡ "ψ") is provable), then it is indeed the case that "ψ is either refutable or satisfiable" → ""φ" is either refutable or satisfiable" (the soundness theorem is needed to show this). There are standard techniques for rewriting an arbitrary formula into one that does not use function or constant symbols, at the cost of introducing additional quantifiers; we will therefore assume that all formulas are free of such symbols. Gödel's paper uses a version of first-order predicate calculus that has no function or constant symbols to begin with. Next we consider a generic formula "φ" (which no longer uses function or constant symbols) and apply the prenex form theorem to find a formula "ψ" in "normal form" such that "φ" ≡ "ψ" ("ψ" being in "normal form" means that all the quantifiers in "ψ", if there are any, are found at the very beginning of "ψ"). It follows now that we need only prove Theorem 2 for formulas "φ" in normal form. Next, we eliminate all free variables from "φ" by quantifying them existentially: if, say, "x"1..."x""n" are free in "φ", we form formula_0. If "ψ" is satisfiable in a structure "M", then certainly so is "φ", and if "ψ" is refutable, then formula_1 is provable, and then so is ¬"φ", thus "φ" is refutable. We see that we can restrict "φ" to be a "sentence", that is, a formula with no free variables. Finally, we would like, for reasons of technical convenience, that the "prefix" of "φ" (that is, the string of quantifiers at the beginning of "φ", which is in normal form) begin with a universal quantifier and end with an existential quantifier. To achieve this for a generic "φ" (subject to the restrictions we have already proved), we take some one-place relation symbol F unused in "φ", and two new variables y and z. If "φ" = (P)Φ, where (P) stands for the prefix of "φ" and Φ for the "matrix" (the remaining, quantifier-free part of "φ") we form formula_2. Since formula_3 is clearly provable, it is easy to see that formula_4 is provable. Reducing the theorem to formulas of degree 1. Our generic formula φ now is a sentence, in normal form, and its prefix starts with a universal quantifier and ends with an existential quantifier. Let us call the class of all such formulas R. We are faced with proving that every formula in R is either refutable or satisfiable. Given our formula φ, we group strings of quantifiers of one kind together in blocks: formula_5 We define the degree of formula_6 to be the number of universal quantifier blocks, separated by existential quantifier blocks as shown above, in the prefix of formula_6. The following lemma, which Gödel adapted from Skolem's proof of the Löwenheim–Skolem theorem, lets us sharply reduce the complexity of the generic formula formula_6 we need to prove the theorem for: Lemma. Let "k" ≥ 1.
If every formula in R of degree "k" is either refutable or satisfiable, then so is every formula in R of degree "k" + 1. Comment: Take a formula "φ" of degree "k" + 1 of the form formula_7, where formula_8 is the remainder of formula_6 (it is thus of degree "k" − 1). φ states that for every x there is a y such that... (something). It would have been nice to have a predicate "Q' " so that for every x, "Q"′("x","y") would be true if and only if "y" is the required one to make (something) true. Then we could have written a formula of degree "k", which is equivalent to φ, namely formula_9. This formula is indeed equivalent to φ because it states that for every x, if there is a y that satisfies Q'(x,y), then (something) holds, and furthermore, we know that there is such a y, because for every x', there is a y' that satisfies Q'(x',y'). Therefore φ follows from this formula. It is also easy to show that if the formula is false, then so is φ. Unfortunately, in general there is no such predicate Q'. However, this idea can be understood as a basis for the following proof of the Lemma. Proof. Let φ be a formula of degree "k" + 1; then we can write it as formula_7 where (P) is the remainder of the prefix of formula_6 (it is thus of degree "k" – 1) and formula_10 is the quantifier-free matrix of formula_6. x, y, u and v denote here "tuples" of variables rather than single variables; "e.g." formula_11 really stands for formula_12 where formula_13 are some distinct variables. Let now x' and y' be tuples of previously unused variables of the same length as x and y respectively, and let Q be a previously unused relation symbol that takes as many arguments as the sum of lengths of x and y; we consider the formula formula_14 Clearly, formula_15 is provable. Now since the string of quantifiers formula_16 does not contain variables from x or y, the following equivalence is easily provable with the help of whatever formalism we're using: formula_17 And since these two formulas are equivalent, if we replace the first with the second inside Φ, we obtain the formula Φ' such that Φ≡Φ': formula_18 Now Φ' has the form formula_19, where (S) and (S') are some quantifier strings, ρ and ρ' are quantifier-free, and, furthermore, no variable of (S) occurs in ρ' and no variable of (S') occurs in ρ. Under such conditions every formula of the form formula_20, where (T) is a string of quantifiers containing all quantifiers in (S) and (S') interleaved among themselves in any fashion, but maintaining the relative order inside (S) and (S'), will be equivalent to the original formula Φ'(this is yet another basic result in first-order predicate calculus that we rely on). To wit, we form Ψ as follows: formula_21 and we have formula_22. Now formula_23 is a formula of degree "k" and therefore by assumption either refutable or satisfiable. If formula_23 is satisfiable in a structure M, then, considering formula_24, we see that formula_6 is satisfiable as well. If formula_23 is refutable, then so is formula_25, which is equivalent to it; thus formula_26 is provable. Now we can replace all occurrences of Q inside the provable formula formula_26 by some other formula dependent on the same variables, and we will still get a provable formula. In this particular case, we replace Q(x',y') in formula_26 with the formula formula_27. Here (x,y | x',y') means that instead of ψ we are writing a different formula, in which x and y are replaced with x' and y'. Q(x,y) is simply replaced by formula_28. 
formula_26 then becomes formula_29 and this formula is provable; since the part under negation and after the formula_30 sign is obviously provable, and the part under negation and before the formula_30 sign is obviously φ, just with x and y replaced by x' and y', we see that formula_31 is provable, and φ is refutable. We have proved that φ is either satisfiable or refutable, and this concludes the proof of the Lemma. Notice that we could not have used formula_27 instead of Q(x',y') from the beginning, because formula_23 would not have been a well-formed formula in that case. This is why we cannot naively use the argument appearing at the comment that precedes the proof. Proving the theorem for formulas of degree 1. As shown by the Lemma above, we only need to prove our theorem for formulas φ in R of degree 1. φ cannot be of degree 0, since formulas in R have no free variables and don't use constant symbols. So the formula φ has the general form: formula_32 Now we define an ordering of the "k"-tuples of natural numbers as follows: formula_33 should hold if either formula_34, or formula_35, and formula_36 precedes formula_37 in lexicographic order. [Here formula_38 denotes the sum of the terms of the tuple.] Denote the nth tuple in this order by formula_39. Set the formula formula_40 as formula_41. Then put formula_42 as formula_43 Lemma: For every "n", formula_44. Proof: By induction on "n"; we have formula_45, where the latter implication holds by variable substitution, since the ordering of the tuples is such that formula_46. But the last formula is equivalent to formula_47φ. For the base case, formula_48 is obviously a corollary of φ as well. So the Lemma is proven. Now if formula_49 is refutable for some "n", it follows that φ is refutable. On the other hand, suppose that formula_49 is not refutable for any "n". Then for each "n" there is some way of assigning truth values to the distinct subpropositions formula_50 (ordered by their first appearance in formula_42; "distinct" here means either distinct predicates, or distinct bound variables) in formula_51, such that formula_49 will be true when each proposition is evaluated in this fashion. This follows from the completeness of the underlying propositional logic. We will now show that there is such an assignment of truth values to formula_50, so that all formula_42 will be true: The formula_50 appear in the same order in every formula_49; we will inductively define a general assignment to them by a sort of "majority vote": Since there are infinitely many assignments (one for each formula_49) affecting formula_52, either infinitely many make formula_52 true, or infinitely many make it false and only finitely many make it true. In the former case, we choose formula_52 to be true in general; in the latter we take it to be false in general. Then from the infinitely many "n" for which formula_52 through formula_53 are assigned the same truth value as in the general assignment, we pick a general assignment to formula_50 in the same fashion. This general assignment must lead to every one of the formula_54 and formula_55 being true, since if one of the formula_54 were false under the general assignment, formula_42 would also be false for every "n &gt; k". But this contradicts the fact that for the finite collection of general formula_50 assignments appearing in formula_55, there are infinitely many "n" where the assignment making formula_42 true matches the general assignment. 
From this general assignment, which makes all of the formula_55 true, we construct an interpretation of the language's predicates that makes φ true. The universe of the model will be the natural numbers. Each i-ary predicate formula_23 should be true of the naturals formula_56 precisely when the proposition formula_57 is either true in the general assignment, or not assigned by it (because it never appears in any of the formula_55). In this model, each of the formulas formula_58 is true by construction. But this implies that φ itself is true in the model, since the formula_59 range over all possible k-tuples of natural numbers. So φ is satisfiable, and we are done. Intuitive explanation. We may write each "B""i" as Φ("x"1..."x""k","y"1..."y""m") for some "x"s, which we may call "first arguments" and "y"s that we may call "last arguments". Take "B"1 for example. Its "last arguments" are "z"2,"z"3..."z""m"+1, and for every possible combination of "k" of these variables there is some "j" so that they appear as "first arguments" in "B""j". Thus for large enough "n"1, "D""n"1 has the property that the "last arguments" of "B"1 appear, in every possible combination of "k" of them, as "first arguments" in other "B""j"s within "D""n". For every "B""i" there is a "D""n""i" with the corresponding property. Therefore in a model that satisfies all the "D""n"s, there are objects corresponding to "z"1, "z"2... and each combination of "k" of these appears as "first arguments" in some "B""j", meaning that for every "k" of these objects "z""p"1..."z""p""k" there are "z""q"1..."z""q""m", which make Φ("z""p"1..."z""p""k","z""q"1..."z""q""m") satisfied. By taking a submodel with only these "z"1, "z"2... objects, we have a model satisfying "φ". Extensions. Extension to first-order predicate calculus with equality. Gödel reduced a formula containing instances of the equality predicate to ones without it in an extended language. His method involves replacing a formula φ containing some instances of equality with the formula formula_60 formula_61 formula_62 formula_63 formula_64 formula_65 formula_62 formula_66 Here formula_67 denote the predicates appearing in φ (with formula_68 their respective arities), and φ' is the formula φ with all occurrences of equality replaced with the new predicate "Eq". If this new formula is refutable, the original φ was as well; the same is true of satisfiability, since we may take a quotient of a satisfying model of the new formula by the equivalence relation representing "Eq". This quotient is well-defined with respect to the other predicates, and therefore will satisfy the original formula φ. Extension to countable sets of formulas. Gödel also considered the case where there is a countably infinite collection of formulas. Using the same reductions as above, he was able to consider only those cases where each formula is of degree 1 and contains no uses of equality. For a countable collection of formulas formula_69 of degree 1, we may define formula_70 as above; then define formula_71 to be the closure of formula_72. The remainder of the proof then went through as before. Extension to arbitrary sets of formulas. When there is an uncountably infinite collection of formulas, the Axiom of Choice (or at least some weak form of it) is needed. Using the full AC, one can well-order the formulas, and prove the uncountable case with the same argument as the countable one, except with transfinite induction.
Other approaches can be used to prove that the completeness theorem in this case is equivalent to the Boolean prime ideal theorem, a weak form of AC.
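The ordering of "k"-tuples used in the degree-1 argument above, first by the sum of the entries and then lexicographically, is easy to make concrete. The short Python sketch below simply enumerates the tuples in that order (over positive indices, so that the first tuple is (1, 1, ..., 1), matching the base case); this single enumeration running through all tuples is all that the construction of the formula_42 requires.

from itertools import count

def compositions(total, k):
    """All k-tuples of positive integers with the given sum, in lexicographic order."""
    if k == 1:
        yield (total,)
        return
    for first in range(1, total - k + 2):
        for rest in compositions(total - first, k - 1):
            yield (first,) + rest

def tuples_in_order(k):
    """All k-tuples of positive integers, ordered first by their sum, then lexicographically."""
    for total in count(k):               # the smallest possible sum of k positive entries is k
        yield from compositions(total, k)

gen = tuples_in_order(2)
print([next(gen) for _ in range(10)])
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]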
[ { "math_id": 0, "text": "\\psi = \\exists x_1 \\cdots \\exists x_n \\varphi" }, { "math_id": 1, "text": "\\neg \\psi = \\forall x_1 \\cdots \\forall x_n \\neg \\varphi" }, { "math_id": 2, "text": "\\psi = \\forall y (P) \\exists z ( \\Phi \\wedge [ F(y) \\vee \\neg F(z) ] )" }, { "math_id": 3, "text": "\\forall y \\exists z ( F(y) \\vee \\neg F(z) )" }, { "math_id": 4, "text": "\\varphi=\\psi" }, { "math_id": 5, "text": "\\varphi = (\\forall x_1 \\cdots \\forall x_{k_1})(\\exists x_{k_1+1} \\cdots \\exists x_{k_2}) \\cdots (\\forall x_{k_{n-2}+1} \\cdots \\forall x_{k_{n-1}})(\\exists x_{k_{n-1}+1} \\cdots \\exists x_{k_n}) (\\Phi)" }, { "math_id": 6, "text": "\\varphi" }, { "math_id": 7, "text": "\\varphi = (\\forall x)(\\exists y)(\\forall u)(\\exist v) (P) \\psi" }, { "math_id": 8, "text": "(P)\\psi" }, { "math_id": 9, "text": "(\\forall x')(\\forall x)(\\forall y)(\\forall u)(\\exist v)(\\exist y') (P) Q'(x',y') \\wedge (Q'(x,y) \\rightarrow \\psi)" }, { "math_id": 10, "text": "\\psi" }, { "math_id": 11, "text": "(\\forall x)" }, { "math_id": 12, "text": "\\forall x_1 \\forall x_2 \\cdots \\forall x_n" }, { "math_id": 13, "text": "x_1 \\ldots x_n" }, { "math_id": 14, "text": "\\Phi = (\\forall x')(\\exists y') Q(x',y') \\wedge (\\forall x)(\\forall y)( Q(x,y) \\rightarrow (\\forall u)(\\exist v)(P)\\psi )" }, { "math_id": 15, "text": "\\Phi \\rightarrow \\varphi" }, { "math_id": 16, "text": "(\\forall u)(\\exists v)(P)" }, { "math_id": 17, "text": "( Q(x,y) \\rightarrow (\\forall u )(\\exists v)(P) \\psi) \\equiv (\\forall u)(\\exists v)(P) ( Q(x,y) \\rightarrow \\psi )" }, { "math_id": 18, "text": "\\Phi' = (\\forall x')(\\exist y') Q(x',y') \\wedge (\\forall x)(\\forall y) (\\forall u)(\\exists v)(P) ( Q(x,y) \\rightarrow \\psi )" }, { "math_id": 19, "text": "(S)\\rho \\wedge (S')\\rho'" }, { "math_id": 20, "text": "(T)(\\rho \\wedge \\rho')" }, { "math_id": 21, "text": "\\Psi = (\\forall x')(\\forall x)(\\forall y) (\\forall u)(\\exists y')(\\exists v)(P)Q(x',y') \\wedge (Q(x,y) \\rightarrow \\psi )" }, { "math_id": 22, "text": "\\Phi' \\equiv \\Psi" }, { "math_id": 23, "text": "\\Psi" }, { "math_id": 24, "text": "\\Psi \\equiv \\Phi' \\equiv \\Phi \\wedge \\Phi \\rightarrow \\varphi" }, { "math_id": 25, "text": "\\Phi" }, { "math_id": 26, "text": "\\neg \\Phi" }, { "math_id": 27, "text": "(\\forall u)(\\exists v)(P)\\psi(x,y\\mid x',y')" }, { "math_id": 28, "text": "(\\forall u)(\\exists v)(P)\\psi" }, { "math_id": 29, "text": "\\neg ( (\\forall x')(\\exists y') (\\forall u)(\\exists v)(P)\\psi(x,y\\mid x',y') \\wedge (\\forall x)(\\forall y) ( (\\forall u)(\\exists v)(P)\\psi \\rightarrow (\\forall u)(\\exists v)(P) \\psi ) )" }, { "math_id": 30, "text": "\\wedge" }, { "math_id": 31, "text": "\\neg \\varphi" }, { "math_id": 32, "text": " (\\forall x_1\\ldots x_k)(\\exists y_1\\ldots y_m) \\varphi(x_1\\ldots x_k, y_1\\ldots y_m)." 
}, { "math_id": 33, "text": " (x_1\\ldots x_k) < (y_1\\ldots y_k) " }, { "math_id": 34, "text": " \\Sigma_k (x_1\\ldots x_k) < \\Sigma_k (y_1\\ldots y_k) " }, { "math_id": 35, "text": " \\Sigma_k (x_1\\ldots x_k) = \\Sigma_k (y_1\\ldots y_k) " }, { "math_id": 36, "text": " (x_1\\ldots x_k) " }, { "math_id": 37, "text": " (y_1...y_k) " }, { "math_id": 38, "text": " \\Sigma_k (x_1\\ldots x_k) " }, { "math_id": 39, "text": " (a^n_1\\ldots a^n_k) " }, { "math_id": 40, "text": " B_n " }, { "math_id": 41, "text": " \\varphi(z_{a^n_1}\\ldots z_{a^n_k}, z_{(n-1)m+2}, z_{(n-1)m+3}\\ldots z_{nm+1}) " }, { "math_id": 42, "text": "D_n" }, { "math_id": 43, "text": " (\\exists z_1\\ldots z_{nm+1}) (B_1 \\wedge B_2 \\wedge \\cdots \\wedge B_n)." }, { "math_id": 44, "text": " \\varphi \\rightarrow D_n" }, { "math_id": 45, "text": " D_n \\Leftarrow D_{n-1} \\wedge (\\forall z_1\\ldots z_{(n-1)m+1})(\\exists z_{(n-1)m+2}\\ldots z_{nm+1}) B_n \\Leftarrow D_{n-1} \\wedge (\\forall z_{a^n_1}\\ldots z_{a^n_k})(\\exists y_1\\ldots y_m) \\varphi(z_{a^n_1}\\ldots z_{a^n_k}, y_1\\ldots y_m) " }, { "math_id": 46, "text": "(\\forall k)(a^n_1 \\ldots a^n_k) < (n-1)m + 2" }, { "math_id": 47, "text": " D_{n-1} \\wedge " }, { "math_id": 48, "text": " D_1 \\equiv (\\exists z_1\\ldots z_{m+1}) \\varphi(z_{a^1_1}\\ldots z_{a^1_k}, z_2, z_3\\ldots z_{m+1}) \\equiv (\\exists z_1\\ldots z_{m+1}) \\varphi(z_1\\ldots z_1, z_2, z_3\\ldots z_{m+1}) " }, { "math_id": 49, "text": " D_n " }, { "math_id": 50, "text": "E_h" }, { "math_id": 51, "text": " B_k " }, { "math_id": 52, "text": "E_1" }, { "math_id": 53, "text": "E_{h-1}" }, { "math_id": 54, "text": "B_k" }, { "math_id": 55, "text": "D_k" }, { "math_id": 56, "text": "(u_1\\ldots u_i)" }, { "math_id": 57, "text": "\\Psi(z_{u_1}\\ldots z_{u_i})" }, { "math_id": 58, "text": " (\\exists y_1\\ldots y_m) \\varphi(a^n_1\\ldots a^n_k, y_1...y_m) " }, { "math_id": 59, "text": "a^n" }, { "math_id": 60, "text": " (\\forall x) Eq(x, x) \\wedge (\\forall x,y,z) [Eq(x, y) \\rightarrow (Eq(x, z) \\rightarrow Eq(y, z))] " }, { "math_id": 61, "text": " \\wedge (\\forall x,y,z) [Eq(x, y) \\rightarrow (Eq(z, x) \\rightarrow Eq(z, y))] " }, { "math_id": 62, "text": " \\wedge " }, { "math_id": 63, "text": " (\\forall x_1\\ldots x_k, y_1\\ldots x_k) [(Eq(x_1, y_1) \\wedge \\cdots \\wedge Eq(x_k, y_k)) \\rightarrow (A(x_1\\ldots x_k) \\equiv A(y_1\\ldots y_k))] " }, { "math_id": 64, "text": " \\wedge \\cdots \\wedge " }, { "math_id": 65, "text": " (\\forall x_1\\ldots x_m, y_1\\ldots x_m) [(Eq(x_1, y_1) \\wedge \\cdots \\wedge Eq(x_m, y_m)) \\rightarrow (Z(x_1\\ldots x_m) \\equiv Z(y_1\\ldots y_m))] " }, { "math_id": 66, "text": " \\varphi'." }, { "math_id": 67, "text": "A \\ldots Z" }, { "math_id": 68, "text": "k \\ldots m" }, { "math_id": 69, "text": " \\varphi^i " }, { "math_id": 70, "text": " B^i_k " }, { "math_id": 71, "text": " D_k " }, { "math_id": 72, "text": " B^1_1\\ldots B^1_k, \\ldots , B^k_1\\ldots B^k_k " } ]
https://en.wikipedia.org/wiki?curid=12727
1272743
Volume of distribution
Measuring the relative affinity of a drug between blood constituents and tissue constituents. In pharmacology, the volume of distribution (VD, also known as apparent volume of distribution, literally, "volume of dilution") is the theoretical volume that would be necessary to contain the total amount of an administered drug at the same concentration at which it is observed in the blood plasma. In other words, it is the ratio of "amount of drug in a body (dose)" to "concentration of the drug that is measured in blood, plasma, and unbound in interstitial fluid". The VD of a drug represents the degree to which a drug is distributed in body tissue rather than the plasma. VD is directly proportional to the amount of drug distributed into tissue; a higher VD indicates a greater amount of tissue distribution. A VD greater than the total volume of body water (approximately 42 liters in humans) is possible, and would indicate that the drug is highly distributed into tissue. In other words, the volume of distribution is smaller for a drug that stays in the plasma than for a drug that is widely distributed in tissues. In rough terms, drugs with a high lipid solubility (non-polar drugs), low rates of ionization, or low plasma protein binding capabilities have higher volumes of distribution than drugs which are more polar, more highly ionized or exhibit high plasma protein binding in the body's environment. Volume of distribution may be increased by kidney failure (due to fluid retention) and liver failure (due to altered body fluid and plasma protein binding). Conversely, it may be decreased in dehydration. The initial volume of distribution describes blood concentrations prior to attaining the apparent volume of distribution and uses the same formula. Equations. The volume of distribution is given by the following equation: formula_0 Therefore, the dose required to give a certain plasma concentration can be determined if the VD for that drug is known. The VD is not a physiological value; it is more a reflection of how a drug will distribute throughout the body depending on several physicochemical properties, e.g. solubility, charge, size, etc. The volume of distribution is typically reported in litres. As body composition changes with age, VD decreases. The VD may also be used to determine how readily a drug will displace into the body tissue compartments relative to the blood: formula_1 where "V"P is the plasma volume, "V"T is the apparent volume of distribution in tissue, fu is the fraction of drug unbound in plasma, and fut is the fraction of drug unbound in tissue. Examples. If you administer a dose D of a drug intravenously in one go (IV-bolus), you would naturally expect it to have an immediate blood concentration formula_2, which directly corresponds to the volume of blood contained in the body formula_3. Mathematically this would be: formula_4 But this is generally not what happens. Instead, you observe that the drug has distributed out into some other volume (read organs/tissue). So probably the first question you want to ask is: how much of the drug is no longer in the blood stream? The volume of distribution formula_5 quantifies just that by specifying how big a volume you would need in order to observe the blood concentration actually measured. An example for a simple case (mono-compartmental) would be to administer D = 8 mg/kg to a human. A human has a blood volume of around formula_6 0.08 L/kg. This gives formula_7 100 μg/mL if the drug stays in the blood stream only, and thus its volume of distribution is the same as formula_3, that is, formula_8 0.08 L/kg.
If the drug distributes into all body water, the volume of distribution would increase to approximately formula_8 0.57 L/kg. If the drug readily diffuses into body fat, the volume of distribution may increase dramatically; an example is chloroquine, which has formula_8 250–302 L/kg. In the simple mono-compartmental case the volume of distribution is defined as formula_9, where the formula_2 in practice is an extrapolated concentration at time = 0 from the first early plasma concentrations after an IV-bolus administration (generally taken around 5–30 min after giving the drug).
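The relations formula_0 and formula_9 lend themselves to a short worked example. The Python sketch below uses the same illustrative numbers as the bolus example above; note that a dose in mg/kg divided by a concentration in μg/mL (equivalently mg/L) yields litres per kilogram.

def volume_of_distribution(dose_mg_per_kg, c0_ug_per_ml):
    """V_D in L/kg: dose (mg/kg) divided by the extrapolated plasma concentration (ug/mL = mg/L)."""
    return dose_mg_per_kg / c0_ug_per_ml

def dose_for_target(vd_l_per_kg, target_ug_per_ml):
    """Dose (mg/kg) required to reach a target plasma concentration for a known V_D."""
    return vd_l_per_kg * target_ug_per_ml

# Drug confined to the blood: 8 mg/kg giving C0 = 100 ug/mL, so V_D = 0.08 L/kg.
print(volume_of_distribution(8, 100))        # 0.08
# The same dose spread through total body water (~0.57 L/kg) gives a far lower concentration:
print(8 / 0.57)                              # ~14 ug/mL
# Dose needed to reach 10 ug/mL for a drug with V_D = 0.57 L/kg:
print(dose_for_target(0.57, 10))             # 5.7 mg/kg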
[ { "math_id": 0, "text": "{V_{D}} = \\frac{\\mathrm{total \\ amount \\ of \\ drug \\ in \\ the \\ body}}{\\mathrm{drug \\ blood \\ plasma \\ concentration}}" }, { "math_id": 1, "text": "{V_{D}} = {V_{P}} + {V_{T}} \\left(\\frac{fu}{fu_{t}}\\right)" }, { "math_id": 2, "text": "C_0" }, { "math_id": 3, "text": "V_{blood}" }, { "math_id": 4, "text": "C_0 = D/V_{blood}" }, { "math_id": 5, "text": "V_D" }, { "math_id": 6, "text": "V_{blood}=" }, { "math_id": 7, "text": "C_0=" }, { "math_id": 8, "text": "V_D=" }, { "math_id": 9, "text": "V_D=D/C_0" } ]
https://en.wikipedia.org/wiki?curid=1272743
12727767
Waldhausen category
In mathematics, a Waldhausen category is a category "C" equipped with some additional data, which makes it possible to construct the K-theory spectrum of "C" using a so-called S-construction. It is named after Friedhelm Waldhausen, who introduced this notion (under the term category with cofibrations and weak equivalences) to extend the methods of algebraic K-theory to categories not necessarily of algebraic origin, for example the category of topological spaces. Definition. Let "C" be a category, co("C") and we("C") two classes of morphisms in "C", called cofibrations and weak equivalences respectively. The triple ("C", co("C"), we("C")) is called a Waldhausen category if it satisfies the following axioms, motivated by the similar properties for the notions of cofibrations and weak homotopy equivalences of topological spaces. For example, if formula_0 is a cofibration and formula_1 is any map, then there must exist a pushout formula_2, and the natural map formula_3 should be a cofibration. Relations with other notions. In algebraic K-theory and homotopy theory there are several notions of categories equipped with some specified classes of morphisms. If "C" has a structure of an exact category, then by defining we("C") to be the isomorphisms and co("C") to be the admissible monomorphisms, one obtains a structure of a Waldhausen category on "C". Both kinds of structure may be used to define K-theory of "C", using the Q-construction for an exact structure and the S-construction for a Waldhausen structure. An important fact is that the resulting K-theory spaces are homotopy equivalent. If "C" is a model category with a zero object, then the full subcategory of cofibrant objects in "C" may be given a Waldhausen structure. S-construction. The Waldhausen S-construction produces from a Waldhausen category "C" a sequence of Kan complexes formula_4, which forms a spectrum. Let formula_5 denote the loop space of the geometric realization formula_6 of formula_7. Then the group formula_8 is the "n"-th "K"-group of "C". Thus, it gives a way to define higher "K"-groups. Another approach for higher "K"-theory is Quillen's Q-construction. The construction is due to Friedhelm Waldhausen. biWaldhausen categories. A category "C" is equipped with bifibrations if it has cofibrations and its opposite category "C"OP has them also. In that case, we denote the fibrations of "C"OP by quot("C"). Then "C" is a biWaldhausen category if "C" has bifibrations and weak equivalences such that both ("C", co("C"), we) and ("C"OP, quot("C"), weOP) are Waldhausen categories. Waldhausen and biWaldhausen categories are linked with algebraic K-theory. There, many interesting categories are complicial biWaldhausen categories. For example: The category formula_9 of bounded chain complexes on an exact category formula_10. The category formula_11 of functors formula_12 when formula_13 is so. And given a diagram formula_14, then formula_15 is a nice complicial biWaldhausen category when formula_16 is.
[ { "math_id": 0, "text": "\\scriptstyle A\\, \\rightarrowtail\\, B" }, { "math_id": 1, "text": "\\scriptstyle A\\,\\to\\, C" }, { "math_id": 2, "text": "\\scriptstyle B\\, \\cup_A\\, C" }, { "math_id": 3, "text": "\\scriptstyle C\\, \\rightarrowtail\\, B\\,\\cup_A\\, C" }, { "math_id": 4, "text": "S_n(C)" }, { "math_id": 5, "text": "K(C)" }, { "math_id": 6, "text": "|S_*(C)|" }, { "math_id": 7, "text": "S_*(C)" }, { "math_id": 8, "text": "\\pi_n K(C) = \\pi_{n+1} |S_*(C)|" }, { "math_id": 9, "text": "\\scriptstyle C^b(\\mathcal{A})" }, { "math_id": 10, "text": "\\scriptstyle \\mathcal{A}" }, { "math_id": 11, "text": "\\scriptstyle S_n \\mathcal{C}" }, { "math_id": 12, "text": "\\scriptstyle \\operatorname{Ar}(\\Delta ^n)\\, \\to\\, \\mathcal{C}" }, { "math_id": 13, "text": "\\scriptstyle\\mathcal{C}" }, { "math_id": 14, "text": "\\scriptstyle I" }, { "math_id": 15, "text": "\\scriptstyle \\mathcal{C}^I" }, { "math_id": 16, "text": "\\scriptstyle \\mathcal{C}" } ]
https://en.wikipedia.org/wiki?curid=12727767
12728109
Soil respiration
Chemical process produced by soil and the organisms within it Soil respiration refers to the production of carbon dioxide when soil organisms respire. This includes respiration of plant roots, the rhizosphere, microbes and fauna. Soil respiration is a key ecosystem process that releases carbon from the soil in the form of CO2. CO2 is acquired by plants from the atmosphere and converted into organic compounds in the process of photosynthesis. Plants use these organic compounds to build structural components or respire them to release energy. When plant respiration occurs below-ground in the roots, it adds to soil respiration. Over time, plant structural components are consumed by heterotrophs. This heterotrophic consumption releases CO2, and when this CO2 is released by below-ground organisms, it is considered soil respiration. The amount of soil respiration that occurs in an ecosystem is controlled by several factors. The temperature, moisture, nutrient content and level of oxygen in the soil can produce extremely disparate rates of respiration. These rates of respiration can be measured by a variety of methods. Other methods can be used to separate the source components, in this case the type of photosynthetic pathway (C3/C4), of the respired plant structures. Soil respiration rates can be largely affected by human activity. This is because humans have been changing the various controlling factors of soil respiration for many years. Global climate change is composed of numerous changing factors including rising atmospheric CO2, increasing temperature and shifting precipitation patterns. All of these factors can affect the rate of global soil respiration. Increased nitrogen fertilization by humans also has the potential to affect rates over the entire planet. Soil respiration and its rate across ecosystems are extremely important to understand. This is because soil respiration plays a large role in global carbon cycling as well as other nutrient cycles. The respiration of plant structures releases not only CO2 but also other nutrients in those structures, such as nitrogen. Soil respiration is also involved in a positive feedback with global climate change. Positive feedback occurs when a change in a system produces a response in the same direction as the change. Therefore, soil respiration rates can be affected by climate change and then respond by enhancing climate change. Sources of carbon dioxide in soil. All cellular respiration releases energy, water and CO2 from organic compounds. Any respiration that occurs below-ground is considered soil respiration. Respiration by plant roots, bacteria, fungi and soil animals releases CO2 in soils, as described below. Tricarboxylic acid (TCA) cycle. The tricarboxylic acid (TCA) cycle – or citric acid cycle – is an important step in cellular respiration. In the TCA cycle, a six-carbon sugar is oxidized. This oxidation produces CO2 and H2O from the sugar. Plants, fungi, animals and bacteria all use this cycle to convert organic compounds to energy. This is how the majority of soil respiration occurs at its most basic level. Since the process relies on oxygen to occur, this is referred to as aerobic respiration. Fermentation. Fermentation is another process in which cells gain energy from organic compounds. In this metabolic pathway, energy is derived from the carbon compound without the use of oxygen. The products of this reaction are carbon dioxide and usually either ethyl alcohol or lactic acid. 
Due to the lack of oxygen, this pathway is described as anaerobic respiration. This is an important source of CO2 in soil respiration in waterlogged ecosystems where oxygen is scarce, as in peat bogs and wetlands. However, most CO2 released from the soil is produced by respiration, and one of the most important aspects of below-ground respiration occurs in the plant roots. Root respiration. Plants respire some of the carbon compounds which were generated by photosynthesis. When this respiration occurs in roots, it adds to soil respiration. Root respiration accounts for approximately half of all soil respiration. However, these values can range from 10 to 90% depending on the dominant plant types in an ecosystem and the conditions to which the plants are subjected. Thus, the amount of CO2 produced through root respiration is determined by the root biomass and specific root respiration rates. Directly next to the root is the area known as the rhizosphere, which also plays an important role in soil respiration. Rhizosphere respiration. The rhizosphere is a zone immediately next to the root surface with its neighboring soil. In this zone there is a close interaction between the plant and microorganisms. Roots continuously release substances, or exudates, into the soil. These exudates include sugars, amino acids, vitamins, long-chain carbohydrates, enzymes and lysates which are released when root cells break. The amount of carbon lost as exudates varies considerably between plant species. It has been demonstrated that up to 20% of carbon acquired by photosynthesis is released into the soil as root exudates. These exudates are decomposed primarily by bacteria. These bacteria will respire the carbon compounds through the TCA cycle; however, fermentation is also present. This is due to a lack of oxygen, caused by greater oxygen consumption near the root as compared to the bulk soil, the soil at a greater distance from the root. Other important organisms in the rhizosphere are root-infecting fungi, or mycorrhizae. These fungi increase the surface area of the plant root and allow the root to encounter and acquire a greater amount of soil nutrients necessary for plant growth. In return for this benefit, the plant will transfer sugars to the fungi. The fungi will respire these sugars for energy, thereby increasing soil respiration. Fungi, along with bacteria and soil animals, also play a large role in the decomposition of litter and soil organic matter. Soil animals. Soil animals graze on populations of bacteria and fungi as well as ingest and break up litter to increase soil respiration. Microfauna are made up of the smallest soil animals. These include nematodes and mites. This group feeds on soil bacteria and fungi. By ingesting these organisms, carbon that was initially in plant organic compounds and was incorporated into bacterial and fungal structures will now be respired by the soil animal. Mesofauna are soil animals from in length and will ingest soil litter. The fecal material will hold a greater amount of moisture and have a greater surface area. This will allow for renewed attack by microorganisms and a greater amount of soil respiration. Macrofauna are organisms from , such as earthworms and termites. Most macrofauna fragment litter, thereby exposing a greater amount of area to microbial attack. Other macrofauna burrow or ingest litter, reducing soil bulk density, breaking up soil aggregates and increasing soil aeration and the infiltration of water. Regulation of soil respiration. 
Regulation of CO2 production in soil is due to various abiotic, or non-living, factors. Temperature, soil moisture and nitrogen all contribute to the rate of respiration in soil. Temperature. Temperature affects almost all aspects of respiration processes. Rising temperature increases respiration exponentially up to a maximum, beyond which respiration decreases to zero as enzymatic activity is disrupted. Root respiration increases exponentially with temperature in its low range when the respiration rate is limited mostly by the TCA cycle. At higher temperatures the transport of sugars and the products of metabolism become the limiting factors. At temperatures over , root respiration begins to shut down completely. Microorganisms are divided into three temperature groups: cryophiles, mesophiles and thermophiles. Cryophiles function optimally at temperatures below , mesophiles function best at temperatures between 20 and and thermophiles function optimally at over . In natural soils, many different cohorts, or groups of microorganisms, exist. These cohorts will all function best under different conditions, so respiration may occur over a very broad range. Temperature increases lead to greater rates of soil respiration until high values retard microbial function; this is the same pattern that is seen with soil moisture levels. Soil moisture. Soil moisture is another important factor influencing soil respiration. Soil respiration is low in dry conditions and increases to a maximum at intermediate moisture levels until it begins to decrease when moisture content excludes oxygen. This allows anaerobic conditions to prevail and depress aerobic microbial activity. Studies have shown that soil moisture only limits respiration at the lowest and highest conditions with a large plateau existing at intermediate soil moisture levels for most ecosystems. Many microorganisms possess strategies for growth and survival under low soil moisture conditions. Under high soil moisture conditions, many bacteria take in too much water, causing their cell membranes to lyse, or break. This can decrease the rate of soil respiration temporarily, but the lysis of bacteria causes a spike in resources for many other bacteria. This rapid increase in available labile substrates causes short-term enhanced soil respiration. Root respiration will increase with increasing soil moisture, especially in dry ecosystems; however, the root respiration response to soil moisture varies widely from species to species, depending on life history traits. High levels of soil moisture will depress root respiration by restricting access to atmospheric oxygen. With the exception of wetland plants, which have developed specific mechanisms for root aeration, most plants are not adapted to wetland soil environments with low oxygen. The respiration-dampening effect of elevated soil moisture is amplified when soil respiration also lowers soil redox through bioelectrogenesis. Soil-based microbial fuel cells are becoming popular educational tools for science classrooms. Nitrogen. Nitrogen directly affects soil respiration in several ways. Nitrogen must be taken in by roots to promote plant growth and life. Most available nitrogen is in the form of NO3−, which costs 0.4 units of CO2 to enter the root because energy must be used to move it up a concentration gradient. Once inside the root, the NO3− must be reduced to NH3. This step requires more energy, which equals 2 units of CO2 per molecule reduced. 
In plants with bacterial symbionts, which fix atmospheric nitrogen, the energetic cost to the plant to acquire one molecule of NH3 from atmospheric N2 is 2.36 CO2. It is essential that plants take up nitrogen from the soil or rely on symbionts to fix it from the atmosphere to ensure growth, reproduction and long-term survival. Another way nitrogen affects soil respiration is through litter decomposition. High-nitrogen litter is considered high quality and is more readily decomposed by microorganisms than low-quality litter. Degradation of cellulose, a tough plant structural compound, is also a nitrogen-limited process and will increase with the addition of nitrogen to litter. Methods of measurement. Different methods exist for the measurement of soil respiration rate and the determination of sources. Methods can be divided into field- and laboratory-based methods. The most common field methods include the use of long-term stand-alone soil flux systems for measurement at one location at different times, and survey soil respiration systems for measurement at different locations and at different times. Stable isotope ratios can be used in both laboratory and field measurements. Soil respiration can be measured alone or with added nutrients and (carbon) substrates that supply food sources to the microorganisms. Soil respiration without any additions of nutrients and substrates is called the basal soil respiration (BR). With the addition of nutrients (often nitrogen and phosphorus) and substrates (e.g. sugars), it is called the substrate-induced soil respiration (SIR). In both BR and SIR measurements, the moisture content can be adjusted with water. Field methods. Long-term stand-alone soil flux systems for measurement at one location over time. These systems measure at one location over long periods of time. Since they only measure at one location, it is common to use multiple stations to reduce measuring error caused by soil variability over small distances. Soil variability may be tested with survey soil respiration instruments. The long-term instruments are designed to expose the measuring site to ambient conditions as much as is possible between measurements. Types of long-term stand-alone instruments. Closed, non-steady state systems. Closed systems take short-term measurements (typically over only a few minutes) in a chamber sealed over the soil. The rate of soil CO2 efflux is calculated on the basis of the CO2 increase inside the chamber. As it is in the nature of closed chambers that CO2 continues to accumulate, measurement periods are reduced to a minimum to achieve a detectable, linear concentration increase, avoiding an excessive build-up of CO2 inside the chamber over time. Both individual assay information and diurnal CO2 respiration information are accessible. It is also common for such systems to measure soil temperature, soil moisture and PAR (photosynthetically active radiation). These variables are normally recorded in the measuring file along with CO2 values. For determination of soil respiration and the slope of CO2 increase, researchers have used linear regression analysis, the Pedersen (2001) algorithm, and exponential regression. There are more published references for linear regression analysis; however, the Pedersen algorithm and exponential regression analysis methods also have their following. Some systems offer a choice of mathematical methods. 
When using linear regression, multiple data points are graphed and the points can be fitted with a linear regression equation, which will provide a slope. This slope can provide the rate of soil respiration with the equation formula_0, where "F" is the rate of soil respiration, "b" is the slope, "V" is the volume of the chamber and "A" is the surface area of the soil covered by the chamber. It is important that the measurement is not allowed to run over a longer period of time as the increase in CO2 concentration in the chamber will also increase the concentration of CO2 in the porous top layer of the soil profile. This increase in concentration will cause an underestimation of soil respiration rate due to the additional CO2 being stored within the soil. Open, steady-state systems. Open mode systems are designed to find soil flux rates when measuring chamber equilibrium has been reached. Air flows through the chamber before the chamber is closed and sealed. This purges any non-ambient CO2 levels from the chamber before measurement. After the chamber is closed, fresh air is pumped into the chamber at a controlled and programmable flow rate. This mixes with the CO2 from the soil, and after a time, equilibrium is reached. The researcher specifies the equilibrium point as the difference in CO2 measurements between successive readings, in an elapsed time. During the assay, the rate of change slowly reduces until it meets the customer's rate of change criteria, or the maximum selected time for the assay. Soil flux or rate of change is then determined once equilibrium conditions are reached within the chamber. Chamber flow rates and times are programmable, accurately measured, and used in calculations. These systems have vents that are designed to prevent a possible unacceptable buildup of partial CO2 pressure discussed under closed mode systems. Since the air movement inside the chamber might cause increased chamber pressure, or external winds may produce reduced chamber pressure, a vent is provided that is designed to be as wind proof as possible. Open systems are also not as sensitive to soil structure variation, or to boundary layer resistance issues at the soil surface. Air flow in the chamber at the soil surface is designed to minimize boundary layer resistance phenomena. Hybrid Mode Systems. A hybrid system also exists. It has a vent that is designed to be as wind proof as possible, and prevent possible unacceptable partial CO2 pressure buildup, but is designed to operate like a closed mode design system in other regards. Survey soil respiration systems – for testing the variation of CO2 respiration at different locations and at different times. These are either open or closed mode instruments that are portable or semi-portable. They measure CO2 soil respiration variability at different locations and at different times. With this type of instrument, soil collars that can be connected to the survey measuring instrument are inserted into the ground and the soil is allowed to stabilize for a period of time. The insertion of the soil collar temporarily disturbs the soil, creating measuring artifacts. For this reason, it is common to have several soil collars inserted at different locations. Soil collars are inserted far enough to limit lateral diffusion of CO2. After soil stabilization, the researcher then moves from one collar to another according to experimental design to measure soil respiration. 
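As a rough illustration of the closed-chamber calculation described above, the following sketch fits a least-squares slope to hypothetical chamber CO2 readings and converts it to a flux with F = bV/A. All readings, chamber dimensions and units are invented for the example, and converting the result to units such as µmol CO2 m−2 s−1 would additionally require air temperature and pressure.

# Minimal sketch: estimate soil CO2 efflux from closed-chamber readings.
# All numbers below are hypothetical, chosen only to illustrate F = b * V / A.

def chamber_flux(times_s, co2_ppm, volume_m3, area_m2):
    """Fit a least-squares slope b (ppm/s) and return F = b * V / A."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_c = sum(co2_ppm) / n
    cov = sum((t - mean_t) * (c - mean_c) for t, c in zip(times_s, co2_ppm))
    var = sum((t - mean_t) ** 2 for t in times_s)
    b = cov / var  # slope of the CO2 increase inside the chamber
    return b * volume_m3 / area_m2  # flux in ppm * m / s (convert as needed)

# Hypothetical 2-minute measurement with a reading every 30 s.
times = [0, 30, 60, 90, 120]
co2 = [400.0, 403.1, 406.0, 409.2, 412.1]
print(chamber_flux(times, co2, volume_m3=0.005, area_m2=0.03))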
Survey soil respiration systems can also be used to determine the number of long-term stand-alone temporal instruments that are required to achieve an acceptable level of error. Different locations may require different numbers of long-term stand-alone units due to greater or lesser soil respiration variability. Isotope methods. Plants acquire CO2 and produce organic compounds with the use of one of three photosynthetic pathways. The two most prevalent pathways are the C3 and C4 processes. C3 plants are best adapted to cool and wet conditions, while C4 plants do well in hot and dry ecosystems. Due to the different photosynthetic enzymes between the two pathways, different carbon isotopes are acquired preferentially. Isotopes are atoms of the same element that differ in the number of neutrons, making one isotope heavier than the other. The two stable carbon isotopes are 12C and 13C. The C3 pathway will discriminate against the heavier isotope more than the C4 pathway. This will make the plant structures produced from C4 plants more enriched in the heavier isotope and therefore root exudates and litter from these plants will also be more enriched. When the carbon in these structures is respired, the CO2 will show a similar ratio of the two isotopes. Researchers will grow a C4 plant on soil that was previously occupied by a C3 plant or vice versa. By taking soil respiration measurements and analyzing the isotopic ratios of the CO2, it can be determined whether the respired carbon is mostly old or recently formed. For example, maize, a C4 plant, was grown on soil where spring wheat, a C3 plant, was previously grown. The results showed respiration of C3 SOM in the first 40 days, with a gradual linear increase in heavy isotope enrichment until day 70. The days after 70 showed a slowing enrichment to a peak at day 100. By analyzing stable carbon isotope data, it is possible to determine the source components of respired SOM that were produced by different photosynthetic pathways. Substrate-induced respiration in the field using stable isotopes. One problem in the measurement of soil respiration in the field is that respiration of microorganisms cannot be distinguished from respiration from plant roots and soil animals. This can be overcome using stable isotope techniques. Cane sugar is a C4 sugar which can act as an isotopic tracer. Cane sugar has a slightly higher abundance of 13C (δ13C ≈ −10‰) than the endogenous (natural) carbon in a C3 ecosystem (δ13C=−25 to −28‰). Cane sugar can be sprayed on the soil in a solution and will infiltrate the upper soil. Only microorganisms will respire the added sugar, because roots exclusively respire carbon products that are assimilated by the plant via photosynthesis. By analysis of the δ13C of the CO2 evolving from the soil with or without added cane sugar, the fractions of C3 (root and microbial) and C4 (microbial) respiration can be calculated. Field respiration using stable isotopes can be used as a tool to measure microbial respiration in-situ without disturbing the microbial communities by mixing soil nutrients, oxygen, and soil contaminants that may be present. Responses to human disturbance. Throughout the past 160 years, humans have changed land use and industrial practices, which have altered the climate and global biogeochemical cycles. These changes have affected the rate of soil respiration around the planet. 
In addition, increasingly frequent extreme climatic events such as heat waves (involving high temperature disturbances and associated intense droughts), followed by intense rainfall, affect microbial communities and soil physico-chemistry and may induce changes in soil respiration. Elevated carbon dioxide. Since the Industrial Revolution, humans have emitted vast amounts of CO2 into the atmosphere. These emissions have increased greatly over time and have increased global atmospheric CO2 levels to their highest in over 750,000 years. Soil respiration increases when ecosystems are exposed to elevated levels of CO2. Numerous free air CO2 enrichment (FACE) studies have been conducted to test soil respiration under predicted future elevated CO2 conditions. Recent FACE studies have shown large increases in soil respiration due to increased root biomass and microbial activity. Soil respiration has been found to increase up to 40.6% in a sweetgum forest in Tennessee and poplar forests in Wisconsin under elevated CO2 conditions. It is extremely likely that CO2 levels will exceed those used in these FACE experiments by the middle of this century due to increased human use of fossil fuels and land use practices. Climate warming. As the temperature of the soil increases, CO2 release to the atmosphere increases, and as a result the mean temperature of the Earth rises. This is due to human activities such as forest clearing, soil denuding, and developments that destroy autotrophic processes. With the loss of photosynthetic plants covering and cooling the surface of the soil, the infrared energy penetrates the soil, heating it up and causing a rise in heterotrophic bacteria. Heterotrophs in the soil quickly degrade the organic matter, and the soil structure crumbles; it is then carried by streams and rivers to the sea. Much of the organic matter swept away in floods caused by forest clearing goes into estuaries, wetlands and eventually into the open ocean. Increased turbidity of surface waters raises biological oxygen demand, and more autotrophic organisms die. Carbon dioxide levels rise with increased respiration of soil bacteria after temperatures rise due to loss of soil cover. As mentioned earlier, temperature greatly affects the rate of soil respiration. This may have the most drastic influence in the Arctic. Large stores of carbon are locked in the frozen permafrost. With an increase in temperature, this permafrost is melting and aerobic conditions are beginning to prevail, thereby greatly increasing the rate of respiration in that ecosystem. Changes in precipitation. Due to the shifting patterns of temperature and changing oceanic conditions, precipitation patterns are expected to change in location, frequency and intensity. Larger and more frequent storms are expected when oceans can transfer more energy to the forming storm systems. This may have the greatest impact on xeric, or arid, ecosystems. It has been shown that soil respiration in arid ecosystems shows dynamic changes within a rainfall cycle. The rate of respiration in dry soil usually bursts to a very high level after rainfall and then gradually decreases as the soil dries. With an increase in rainfall frequency and intensity over areas without previous extensive rainfall, a dramatic increase in soil respiration can be inferred. Nitrogen fertilization. 
Since the onset of the Green Revolution in the middle of the last century, vast amounts of nitrogen fertilizers have been produced and introduced to almost all agricultural systems. This has led to increases in plant-available nitrogen in ecosystems around the world due to agricultural runoff and wind-driven fertilization. As discussed earlier, nitrogen can have a significant positive effect on the level and rate of soil respiration. Increases in soil nitrogen have been found to increase plant dark respiration, stimulate specific rates of root respiration and increase total root biomass. This is because high nitrogen rates are associated with high plant growth rates. High plant growth rates will lead to the increased respiration and biomass found in these studies. With this increase in productivity, an increase in soil activity, and therefore respiration, can be expected. Importance. Soil respiration plays a significant role in the global carbon and nutrient cycles as well as being a driver for changes in climate. These roles are important to our understanding of the natural world and human preservation. Global carbon cycling. Soil respiration plays a critical role in the regulation of carbon cycling at the ecosystem level and at global scales. Each year approximately 120 petagrams (Pg) of carbon are taken up by land plants and a similar amount is released to the atmosphere through ecosystem respiration. Global soils contain up to 3150 Pg of carbon, of which 450 Pg exist in wetlands and 400 Pg in permanently frozen soils. Soils contain more than four times as much carbon as the atmosphere. Researchers have estimated that soil respiration accounts for 77 Pg of carbon released to the atmosphere each year. This level of release is greater than the carbon release due to anthropogenic sources (56 Pg per year) such as fossil fuel burning. Thus, a small change in soil respiration can seriously alter the balance of atmospheric CO2 concentration versus soil carbon stores. Just as soil respiration plays a significant role in the global carbon cycle, it can also regulate global nutrient cycling. Nutrient cycling. A major component of soil respiration comes from the decomposition of litter, which releases CO2 to the environment while simultaneously immobilizing or mineralizing nutrients. During decomposition, nutrients such as nitrogen are immobilized by microbes for their own growth. As these microbes are ingested or die, nitrogen is added to the soil. Nitrogen is also mineralized from the degradation of proteins and nucleic acids in litter. This mineralized nitrogen is also added to the soil. Due to these processes, the rate of nitrogen added to the soil is coupled with rates of microbial respiration. Studies have shown that rates of soil respiration were associated with rates of microbial turnover and nitrogen mineralization. Alterations of the global cycles can further act to change the climate of the planet. Climate change. As stated earlier, the CO2 released by soil respiration is a greenhouse gas that will continue to trap energy and increase the global mean temperature if concentrations continue to rise. As global temperature rises, so will the rate of soil respiration across the globe, thereby leading to a higher concentration of CO2 in the atmosphere, again leading to higher global temperatures. This is an example of a positive feedback loop. It is estimated that a rise in temperature by 2 °C will lead to an additional release of 10 Pg carbon per year to the atmosphere from soil respiration. 
This is a larger amount than current anthropogenic carbon emissions. There also exists a possibility that this increase in temperature will release carbon stored in permanently frozen soils, which are now melting. Climate models have suggested that this positive feedback between soil respiration and temperature will lead to a decrease in soil-stored carbon by the middle of the 21st century. Summary. Soil respiration is a key ecosystem process that releases carbon from the soil in the form of carbon dioxide. Carbon is stored in the soil as organic matter and is respired by plants, bacteria, fungi and animals. When this respiration occurs below ground, it is considered soil respiration. Temperature, soil moisture and nitrogen all regulate the rate of this conversion from carbon in soil organic compounds to CO2. Many methods are used to measure soil respiration; however, the closed dynamic chamber and use of stable isotope ratios are two of the most prevalent techniques. Humans have altered atmospheric CO2 levels, precipitation patterns and fertilization rates, all of which have had a significant effect on soil respiration rates. The changes in these rates can alter the global carbon and nutrient cycles as well as play a significant role in climate change. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F=bV/A" } ]
https://en.wikipedia.org/wiki?curid=12728109
12728577
Behavioral momentum
Behavioral momentum is a theory in quantitative analysis of behavior and is a behavioral metaphor based on physical momentum. It describes the general relation between resistance to change (persistence of behavior) and the rate of reinforcement obtained in a given situation. B. F. Skinner (1938) proposed that all behavior is based on a fundamental unit of behavior called the discriminated operant. The discriminated operant, also known as the three-term contingency, has three components: an antecedent discriminative stimulus, a response, and a reinforcing or punishing consequence. The organism responds in the presence of the stimulus because past responses in the presence of that stimulus have produced reinforcement. Resistance to change. According to behavioral momentum theory, there are two separable factors that independently govern the rate with which a discriminated operant occurs and the persistence of that response in the face of disruptions such as punishment, extinction, or the differential reinforcement of alternative behaviors (see Nevin & Grace, 2000, for a review). First, the positive contingency between the response and a reinforcing consequence controls response rates (i.e., a response–reinforcer relation) by shaping a particular pattern of responding. This is governed by the relative law of effect (i.e., the matching law; Herrnstein, 1970). Secondly, the Pavlovian relation between surrounding, or context, stimuli and the rate or magnitude (but not both) of reinforcement obtained in the context (i.e., a stimulus–reinforcer relation) governs the resistance of the behavior to operations such as extinction. Resistance to change is assessed by measuring responding during operations such as extinction or satiation that tend to disrupt the behavior and comparing these measurements to stable, pre-disruption response rates. Resistance to disruption has been considered a better measure of response strength than a simple measure of response rate (Nevin, 1974). This is because variations in reinforcement contingencies such as differential-reinforcement-of-high- or low-response-rate schedules can yield highly variable response rates even though overall reinforcement rates are equal. Thus it is questionable whether these differences in response rates indicate differences in the underlying strength of a response (see Morse, 1966, for a discussion). According to behavioral momentum theory, the relation between response rate and resistance to change is analogous to the relation between velocity and mass of a moving object, according to Newton's second law of motion (Nevin, Mandell & Atak, 1983). Newton's second law states that the change in velocity of an object when a force is applied is directly related to that force and inversely related to the object's mass. Similarly, behavioral momentum theory states that the change in response rate under conditions of disruption ("Bx") relative to baseline response rate ("Bo") is directly related to the force or magnitude of disruption ("f") and inversely related to the rate of reinforcement in a stimulus context ("r"): formula_0 (1) The free parameter "b" indicates the sensitivity of resistance to change to the rate of reinforcement in the stimulus context (i.e., the stimulus–reinforcer relation). Resistance to disruption typically is assessed when two distinctive discriminative stimulus contexts alternate and signal different schedules of reinforcement (i.e., a multiple schedule). 
Equation 1 can be rewritten to account for resistance to change across two stimulus contexts (Nevin, 1992; Nevin, Grace, & McLean, 2001) when a disrupter is uniformly applied across contexts (i.e., "f"1 = "f"2): formula_1 (2) The subscripts indicate the different stimulus contexts. Thus, Equation 2 states that relative resistance to change is a power function of the relative rate of reinforcement across stimulus contexts, with the "a" parameter indicating sensitivity to relative reinforcement rate. Consistent with behavioral momentum theory, resistance to disruption often has been found to be greater in stimulus contexts that signal higher rates or magnitudes of reinforcement (see Nevin, 1992, for a review). Studies that add response-independent (i.e., free) reinforcement to one stimulus context strongly support the theory that changes in response strength are determined by stimulus–reinforcer relations and are independent of response–reinforcer relations. For instance, Nevin, Tota, Torquato, and Shull (1990) had pigeons pecking lighted disks on separate variable-interval 60-s schedules of intermittent food reinforcement across two components of a multiple schedule. Additional free reinforcers were presented every 15 or 30 s on average when the disk was red, but not when the disk was green. Thus, the response–reinforcer relation was degraded when the disk was red because each reinforcer was not immediately preceded by a response. Consistent with the matching law, response rates were lower in the red context than in the green context. However, the stimulus–reinforcer relation was enhanced in the red context because the overall rate of food presentation was greater. Consistent with behavioral momentum theory, resistance to presession feeding (satiation) and discontinuing reinforcement in both contexts (extinction) was greater in the red context. Similar results have been found when reinforcers are added to a context by reinforcing an alternative response. The findings of Nevin et al. (1990) have been extended across a number of procedures and species including goldfish (Igaki & Sakagami, 2004), rats (Harper, 1999a, 1999b; Shull, Gaynor & Grimes, 2001), pigeons (Podlesnik & Shahan, 2008), and humans (Ahearn, Clark, Gardenier, Chung & Dube, 2003; Cohen, 1996; Mace et al., 1990). The behavioral momentum framework also has been used to account for the partial-reinforcement extinction effect (Nevin & Grace, 1999), to assess the persistence of drug-maintained behavior (Jimenez-Gomez & Shahan, 2007; Shahan & Burke, 2004), to increase task compliance (e.g., Belfiore, Lee, Scheeler & Klein, 2002), and to understand the effects of social policies on global problems (Nevin, 2005). Although behavioral momentum theory is a powerful framework for understanding how a context of reinforcement can affect the persistence of discriminated operant behavior, there are a number of findings that are inconsistent with the theory (see Nevin & Grace, 2000, and accompanying commentary). 
For instance, with equal reinforcement rates across stimulus contexts, resistance to change has been shown to be affected by manipulations to response–reinforcer relations, including schedules that produce different baseline response rates (e.g., Lattal, 1989; Nevin, Grace, Holland & McLean), delays to reinforcement (e.g., Bell, 1999; Grace, Schwendimann & Nevin, 1998; Podlesnik, Jimenez-Gomez, Ward & Shahan, 2006; Podlesnik & Shahan, 2008), and by providing brief stimuli that accompany reinforcement (Reed & Doughty, 2005). Also, it is unclear what factors affect relative resistance to change of responding maintained by conditioned reinforcement (Shahan & Podlesnik, 2005) or two concurrently available responses when different rates of reinforcement are arranged within the same context for those responses (e.g., Bell & Williams, 2002). Preference and resistance to change. As resistance to disruption across stimulus contexts is analogous to the inertial mass of a moving object, behavioral momentum theory also suggests that preference in concurrent-chains procedures for one stimulus context over another is analogous to the gravitational attraction of two bodies (see Nevin & Grace, 2000). In concurrent-chains procedures, responding on the concurrently available initial links provides access to one of two mutually exclusive stimulus contexts called terminal links. As with multiple schedules, independent schedules of reinforcement can function in each terminal-link context. The relative allocation of responding across the two initial links indicates the extent to which an organism prefers one terminal-link context over the other. Moreover, behavioral momentum theory posits that preference provides a measure of the relative conditioned-reinforcing value of the two terminal-link contexts, as described by the contextual-choice model (Grace, 1994). Grace and Nevin (1997) assessed both relative resistance to change in a multiple schedule and preference in a concurrent-chains procedure with pigeons pecking lighted disks for food reinforcement. When the relative rate of reinforcement was manipulated identically and simultaneously across stimulus contexts in the multiple schedule and concurrent-chains procedure, both relative resistance to change and preference were greater with richer contexts of reinforcement. When all the extant resistance to change and preference data were summarized by Grace, Bedell, and Nevin (2002), they found that those measures were related by a structural relation slope of 0.29. Therefore, relative resistance to change and preference both have been conceptualized as expressions of an underlying construct termed response strength, conditioned reinforcement value, or more generally, behavioral mass of discriminated operant behavior (see Nevin & Grace, 2000).
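A minimal numerical sketch of Equation 2 above, using hypothetical baseline (Bo) and disrupted (Bx) response rates, reinforcement rates (r) and an assumed sensitivity parameter a; all values are invented for illustration.

import math

# Hypothetical multiple-schedule data: baseline (Bo) and disrupted (Bx)
# response rates in each of two stimulus contexts, and the reinforcement
# rates (r) signalled by each context.
Bo1, Bx1, r1 = 60.0, 30.0, 120.0   # richer context
Bo2, Bx2, r2 = 55.0, 15.0, 30.0    # leaner context
a = 0.5                            # assumed sensitivity parameter

# Left-hand side of Equation 2: relative resistance to change.
lhs = math.log10(Bx1 / Bo1) / math.log10(Bx2 / Bo2)
# Right-hand side: a power function of the relative reinforcement rate.
rhs = (r2 / r1) ** a

# Both sides come out close to 0.5 for these made-up numbers, illustrating
# that responding in the richer context is proportionally more persistent.
print(lhs, rhs)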
[ { "math_id": 0, "text": "\\log{\\left({Bx\\over Bo}\\right)}\\ = \\ -\\left({f\\over r^b}\\right)" }, { "math_id": 1, "text": "{ {\\log(Bx_1/Bo_1)} \\over{\\log(Bx_2/Bo_2)} }\\ = \\ \\left({r_2\\over r_1}\\right)^a" } ]
https://en.wikipedia.org/wiki?curid=12728577
1273101
Treynor ratio
The Treynor reward to volatility model (sometimes called the reward-to-volatility ratio or Treynor measure), named after Jack L. Treynor, is a measurement of the returns earned in excess of that which could have been earned on an investment that has no diversifiable risk (e.g., Treasury bills or a completely diversified portfolio), per unit of market risk assumed. The Treynor ratio relates excess return over the risk-free rate to the additional risk taken; however, systematic risk is used instead of total risk. The higher the Treynor ratio, the better the performance of the portfolio under analysis. formula_0 Formula. where: formula_1 Treynor ratio, formula_2 portfolio "i"'s return, formula_3 risk free rate formula_4 portfolio "i"'s beta Example. Taking the equation detailed above, let us assume that the expected portfolio return is 20%, the risk free rate is 5%, and the beta of the portfolio is 1.5. Substituting these values, we get the following formula_5 Limitations. Like the Sharpe ratio, the Treynor ratio ("T") does not quantify the value added, if any, of active portfolio management. It is a ranking criterion only. A ranking of portfolios based on the Treynor Ratio is only useful if the portfolios under consideration are sub-portfolios of a broader, fully diversified portfolio. If this is not the case, portfolios with identical systematic risk, but different total risk, will be rated the same. But the portfolio with a higher total risk is less diversified and therefore has a higher unsystematic risk which is not priced in the market. An alternative method of ranking portfolio management is Jensen's alpha, which quantifies the added return as the excess return above the security market line in the capital asset pricing model. As these two methods both determine rankings based on systematic risk alone, they will rank portfolios identically. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
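A small Python sketch of the calculation, reusing the illustrative figures from the example above (returns expressed as decimals):

def treynor_ratio(portfolio_return, risk_free_rate, beta):
    """Excess return over the risk-free rate per unit of systematic risk."""
    return (portfolio_return - risk_free_rate) / beta

# Expected return 20%, risk-free rate 5%, beta 1.5, as in the example.
print(treynor_ratio(0.20, 0.05, 1.5))  # 0.1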
[ { "math_id": 0, "text": "T = \\frac{r_i - r_f}{\\beta_i} " }, { "math_id": 1, "text": "T \\equiv" }, { "math_id": 2, "text": "r_i \\equiv " }, { "math_id": 3, "text": "r_f \\equiv " }, { "math_id": 4, "text": "\\beta_i \\equiv " }, { "math_id": 5, "text": "T = \\frac{0.2 - 0.05}{1.5} = 0.1 " } ]
https://en.wikipedia.org/wiki?curid=1273101
12734062
Regulated rewriting
Regulated rewriting is a specific area of formal languages studying grammatical systems which are able to exert some kind of control over the productions applied in a derivation step. For this reason, the grammatical systems studied in Regulated Rewriting theory are also called "Grammars with Controlled Derivations". Among such grammars are matrix grammars, time variant grammars, programmed grammars, and grammars with regular control languages. Matrix Grammars. Basic concepts. Definition. A Matrix Grammar, formula_0, is a four-tuple formula_1 where 1.- formula_2 is an alphabet of non-terminal symbols 2.- formula_3 is an alphabet of terminal symbols disjoint with formula_2 3.- formula_4 is a finite set of matrices, which are non-empty sequences formula_5, with formula_6, and formula_7, where each formula_8 is an ordered pair formula_9 with formula_10; these pairs are called "productions", and are denoted formula_11. Under these conditions the matrices can be written down as formula_12 4.- S is the start symbol Definition. Let formula_13 be a matrix grammar and let formula_14 be the collection of all productions on matrices of formula_0. We say that formula_0 is of type formula_15 according to Chomsky's hierarchy with formula_16, or "increasing length" or "linear" or "without formula_17-productions" if and only if the grammar formula_18 has the corresponding property. "Note: taken from Abraham 1965, with the names of the nonterminals changed" The classic example. The context-sensitive language formula_19 is generated by the formula_20 formula_21 where formula_22 is the non-terminal set, formula_23 is the terminal set, and the set of matrices is defined as formula_24 formula_25, formula_26, formula_27, formula_28. Time Variant Grammars. Basic concepts. Definition. A Time Variant Grammar is a pair formula_29 where formula_30 is a grammar and formula_31 is a function from the set of natural numbers to the class of subsets of the set of productions. Programmed Grammars. Basic concepts. Definition. A Programmed Grammar is a pair formula_32 where formula_30 is a grammar and formula_33 are the "success" and "fail" functions from the set of productions to the class of subsets of the set of productions. Grammars with regular control language. Basic concepts. Definition. A Grammar With Regular Control Language, formula_34, is a pair formula_35 where formula_30 is a grammar and formula_36 is a regular expression over the alphabet of the set of productions. A naive example. Consider the CFG formula_30 where formula_22 is the non-terminal set, formula_23 is the terminal set, and the production set is defined as formula_37 where formula_38 formula_39, formula_40, formula_41 formula_42, formula_43, and formula_44. Clearly, formula_45. Now, considering the production set formula_14 as an alphabet (since it is a finite set), define the regular expression over formula_14: formula_46. Combining the CFG formula_47 and the regular expression formula_36, we obtain the CFGWRCL formula_48 which generates the language formula_19. Besides these, there are other grammars with regulated rewriting; the four cited above are good examples of how to extend context-free grammars with some kind of control mechanism to obtain a grammatical device with the power of a Turing machine.
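A minimal Python sketch of one admissible derivation in the matrix grammar of the classic example above; representing each matrix as a list of (left, right) productions applied to the leftmost occurrence of the left-hand side is a choice made only for this illustration.

# Sketch: derive a^n b^n c^n with the matrix grammar from the classic example.
# Each matrix is a sequence of productions applied one after the other,
# each rewriting the leftmost occurrence of its left-hand side.

def apply_matrix(word, matrix):
    for left, right in matrix:
        if left not in word:
            raise ValueError("matrix not applicable")
        word = word.replace(left, right, 1)
    return word

m1 = [("S", "abc")]
m2 = [("S", "aAbBcC")]
m3 = [("A", "aA"), ("B", "bB"), ("C", "cC")]
m4 = [("A", "a"), ("B", "b"), ("C", "c")]

def derive(n):
    """Return a^n b^n c^n by one admissible sequence of matrices."""
    if n == 1:
        return apply_matrix("S", m1)
    word = apply_matrix("S", m2)
    for _ in range(n - 2):
        word = apply_matrix(word, m3)
    return apply_matrix(word, m4)

print(derive(4))  # aaaabbbbcccc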
[ { "math_id": 0, "text": "MG" }, { "math_id": 1, "text": "G = (N, T, M, S)" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "M = {m_1, m_2,..., m_n}" }, { "math_id": 5, "text": "m_{i} = [p_{i_1},...,p_{i_{k(i)}}]" }, { "math_id": 6, "text": "k(i)\\geq 1" }, { "math_id": 7, "text": "1 \\leq i \\leq n" }, { "math_id": 8, "text": "p_{i_{j}} 1\\leq j\\leq k(i)" }, { "math_id": 9, "text": "p_{i_{j}} = (L, R)" }, { "math_id": 10, "text": "L \\in (N \\cup T)^*N(N\\cup T)^*, R \\in (N\\cup T)^*" }, { "math_id": 11, "text": "L\\rightarrow R" }, { "math_id": 12, "text": "m_i = [L_{i_{1}}\\rightarrow R_{i_{1}},...,L_{i_{k(i)}}\\rightarrow R_{i_{k(i)}}]" }, { "math_id": 13, "text": "MG = (N, T, M, S)" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "i" }, { "math_id": 16, "text": "i=0,1,2,3" }, { "math_id": 17, "text": "\\lambda" }, { "math_id": 18, "text": "G=(N, T, P, S)" }, { "math_id": 19, "text": "L(G) = \\{ a^nb^nc^n : n\\geq 1\\}" }, { "math_id": 20, "text": "CFMG" }, { "math_id": 21, "text": "G =(N, T, M, S)" }, { "math_id": 22, "text": "N = \\{S, A, B, C\\}" }, { "math_id": 23, "text": "T = \\{a, b, c\\}" }, { "math_id": 24, "text": "M :" }, { "math_id": 25, "text": "\\left[S\\rightarrow abc\\right]" }, { "math_id": 26, "text": "\\left[S\\rightarrow aAbBcC\\right]" }, { "math_id": 27, "text": "\\left[A\\rightarrow aA,B\\rightarrow bB,C\\rightarrow cC\\right]" }, { "math_id": 28, "text": "\\left[A\\rightarrow a,B\\rightarrow b,C\\rightarrow c\\right]" }, { "math_id": 29, "text": "(G, v)" }, { "math_id": 30, "text": "G = (N, T, P, S)" }, { "math_id": 31, "text": "v: \\mathbb{N}\\rightarrow 2^{P}" }, { "math_id": 32, "text": "(G, s)" }, { "math_id": 33, "text": "s, f: P\\rightarrow 2^{P}" }, { "math_id": 34, "text": "GWRCL" }, { "math_id": 35, "text": "(G, e)" }, { "math_id": 36, "text": "e" }, { "math_id": 37, "text": "P = \\{p_0, p_1, p_2, p_3, p_4, p_5, p_6\\}" }, { "math_id": 38, "text": "p_0 = S\\rightarrow ABC" }, { "math_id": 39, "text": "p_1 = A\\rightarrow aA" }, { "math_id": 40, "text": "p_2 = B\\rightarrow bB" }, { "math_id": 41, "text": "p_3 = C\\rightarrow cC" }, { "math_id": 42, "text": "p_4 = A\\rightarrow a" }, { "math_id": 43, "text": "p_5 = B\\rightarrow b" }, { "math_id": 44, "text": "p_6 = C\\rightarrow c" }, { "math_id": 45, "text": "L(G) = \\{ a^*b^*c^*\\}" }, { "math_id": 46, "text": "e=p_0(p_1p_2p_3)^*(p_4p_5p_6)" }, { "math_id": 47, "text": "G" }, { "math_id": 48, "text": "(G,e)\n=(G,p_0(p_1p_2p_3)^*(p_4p_5p_6))" } ]
https://en.wikipedia.org/wiki?curid=12734062
1273491
Exponential stability
Continuous-time linear system with only negative real parts In control theory, a continuous linear time-invariant system (LTI) is exponentially stable if and only if the system has eigenvalues (i.e., the poles of input-to-output systems) with strictly negative real parts (i.e., in the left half of the complex plane). A discrete-time input-to-output LTI system is exponentially stable if and only if the poles of its transfer function lie strictly within the unit circle centered on the origin of the complex plane. Systems that are not LTI are exponentially stable if their convergence is bounded by exponential decay. Exponential stability is a form of asymptotic stability, valid for more general dynamical systems. Practical consequences. An exponentially stable LTI system is one that will not "blow up" (i.e., give an unbounded output) when given a finite input or non-zero initial condition. Moreover, if the system is given a fixed, finite input (i.e., a step), then any resulting oscillations in the output will decay at an exponential rate, and the output will tend asymptotically to a new final, steady-state value. If the system is instead given a Dirac delta impulse as input, then induced oscillations will die away and the system will return to its previous value. If oscillations do not die away, or the system does not return to its original output when an impulse is applied, the system is instead marginally stable. Example exponentially stable LTI systems. The graph on the right shows the impulse response of two similar systems. The green curve is the response of the system with impulse response formula_0, while the blue represents the system formula_1. Although one response is oscillatory, both return to the original value of 0 over time. Real-world example. Imagine putting a marble in a ladle. It will settle itself into the lowest point of the ladle and, unless disturbed, will stay there. Now imagine giving the ball a push, which is an approximation to a Dirac delta impulse. The marble will roll back and forth but eventually resettle in the bottom of the ladle. Drawing the horizontal position of the marble over time would give a gradually diminishing sinusoid rather like the blue curve in the image above. A step input in this case requires supporting the marble away from the bottom of the ladle, so that it cannot roll back. It will stay in the same position and will not, as would be the case if the system were only marginally stable or entirely unstable, continue to move away from the bottom of the ladle under this constant force equal to its weight. It is important to note that in this example the system is not stable for all inputs. Give the marble a big enough push, and it will fall out of the ladle and fall, stopping only when it reaches the floor. For some systems, therefore, it is proper to state that a system is exponentially stable "over a certain range of inputs". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
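A short sketch that samples the two impulse responses mentioned above, showing that both decay toward zero, one monotonically and one while oscillating:

import math

def y_plain(t):
    return math.exp(-t / 5)

def y_oscillatory(t):
    return math.exp(-t / 5) * math.sin(t)

# Both responses shrink toward 0 as t grows.
for t in (0, 5, 10, 20, 40):
    print(t, round(y_plain(t), 4), round(y_oscillatory(t), 4))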
[ { "math_id": 0, "text": "y(t) = e^{-\\frac{t}{5}}" }, { "math_id": 1, "text": "y(t) = e^{-\\frac{t}{5}}\\sin(t)" } ]
https://en.wikipedia.org/wiki?curid=1273491
12735825
Multiplicative digital root
Mathematical formula In number theory, the multiplicative digital root of a natural number formula_0 in a given number base formula_1 is found by multiplying the digits of formula_0 together, then repeating this operation until only a single digit remains; this digit is called the multiplicative digital root of formula_0. The multiplicative digital roots of the first few positive integers are: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 2, 4, 6, 8, 0, 2, 4, 6, 8, 0, 3, 6, 9, 2, 5, 8, 2, 8, 4, 0. (sequence in the OEIS) Multiplicative digital roots are the multiplicative equivalent of digital roots. Definition. Let formula_0 be a natural number. We define the digit product for base formula_2, formula_3, to be the following: formula_4 where formula_5 is the number of digits in the number in base formula_1, and formula_6 is the value of each digit of the number. A natural number formula_0 is a multiplicative digital root if it is a fixed point for formula_7, which occurs if formula_8. For example, in base formula_9, 0 is the multiplicative digital root of 9876, as formula_10 formula_11 formula_12 All natural numbers formula_0 are preperiodic points for formula_7, regardless of the base. This is because if formula_13, then formula_14 and therefore formula_15. If formula_16, then trivially formula_8. Therefore, the only possible multiplicative digital roots are the natural numbers formula_17, and there are no cycles other than the fixed points of formula_17. Multiplicative persistence. The number of iterations formula_18 needed for formula_19 to reach a fixed point is the multiplicative persistence of formula_0. The multiplicative persistence is undefined if it never reaches a fixed point. In base 10, it is conjectured that there is no number with a multiplicative persistence formula_20: this is known to be true for numbers formula_21. The smallest numbers with persistence 0, 1, ... are: 0, 10, 25, 39, 77, 679, 6788, 68889, 2677889, 26888999, 3778888999, 277777788888899. (sequence in the OEIS) The search for these numbers can be sped up by using additional properties of the decimal digits of these record-breaking numbers. These digits must be sorted, and, except for the first two digits, all digits must be 7, 8, or 9. There are also additional restrictions on the first two digits. Based on these restrictions, the number of candidates for formula_22-digit numbers with record-breaking persistence is only proportional to the square of formula_22, a tiny fraction of all possible formula_22-digit numbers. However, any number that is missing from the sequence above would have multiplicative persistence > 11; such numbers are believed not to exist, and would need to have over 20,000 digits if they do exist. Extension to negative integers. The multiplicative digital root can be extended to the negative integers by use of a signed-digit representation to represent each integer. Programming example. The Python example below implements the digit product described in the definition above and uses it to compute multiplicative digital roots and multiplicative persistences. 
def digit_product(x: int, b: int) -> int:
    """Multiply the base-b digits of x together."""
    if x == 0:
        return 0
    total = 1
    while x > 1:
        if x % b == 0:
            # A zero digit makes the whole product zero.
            return 0
        if x % b > 1:
            total = total * (x % b)
        x = x // b
    return total

def multiplicative_digital_root(x: int, b: int) -> int:
    """Repeatedly apply the digit product until a fixed point is reached."""
    seen = []
    while x not in seen:
        seen.append(x)
        x = digit_product(x, b)
    return x

def multiplicative_persistence(x: int, b: int) -> int:
    """Count the digit-product steps needed to reach a fixed point."""
    seen = []
    while x not in seen:
        seen.append(x)
        x = digit_product(x, b)
    return len(seen) - 1
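For example, the functions above reproduce values discussed earlier when run in base 10:

print(multiplicative_digital_root(9876, 10))            # 0
print(multiplicative_persistence(77, 10))                # 4 (77 -> 49 -> 36 -> 18 -> 8)
print(multiplicative_persistence(277777788888899, 10))   # 11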
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "b" }, { "math_id": 2, "text": "b > 1" }, { "math_id": 3, "text": "F_{b} : \\mathbb{N} \\rightarrow \\mathbb{N}" }, { "math_id": 4, "text": "F_{b}(n) = \\prod_{i=0}^{k - 1} d_i" }, { "math_id": 5, "text": "k = \\lfloor \\log_{b}{n} \\rfloor + 1" }, { "math_id": 6, "text": "d_i = \\frac{n \\bmod{b^{i+1}} - n \\bmod b^i}{b^i}" }, { "math_id": 7, "text": "F_{b}" }, { "math_id": 8, "text": "F_{b}(n) = n" }, { "math_id": 9, "text": "b = 10" }, { "math_id": 10, "text": "F_{10}(9876) = (9)(8)(7)(6) = 3024" }, { "math_id": 11, "text": "F_{10}(3024) = (3)(0)(2)(4) = 0" }, { "math_id": 12, "text": "F_{10}(0) = 0" }, { "math_id": 13, "text": "n \\geq b" }, { "math_id": 14, "text": "n = \\sum_{i=0}^{k - 1} d_i b^i" }, { "math_id": 15, "text": "F_{b}(n) = \\prod_{i=0}^{k - 1} d_i = d_{k - 1} \\prod_{i=0}^{k - 2} d_i < d_{k - 1} b^{k - 1} < \\sum_{i=0}^{k - 1} d_i b^i = n" }, { "math_id": 16, "text": "n < b" }, { "math_id": 17, "text": "0 \\leq n < b" }, { "math_id": 18, "text": "i" }, { "math_id": 19, "text": "F_{b}^{i}(n)" }, { "math_id": 20, "text": "i > 11" }, { "math_id": 21, "text": "n \\leq 10^{20585}" }, { "math_id": 22, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=12735825
1273629
Marginal stability
Dynamical system which is neither asymptotically stable nor unstable In the theory of dynamical systems and control theory, a linear time-invariant system is marginally stable if it is neither asymptotically stable nor unstable. Roughly speaking, a system is stable if it always returns to and stays near a particular state (called the steady state), and is unstable if it goes farther and farther away from any state, without being bounded. A marginal system, sometimes referred to as having neutral stability, is between these two types: when displaced, it does not return to near a common steady state, nor does it go away from where it started without limit. Marginal stability, like instability, is a feature that control theory seeks to avoid; we wish that, when perturbed by some external force, a system will return to a desired state. This necessitates the use of appropriately designed control algorithms. In econometrics, the presence of a unit root in observed time series, rendering them marginally stable, can lead to invalid regression results regarding effects of the independent variables upon a dependent variable, unless appropriate techniques are used to convert the system to a stable system. Continuous time. A homogeneous continuous linear time-invariant system is marginally stable if and only if the real part of every pole (eigenvalue) in the system's transfer-function is non-positive, one or more poles have zero real part, and all poles with zero real part are simple roots (i.e. the poles on the imaginary axis are all distinct from one another). In contrast, if all the poles have strictly negative real parts, the system is instead asymptotically stable. If the system is neither stable nor marginally stable, it is unstable. If the system is in state space representation, marginal stability can be analyzed by deriving the Jordan normal form: if and only if the Jordan blocks corresponding to poles with zero real part are scalar is the system marginally stable. Discrete time. A homogeneous discrete time linear time-invariant system is marginally stable if and only if the greatest magnitude of any of the poles (eigenvalues) of the transfer function is 1, and the poles with magnitude equal to 1 are all distinct. That is, the transfer function's spectral radius is 1. If the spectral radius is less than 1, the system is instead asymptotically stable. A simple example involves a single first-order linear difference equation: Suppose a state variable "x" evolves according to formula_0 with parameter "a" > 0. If the system is perturbed to the value formula_1 its subsequent sequence of values is formula_2 If "a" < 1, these numbers get closer and closer to 0 regardless of the starting value formula_1 while if "a" > 1 the numbers get larger and larger without bound. But if "a" = 1, the numbers do neither of these: instead, all future values of "x" equal the value formula_3 Thus the case "a" = 1 exhibits marginal stability. System response. A marginally stable system is one that, if given an impulse of finite magnitude as input, will not "blow up" and give an unbounded output, but neither will the output return to zero. A bounded offset or oscillations in the output will persist indefinitely, and so there will in general be no final steady-state output. If a continuous system is given an input at a frequency equal to the frequency of a pole with zero real part, the system's output will increase indefinitely (this is known as pure resonance). 
This explains why, for a system to be BIBO stable, the real parts of the poles have to be strictly negative (and not just non-positive). A continuous system with purely imaginary poles, i.e. poles with zero real part, will produce sustained oscillations in the output. For example, an undamped second-order system such as the suspension system in an automobile (a mass–spring–damper system) from which the damper has been removed and whose spring is ideal, i.e. frictionless, will in theory oscillate forever once disturbed. Another example is a frictionless pendulum. A system with a pole at the origin is also marginally stable, but in this case there will be no oscillation in the response, as the imaginary part is also zero ("jw" = 0 means "w" = 0 rad/sec). An example of such a system is a mass on a surface with friction. When a sideways impulse is applied, the mass will move but never return to its starting position. The mass will, however, come to rest due to friction, and the sideways movement will remain bounded. Since the locations of the marginal poles must be "exactly" on the imaginary axis or unit circle (for continuous time and discrete time systems respectively) for a system to be marginally stable, this situation is unlikely to occur in practice unless marginal stability is an inherent theoretical feature of the system. Stochastic dynamics. Marginal stability is also an important concept in the context of stochastic dynamics. For example, some processes may follow a random walk, given in discrete time as formula_4 where formula_5 is an i.i.d. error term. This equation has a unit root (a value of 1 for the eigenvalue of its characteristic equation), and hence exhibits marginal stability, so special time series techniques must be used in empirically modeling a system containing such an equation. Marginally stable Markov processes are those that possess null recurrent classes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
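The discrete-time example above can be illustrated with a minimal Python sketch; the step count and the three values of "a" below are arbitrary choices made only for demonstration.

def simulate(a, x0=1.0, steps=10):
    # Iterate x_t = a * x_{t-1}, starting from x0, and return the final value.
    x = x0
    for _ in range(steps):
        x = a * x
    return x

for a in (0.5, 1.0, 1.5):
    print(a, simulate(a))
# 0.5 -> about 0.001 (asymptotically stable: decays toward 0)
# 1.0 -> 1.0         (marginally stable: stays exactly where it was put)
# 1.5 -> about 57.7  (unstable: grows without bound)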
[ { "math_id": 0, "text": "x_t=ax_{t-1}" }, { "math_id": 1, "text": "x_0," }, { "math_id": 2, "text": "ax_0, \\, a^2x_0, \\, a^3x_0, \\, \\dots ." }, { "math_id": 3, "text": "x_0." }, { "math_id": 4, "text": "x_t=x_{t-1}+e_t," }, { "math_id": 5, "text": "e_t" } ]
https://en.wikipedia.org/wiki?curid=1273629
12737925
Abductive logic programming
Logic programming using abductive reasoning Abductive logic programming (ALP) is a high-level knowledge-representation framework that can be used to solve problems declaratively, based on abductive reasoning. It extends normal logic programming by allowing some predicates to be incompletely defined, declared as abducible predicates. Problem solving is effected by deriving hypotheses on these abducible predicates (abductive hypotheses) as solutions of problems to be solved. These problems can be either observations that need to be explained (as in classical abduction) or goals to be achieved (as in normal logic programming). It can be used to solve problems in diagnosis, planning, natural language and machine learning. It has also been used to interpret negation as failure as a form of abductive reasoning. Syntax. Abductive logic programs have three components, formula_0 where: Normally, the logic program P does not contain any clauses whose head (or conclusion) refers to an abducible predicate. (This restriction can be made without loss of generality.) Also in practice, many times, the integrity constraints in IC are often restricted to the form of denials, i.e. clauses of the form: false:- A1...,An, not B1, ..., not Bm. Such a constraint means that it is not possible for all A1...,An to be true and at the same time all of B1...,Bm to be false. Informal meaning and problem solving. The clauses in P define a set of non-abducible predicates and through this they provide a description (or model) of the problem domain. The integrity constraints in IC specify general properties of the problem domain that need to be respected in any solution of a problem. A problem, "G", which expresses either an observation that needs to be explained or a goal that is desired, is represented by a conjunction of positive and negative (NAF) literals. Such problems are solved by computing "abductive explanations" of "G". An abductive explanation of a problem "G" is a set of positive (and sometimes also negative) ground instances of the abducible predicates, such that, when these are added to the logic program P, the problem "G" and the integrity constraints IC both hold. Thus abductive explanations extend the logic program P by the addition of full or partial definitions of the abducible predicates. In this way, abductive explanations form solutions of the problem according to the description of the problem domain in P and IC. The extension or completion of the problem description given by the abductive explanations provides new information, hitherto not contained in the solution to the problem. Quality criteria to prefer one solution over another, often expressed via integrity constraints, can be applied to select specific abductive explanations of the problem "G". Computation in ALP combines the backwards reasoning of normal logic programming (to reduce problems to sub-problems) with a kind of integrity checking to show that the abductive explanations satisfy the integrity constraints. The following two examples, written in simple structured English rather than in the strict syntax of ALP, illustrate the notion of abductive explanation in ALP and its relation to problem solving. Example 1. The abductive logic program, formula_1, has in formula_2 the following sentences: Grass is wet if it rained. Grass is wet if the sprinkler was on. The sun was shining. 
The abducible predicates in formula_3 are "it rained" and "the sprinkler was on" and the only integrity constraint in formula_4 is: false if it rained and the sun was shining. The observation that the grass is wet has two potential explanations, "it rained" and "the sprinkler was on", which entail the observation. However, only the second potential explanation, "the sprinkler was on", satisfies the integrity constraint. Example 2. Consider the abductive logic program consisting of the following (simplified) clauses: X is a citizen if X is born in the USA. X is a citizen if X is born outside the USA and X is a resident of the USA and X is naturalized. X is a citizen if X is born outside the USA and Y is the mother of X and Y is a citizen and X is registered. Mary is the mother of John. Mary is a citizen. together with the five abducible predicates, "is born in the USA", "is born outside the USA", "is a resident of the USA", "is naturalized" and "is registered" and the integrity constraint: false if John is a resident of the USA. The goal "John is citizen" has two abductive solutions, one of which is "John is born in the USA", the other of which is "John is born outside the USA" and "John is registered". The potential solution of becoming a citizen by residence and naturalization fails because it violates the integrity constraint. A more complex example that is also written in the more formal syntax of ALP is the following. Example 3. The abductive logic program below describes a simple model of the lactose metabolism of the bacterium E. coli. The program, "P", describes (in its first rule) that E. coli can feed on the sugar lactose if it makes two enzymes permease and galactosidase. Like all enzymes, these are made if they are coded by a gene (Gene) that is expressed (described by the second rule). The two enzymes of permease and galactosidase are coded by two genes, lac(y) and lac(z) respectively (stated in the fifth and sixth rule of the program), in a cluster of genes (lac(X)) – called an operon – that is expressed when the amounts (amt) of glucose are low and lactose are high or when they are both at medium level (see the fourth and fifth rule). The abducibles, "A", declare all ground instances of the predicates "amount" as assumable. This reflects that in the model the amounts at any time of the various substances are unknown. This is incomplete information that is to be determined in each problem case. The integrity constraints, "IC", state that the amount of any substance (S) can only take one value. feed(lactose) :- make(permease), make(galactosidase). make(Enzyme) :- code(Gene, Enzyme), express(Gene). express(lac(X)) :- amount(glucose, low), amount(lactose, hi). express(lac(X)) :- amount(glucose, medium), amount(lactose, medium). code(lac(y), permease). code(lac(z), galactosidase). temperature(low) :- amount(glucose, low). false :- amount(S, V1), amount(S, V2), V1 ≠ V2. abducible_predicate(amount). The problem goal is formula_5. This can arise either as an observation to be explained or as a state of affairs to be achieved by finding a plan. This goal has two abductive explanations: formula_6 The decision which of the two to adopt could depend on additional information that is available, e.g. 
it may be known that when the level of glucose is low then the organism exhibits a certain behaviour – in the model such additional information is that the temperature of the organism is low – and by observing the truth or falsity of this it is possible to choose the first or second explanation respectively. Once an explanation has been chosen, then this becomes part of the theory, which can be used to draw new conclusions. The explanation and more generally these new conclusions form the solution of the problem. Default reasoning in ALP. As shown in the Theorist system, abduction can also be used for default reasoning. Moreover, abduction in ALP can simulate negation as failure in normal logic programming. Consider the classic example of reasoning by default that a bird can fly if it cannot be shown that the bird is abnormal. Here is a variant of the example using negation as failure: canfly(X) :- bird(X), not(abnormal_flying_bird(X)). abnormal_flying_bird(X):- wounded(X). bird(john). bird(mary). wounded(john). Here is the same example using an abducible predicate normal_flying_bird(_) with an integrity constraint in ALP: canfly(X) :- bird(X), normal_flying_bird(X). false :- normal_flying_bird(X), wounded(X). bird(john). bird(mary). wounded(john). The abducible predicate normal_flying_bird(_), is the contrary of the predicate abnormal_flying_bird(_). Using abduction in ALP it is possible to conclude canfly(mary) under the assumption normal_flying_bird(mary). The conclusion can be derived from the assumption because it cannot be shown that the integrity constraint is violated, which is because it cannot be shown that wounded(mary). In contrast, it is not possible to conclude canfly(john), because the assumption normal_flying_bird(john) together with the fact wounded(john) violates the integrity constraint. This manner of reasoning in ALP simulates reasoning with negation as failure. Conversely, it is possible to simulate abduction in ALP using negation as failure with the stable model semantics. This can be done by adding, for every abducible predicate p, an additional contrary predicate negp, and a pair of clauses: p :- not(negp). negp :- not(p). This pair of clauses has two stable models, one in which p, is true, and the other in which negp, is true. This technique for simulating abduction is commonly used in answer set programming to solve problems using a "generate and test" methodology. Formal semantics. The formal semantics of the central notion of an abductive explanation in ALP can be defined in the following way. Given an abductive logic program, formula_7, an abductive explanation for a problem formula_8 is a set formula_9 of ground atoms on abducible predicates such that: This definition leaves open the choice of the underlying semantics of logic programming through which we give the exact meaning of the entailment relation formula_13 and the notion of consistency of the (extended) logic programs. Any of the different semantics of logic programming such as the completion, stable or well-founded semantics can (and have been used in practice) to give different notions of abductive explanations and thus different forms of ALP frameworks. The above definition takes a particular view on the formalization of the role of the integrity constraints formula_4 as restrictions on the possible abductive solutions. 
It requires that these are entailed by the logic program extended with an abductive solution, thus meaning that in any model of the extended logic program (which one can think of as an ensuing world given formula_9) the requirements of the integrity constraints are met. In some cases this may be unnecessarily strong and the weaker requirement of consistency, namely that formula_14 is consistent, can be sufficient, meaning that there exists at least one model (possible ensuing world) of the extended program where the integrity constraints hold. In practice, in many cases, these two ways of formalizing the role of the integrity constraints coincide as the logic program and its extensions always have a unique model. Many of the ALP systems use the entailment view of the integrity constraints as this can be easily implemented without the need for any extra specialized procedures for the satisfaction of the integrity constraints since this view treats the constraints in the same way as the problem goal. In many practical cases, the third condition in this formal definition of an abductive explanation in ALP is either trivially satisfied or it is contained in the second condition via the use of specific integrity constraints that capture consistency. Implementation and systems. Most of the implementations of ALP extend the SLD resolution-based computational model of logic programming. ALP can also be implemented by means of its link with Answer Set Programming (ASP), where the ASP systems can be employed. Examples of systems of the former approach are ACLP, A-system, CIFF, SCIFF, ABDUAL and ProLogICA. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
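The notion of an abductive explanation can be illustrated with a small propositional sketch in Python that encodes Example 1 above by exhaustively testing sets of abducibles. The predicate names and the brute-force search are simplifications assumed only for this sketch; actual ALP systems compute explanations very differently, as described above.

from itertools import combinations

ABDUCIBLES = ["it_rained", "sprinkler_was_on"]
FACTS = {"sun_was_shining"}          # "The sun was shining." is a fact of the program

def entails(goal, delta):
    # The two program rules: grass is wet if it rained; grass is wet if the sprinkler was on.
    known = FACTS | set(delta)
    if goal == "grass_wet":
        return "it_rained" in known or "sprinkler_was_on" in known
    return goal in known

def satisfies_ic(delta):
    # Integrity constraint: false if it rained and the sun was shining.
    known = FACTS | set(delta)
    return not ("it_rained" in known and "sun_was_shining" in known)

explanations = [set(delta)
                for r in range(len(ABDUCIBLES) + 1)
                for delta in combinations(ABDUCIBLES, r)
                if entails("grass_wet", delta) and satisfies_ic(delta)]

print(explanations)   # [{'sprinkler_was_on'}]: "it rained" is ruled out by the integrity constraint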
[ { "math_id": 0, "text": "\\langle P,A,IC\\rangle," }, { "math_id": 1, "text": "\\langle P,A,\\mathit{IC} \\rangle" }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "\\mathit{IC}" }, { "math_id": 5, "text": "G=\\text{feed(lactose)}" }, { "math_id": 6, "text": "\\begin{cases}\n\\Delta_1=\\{\\text{amount(lactose, hi), amount(glucose, low)}\\}\n\\\\\n\\Delta_2=\\{\\text{amount(lactose, medium), amount(glucose, medium)}\\}\n\\end{cases}" }, { "math_id": 7, "text": "\\langle P,A,\\mathit{IC}\\rangle" }, { "math_id": 8, "text": "G" }, { "math_id": 9, "text": "\\Delta" }, { "math_id": 10, "text": "P \\cup \\Delta \\models G" }, { "math_id": 11, "text": "P \\cup \\Delta \\models IC" }, { "math_id": 12, "text": "P \\cup \\Delta" }, { "math_id": 13, "text": "\\models" }, { "math_id": 14, "text": "P \\cup \\mathit{IC} \\cup \\Delta" } ]
https://en.wikipedia.org/wiki?curid=12737925
12738235
List of scientific publications by Albert Einstein
Albert Einstein (1879–1955) was a renowned theoretical physicist of the 20th century, best known for his special and general theories of relativity. He also made important contributions to statistical mechanics, especially his treatment of Brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation. Despite his reservations about its interpretation, Einstein also made seminal contributions to quantum mechanics and, indirectly, quantum field theory, primarily through his theoretical studies of the photon. Einstein's scientific publications are listed below in four tables: journal articles, book chapters, books and authorized translations. Each publication is indexed in the first column by its number in the Schilpp bibliography ("Albert Einstein: Philosopher–Scientist", pp. 694–730) and by its article number in Einstein's "Collected Papers". Complete references for these two bibliographies may be found below in the Bibliography section. The Schilpp numbers are used for cross-referencing in the Notes (the final column of each table), since they cover a greater time period of Einstein's life at present. The English translations of titles are generally taken from the published volumes of the "Collected Papers". For some publications, however, such official translations are not available; unofficial translations are indicated with a § superscript. Collaborative works by Einstein are highlighted in lavender, with the co-authors provided in the final column of the table. In addition to his scientific publications, the Schilpp bibliography notes over 130 of Einstein's non-scientific works, often on humanitarian or political topics (pp. 730–746). There were also five volumes of Einstein's "Collected Papers" (volumes 1, 5, 8–10) that are devoted to his correspondence, much of which is concerned with scientific questions, but were never prepared for publication. Chronology and major themes. The following chronology of Einstein's scientific discoveries provides a context for the publications listed below, and clarifies the major themes running through his work. The first four entries come from 1905, his "annus mirabilis" or miraculous year in physics. Einstein's epochal contributions from this phase in his career stemmed from a single problem, the fluctuations of a delicately suspended mirror inside a radiation cavity. It led him to examine the nature of light, the statistical mechanics of fluctuations, and the electrodynamics of moving bodies. Journal articles. Most of Einstein's original scientific work appeared as journal articles. Articles on which Einstein collaborated with other scientists are highlighted in lavender, with the co-authors listed in the "Classification and notes" column. These are the total of 272 scientific articles. Book chapters. With the exception of publication #288, the following book chapters were written by Einstein; he had no co-authors. Given that most of the chapters are already in English, the English translations are not given their own columns, but are provided in parentheses after the original title; this helps the table to fit within the margins of the page. These are the total of 31. Books. The following books were written by Einstein. With the exception of publication #278, he had no co-authors. These are the total of 16 books. Authorized translations. The following translations of his work were authorized by Einstein. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. 
The following references are drawn from Abraham Pais' biography of Albert Einstein, "Subtle is the Lord"; see the Bibliography for a complete reference. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "E = h \\nu" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "\\nu" }, { "math_id": 4, "text": "E = pc" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "m" }, { "math_id": 8, "text": "E = mc^2" }, { "math_id": 9, "text": "\\Lambda" } ]
https://en.wikipedia.org/wiki?curid=12738235
1273836
Exsecant
Trigonometric function defined as secant minus one The external secant function (exsecant, symbolized exsec) is a trigonometric function defined in terms of the secant function: formula_0 It was introduced in 1855 by American civil engineer Charles Haslett, who used it in conjunction with the existing versine function, formula_1 for designing and measuring circular sections of railroad track. It was adopted by surveyors and civil engineers in the United States for railroad and road design, and since the early 20th century has sometimes been briefly mentioned in American trigonometry textbooks and general-purpose engineering manuals. For completeness, a few books also defined a coexsecant or excosecant function (symbolized coexsec or excsc), formula_2formula_3 the exsecant of the complementary angle, though it was not used in practice. While the exsecant has occasionally found other applications, today it is obscure and mainly of historical interest. As a line segment, an external secant of a circle has one endpoint on the circumference, and then extends radially outward. The length of this segment is the radius of the circle times the trigonometric exsecant of the central angle between the segment's inner endpoint and the point of tangency for a line through the outer endpoint and tangent to the circle. Etymology. The word "secant" comes from Latin for "to cut", and a general secant line "cuts" a circle, intersecting it twice; this concept dates to antiquity and can be found in Book 3 of Euclid's "Elements", as used e.g. in the intersecting secants theorem. 18th century sources in Latin called "any" non-tangential line segment external to a circle with one endpoint on the circumference a "secans exterior". The trigonometric "secant", named by Thomas Fincke (1583), is more specifically based on a line segment with one endpoint at the center of a circle and the other endpoint outside the circle; the circle divides this segment into a radius and an external secant. The external secant segment was used by Galileo Galilei (1632) under the name "secant". History and applications. In the 19th century, most railroad tracks were constructed out of arcs of circles, called "simple curves". Surveyors and civil engineers working for the railroad needed to make many repetitive trigonometrical calculations to measure and plan circular sections of track. In surveying, and more generally in practical geometry, tables of both "natural" trigonometric functions and their common logarithms were used, depending on the specific calculation. Using logarithms converts expensive multiplication of multi-digit numbers to cheaper addition, and logarithmic versions of trigonometric tables further saved labor by reducing the number of necessary table lookups. The "external secant" or "external distance" of a curved track section is the shortest distance between the track and the intersection of the tangent lines from the ends of the arc, which equals the radius times the trigonometric exsecant of half the central angle subtended by the arc, formula_4 By comparison, the "versed sine" of a curved track section is the furthest distance from the "long chord" (the line segment between endpoints) to the track – cf. Sagitta – which equals the radius times the trigonometric versine of half the central angle, formula_5 These are both natural quantities to measure or calculate when surveying circular arcs, which must subsequently be multiplied or divided by other quantities. 
Charles Haslett (1855) found that directly looking up the logarithm of the exsecant and versine saved significant effort and produced more accurate results compared to calculating the same quantity from values found in previously available trigonometric tables. The same idea was adopted by other authors, such as Searles (1880). By 1913 Haslett's approach was so widely adopted in the American railroad industry that, in that context, "tables of external secants and versed sines [were] more common than [were] tables of secants". In the late-19th and 20th century, railroads began using arcs of an Euler spiral as a track transition curve between straight or circular sections of differing curvature. These spiral curves can be approximately calculated using exsecants and versines. Solving the same types of problems is required when surveying circular sections of canals and roads, and the exsecant was still used in mid-20th century books about road surveying. The exsecant has sometimes been used for other applications, such as beam theory and depth sounding with a wire. In recent years, the availability of calculators and computers has removed the need for trigonometric tables of specialized functions such as this one. Exsecant is generally not directly built into calculators or computing environments (though it has sometimes been included in software libraries), and calculations in general are much cheaper than in the past, no longer requiring tedious manual labor. Catastrophic cancellation for small angles. Naïvely evaluating the expressions formula_6 (versine) and formula_7 (exsecant) is problematic for small angles where formula_8 Computing the difference between two approximately equal quantities results in catastrophic cancellation: because most of the digits of each quantity are the same, they cancel in the subtraction, yielding a lower-precision result. For example, the secant of 1° is sec 1° ≈ &lt;wbr /&gt;​, with the leading several digits wasted on zeros, while the common logarithm of the exsecant of 1° is log exsec 1° ≈ &lt;wbr /&gt;​, all of whose digits are meaningful. If the logarithm of exsecant is calculated by looking up the secant in a six-place trigonometric table and then subtracting 1, the difference sec 1° − 1 ≈ &lt;wbr /&gt;​ has only 3 significant digits, and after computing the logarithm only three digits are correct, log(sec 1° − 1) ≈ &lt;wbr /&gt;​−3.818156. For even smaller angles loss of precision is worse. If a table or computer implementation of the exsecant function is not available, the exsecant can be accurately computed as formula_9 or using versine, formula_10 which can itself be computed as formula_11&lt;wbr /&gt;​formula_12; Haslett used these identities to compute his 1855 exsecant and versine tables. For a sufficiently small angle, a circular arc is approximately shaped like a parabola, and the versine and exsecant are approximately equal to each-other and both proportional to the square of the arclength. Mathematical identities. Inverse function. The inverse of the exsecant function, which might be symbolized arcexsec, is well defined if its argument formula_13 or formula_14 and can be expressed in terms of other inverse trigonometric functions (using radians for the angle): formula_15 the arctangent expression is well behaved for small angles. Calculus. While historical uses of the exsecant did not explicitly involve calculus, its derivative and antiderivative (for x in radians) are: formula_16 where ln is the natural logarithm. 
See also Integral of the secant function. Double angle identity. The exsecant of twice an angle is: formula_17 Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
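The catastrophic cancellation discussed above is easy to demonstrate numerically. A minimal Python sketch, with the test angle chosen arbitrarily small, compares the naive evaluation sec θ − 1 with the identity exsec θ = tan θ · tan(θ/2) given above:

import math

theta = 1e-8                                     # a very small angle, in radians

naive = 1.0 / math.cos(theta) - 1.0              # sec(theta) - 1, suffers cancellation
stable = math.tan(theta) * math.tan(theta / 2)   # exsec via the tangent identity

print(naive)    # 0.0 in double precision -- every significant digit has been lost
print(stable)   # about 5e-17, close to the true value theta**2 / 2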
[ { "math_id": 0, "text": "\\operatorname{exsec} \\theta = \\sec\\theta - 1 = \\frac{1}{\\cos\\theta} - 1." }, { "math_id": 1, "text": "\\operatorname{vers}\\theta = 1 - \\cos\\theta," }, { "math_id": 2, "text": "\\operatorname{coexsec} \\theta = {}" }, { "math_id": 3, "text": "\\csc\\theta - 1," }, { "math_id": 4, "text": "R\\operatorname{exsec}\\tfrac12\\Delta." }, { "math_id": 5, "text": "R\\operatorname{vers}\\tfrac12\\Delta." }, { "math_id": 6, "text": "1 - \\cos \\theta" }, { "math_id": 7, "text": "\\sec \\theta - 1" }, { "math_id": 8, "text": "\\sec \\theta \\approx \\cos \\theta \\approx 1." }, { "math_id": 9, "text": "\\operatorname{exsec} \\theta = \\tan \\theta \\, \\tan \\tfrac12\\theta\\vphantom\\Big|," }, { "math_id": 10, "text": "\\operatorname{exsec} \\theta = \\operatorname{vers} \\theta \\, \\sec \\theta," }, { "math_id": 11, "text": "\\operatorname{vers} \\theta = 2 \\bigl({\\sin \\tfrac12\\theta}\\bigr)\\vphantom)^2\\vphantom\\Big| = {}" }, { "math_id": 12, "text": "\\sin \\theta \\, \\tan \\tfrac12\\theta\\,\\vphantom\\Big|" }, { "math_id": 13, "text": "y \\geq 0" }, { "math_id": 14, "text": "y \\leq -2" }, { "math_id": 15, "text": "\n\\operatorname{arcexsec}y = \\arcsec(y+1) = \\begin{cases}\n{\\arctan}\\bigl(\\!{\\textstyle \\sqrt{y^2+2y}}\\,\\bigr) & \\text{if}\\ \\ y \\geq 0, \\\\[6mu]\n\\text{undefined} & \\text{if}\\ \\ {-2} < y < 0, \\\\[4mu]\n\\pi - {\\arctan}\\bigl(\\!{\\textstyle \\sqrt{y^2+2y}}\\,\\bigr) & \\text{if}\\ \\ y \\leq {-2}; \\\\\n\\end{cases}_{\\vphantom.}\n" }, { "math_id": 16, "text": "\\begin{align}\n\\frac{\\mathrm{d}}{\\mathrm{d}x}\\operatorname{exsec} x &= \\tan x\\,\\sec x, \\\\[10mu]\n\\int\\operatorname{exsec} x\\,\\mathrm{d}x &= \\ln\\bigl|\\sec x + \\tan x\\bigr| - x + C,\\vphantom{\\int_|}\n\\end{align}" }, { "math_id": 17, "text": "\\operatorname{exsec} 2\\theta = \\frac{2 \\sin^2 \\theta} {1 - 2 \\sin^2 \\theta}." }, { "math_id": 18, "text": "x \\mapsto e^x - 1," } ]
https://en.wikipedia.org/wiki?curid=1273836
12739994
Whole-body counting
In health physics, whole-body counting refers to the measurement of radioactivity "within" the human body. The technique is primarily applicable to radioactive material that emits gamma rays. Alpha particle decays can also be detected indirectly by their coincident gamma radiation. In certain circumstances, beta emitters can be measured, but with degraded sensitivity. The instrument used is normally referred to as a whole body counter. This must not be confused with a "whole body monitor" which used for personnel exit monitoring, which is the term used in radiation protection for checking for external contamination of a whole body of a person leaving a radioactive contamination controlled area. Principles. If a gamma ray is emitted from a radioactive element within the human body due to radioactive decay, and its energy is sufficient to escape then it can be detected. This would be by means of either a scintillation detector or a semiconductor detector placed in close proximity to the body. Radioactive decay may give rise to gamma radiation which cannot escape the body due to being absorbed or other interaction whereby it can lose energy; so account must be taken of this in any measurement analysis. Whole-body counting is suitable to detect radioactive elements that emit neutron radiation or high-energy beta radiation (by measuring secondary x-rays or gamma radiation) only in experimental applications. There are many ways a person can be positioned for this measurement: sitting, lying, standing. The detectors can be single or multiple and can either be stationary or moving. The advantages of whole-body counting are that it measures body contents directly, does not rely on indirect bio-assay methods (such as urinalysis) as it can measure insoluble radionuclides in the lungs. On the other hand, disadvantages of whole-body counting are that except in special circumstances it can only be used for gamma emitters due to self-shielding of the human body, and it can misinterpret external contamination as an internal contamination. To prevent this latter case scrupulous de-contamination of the individual must be performed first. Whole body counting may be unable to distinguish between radioisotopes that have similar gamma energies. Alpha and beta radiation is largely shielded by the body and will not be detected externally, but the coincident gamma from alpha decay may be detected, as well as radiation from the parent or daughter nuclides. Calibration. Any radiation detector is a relative instrument, that is to say the measurement value can only be converted to an amount of material present by comparing the response signal (usually counts per minute, or per second) to the signal obtained from a standard whose quantity (activity) is well known. A whole-body counter is calibrated with a device known as a "phantom" containing a known distribution and known activity of radioactive material. The accepted industry standard is the Bottle Manikin Absorber phantom (BOMAB). The BOMAB phantom consists of 10 high-density polyethylene containers and is used to calibrate "in vivo" counting systems that are designed to measure the radionuclides that emit high energy photons (200 keV &lt; E &lt; 3 MeV). Because many different types of phantoms had been used to calibrate "in vivo" counting systems, the importance of establishing standard specifications for phantoms was emphasized at the 1990 international meeting of "in vivo" counting professionals held at the National Institute of Standards and Technology (NIST). 
The consensus of the meeting attendees was that standard specifications were needed for the BOMAB phantom. The standard specifications for the BOMAB phantom provide the basis for a consistent phantom design for calibrating "in vivo" measurement systems. Such systems are designed to measure radionuclides that emit high-energy photons and that are assumed to be homogeneously distributed in the body. Sensitivity. A well designed counting system can detect most gamma emitters (&gt;200 keV) at levels far below those that would cause adverse health effects in people. A typical detection limit for radioactive caesium (Cs-137) is about 40 Bq. The Annual Limit on Intake (i.e., the amount that would give a person a dose equal to the worker limit of 20 mSv) is about 2,000,000 Bq. The amount of naturally occurring radioactive potassium present in all humans is also easily detectable. Risk of death by potassium deficiency approaches 100% as the whole-body count approaches zero. The reason that these instruments are so sensitive is that they are often housed in low-background counting chambers. Typically this is a small room with very thick walls made of low-background steel (~20 cm) and sometimes lined with a thin layer of lead (~1 cm). This shielding can reduce background radiation inside the chamber by several orders of magnitude. Count times and detection limit. Depending on the counting geometry of the system, count times can be from 1 minute to about 30 minutes. The sensitivity of a counter depends on counting time, so for the same system, the longer the count, the better the detection limit. The detection limit, often referred to as the Minimum Detectable Activity (MDA), is given by the formula: formula_0 ...where N is the number of background counts in the region of interest, E is the counting efficiency, and T is the counting time. This quantity is approximately twice the Decision Limit, another statistical quantity that can be used to decide whether any activity is present (i.e., a trigger point for more analysis). History. In 1950, Leonidas D. Marinelli developed and applied a low-level gamma-ray Whole Body Counter to measure people who had been injected with radium in the early 1920s and 1930s, contaminated by exposure to atomic explosions, or accidentally exposed in industry and medicine. The sensitive methods of dosimetry and spectrometry that Marinelli developed also yielded the total content of natural potassium in the human body. Marinelli's Whole Body Counter was first used at Billings Hospital at the University of Chicago in 1952. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
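A worked illustration of the MDA formula above, as a minimal Python sketch; the background counts, efficiency and counting time used here are hypothetical placeholder values, not measured data:

import math

N = 1000.0      # hypothetical background counts in the region of interest
E = 0.02        # hypothetical counting efficiency (counts per decay)
T = 600.0       # hypothetical counting time in seconds

mda = (2.707 + 4.65 * math.sqrt(N)) / (E * T)
print(f"MDA ~ {mda:.1f} Bq")   # about 12.5 Bq for these placeholder values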
[ { "math_id": 0, "text": "MDA = \\frac{2.707+4.65\\sqrt{N}}{ET} " } ]
https://en.wikipedia.org/wiki?curid=12739994
12740519
Edge-matching puzzle
Tiling puzzle An edge-matching puzzle is a type of tiling puzzle involving tiling an area with (typically regular) polygons whose edges are distinguished with colours or patterns, in such a way that the edges of adjacent tiles match. Edge-matching puzzles are known to be NP-complete, and capable of conversion to and from equivalent jigsaw puzzles and polyomino packing puzzle. The first edge-matching puzzles were patented in the U.S. by E. L. Thurston in 1892. Current examples of commercial edge-matching puzzles include the Eternity II puzzle, Tantrix, Kadon Enterprises' range of edge-matching puzzles, and the Edge Match Puzzles iPhone app. Notable variations. MacMahon Squares. MacMahon Squares is the name given to a recreational math puzzle suggested by British mathematician Percy MacMahon, who published a treatise on edge-colouring of a variety of shapes in 1921. This particular puzzle uses 24 tiles consisting of all permutations of 3 colors for the edges of a square. The tiles must be arranged into a 6×4 rectangular area such that all edges match and, furthermore, only one color is used for the outside edge of the rectangle. This puzzle can be extended to tiles with permutations of 4 colors, arranged in 10×7. In either case, the squares are a subset of the Wang tiles, reducing tiles that are similar under rotation. Solutions number well into the thousands. MacMahon Squares, along with variations on the idea, was commercialized as Multimatch. TetraVex. TetraVex is a computer game that presents the player with a square grid and a collection of tiles, by default nine square tiles for a 3×3 grid. Each tile has four single-digit numbers, one on each edge. The objective of the game is to place the tiles into the grid in the proper position, completing this puzzle as quickly as possible. The tiles cannot be rotated, and two can be placed next to each other only if the numbers on adjacent edges match. TetraVex was inspired by "the problem of tiling the plane" as described by Donald Knuth on page 382 of "Volume 1: Fundamental Algorithms", the first book in his series "The Art of Computer Programming". It was named by Scott Ferguson, the development lead and an architect of the first version of Visual Basic, who wrote it for Windows Entertainment Pack 3. TetraVex is also available as an open source game in the GNOME Games collection. The possible number of TetraVex can be counted. On a formula_0 board there are formula_1 horizontal and vertical pairs that must match and formula_2 numbers along the edges that can be chosen arbitrarily. Hence there are formula_3 choices of 10 digits, i.e. formula_4 possible boards. Deciding if a TetraVex puzzle has a solution is in general NP-complete. Its computational approach involves the Douglas-Rachford algorithm. Hexagons. Serpentiles are the hexagonal tiles used in various abstract strategy games such as Psyche-Paths, Kaliko, and Tantrix. Within each serpentile, the edges are paired, thus restricting the set of tiles in such a way that no edge color occurs an odd number of times within the hexagon. Three dimensions. Mathematically, edge-matching puzzles are two-dimensional. A 3D edge-matching puzzle is such a puzzle that is not flat in Euclidean space, so involves tiling a three-dimensional area such as the surface of a regular polyhedron. As before, polygonal pieces have distinguished edges to require that the edges of adjacent pieces match. 3D edge-matching puzzles are not currently under direct U.S. patent protection, since the 1892 patent by E. L. 
Thurston has expired. Current examples of commercial puzzles include the Dodek Duo, The Enigma, Mental Misery, and Kadon Enterprises' range of three-dimensional edge-matching puzzles. Incorporation of edge matching. The Carcassonne board game employs edge matching to constrain where its square tiles may be placed. The original game has three types of edges: fields, roads and cities.
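The TetraVex board-counting formula and the edge-matching condition described above can be illustrated with a minimal Python sketch; encoding a tile as a (top, right, bottom, left) tuple of digits is an assumption made only for the sketch:

def board_count(n):
    # Number of possible n-by-n TetraVex boards: 10 ** (2 * n * (n + 1)).
    return 10 ** (2 * n * (n + 1))

print(board_count(2))   # 10**12
print(board_count(3))   # 10**24

def valid(board):
    # board[r][c] is a tile (top, right, bottom, left); adjacent edges must carry the same digit.
    n = len(board)
    for r in range(n):
        for c in range(n):
            if c + 1 < n and board[r][c][1] != board[r][c + 1][3]:
                return False
            if r + 1 < n and board[r][c][2] != board[r + 1][c][0]:
                return False
    return True

print(valid([[(1, 2, 3, 4), (5, 6, 7, 2)],
             [(3, 9, 0, 1), (7, 8, 2, 9)]]))   # True: every shared edge agrees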
[ { "math_id": 0, "text": "n\\times{}n" }, { "math_id": 1, "text": "n(n-1)" }, { "math_id": 2, "text": "4n" }, { "math_id": 3, "text": "2n(n-1)+4n=2n(n+1)" }, { "math_id": 4, "text": "10^{2n(n+1)}" } ]
https://en.wikipedia.org/wiki?curid=12740519
12741867
Steffensen's inequality
Inequality in mathematics Steffensen's inequality is an inequality in mathematics named after Johan Frederik Steffensen. It is an integral inequality in real analysis, stating: If ƒ : ["a", "b"] → R is a non-negative, monotonically decreasing, integrable function and "g" : ["a", "b"] → [0, 1] is another integrable function, then formula_0 where formula_1
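The inequality can be verified numerically for particular functions. A minimal Python sketch, with f, g and the interval chosen arbitrarily so as to satisfy the hypotheses, uses a simple midpoint Riemann sum:

import math

a, b = 0.0, 2.0
f = lambda x: math.exp(-x)        # non-negative and monotonically decreasing on [a, b]
g = lambda x: 0.5                 # takes values in [0, 1]

def integral(h, lo, hi, m=100000):
    # Midpoint Riemann sum of h on [lo, hi].
    w = (hi - lo) / m
    return sum(h(lo + (i + 0.5) * w) for i in range(m)) * w

k = integral(g, a, b)                            # k = 1.0 for these choices
lower = integral(f, b - k, b)                    # about 0.2325
middle = integral(lambda x: f(x) * g(x), a, b)   # about 0.4323
upper = integral(f, a, a + k)                    # about 0.6321

print(lower <= middle <= upper)   # True, as Steffensen's inequality predicts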
[ { "math_id": 0, "text": "\\int_{b - k}^{b} f(x) \\, dx \\leq \\int_{a}^{b} f(x) g(x) \\, dx \\leq \\int_{a}^{a + k} f(x) \\, dx," }, { "math_id": 1, "text": "k = \\int_{a}^{b} g(x) \\, dx." } ]
https://en.wikipedia.org/wiki?curid=12741867
12744141
Lagrange bracket
Lagrange brackets are certain expressions closely related to Poisson brackets that were introduced by Joseph Louis Lagrange in 1808–1810 for the purposes of mathematical formulation of classical mechanics, but unlike the Poisson brackets, have fallen out of use. Definition. Suppose that ("q"1, ..., "q""n", "p"1, ..., "p""n") is a system of canonical coordinates on a phase space. If each of them is expressed as a function of two variables, "u" and "v", then the Lagrange bracket of "u" and "v" is defined by the formula formula_0 Properties. Lagrange brackets do not depend on the system of canonical coordinates used: if formula_1 is a canonical transformation, then the Lagrange bracket is an invariant of the transformation, in the sense that formula_2 Therefore, the subscripts indicating the canonical coordinates are often omitted. If the canonical coordinates are expressed as functions of a coordinate system "u" on the phase space, the symplectic form Ω can be written formula_3 where the matrix formula_4 represents the components of Ω, viewed as a tensor, in the coordinates "u". This matrix is the inverse of the matrix formed by the Poisson brackets formula_5 of the coordinates "u". In particular, coordinates on a phase space are canonical if and only if the Lagrange brackets between them take the standard form formula_6 Lagrange matrix in canonical transformations. The concept of Lagrange brackets can be expanded to that of matrices by defining the Lagrange matrix. Consider the following canonical transformation: formula_7 Defining formula_8, the Lagrange matrix is defined as formula_9, where formula_10 is the symplectic matrix under the same conventions used to order the set of coordinates. It follows from the definition that: formula_11 The Lagrange matrix satisfies the following known properties: formula_12 where formula_13 is known as the Poisson matrix, whose elements correspond to Poisson brackets. The last identity can also be stated as follows: formula_14 Note that the summation here involves generalized coordinates as well as generalized momenta. The invariance of the Lagrange bracket can be expressed as formula_15, which directly leads to the symplectic condition formula_16. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
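The canonicity criterion expressed by formula_6 can be checked symbolically for a concrete transformation. A minimal SymPy sketch, using the action-angle-type substitution q = √(2P) sin Q, p = √(2P) cos Q purely as an arbitrary test case:

import sympy as sp

Q, P = sp.symbols('Q P', positive=True)

# A candidate canonical transformation (one degree of freedom):
q = sp.sqrt(2 * P) * sp.sin(Q)
p = sp.sqrt(2 * P) * sp.cos(Q)

def lagrange_bracket(u, v):
    # [u, v] = dq/du * dp/dv - dp/du * dq/dv, the n = 1 case of the defining formula.
    return sp.simplify(sp.diff(q, u) * sp.diff(p, v) - sp.diff(p, u) * sp.diff(q, v))

print(lagrange_bracket(Q, P))   # 1, so [Q, P] = 1 and the transformation is canonical
print(lagrange_bracket(Q, Q))   # 0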
[ { "math_id": 0, "text": "\n[ u, v ]_{p,q} = \\sum_{i=1}^n \\left(\\frac{\\partial q_i}{\\partial u} \\frac{\\partial p_i}{\\partial v} - \\frac{\\partial p_i}{\\partial u} \\frac{\\partial q_i}{\\partial v} \\right).\n" }, { "math_id": 1, "text": " Q=Q(q,p), P=P(q,p) " }, { "math_id": 2, "text": " [ u, v]_{q,p} = [u , v]_{Q,P}" }, { "math_id": 3, "text": " \\Omega = \\frac 12 \\Omega_{ij} du^i \\wedge du^j " }, { "math_id": 4, "text": " \\Omega_{ij} = [ u_i, u_j ]_{p,q}, \\quad 1\\leq i,j\\leq 2n " }, { "math_id": 5, "text": " \\left(\\Omega^{-1}\\right)_{ij} = \\{u_i, u_j\\}, \\quad 1 \\leq i,j \\leq 2n " }, { "math_id": 6, "text": " [Q_i, Q_j]_{p,q}=0, \\quad [P_i,P_j]_{p,q}=0,\\quad [Q_i, P_j]_{p,q}=-[P_j, Q_i]_{p,q}=\\delta_{ij}. " }, { "math_id": 7, "text": "\\eta = \n \\begin{bmatrix}\n q_1\\\\\n \\vdots \\\\\n q_N\\\\\n p_1\\\\\n \\vdots\\\\\n p_N\\\\ \n \\end{bmatrix} \\quad \\rightarrow \\quad \\varepsilon = \n \\begin{bmatrix}\n Q_1\\\\\n \\vdots \\\\\n Q_N\\\\\n P_1\\\\\n \\vdots\\\\\n P_N\\\\ \n \\end{bmatrix} " }, { "math_id": 8, "text": "M := \\frac{\\partial (\\mathbf{Q}, \\mathbf{P})}{\\partial (\\mathbf{q}, \\mathbf{p})}" }, { "math_id": 9, "text": "\\mathcal L(\\eta) = M^TJM\n" }, { "math_id": 10, "text": "J" }, { "math_id": 11, "text": "\\mathcal L_{ij}(\\eta) = [M^TJM]_{ij} = \\sum_{k=1}^{N} \\left(\\frac{\\partial \\varepsilon_k}{\\partial \\eta_{i}} \\frac{\\partial \\varepsilon_{N+k}}{\\partial \\eta_j} - \\frac{\\partial \\varepsilon_{N+k}}{\\partial \\eta_i } \\frac{\\partial \\varepsilon_{k}}{\\partial \\eta_j}\\right) = \\sum_{k=1}^{N} \\left(\\frac{\\partial Q_k}{\\partial \\eta_{i}} \\frac{\\partial P_{k}}{\\partial \\eta_j} - \\frac{\\partial P_{k}}{\\partial \\eta_i } \\frac{\\partial Q_{k}}{\\partial \\eta_j}\\right)= [\\eta_i,\\eta_j]_\\varepsilon\n" }, { "math_id": 12, "text": "\\begin{align}\n\\mathcal L^T &= - \\mathcal L \\\\\n|\\mathcal L| &= {|M|^2}\\\\\n\\mathcal L^{-1}(\\eta)&= -M^{-1} J (M^{-1})^T = - \\mathcal P(\\eta)\\\\\n\\end{align}\n" }, { "math_id": 13, "text": "\\mathcal P(\\eta)\n" }, { "math_id": 14, "text": "\\sum_{k=1}^{2N} \\{\\eta_i,\\eta_k\\}[\\eta_k,\\eta_j] = -\\delta_{ij} \n" }, { "math_id": 15, "text": "[\\eta_i,\\eta_j]_\\varepsilon=[ \\eta_i,\\eta_j]_\\eta = J_{ij}\n" }, { "math_id": 16, "text": "M^TJM = J \n" } ]
https://en.wikipedia.org/wiki?curid=12744141
12744687
Right half-plane
In complex analysis, the (open) right half-plane is the set of all points in the complex plane whose real part is strictly positive, that is, the set formula_0.
[ { "math_id": 0, "text": "\\{z \\in \\Complex\\, :\\, \\mbox{Re}(z) > 0\\}" } ]
https://en.wikipedia.org/wiki?curid=12744687
12745577
Quadrature of the Parabola
Quadrature of the Parabola () is a treatise on geometry, written by Archimedes in the 3rd century BC and addressed to his Alexandrian acquaintance Dositheus. It contains 24 propositions regarding parabolas, culminating in two proofs showing that the area of a parabolic segment (the region enclosed by a parabola and a line) is formula_0 that of a certain inscribed triangle. It is one of the best-known works of Archimedes, in particular for its ingenious use of the method of exhaustion and in the second part of a geometric series. Archimedes dissects the area into infinitely many triangles whose areas form a geometric progression. He then computes the sum of the resulting geometric series, and proves that this is the area of the parabolic segment. This represents the most sophisticated use of a "reductio ad absurdum" argument in ancient Greek mathematics, and Archimedes' solution remained unsurpassed until the development of integral calculus in the 17th century, being succeeded by Cavalieri's quadrature formula. Main theorem. A parabolic segment is the region bounded by a parabola and line. To find the area of a parabolic segment, Archimedes considers a certain inscribed triangle. The base of this triangle is the given chord of the parabola, and the third vertex is the point on the parabola such that the tangent to the parabola at that point is parallel to the chord. Proposition 1 of the work states that a line from the third vertex drawn parallel to the axis divides the chord into equal segments. The main theorem claims that the area of the parabolic segment is formula_0 that of the inscribed triangle. Structure of the text. Conic sections such as the parabola were already well known in Archimedes' time thanks to Menaechmus a century earlier. However, before the advent of the differential and integral calculus, there were no easy means to find the area of a conic section. Archimedes provides the first attested solution to this problem by focusing specifically on the area bounded by a parabola and a chord. Archimedes gives two proofs of the main theorem: one using abstract mechanics and the other one by pure geometry. In the first proof, Archimedes considers a lever in equilibrium under the action of gravity, with weighted segments of a parabola and a triangle suspended along the arms of a lever at specific distances from the fulcrum. When the center of gravity of the triangle is known, the equilibrium of the lever yields the area of the parabola in terms of the area of the triangle which has the same base and equal height. Archimedes here deviates from the procedure found in "On the Equilibrium of Planes" in that he has the centers of gravity at a level below that of the balance. The second and more famous proof uses pure geometry, particularly the sum of a geometric series. Of the twenty-four propositions, the first three are quoted without proof from Euclid's "Elements of Conics" (a lost work by Euclid on conic sections). Propositions 4 and 5 establish elementary properties of the parabola. Propositions 6–17 give the mechanical proof of the main theorem; propositions 18–24 present the geometric proof. Geometric proof. Dissection of the parabolic segment. The main idea of the proof is the dissection of the parabolic segment into infinitely many triangles, as shown in the figure to the right. Each of these triangles is inscribed in its own parabolic segment in the same way that the blue triangle is inscribed in the large segment. Areas of the triangles. 
In propositions eighteen through twenty-one, Archimedes proves that the area of each green triangle is formula_1 the area of the blue triangle, so that both green triangles together sum to formula_2 the area of the blue triangle. From a modern point of view, this is because the green triangle has formula_3 the width and formula_2 the height of the blue triangle: &lt;templatestyles src="Block indent/styles.css"/&gt; Following the same argument, each of the formula_4 yellow triangles has formula_1 the area of a green triangle or formula_5 the area of the blue triangle, summing to formula_6 the area of the blue triangle; each of the formula_7 red triangles has formula_1 the area of a yellow triangle, summing to formula_8 the area of the blue triangle; etc. Using the method of exhaustion, it follows that the total area of the parabolic segment is given by formula_9 Here "T" represents the area of the large blue triangle, the second term represents the total area of the two green triangles, the third term represents the total area of the four yellow triangles, and so forth. This simplifies to give formula_10 Sum of the series. To complete the proof, Archimedes shows that formula_11 The formula above is a geometric series—each successive term is one fourth of the previous term. In modern mathematics, that formula is a special case of the sum formula for a geometric series. Archimedes evaluates the sum using an entirely geometric method, illustrated in the adjacent picture. This picture shows a unit square which has been dissected into an infinity of smaller squares. Each successive purple square has one fourth the area of the previous square, with the total purple area being the sum formula_12 However, the purple squares are congruent to either set of yellow squares, and so cover formula_13 of the area of the unit square. It follows that the series above sums to formula_0 (since formula_14). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
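The 4/3 ratio can be verified numerically on a concrete parabola. A minimal Python sketch, using the arbitrary choice of the parabola y = x^2 with the chord y = 1, compares the segment area with 4/3 of the inscribed triangle and also sums Archimedes' geometric series:

# Parabolic segment bounded by y = x**2 and the chord y = 1 (from x = -1 to x = 1).
n = 100000
w = 2.0 / n
segment_area = sum((1.0 - (-1.0 + (i + 0.5) * w) ** 2) for i in range(n)) * w

# Inscribed triangle: base is the chord from (-1, 1) to (1, 1); the third vertex is (0, 0),
# where the tangent to the parabola is parallel to the chord.  Area = 1/2 * base * height.
triangle_area = 0.5 * 2.0 * 1.0

print(segment_area)                       # about 1.3333, i.e. 4/3 of the triangle area
print((4.0 / 3.0) * triangle_area)        # 1.3333...

# The same factor from the series 1 + 1/4 + 1/16 + ... :
print(sum(0.25 ** k for k in range(40)))  # about 1.3333...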
[ { "math_id": 0, "text": "\\tfrac43" }, { "math_id": 1, "text": "\\tfrac18" }, { "math_id": 2, "text": "\\tfrac14" }, { "math_id": 3, "text": "\\tfrac12" }, { "math_id": 4, "text": "4" }, { "math_id": 5, "text": "\\tfrac1{64}" }, { "math_id": 6, "text": "\\tfrac4{64} = \\tfrac1{16}" }, { "math_id": 7, "text": "2^3 = 8" }, { "math_id": 8, "text": "\\tfrac{2^3}{8^3} = \\tfrac1{64}" }, { "math_id": 9, "text": "\\text{Area}\\;=\\;T \\,+\\, \\frac14T \\,+\\, \\frac1{4^2}T \\,+\\, \\frac1{4^3}T \\,+\\, \\cdots." }, { "math_id": 10, "text": "\\text{Area}\\;=\\;\\left(1 \\,+\\, \\frac{1}{4} \\,+\\, \\frac{1}{16} \\,+\\, \\frac{1}{64} \\,+\\, \\cdots\\right)T." }, { "math_id": 11, "text": "1 \\,+\\, \\frac{1}{4} \\,+\\, \\frac{1}{16} \\,+\\, \\frac{1}{64} \\,+\\, \\cdots\\;=\\; \\frac{4}{3}." }, { "math_id": 12, "text": "\\frac{1}{4} \\,+\\, \\frac{1}{16} \\,+\\, \\frac{1}{64} \\,+\\, \\cdots." }, { "math_id": 13, "text": "\\tfrac13" }, { "math_id": 14, "text": "1 + \\tfrac13 = \\tfrac43" } ]
https://en.wikipedia.org/wiki?curid=12745577
12745947
Permutohedron
In mathematics, the permutohedron (also spelled permutahedron) of order "n" is an ("n" − 1)-dimensional polytope embedded in an "n"-dimensional space. Its vertex coordinates (labels) are the permutations of the first "n" natural numbers. The edges identify the shortest possible paths (sets of transpositions) that connect two vertices (permutations). Two permutations connected by an edge differ in only two places (one transposition), and the numbers on these places are neighbors (differ in value by 1). The image on the right shows the permutohedron of order 4, which is the truncated octahedron. Its vertices are the 24 permutations of (1, 2, 3, 4). Parallel edges have the same edge color. The 6 edge colors correspond to the 6 possible transpositions of 4 elements, i.e. they indicate in which two places the connected permutations differ. (E.g. red edges connect permutations that differ in the last two places.) History. According to Günter M. Ziegler (1995), permutohedra were first studied by Pieter Hendrik Schoute (1911). The name "permutoèdre" was coined by Georges Th. Guilbaud and Pierre Rosenstiehl (1963). They describe the word as barbaric, but easy to remember, and submit it to the criticism of their readers. The alternative spelling "permutahedron" is sometimes also used. Permutohedra are sometimes called "permutation polytopes", but this terminology is also used for the related Birkhoff polytope, defined as the convex hull of permutation matrices. More generally, V. Joseph Bowman (1972) uses that term for any polytope whose vertices have a bijection with the permutations of some set. Vertices, edges, and facets. The permutohedron of order "n" has "n"! vertices, each of which is adjacent to "n" − 1 others. The number of edges is , and their length is . Two connected vertices differ by swapping two coordinates, whose values differ by 1. The pair of swapped places corresponds to the direction of the edge. The number of facets is 2"n" − 2, because they correspond to non-empty proper subsets "S" of {1 ... "n"}. The vertices of a facet corresponding to subset "S" have in common, that their coordinates on places in "S" are smaller than the rest. More generally, the faces of dimensions 0 (vertices) to "n" − 1 (the permutohedron itself) correspond to the strict weak orderings of the set {1 ... "n"}. So the number of all faces is the "n"-th ordered Bell number. A face of dimension "d" corresponds to an ordering with "k" "n" − "d" equivalence classes. The number of faces of dimension "d" "n" − "k" in the permutohedron of order "n" is given by the triangle "T" (sequence in the OEIS): formula_0          with formula_1 representing the Stirling numbers of the second kind It is shown on the right together with its row sums, the ordered Bell numbers. Other properties. The permutohedron is vertex-transitive: the symmetric group S"n" acts on the permutohedron by permutation of coordinates. The permutohedron is a zonotope; a translated copy of the permutohedron can be generated as the Minkowski sum of the "n"("n" − 1)/2 line segments that connect the pairs of the standard basis vectors. The vertices and edges of the permutohedron are isomorphic to one of the Cayley graphs of the symmetric group, namely the one generated by the transpositions that swap consecutive elements. The vertices of the Cayley graph are the inverse permutations of those in the permutohedron. The image on the right shows the Cayley graph of S. 
Its edge colors represent the 3 generating transpositions: (1, 2), (2, 3), (3, 4) This Cayley graph is Hamiltonian; a Hamiltonian cycle may be found by the Steinhaus–Johnson–Trotter algorithm. Tessellation of the space. The permutohedron of order "n" lies entirely in the ("n" − 1)-dimensional hyperplane consisting of all points whose coordinates sum to the number 1 + 2 + ... + "n" = "n"("n" + 1)/2. Moreover, this hyperplane can be tiled by infinitely many translated copies of the permutohedron. Each of them differs from the basic permutohedron by an element of a certain ("n" − 1)-dimensional lattice, which consists of the "n"-tuples of integers that sum to zero and whose residues (modulo "n") are all equal: "x"1 + "x"2 + ... + "x""n" = 0 "x"1 ≡ "x"2 ≡ ... ≡ "x""n" (mod "n"). This is the lattice formula_2, the dual lattice of the root lattice formula_3. In other words, the permutohedron is the Voronoi cell for formula_2. Accordingly, this lattice is sometimes called the permutohedral lattice. Thus, the permutohedron of order 4 shown above tiles the 3-dimensional space by translation. Here the 3-dimensional space is the affine subspace of the 4-dimensional space R4 with coordinates "x", "y", "z", "w" that consists of the 4-tuples of real numbers whose sum is 10, "x" + "y" + "z" + "w" = 10. One easily checks that for each of the following four vectors, (1,1,1,−3), (1,1,−3,1), (1,−3,1,1) and (−3,1,1,1), the sum of the coordinates is zero and all coordinates are congruent to 1 (mod 4). Any three of these vectors generate the translation lattice. The tessellations formed in this way from the order-2, order-3, and order-4 permutohedra, respectively, are the apeirogon, the regular hexagonal tiling, and the bitruncated cubic honeycomb. The dual tessellations contain all simplex facets, although they are not regular polytopes beyond order-3. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
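The vertex count, vertex degree and edge structure described above can be verified directly. A minimal Python sketch for the order-4 permutohedron builds the vertices as permutations of (1, 2, 3, 4) and counts edges, where two vertices are adjacent when they differ in exactly two coordinates whose values differ by 1:

from itertools import permutations, combinations

n = 4
vertices = list(permutations(range(1, n + 1)))

def adjacent(u, v):
    # Edge condition: the permutations differ in exactly two places (one transposition),
    # and the two swapped values differ by 1.
    diff = [i for i in range(n) if u[i] != v[i]]
    return (len(diff) == 2 and abs(u[diff[0]] - u[diff[1]]) == 1
            and u[diff[0]] == v[diff[1]] and u[diff[1]] == v[diff[0]])

edges = [(u, v) for u, v in combinations(vertices, 2) if adjacent(u, v)]

print(len(vertices))   # 24 = 4! vertices
print(len(edges))      # 36 = 24 * 3 / 2 edges, since each vertex has n - 1 = 3 neighbours
print(all(sum(adjacent(u, v) for v in vertices if v != u) == n - 1 for u in vertices))  # True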
[ { "math_id": 0, "text": "T(n,k) = k! \\cdot \\left\\{{n\\atop k}\\right\\}" }, { "math_id": 1, "text": "\\textstyle \\left\\{{n\\atop k}\\right\\}" }, { "math_id": 2, "text": "A_{n-1}^*" }, { "math_id": 3, "text": "A_{n-1}" } ]
https://en.wikipedia.org/wiki?curid=12745947
12746
Generalization
Form of abstraction A generalization is a form of abstraction whereby common properties of specific instances are formulated as general concepts or claims. Generalizations posit the existence of a domain or set of elements, as well as one or more common characteristics shared by those elements (thus creating a conceptual model). As such, they are the essential basis of all valid deductive inferences (particularly in logic, mathematics and science), where the process of verification is necessary to determine whether a generalization holds true for any given situation. Generalization can also be used to refer to the process of identifying the parts of a whole, as belonging to the whole. The parts, which might be unrelated when left on their own, may be brought together as a group, hence belonging to the whole by establishing a common relation between them. However, the parts cannot be generalized into a whole—until a common relation is established among "all" parts. This does not mean that the parts are unrelated, only that no common relation has been established yet for the generalization. The concept of generalization has broad application in many connected disciplines, and might sometimes have a more specific meaning in a specialized context (e.g. generalization in psychology, generalization in learning). In general, given two related concepts "A" and "B," "A" is a "generalization" of "B" (equiv., "B" is a special case of "A") if and only if both of the following hold: For example, the concept "animal" is a generalization of the concept "bird", since every bird is an animal, but not all animals are birds (dogs, for instance). For more, see Specialisation (biology). Hypernym and hyponym. The connection of "generalization" to "specialization" (or "particularization") is reflected in the contrasting words hypernym and hyponym. A hypernym as a generic stands for a class or group of equally ranked items, such as the term "tree" which stands for equally ranked items such as "peach" and "oak", and the term "ship" which stands for equally ranked items such as "cruiser" and "steamer". In contrast, a hyponym is one of the items included in the generic, such as "peach" and "oak" which are included in "tree", and "cruiser" and "steamer" which are included in "ship". A hypernym is superordinate to a hyponym, and a hyponym is subordinate to a hypernym. Examples. Biological generalization. An animal is a generalization of a mammal, a bird, a fish, an amphibian and a reptile. Cartographic generalization of geo-spatial data. Generalization has a long history in cartography as an art of creating maps for different scale and purpose. Cartographic generalization is the process of selecting and representing information of a map in a way that adapts to the scale of the display medium of the map. In this way, every map has, to some extent, been generalized to match the criteria of display. This includes small cartographic scale maps, which cannot convey every detail of the real world. As a result, cartographers must decide and then adjust the content within their maps, to create a suitable and useful map that conveys the geospatial information within their representation of the world. Generalization is meant to be context-specific. That is to say, correctly generalized maps are those that emphasize the most important map elements, while still representing the world in the most faithful and recognizable way. 
The level of detail and importance in what is remaining on the map must outweigh the insignificance of items that were generalized—so as to preserve the distinguishing characteristics of what makes the map useful and important. Mathematical generalizations. In mathematics, one commonly says that a concept or a result B is a "generalization" of A if A is defined or proved before B (historically or conceptually) and A is a special case of B.
[ { "math_id": 0, "text": "(1+x)^n" } ]
https://en.wikipedia.org/wiki?curid=12746
12747184
Out(Fn)
Outer automorphism group of a free group on n generators In mathematics, Out("Fn") is the outer automorphism group of a free group on "n" generators. These groups play an important role in geometric group theory. Structure. The abelianization map formula_0 induces a homomorphism from formula_1 to the general linear group formula_2, the latter being the automorphism group of formula_3. This map is onto, making formula_1 a group extension, formula_4. The kernel formula_5 is the Torelli group of formula_6. In the case formula_7, the map formula_8 is an isomorphism. Analogy with mapping class groups. Because formula_6 is the fundamental group of a bouquet of "n" circles, formula_1 can be described topologically as the mapping class group of a bouquet of "n" circles (in the homotopy category), in analogy to the mapping class group of a closed surface which is isomorphic to the outer automorphism group of the fundamental group of that surface. Outer space. Out("Fn") acts geometrically on a cell complex known as Culler–Vogtmann Outer space, which can be thought of as the Teichmüller space for a bouquet of circles. Definition. A point of the outer space is essentially an formula_9-graph "X" homotopy equivalent to a bouquet of "n" circles together with a certain choice of a free homotopy class of a homotopy equivalence from "X" to the bouquet of "n" circles. An formula_9-graph is just a weighted graph with weights in formula_9. The sum of all weights should be 1 and all weights should be positive. To avoid ambiguity (and to get a finite dimensional space) it is furthermore required that the valency of each vertex should be at least 3. A more descriptive view avoiding the homotopy equivalence "f" is the following. We may fix an identification of the fundamental group of the bouquet of "n" circles with the free group formula_6 in "n" variables. Furthermore, we may choose a maximal tree in "X" and choose for each remaining edge a direction. We will now assign to each remaining edge "e" a word in formula_6 in the following way. Consider the closed path starting with "e" and then going back to the origin of "e" in the maximal tree. Composing this path with "f" we get a closed path in a bouquet of "n" circles and hence an element in its fundamental group formula_6. This element is not well defined; if we change "f" by a free homotopy we obtain another element. It turns out, that those two elements are conjugate to each other, and hence we can choose the unique cyclically reduced element in this conjugacy class. It is possible to reconstruct the free homotopy type of "f" from these data. This view has the advantage, that it avoids the extra choice of "f" and has the disadvantage that additional ambiguity arises, because one has to choose a maximal tree and an orientation of the remaining edges. The operation of Out("Fn") on the outer space is defined as follows. Every automorphism "g" of formula_6 induces a self homotopy equivalence "g′" of the bouquet of "n" circles. Composing "f" with "g′" gives the desired action. And in the other model it is just application of "g" and making the resulting word cyclically reduced. Connection to length functions. Every point in the outer space determines a unique length function formula_10. A word in formula_6 determines via the chosen homotopy equivalence a closed path in "X". The length of the word is then the minimal length of a path in the free homotopy class of that closed path. Such a length function is constant on each conjugacy class. 
The assignment formula_11 defines an embedding of the outer space into some infinite-dimensional projective space. Simplicial structure on the outer space. In the second model an open simplex is given by all those formula_9-graphs which have combinatorially the same underlying graph and whose edges are labeled with the same words (only the lengths of the edges may differ). The boundary simplices of such a simplex consist of all graphs that arise from this graph by collapsing an edge. If that edge is a loop, it cannot be collapsed without changing the homotopy type of the graph, so the corresponding boundary simplex is missing. One can therefore think of the outer space as a simplicial complex with some simplices removed. It is easy to verify that the action of formula_1 is simplicial and has finite isotropy groups.
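The construction above assigns to each edge outside the maximal tree the unique cyclically reduced word in its conjugacy class. Free and cyclic reduction are easy to implement; the sketch below is a minimal illustration, with generators of formula_6 encoded as nonzero integers (a negative integer standing for the inverse of a generator). The encoding and function names are illustrative choices, not part of the article.

```python
def free_reduce(word):
    """Cancel adjacent inverse pairs, e.g. [1, 2, -2, 3] -> [1, 3]."""
    out = []
    for letter in word:
        if out and out[-1] == -letter:
            out.pop()          # cancel x x^{-1}
        else:
            out.append(letter)
    return out

def cyclically_reduce(word):
    """Return the cyclically reduced representative of the conjugacy class of `word`.

    A reduced word is cyclically reduced when its first letter is not the
    inverse of its last letter; repeatedly stripping such pairs amounts to
    conjugating, so the result represents the same conjugacy class.
    """
    w = free_reduce(word)
    while len(w) > 1 and w[0] == -w[-1]:
        w = w[1:-1]
    return w

# Example: the word a b a^{-1} (a conjugate of b) cyclically reduces to b.
print(cyclically_reduce([1, 2, -1]))         # [2]
print(cyclically_reduce([1, -2, 2, 3, -1]))  # [3]
```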
[ { "math_id": 0, "text": "F_n \\to \\Z^n" }, { "math_id": 1, "text": "\\mathrm{Out}(F_n)" }, { "math_id": 2, "text": "\\mathrm{GL}(n,\\Z)" }, { "math_id": 3, "text": "\\Z^n" }, { "math_id": 4, "text": "1\\to \\mathrm{Tor}(F_n) \\to \\mathrm{Out}(F_n) \\to \\mathrm{GL}(n,\\Z)\\to 1" }, { "math_id": 5, "text": "\\mathrm{Tor}(F_n)" }, { "math_id": 6, "text": "F_n" }, { "math_id": 7, "text": "n= 2" }, { "math_id": 8, "text": "\\mathrm{Out}(F_n) \\to \\mathrm{GL}(n,\\Z)" }, { "math_id": 9, "text": "\\R" }, { "math_id": 10, "text": "l_X \\colon F_n \\to \\R" }, { "math_id": 11, "text": "X \\mapsto l_X" } ]
https://en.wikipedia.org/wiki?curid=12747184
12747246
Outer space (mathematics)
In the mathematical subject of geometric group theory, the Culler–Vogtmann Outer space or just Outer space of a free group "F""n" is a topological space consisting of the so-called "marked metric graph structures" of volume 1 on "F""n". The Outer space, denoted "X""n" or "CV""n", comes equipped with a natural action of the group of outer automorphisms Out("F""n") of "F""n". The Outer space was introduced in a 1986 paper of Marc Culler and Karen Vogtmann, and it serves as a free group analog of the Teichmüller space of a hyperbolic surface. Outer space is used to study homology and cohomology groups of Out("F""n") and to obtain information about algebraic, geometric and dynamical properties of Out("F""n"), of its subgroups and individual outer automorphisms of "F""n". The space "X""n" can also be thought of as the set of "F""n"-equivariant isometry types of minimal free discrete isometric actions of "F""n" on R-trees "T" such that the quotient metric graph "T"/"F""n" has volume 1. History. The Outer space formula_0 was introduced in a 1986 paper of Marc Culler and Karen Vogtmann, inspired by analogy with the Teichmüller space of a hyperbolic surface. They showed that the natural action of formula_1 on formula_0 is properly discontinuous, and that formula_0 is contractible. In the same paper Culler and Vogtmann constructed an embedding, via the "translation length functions" discussed below, of formula_0 into the infinite-dimensional projective space formula_2, where formula_3 is the set of nontrivial conjugacy classes of elements of formula_4. They also proved that the closure formula_5 of formula_0 in formula_6 is compact. Later a combination of the results of Cohen and Lustig and of Bestvina and Feighn identified (see Section 1.3 of ) the space formula_5 with the space formula_7 of projective classes of "very small" minimal isometric actions of formula_4 on formula_8-trees. Formal definition. Marked metric graphs. Let "n" ≥ 2. For the free group "F""n" fix a "rose" "R""n", that is, a wedge of "n" circles wedged at a vertex "v", and fix an isomorphism between "F""n" and the fundamental group π1("R""n", "v") of "R""n". From this point on we identify "F""n" and π1("R""n", "v") via this isomorphism. A marking on "F""n" consists of a homotopy equivalence "f" : "R""n" → Γ where Γ is a finite connected graph without degree-one and degree-two vertices. Up to a (free) homotopy, "f" is uniquely determined by the induced isomorphism "f"# : π1("R""n", "v") → π1(Γ), that is, by an isomorphism "F""n" → π1(Γ). A metric graph is a finite connected graph formula_9 together with the assignment to every topological edge "e" of Γ of a positive real number "L"("e") called the "length" of "e". The "volume" of a metric graph is the sum of the lengths of its topological edges. A marked metric graph structure on "F""n" consists of a marking "f" : "R""n" → Γ together with a metric graph structure "L" on Γ. Two marked metric graph structures "f"1 : "R""n" → Γ1 and "f"2 : "R""n" → Γ2 are "equivalent" if there exists an isometry "θ" : Γ1 → Γ2 such that, up to free homotopy, we have "θ" ∘ "f"1 = "f"2. The Outer space "X""n" consists of equivalence classes of all the volume-one marked metric graph structures on "F""n". Weak topology on the Outer space. Open simplices. Let "f" : "R""n" → Γ be a marking and let "k" be the number of topological edges in Γ. We order the edges of Γ as "e"1, ..., "e""k". Let formula_10 be the standard ("k" − 1)-dimensional open simplex in R"k". 
Given "f", there is a natural map "j" : Δ"k" → "X""n", where for "x" = ("x"1, ..., "x""k") ∈ Δ"k", the point "j"("x") of "X""n" is given by the marking "f" together with the metric graph structure "L" on Γ such that "L"("e""i") = "x""i" for "i" = 1, ..., "k". One can show that "j" is in fact an injective map, that is, distinct points of Δ"k" correspond to non-equivalent marked metric graph structures on "F""n". The set "j"(Δ"k") is called "open simplex" in "X""n" corresponding to "f" and is denoted "S"("f"). By construction, "X""n" is the union of open simplices corresponding to all markings on "F""n". Note that two open simplices in "X""n" either are disjoint or coincide. Closed simplices. Let "f" : "R""n" → Γ where Γ is a marking and let "k" be the number of topological edges in Γ. As before, we order the edges of Γ as "e"1, ..., "e""k". Define Δ"k"′ ⊆ R"k" as the set of all "x" = ("x"1, ..., "x""k") ∈ R"k", such that formula_11, such that each "x""i" ≥ 0 and such that the set of all edges "e""i" in "formula_12" with "x""i" = 0 is a subforest in Γ. The map "j" : Δ"k" → "X""n" extends to a map "h" : Δ"k"′ → "X""n" as follows. For "x" in Δ"k" put "h"("x") = "j"("x"). For "x" ∈ Δ"k"′ − Δ"k" the point "h"("x") of "X""n" is obtained by taking the marking "f", contracting all edges "e""i" of "formula_12" with "x""i" = 0 to obtain a new marking "f"1 : "R""n" → Γ1 and then assigning to each surviving edge "e""i" of Γ1 length "x""i" &gt; 0. It can be shown that for every marking "f" the map "h" : Δ"k"′ → "X""n" is still injective. The image of "h" is called the "closed simplex" in "X""n" corresponding to "f" and is denoted by "S"′("f"). Every point in "X""n" belongs to only finitely many closed simplices and a point of "X""n" represented by a marking "f" : "R""n" → Γ where the graph Γ is tri-valent belongs to a unique closed simplex in "X""n", namely "S"′("f"). The weak topology on the Outer space "X""n" is defined by saying that a subset "C" of "X""n" is closed if and only if for every marking "f" : "R""n" → Γ the set "h"−1("C") is closed in Δ"k"′. In particular, the map "h" : Δ"k"′ → "X""n" is a topological embedding. Points of Outer space as actions on trees. Let "x" be a point in "X""n" given by a marking "f" : "R""n" → Γ with a volume-one metric graph structure "L" on Γ. Let "T" be the universal cover of Γ. Thus "T" is a simply connected graph, that is "T" is a topological tree. We can also lift the metric structure "L" to "T" by giving every edge of "T" the same length as the length of its image in Γ. This turns "T" into a metric space ("T", "d") which is a real tree. The fundamental group π1(Γ) acts on "T" by covering transformations which are also isometries of ("T", "d"), with the quotient space "T"/π1(Γ) = Γ. Since the induced homomorphism "f"# is an isomorphism between "F""n" = π1("R""n") and π1(Γ), we also obtain an isometric action of "F""n" on "T" with "T"/"F""n" = Γ. This action is free and discrete. Since Γ is a finite connected graph with no degree-one vertices, this action is also "minimal", meaning that "T" has no proper "F""n"-invariant subtrees. Moreover, every minimal free and discrete isometric action of "F""n" on a real tree with the quotient being a metric graph of volume one arises in this fashion from some point "x" of "X""n". This defines a bijective correspondence between "X""n" and the set of equivalence classes of minimal free and discrete isometric actions of "F""n" on a real trees with volume-one quotients. 
Here two such actions of "F""n" on real trees "T"1 and "T"2 are "equivalent" if there exists an "F""n"-equivariant isometry between "T"1 and "T"2. Length functions. Given an action of "F""n" on a real tree "T" as above, one can define the "translation length function" associated with this action: formula_13 For "g" ≠ 1 there is a (unique) isometrically embedded copy of R in "T", called the "axis" of "g", such that "g" acts on this axis by a translation of magnitude formula_14. For this reason formula_15 is called the "translation length" of "g". For any "g", "u" in "F""n" we have formula_16, that is, the function formula_17 is constant on each conjugacy class in "F""n". In the marked metric graph model of Outer space translation length functions can be interpreted as follows. Let "T" in "X""n" be represented by a marking "f" : "R""n" → Γ with a volume-one metric graph structure "L" on Γ. Let "g" ∈ "F""n" = π1("R""n"). First push "g" forward via "f"# to get a closed loop in Γ and then tighten this loop to an immersed circuit in Γ. The "L"-length of this circuit is the translation length formula_15 of "g". A basic general fact from the theory of group actions on real trees says that a point of the Outer space is uniquely determined by its translation length function. Namely, if two trees with minimal free isometric actions of "F""n" define equal translation length functions on "F""n" then the two trees are "F""n"-equivariantly isometric. Hence the map formula_18 from "X""n" to the set of R-valued functions on "F""n" is injective. One defines the length function topology or axes topology on "X""n" as follows. For every "T" in "X""n", every finite subset "K" of "F""n" and every "ε" &gt; 0 let formula_19 In the length function topology, for every "T" in "X""n", a basis of neighborhoods of "T" in "X""n" is given by the family "V""T"("K", "ε") where "K" is a finite subset of "F""n" and where "ε" &gt; 0. Convergence of sequences in the length function topology can be characterized as follows. For "T" in "X""n" and a sequence "T""i" in "X""n" we have formula_20 if and only if for every "g" in "F""n" we have formula_21 Gromov topology. Another topology on formula_0 is the so-called "Gromov topology" or the "equivariant Gromov–Hausdorff convergence topology", which provides a version of Gromov–Hausdorff convergence adapted to the setting of an isometric group action. When defining the Gromov topology, one should think of points of formula_0 as actions of formula_4 on formula_8-trees. Informally, given a tree formula_22, another tree formula_23 is "close" to formula_24 in the Gromov topology, if for some large finite subtrees formula_25 and a large finite subset formula_26 there exists an "almost isometry" between formula_27 and formula_28 with respect to which the (partial) actions of formula_29 on formula_27 and formula_28 almost agree. For the formal definition of the Gromov topology see. Coincidence of the weak, the length function and Gromov topologies. An important basic result states that the Gromov topology, the weak topology and the length function topology on "X""n" coincide. Action of Out("F""n") on Outer space. The group Out("F""n") admits a natural right action by homeomorphisms on "X""n". First we define the action of the automorphism group Aut("F""n") on "X""n". Let "α" ∈ Aut("F""n") be an automorphism of "F""n". Let "x" be a point of "X""n" given by a marking "f" : "R""n" → Γ with a volume-one metric graph structure "L" on Γ. 
Let "τ" : "R""n" → "R""n" be a homotopy equivalence whose induced homomorphism at the fundamental group level is the automorphism "α" of "F""n" = π1("R""n"). The element "xα" of "X""n" is given by the marking "f" ∘ "τ" : "R""n" → Γ with the metric structure "L" on Γ. That is, to get "xα" from "x" we simply precompose the marking defining "x" with "τ". In the real tree model this action can be described as follows. Let "T" in "X""n" be a real tree with a minimal free and discrete co-volume-one isometric action of "F""n". Let "α" ∈ Aut("F""n"). As a metric space, "Tα" is equal to "T". The action of "F""n" is twisted by "α". Namely, for any "t" in "T" and "g" in "F""n" we have: formula_30 At the level of translation length functions the tree "Tα" is given as: formula_31 One then checks that for the above action of Aut("F""n") on Outer space "X""n" the subgroup of inner automorphisms Inn("F""n") is contained in the kernel of this action, that is every inner automorphism acts trivially on "X""n". It follows that the action of Aut("F""n") on "X""n" quotients through to an action of Out("F""n") = Aut("F""n")/Inn("F""n") on "X""n". namely, if "φ" ∈ Out("F""n") is an outer automorphism of "F""n" and if "α" in Aut("F""n") is an actual automorphism representing "φ" then for any "x" in "X""n" we have "xφ" = "xα". The right action of Out("F""n") on "X""n" can be turned into a left action via a standard conversion procedure. Namely, for "φ" ∈ Out("F""n") and "x" in "X""n" set "φx" = "xφ"−1. This left action of Out("F""n") on "X""n" is also sometimes considered in the literature although most sources work with the right action. Moduli space. The quotient space "M""n" = "X""n"/Out("F""n") is the moduli space which consists of isometry types of finite connected graphs Γ without degree-one and degree-two vertices, with fundamental groups isomorphic to "F""n" (that is, with the first Betti number equal to "n") equipped with volume-one metric structures. The quotient topology on "M""n" is the same as that given by the Gromov–Hausdorff distance between metric graphs representing points of "M""n". The moduli space "M""n" is not compact and the "cusps" in "M""n" arise from decreasing towards zero lengths of edges for homotopically nontrivial subgraphs (e.g. an essential circuit) of a metric graph Γ. Unprojectivized Outer space. The "unprojectivized Outer space" formula_32 consists of equivalence classes of all marked metric graph structures on "F""n" where the volume of the metric graph in the marking is allowed to be any positive real number. The space formula_32 can also be thought of as the set of all free minimal discrete isometric actions of "F""n" on R-trees, considered up to "F""n"-equivariant isometry. The unprojectivized Outer space inherits the same structures that formula_0 has, including the coincidence of the three topologies (Gromov, axes, weak), and an formula_1-action. In addition, there is a natural action of formula_33 on formula_32 by scalar multiplication. Topologically, formula_32 is homeomorphic to formula_34. In particular, formula_32 is also contractible. Projectivized Outer space. The projectivized Outer space is the quotient space formula_35 under the action of formula_33 on formula_32 by scalar multiplication. The space formula_36 is equipped with the quotient topology. For a tree formula_37 its projective equivalence class is denoted formula_38. The action of formula_1 on formula_39 naturally quotients through to the action of formula_1 on formula_36. 
Namely, for formula_40 and formula_37 put formula_41. A key observation is that the map formula_42 is an formula_1-equivariant homeomorphism. For this reason the spaces formula_0 and formula_36 are often identified. Lipschitz distance. The Lipschitz distance for Outer space, named for Rudolf Lipschitz, corresponds to the Thurston metric in Teichmüller space. For two points formula_43 in "X""n" the (right) Lipschitz distance formula_44 is defined as the (natural) logarithm of the maximally stretched closed path from formula_45 to formula_46: formula_47 and formula_48 This is an asymmetric metric (also sometimes called a quasimetric); it satisfies all the properties of a metric except that the symmetry condition formula_49 may fail. The symmetric Lipschitz metric is normally taken to be: formula_50 The supremum formula_51 is always attained and can be computed over a finite set, the so-called candidates of formula_45: formula_52 where formula_53 is the finite set of conjugacy classes in "F""n" which correspond to embeddings of a simple loop, a figure of eight, or a barbell into formula_45 via the marking. The stretching factor also equals the minimal Lipschitz constant of a homotopy equivalence carrying over the marking, i.e. formula_54 where formula_55 is the set of continuous functions formula_56 such that for the marking formula_57 on formula_45 the marking formula_58 is freely homotopic to the marking formula_59 on formula_46. The induced topology is the same as the weak topology and the isometry group is formula_1 for both the symmetric and the asymmetric Lipschitz distance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
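For the simplest markings these notions become very concrete. If a point of "X""n" is a metric rose (the underlying graph is "R""n" itself and the marking is the identity), the translation length of a cyclically reduced word is just the sum of the lengths of the petals it crosses, and the candidates reduce to single petals and petal pairs. The sketch below is a minimal illustration under that assumption; the encoding and function names are not from the article, and for general marked graphs the full candidate set described above (simple loops, figures of eight and barbells) is needed.

```python
import math
from itertools import combinations

def translation_length(word, lengths):
    """Translation length of a cyclically reduced word on a metric rose.

    `word` is a list of petal indices (1-based, a negative index meaning the
    inverse petal); `lengths` maps petal index to its positive edge length.
    On a rose with the identity marking the tightened loop simply runs through
    the petals of the word, so its length is the sum of the petal lengths.
    """
    return sum(lengths[abs(letter)] for letter in word)

def right_stretch(x_lengths, y_lengths):
    """Candidate-based stretch factor Lambda_R(x, y) for two metric roses.

    For roses sharing the identity marking the candidate loops are single
    petals and petal pairs (figure eights); by the mediant inequality the
    maximum is attained on a single petal.
    """
    petals = list(x_lengths)
    candidates = [[i] for i in petals] + [[i, j] for i, j in combinations(petals, 2)]
    return max(translation_length(c, y_lengths) / translation_length(c, x_lengths)
               for c in candidates)

# Two volume-one metric roses on three petals.
x = {1: 0.5, 2: 0.3, 3: 0.2}
y = {1: 0.2, 2: 0.3, 3: 0.5}
d_right = math.log(right_stretch(x, y))          # asymmetric Lipschitz distance d_R(x, y)
d_sym = d_right + math.log(right_stretch(y, x))  # symmetrized Lipschitz distance
print(d_right, d_sym)
```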
[ { "math_id": 0, "text": "X_n" }, { "math_id": 1, "text": "\\operatorname{Out}(F_n)" }, { "math_id": 2, "text": "\\mathbb{P}^{\\,\\mathcal{C}} = \\mathbb{R}^{\\mathcal{C}} \\!-\\! \\{0\\}/\\mathbb{R}_{>0}" }, { "math_id": 3, "text": "\\mathcal{C}" }, { "math_id": 4, "text": "F_n" }, { "math_id": 5, "text": "\\overline X_n" }, { "math_id": 6, "text": "\\mathbb{P}^{\\,\\mathcal{C}}" }, { "math_id": 7, "text": "\\overline{CV}_n" }, { "math_id": 8, "text": "\\mathbb R" }, { "math_id": 9, "text": "\\gamma" }, { "math_id": 10, "text": "\\Delta_k = \\left\\{ (x_1, \\dots, x_k)\\in \\mathbb{R}^k \\,\\Big|\\, \\sum_{i=1}^k x_i =1, x_i >0 \\text{ for } i=1,\\dots, k\\right\\} " }, { "math_id": 11, "text": "\\textstyle{\\sum_{i=1}^k x_i=1}" }, { "math_id": 12, "text": "\\Gamma" }, { "math_id": 13, "text": "\\ell_T: F_n \\to \\mathbb R, \\quad \\ell_T(g)=\\min_{t\\in T} d(t,gt), \\quad\\text{ for } g\\in F_n. " }, { "math_id": 14, "text": "\\ell_T(g)>0" }, { "math_id": 15, "text": "\\ell_T(g)" }, { "math_id": 16, "text": "\\ell_T(ugu^{-1}) = \\ell_T(g)" }, { "math_id": 17, "text": "\\ell_T" }, { "math_id": 18, "text": " T\\mapsto \\ell_T" }, { "math_id": 19, "text": " V_T(K,\\epsilon)= \\{ T'\\in X_n: |\\ell_T(g)-\\ell_{T'}(g)| < \\epsilon \\text{ for every } g\\in K\\}. " }, { "math_id": 20, "text": "\\lim_{i\\to\\infty} T_i = T" }, { "math_id": 21, "text": " \\lim_{i\\to\\infty} \\ell_{T_i}(g) = \\ell_T(g)." }, { "math_id": 22, "text": " T\\in X_n" }, { "math_id": 23, "text": "T'\\in X_n" }, { "math_id": 24, "text": "T" }, { "math_id": 25, "text": " Y\\subseteq T, Y'\\subseteq T'\\in X_n" }, { "math_id": 26, "text": "B\\subseteq F_n" }, { "math_id": 27, "text": "Y'" }, { "math_id": 28, "text": "Y" }, { "math_id": 29, "text": "B" }, { "math_id": 30, "text": " g\\underset{T\\alpha}{\\cdot} t= \\alpha(g) \\underset{T}{\\cdot} t." }, { "math_id": 31, "text": "\\ell_{T\\alpha}(g)=\\ell_T(\\alpha(g)) \\quad \\text{ for } g\\in F_n." 
}, { "math_id": 32, "text": "cv_n " }, { "math_id": 33, "text": "\\mathbb R_{>0}" }, { "math_id": 34, "text": "X_n\\times (0,\\infty)" }, { "math_id": 35, "text": "CV_n:=cv_n/\\mathbb R_{>0}" }, { "math_id": 36, "text": "CV_n" }, { "math_id": 37, "text": "T\\in cv_n" }, { "math_id": 38, "text": "[T]=\\{cT \\mid c>0\\}\\subseteq cv_n" }, { "math_id": 39, "text": "cv_n" }, { "math_id": 40, "text": "\\phi\\in \\operatorname{Out}(F_n)" }, { "math_id": 41, "text": "[T]\\phi:= [T\\phi]" }, { "math_id": 42, "text": "X_n \\to CV_n, T\\mapsto [T]" }, { "math_id": 43, "text": "x, y" }, { "math_id": 44, "text": "d_R" }, { "math_id": 45, "text": "x" }, { "math_id": 46, "text": "y" }, { "math_id": 47, "text": "\\Lambda_R(x,y):=\\sup_{\\gamma \\in F_n\\setminus \\{1\\}} \\frac{\\ell_y(\\gamma)}{\\ell_x(\\gamma)}" }, { "math_id": 48, "text": "d_R(x,y):= \\log \\Lambda_R(x,y)" }, { "math_id": 49, "text": "d_R(x,y)=d_R(y,x)" }, { "math_id": 50, "text": "d(x,y):=d_R(x,y)+d_R(y,x)" }, { "math_id": 51, "text": " \\Lambda_R(x,y)" }, { "math_id": 52, "text": "\\Lambda_R(x,y) = \\max_{\\gamma \\in \\operatorname{cand}(x)} \\frac{\\ell_y(\\gamma)}{\\ell_x(\\gamma)}" }, { "math_id": 53, "text": "\\operatorname{cand}(x)" }, { "math_id": 54, "text": "\\Lambda_R(x,y)=\\min_{h \\in H(x,y)} Lip(h) " }, { "math_id": 55, "text": "H(x,y)" }, { "math_id": 56, "text": "h: x \\to y" }, { "math_id": 57, "text": "f_x" }, { "math_id": 58, "text": "h \\circ f_x" }, { "math_id": 59, "text": "f_y" }, { "math_id": 60, "text": "\\overline{cv}_n" }, { "math_id": 61, "text": "\\overline{CV}_n " }, { "math_id": 62, "text": "X_n " }, { "math_id": 63, "text": "A_n" }, { "math_id": 64, "text": "\\operatorname{Aut}(F_n)" } ]
https://en.wikipedia.org/wiki?curid=12747246
12747259
Quasi-isometry
Function between two metric spaces that only respects their large-scale geometry In mathematics, a quasi-isometry is a function between two metric spaces that respects large-scale geometry of these spaces and ignores their small-scale details. Two metric spaces are quasi-isometric if there exists a quasi-isometry between them. The property of being quasi-isometric behaves like an equivalence relation on the class of metric spaces. The concept of quasi-isometry is especially important in geometric group theory, following the work of Gromov. Definition. Suppose that formula_0 is a (not necessarily continuous) function from one metric space formula_1 to a second metric space formula_2. Then formula_0 is called a "quasi-isometry" from formula_1 to formula_2 if there exist constants formula_3, formula_4, and formula_5 such that the following two properties both hold: (1) for every two elements formula_6 and formula_7 of formula_8, the distance between their images is, up to the additive constant formula_9, within a factor of formula_10 of their distance, that is, formula_11; (2) every element of formula_12 is within the constant distance formula_13 of an image point, that is, formula_14. The two metric spaces formula_1 and formula_2 are called quasi-isometric if there exists a quasi-isometry formula_0 from formula_1 to formula_2. A map is called a quasi-isometric embedding if it satisfies the first condition but not necessarily the second (i.e. it is coarsely Lipschitz but may fail to be coarsely surjective). In other words, through the map, formula_1 is quasi-isometric to a subspace of formula_2. Two metric spaces "M1" and "M2" are said to be quasi-isometric, denoted formula_15, if there exists a quasi-isometry formula_16. Examples. The map between the Euclidean plane and the plane with the Manhattan distance that sends every point to itself is a quasi-isometry: in it, distances are multiplied by a factor of at most formula_17. Note that there can be no isometry, since, for example, the points formula_18 are of equal distance to each other in Manhattan distance, but in the Euclidean plane, there are no 4 points that are of equal distance to each other. The map formula_19 (both with the Euclidean metric) that sends every formula_20-tuple of integers to itself is a quasi-isometry: distances are preserved exactly, and every real tuple is within distance formula_21 of an integer tuple. In the other direction, the discontinuous function that rounds every tuple of real numbers to the nearest integer tuple is also a quasi-isometry: each point is taken by this map to a point within distance formula_21 of it, so rounding changes the distance between pairs of points by adding or subtracting at most formula_22. Every pair of finite or bounded metric spaces is quasi-isometric. In this case, every function from one space to the other is a quasi-isometry. Equivalence relation. If formula_23 is a quasi-isometry, then there exists a quasi-isometry formula_24. Indeed, formula_25 may be defined by letting formula_7 be any point in the image of formula_0 that is within distance formula_13 of formula_6, and letting formula_25 be any point in formula_26. Since the identity map is a quasi-isometry, and the composition of two quasi-isometries is a quasi-isometry, it follows that the property of being quasi-isometric behaves like an equivalence relation on the class of metric spaces. Use in geometric group theory. Given a finite generating set "S" of a finitely generated group "G", we can form the corresponding Cayley graph of "S" and "G". This graph becomes a metric space if we declare the length of each edge to be 1. Taking a different finite generating set "T" results in a different graph and a different metric space; however, the two spaces are quasi-isometric. This quasi-isometry class is thus an invariant of the group "G". 
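The defining inequalities can be spot-checked numerically for the first example above: the identity map from the Euclidean plane to the plane with the Manhattan distance satisfies them with multiplicative constant formula_17 and with both additive constants equal to zero. The sketch below performs such an empirical check on random points; the helper names are illustrative choices, and a finite sample is of course not a proof.

```python
import math
import random

def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def check_coarse_bilipschitz(f, d1, d2, points, A, B):
    """Empirically check condition (1) of the definition,
    (1/A) d1(x, y) - B <= d2(f(x), f(y)) <= A d1(x, y) + B,
    on all pairs drawn from `points`."""
    for x in points:
        for y in points:
            lhs, rhs = d1(x, y) / A - B, A * d1(x, y) + B
            if not (lhs <= d2(f(x), f(y)) <= rhs):
                return False
    return True

random.seed(0)
sample = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(200)]
# Identity map between the Euclidean plane and the Manhattan plane: A = sqrt(2), B = 0.
# The map is onto, so the coarse surjectivity condition (2) holds with C = 0 automatically.
print(check_coarse_bilipschitz(lambda p: p, d_euclid, d_manhattan, sample,
                               A=math.sqrt(2), B=0.0))   # True
```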
Any property of metric spaces that only depends on a space's quasi-isometry class immediately yields another invariant of groups, opening the field of group theory to geometric methods. More generally, the Švarc–Milnor lemma states that if a group "G" acts properly discontinuously with compact quotient on a proper geodesic space "X" then "G" is quasi-isometric to "X" (meaning that any Cayley graph for "G" is). This gives new examples of groups quasi-isometric to each other: Quasigeodesics and the Morse lemma. A "quasi-geodesic" in a metric space formula_27 is a quasi-isometric embedding of formula_28 into formula_29. More precisely a map formula_30 such that there exists formula_31 so that formula_32 is called a formula_33-quasi-geodesic. Obviously geodesics (parametrised by arclength) are quasi-geodesics. The fact that in some spaces the converse is coarsely true, i.e. that every quasi-geodesic stays within bounded distance of a true geodesic, is called the "Morse Lemma" (not to be confused with the Morse lemma in differential topology). Formally the statement is: "Let formula_34 and formula_29 a proper δ-hyperbolic space. There exists formula_35 such that for any formula_36-quasi-geodesic formula_37 there exists a geodesic formula_38 in formula_29 such that formula_39 for all formula_40. " It is an important tool in geometric group theory. An immediate application is that any quasi-isometry between proper hyperbolic spaces induces a homeomorphism between their boundaries. This result is the first step in the proof of the Mostow rigidity theorem. Furthermore, this result has found utility in analyzing user interaction design in applications similar to Google Maps. Examples of quasi-isometry invariants of groups. The following are some examples of properties of group Cayley graphs that are invariant under quasi-isometry: Hyperbolicity. A group is called "hyperbolic" if one of its Cayley graphs is a δ-hyperbolic space for some δ. When translating between different definitions of hyperbolicity, the particular value of δ may change, but the resulting notions of a hyperbolic group turn out to be equivalent. Hyperbolic groups have a solvable word problem. They are biautomatic and automatic.: indeed, they are strongly geodesically automatic, that is, there is an automatic structure on the group, where the language accepted by the word acceptor is the set of all geodesic words. Growth. The growth rate of a group with respect to a symmetric generating set describes the size of balls in the group. Every element in the group can be written as a product of generators, and the growth rate counts the number of elements that can be written as a product of length "n". According to Gromov's theorem, a group of polynomial growth is virtually nilpotent, i.e. it has a nilpotent subgroup of finite index. In particular, the order of polynomial growth formula_41 has to be a natural number and in fact formula_42. If formula_43 grows more slowly than any exponential function, "G" has a subexponential growth rate. Any such group is amenable. Ends. The ends of a topological space are, roughly speaking, the connected components of the “ideal boundary” of the space. That is, each end represents a topologically distinct way to move to infinity within the space. Adding a point at each end yields a compactification of the original space, known as the end compactification. 
The ends of a finitely generated group are defined to be the ends of the corresponding Cayley graph; this definition is independent of the choice of a finite generating set. Every finitely-generated infinite group has either 0,1, 2, or infinitely many ends, and Stallings theorem about ends of groups provides a decomposition for groups with more than one end. If two connected locally finite graphs are quasi-isometric then they have the same number of ends. In particular, two quasi-isometric finitely generated groups have the same number of ends. Amenability. An amenable group is a locally compact topological group "G" carrying a kind of averaging operation on bounded functions that is invariant under translation by group elements. The original definition, in terms of a finitely additive invariant measure (or mean) on subsets of "G", was introduced by John von Neumann in 1929 under the German name "messbar" ("measurable" in English) in response to the Banach–Tarski paradox. In 1949 Mahlon M. Day introduced the English translation "amenable", apparently as a pun. In discrete group theory, where "G" has the discrete topology, a simpler definition is used. In this setting, a group is amenable if one can say what proportion of "G" any given subset takes up. If a group has a Følner sequence then it is automatically amenable. Asymptotic cone. An ultralimit is a geometric construction that assigns to a sequence of metric spaces "Xn" a limiting metric space. An important class of ultralimits are the so-called "asymptotic cones" of metric spaces. Let ("X","d") be a metric space, let "ω" be a non-principal ultrafilter on formula_44 and let "pn" ∈ "X" be a sequence of base-points. Then the "ω"–ultralimit of the sequence formula_45 is called the asymptotic cone of "X" with respect to "ω" and formula_46 and is denoted formula_47. One often takes the base-point sequence to be constant, "pn" = "p" for some "p ∈ X"; in this case the asymptotic cone does not depend on the choice of "p ∈ X" and is denoted by formula_48 or just formula_49. The notion of an asymptotic cone plays an important role in geometric group theory since asymptotic cones (or, more precisely, their topological types and bi-Lipschitz types) provide quasi-isometry invariants of metric spaces in general and of finitely generated groups in particular. Asymptotic cones also turn out to be a useful tool in the study of relatively hyperbolic groups and their generalizations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
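The growth rate described above can be explored computationally: the ball of radius "n" in a Cayley graph can be enumerated by breadth-first search over products of generators. The sketch below does this for the free abelian group of rank 2 (quadratic, hence polynomial, growth) and for the free group of rank 2 (exponential growth); the encodings of group elements and the function names are illustrative choices.

```python
def ball_sizes(generators, multiply, identity, radius):
    """Sizes of the balls B(1), B(2), ..., B(radius) around the identity
    in the Cayley graph, computed by breadth-first search."""
    seen = {identity}
    frontier = [identity]
    sizes = []
    for _ in range(radius):
        new_frontier = []
        for g in frontier:
            for s in generators:
                h = multiply(g, s)
                if h not in seen:
                    seen.add(h)
                    new_frontier.append(h)
        frontier = new_frontier
        sizes.append(len(seen))
    return sizes

# Z^2 with the standard symmetric generating set: polynomial (quadratic) growth.
z2_gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(ball_sizes(z2_gens, lambda g, s: (g[0] + s[0], g[1] + s[1]), (0, 0), 6))
# [5, 13, 25, 41, 61, 85]

# Free group F_2: elements as reduced words over a, A = a^-1, b, B = b^-1; exponential growth.
def f2_multiply(word, letter):
    inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    if word and word[-1] == inverse[letter]:
        return word[:-1]          # free reduction
    return word + letter

print(ball_sizes('aAbB', f2_multiply, '', 6))
# [5, 17, 53, 161, 485, 1457]
```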
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "(M_1,d_1)" }, { "math_id": 2, "text": "(M_2,d_2)" }, { "math_id": 3, "text": "A\\ge 1" }, { "math_id": 4, "text": "B\\ge 0" }, { "math_id": 5, "text": "C\\ge 0" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "y" }, { "math_id": 8, "text": "M_1" }, { "math_id": 9, "text": "B" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "\\forall x,y\\in M_1: \\frac{1}{A}\\; d_1(x,y)-B\\leq d_2(f(x),f(y))\\leq A\\; d_1(x,y)+B." }, { "math_id": 12, "text": "M_2" }, { "math_id": 13, "text": "C" }, { "math_id": 14, "text": "\\forall z\\in M_2:\\exists x\\in M_1: d_2(z,f(x))\\le C." }, { "math_id": 15, "text": "M_1\\underset{q.i.}{\\sim} M_2 " }, { "math_id": 16, "text": "f:M_1\\to M_2" }, { "math_id": 17, "text": "\\sqrt 2" }, { "math_id": 18, "text": "(1, 0), (-1, 0), (0, 1), (0, -1)" }, { "math_id": 19, "text": "f:\\mathbb{Z}^n\\to\\mathbb{R}^n" }, { "math_id": 20, "text": "n" }, { "math_id": 21, "text": "\\sqrt{n/4}" }, { "math_id": 22, "text": "2\\sqrt{n/4}" }, { "math_id": 23, "text": "f:M_1\\mapsto M_2" }, { "math_id": 24, "text": "g:M_2\\mapsto M_1" }, { "math_id": 25, "text": "g(x)" }, { "math_id": 26, "text": "f^{-1}(y)" }, { "math_id": 27, "text": "(X, d)" }, { "math_id": 28, "text": "\\mathbb R" }, { "math_id": 29, "text": "X" }, { "math_id": 30, "text": "\\phi: \\mathbb R \\to X" }, { "math_id": 31, "text": "C,K > 0" }, { "math_id": 32, "text": "\\forall s, t \\in \\mathbb R : C^{-1} |s - t| - K \\le d(\\phi(t), \\phi(s)) \\le C|s - t| + K" }, { "math_id": 33, "text": "(C,K)" }, { "math_id": 34, "text": "\\delta, C, K > 0" }, { "math_id": 35, "text": "M" }, { "math_id": 36, "text": "(C, K)" }, { "math_id": 37, "text": "\\phi" }, { "math_id": 38, "text": "L" }, { "math_id": 39, "text": "d(\\phi(t), L) \\le M" }, { "math_id": 40, "text": "t \\in \\mathbb R" }, { "math_id": 41, "text": "k_0" }, { "math_id": 42, "text": "\\#(n)\\sim n^{k_0}" }, { "math_id": 43, "text": "\\#(n)" }, { "math_id": 44, "text": "\\mathbb N " }, { "math_id": 45, "text": "(X, \\frac{d}{n}, p_n)" }, { "math_id": 46, "text": "(p_n)_n\\," }, { "math_id": 47, "text": "Cone_\\omega(X,d, (p_n)_n)\\," }, { "math_id": 48, "text": "Cone_\\omega(X,d)\\," }, { "math_id": 49, "text": "Cone_\\omega(X)\\," } ]
https://en.wikipedia.org/wiki?curid=12747259
12747933
Quasivariety
In mathematics, a quasivariety is a class of algebraic structures generalizing the notion of variety by allowing equational conditions on the axioms defining the class. Definition. A "trivial algebra" contains just one element. A quasivariety is a class "K" of algebras with a specified signature satisfying any of the following equivalent conditions: Examples. Every variety is a quasivariety by virtue of an equation being a quasi-identity for which "n" = 0. The cancellative semigroups form a quasivariety. Let "K" be a quasivariety. Then the class of orderable algebras from "K" forms a quasivariety, since the preservation-of-order axioms are Horn clauses. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
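A quasi-identity such as the cancellation law behind the semigroup example above (an implication of the form formula_0 with a single premise) can be checked mechanically on a finite algebra given by its operation table. The sketch below does this for two small examples; the tables and the helper name are illustrative choices, not from the article.

```python
from itertools import product

def satisfies_cancellation(table):
    """Check the quasi-identities  x*z = y*z -> x = y  and  z*x = z*y -> x = y
    on a finite magma given by its multiplication table (a dict keyed by pairs)."""
    elements = {a for a, _ in table}
    for x, y, z in product(elements, repeat=3):
        if x != y and table[(x, z)] == table[(y, z)]:
            return False
        if x != y and table[(z, x)] == table[(z, y)]:
            return False
    return True

# (Z/3, +) is a cancellative semigroup.
add_mod3 = {(a, b): (a + b) % 3 for a in range(3) for b in range(3)}
# ({0, 1}, *) with ordinary multiplication is a semigroup that is not cancellative
# (0*0 = 1*0 but 0 != 1).
mult_01 = {(a, b): a * b for a in range(2) for b in range(2)}

print(satisfies_cancellation(add_mod3))  # True
print(satisfies_cancellation(mult_01))   # False
```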
[ { "math_id": 0, "text": "s_1 \\approx t_1 \\land \\ldots \\land s_n \\approx t_n \\rightarrow s \\approx t" }, { "math_id": 1, "text": "s, s_1, \\ldots, s_n,t, t_1, \\ldots, t_n" } ]
https://en.wikipedia.org/wiki?curid=12747933
12756005
Euclidean field
Ordered field where every nonnegative element is a square In mathematics, a Euclidean field is an ordered field "K" for which every non-negative element is a square: that is, "x" ≥ 0 in "K" implies that "x" = "y"2 for some "y" in "K". The constructible numbers form a Euclidean field. It is the smallest Euclidean field, as every Euclidean field contains it as an ordered subfield. In other words, the constructible numbers form the Euclidean closure of the rational numbers. Examples. Every real closed field is a Euclidean field. The following examples are also real closed fields. Euclidean closure. The Euclidean closure of an ordered field K is an extension of K in the quadratic closure of K which is maximal with respect to being an ordered field with an order extending that of K. It is also the smallest subfield of the algebraic closure of K that is a Euclidean field and is an ordered extension of K. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
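The defining condition fails already for the ordered field of rational numbers: 2 is non-negative but is not the square of a rational. A fraction in lowest terms is a square in the rationals exactly when its numerator and denominator are both perfect squares, which gives the following small check; the helper name is an illustrative choice, not from the article.

```python
from fractions import Fraction
from math import isqrt

def is_square_in_Q(q):
    """Return True if the rational q is the square of a rational.

    Fraction normalises to lowest terms, and p/r in lowest terms is a rational
    square exactly when p and r are both perfect squares.
    """
    if q < 0:
        return False
    p, r = q.numerator, q.denominator
    return isqrt(p) ** 2 == p and isqrt(r) ** 2 == r

print(is_square_in_Q(Fraction(9, 4)))   # True:  (3/2)^2
print(is_square_in_Q(Fraction(2)))      # False: sqrt(2) is irrational
print(is_square_in_Q(Fraction(1, 3)))   # False
```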
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "\\mathbb{R}\\cap\\mathbb{\\overline Q}" }, { "math_id": 2, "text": "\\mathbb Q" }, { "math_id": 3, "text": "\\mathbb C" } ]
https://en.wikipedia.org/wiki?curid=12756005
12756318
Feit–Thompson conjecture
Feit–Thompson conjecture In mathematics, the Feit–Thompson conjecture is a conjecture in number theory, suggested by Walter Feit and John G. Thompson (1962). The conjecture states that there are no distinct prime numbers "p" and "q" such that formula_0 divides formula_1. If the conjecture were true, it would greatly simplify the final chapter of the proof of the Feit–Thompson theorem that every finite group of odd order is solvable. A stronger conjecture that the two numbers are always coprime was disproved with the counterexample "p" = 17 and "q" = 3313, for which the two numbers have the common factor 2"pq" + 1 = 112643. It is known that the conjecture is true for "q" = 2 and "q" = 3. Informal probability arguments suggest that the "expected" number of counterexamples to the Feit–Thompson conjecture is very close to 0, suggesting that the Feit–Thompson conjecture is likely to be true. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
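The numbers involved in the counterexample are large but easy to handle with exact integer arithmetic; the sketch below simply verifies the divisibility by the common factor 2"pq" + 1 = 112643 quoted above.

```python
p, q = 17, 3313
common = 2 * p * q + 1          # 112643, the common factor quoted above

a = (p**q - 1) // (p - 1)       # (17^3313 - 1) / 16, an integer with ~4000 digits
b = (q**p - 1) // (q - 1)       # (3313^17 - 1) / 3312

print(common)                              # 112643
print(a % common == 0, b % common == 0)    # True True
```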
[ { "math_id": 0, "text": "\\frac{p^{q} - 1}{p - 1}" }, { "math_id": 1, "text": "\\frac{q^{p} - 1}{q - 1}" } ]
https://en.wikipedia.org/wiki?curid=12756318
12756616
Flat manifold
In mathematics, a Riemannian manifold is said to be flat if its Riemann curvature tensor is everywhere zero. Intuitively, a flat manifold is one that "locally looks like" Euclidean space (or Minkowski space in the relativistic case) in terms of distances and angles, e.g. the interior angles of a triangle add up to 180°. The universal cover of a complete flat manifold is Euclidean space. This can be used to prove the theorem of Bieberbach (1911, 1912) that all compact flat manifolds are finitely covered by tori; the 3-dimensional case was proved earlier by . Examples. The following manifolds can be endowed with a flat metric. Note that this may not be their 'standard' metric (for example, the flat metric on the 2-dimensional torus is not the metric induced by its usual embedding into formula_0). Dimension 1. Every one-dimensional Riemannian manifold is flat. Conversely, given that every connected one-dimensional smooth manifold is diffeomorphic to either formula_1 or formula_2 it is straightforward to see that every connected one-dimensional Riemannian manifold is isometric to one of the following (each with their standard Riemannian structure): Only the first and last are complete. If one includes Riemannian manifolds-with-boundary, then the half-open and closed intervals must also be included. The simplicity of a complete description in this case could be ascribed to the fact that every one-dimensional Riemannian manifold has a smooth unit-length vector field, and that an isometry from one of the above model examples is provided by considering an integral curve. Dimension 2. The five possibilities, up to diffeomorphism. If formula_9 is a smooth two-dimensional connected complete flat Riemannian manifold, then formula_10 must be diffeomorphic to formula_11 formula_12 formula_13 the Möbius strip, or the Klein bottle. Note that the only compact possibilities are formula_14 and the Klein bottle, while the only orientable possibilities are formula_11 formula_15 and formula_16 It takes more effort to describe the distinct complete flat Riemannian metrics on these spaces. For instance, the two factors of formula_14 can have any two real numbers as their radii. These metrics are distinguished from each other by the ratio of their two radii, so this space has infinitely many different flat product metrics which are not isometric up to a scale factor. In order to talk uniformly about the five possibilities, and in particular to work concretely with the Möbius strip and the Klein bottle as abstract manifolds, it is useful to use the language of group actions. The five possibilities, up to isometry. Given formula_17 let formula_18 denote the translation formula_19 given by formula_20 Let formula_21 denote the reflection formula_19 given by formula_22 Given two positive numbers formula_23 consider the following subgroups of formula_24 the group of isometries of formula_25 with its standard metric. These are all groups acting freely and properly discontinuously on formula_11 and so the various coset spaces formula_32 all naturally have the structure of two-dimensional complete flat Riemannian manifolds. None of them are isometric to one another, and any smooth two-dimensional complete flat connected Riemannian manifold is isometric to one of them. Orbifolds. There are 17 compact 2-dimensional orbifolds with flat metric (including the torus and Klein bottle), listed in the article on orbifolds, that correspond to the 17 wallpaper groups. Remarks. 
Note that the standard 'picture' of the torus as a doughnut does not present it with a flat metric, since the points furthest from the center have positive curvature while the points closest to the center have negative curvature. According to Kuiper's formulation of the Nash embedding theorem, there is a formula_33 embedding formula_34 which induces any of the flat product metrics which exist on formula_13 but these are not easily visualizable. Since formula_35 is presented as an embedded submanifold of formula_11 any of the (flat) product structures on formula_14 are naturally presented as submanifolds of formula_36 Likewise, the standard three-dimensional visualizations of the Klein bottle do not present a flat metric. The standard construction of a Möbius strip, by gluing ends of a strip of paper together, does indeed give it a flat metric, but it is not complete. Dimension 3. There are 6 orientable and 4 non-orientable compact flat 3-manifolds, which are all Seifert fiber spaces; they are the quotient groups of formula_0 by the 10 torsion-free crystallographic groups. There are also 4 orientable and 4 non-orientable non-compact spaces. Orientable. The 10 orientable flat 3-manifolds are: Non-orientable. The 8 non-orientable 3-manifolds are: Relation to amenability. Among all closed manifolds with non-positive sectional curvature, flat manifolds are characterized as precisely those with an amenable fundamental group. This is a consequence of the Adams-Ballmann theorem (1998), which establishes this characterization in the much more general setting of discrete cocompact groups of isometries of Hadamard spaces. This provides a far-reaching generalisation of Bieberbach's theorem. The discreteness assumption is essential in the Adams-Ballmann theorem: otherwise, the classification must include symmetric spaces, Bruhat-Tits buildings and Bass-Serre trees in view of the "indiscrete" Bieberbach theorem of Caprace-Monod. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
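The translations formula_18 and the reflection formula_21 introduced in the two-dimensional classification above generate the five groups used there. The sketch below is a small illustrative encoding of such isometries of formula_25 (the encoding and names are not from the article, and only the isometries needed here are supported); it shows that the square of the glide-type element obtained by composing a translation formula_18 with the reflection formula_21 is a pure translation, which reflects the fact, guaranteed in general by Bieberbach's theorem cited above, that compact flat manifolds such as the Klein bottle are finitely covered by tori.

```python
def translation(x0, y0):
    """The isometry T_(x0, y0): (x, y) -> (x + x0, y + y0), stored as (eps, tx, ty)
    meaning (x, y) -> (x + tx, eps*y + ty)."""
    return (1, x0, y0)

REFLECTION = (-1, 0.0, 0.0)      # R: (x, y) -> (x, -y)

def compose(f, g):
    """Composite f o g (apply g first) of two isometries in the above form."""
    ef, txf, tyf = f
    eg, txg, tyg = g
    return (ef * eg, txg + txf, ef * tyg + tyf)

def apply(f, point):
    e, tx, ty = f
    return (point[0] + tx, e * point[1] + ty)

a = 1.0
glide = compose(translation(a, 0.0), REFLECTION)   # T_(a, 0) composed with R
print(apply(glide, (0.3, 0.7)))   # (1.3, -0.7)
print(compose(glide, glide))      # (1, 2.0, 0.0): the pure translation T_(2a, 0)
```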
[ { "math_id": 0, "text": "\\mathbb{R}^3" }, { "math_id": 1, "text": "\\mathbb{R}" }, { "math_id": 2, "text": "S^1," }, { "math_id": 3, "text": "(0,x)" }, { "math_id": 4, "text": "x>0" }, { "math_id": 5, "text": "(0,\\infty)" }, { "math_id": 6, "text": "\\{(x,y)\\in \\mathbb{R}^2:x^2+y^2=r^2\\}" }, { "math_id": 7, "text": "r," }, { "math_id": 8, "text": "r>0." }, { "math_id": 9, "text": "(M,g)" }, { "math_id": 10, "text": "M" }, { "math_id": 11, "text": "\\mathbb{R}^2," }, { "math_id": 12, "text": "S^1\\times\\mathbb{R}," }, { "math_id": 13, "text": "S^1\\times S^1," }, { "math_id": 14, "text": "S^1\\times S^1" }, { "math_id": 15, "text": "S^1\\times \\mathbb{R}," }, { "math_id": 16, "text": "S^1\\times S^1." }, { "math_id": 17, "text": "(x_0,y_0)\\in\\mathbb{R}^2," }, { "math_id": 18, "text": "T_{(x_0,y_0)}" }, { "math_id": 19, "text": "\\mathbb{R}^2\\to\\mathbb{R}^2" }, { "math_id": 20, "text": "(x,y)\\mapsto(x+x_0,y+y_0)." }, { "math_id": 21, "text": "R" }, { "math_id": 22, "text": "(x,y)\\mapsto(x,-y)." }, { "math_id": 23, "text": "a,b," }, { "math_id": 24, "text": "\\operatorname{Isom}(\\mathbb{R}^2)," }, { "math_id": 25, "text": "\\mathbb{R}^2" }, { "math_id": 26, "text": "G_{e}=\\{T_{(0,0)}\\}" }, { "math_id": 27, "text": "G_{\\text{cyl}}(a)=\\{T_{(an,0)}:n\\in\\mathbb{Z}\\}" }, { "math_id": 28, "text": "G_{\\text{tor}}(a,b)=\\{T_{(na,mb)}:m,n\\in\\mathbb{Z}\\}" }, { "math_id": 29, "text": "a<b." }, { "math_id": 30, "text": "G_{\\text{Moeb}}(a)=\\{T_{(2na,0)}:n\\in\\mathbb{Z}\\}\\cup\\{T_{((2n+1)a,0)}\\circ R:n\\in\\mathbb{Z}\\}" }, { "math_id": 31, "text": "G_{\\text{KB}}(b)=\\{T_{(2na,bm)}:n,m\\in\\mathbb{Z}\\}\\cup\\{T_{((2n+1)a,bm)}\\circ R:n,m\\in\\mathbb{Z}\\}" }, { "math_id": 32, "text": "\\mathbb{R}^2/G" }, { "math_id": 33, "text": "C^1" }, { "math_id": 34, "text": "S^1\\times S^1\\to\\mathbb{R}^3" }, { "math_id": 35, "text": "S^1" }, { "math_id": 36, "text": "\\mathbb{R}^2\\times\\mathbb{R}^2=\\mathbb{R}^4." }, { "math_id": 37, "text": "T^3" }, { "math_id": 38, "text": "S^1 \\times \\mathbb{R}^2" }, { "math_id": 39, "text": "T^2 \\times \\mathbb{R}" }, { "math_id": 40, "text": "S^1 \\times K" }, { "math_id": 41, "text": "K \\times \\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=12756616
12756944
Freiheitssatz
In mathematics, the Freiheitssatz (German: "freedom/independence theorem": "Freiheit" + "Satz") is a result in the presentation theory of groups, stating that certain subgroups of a one-relator group are free groups. Statement. Consider a group presentation formula_0 given by n generators "x""i" and a single cyclically reduced relator r. If "x"1 appears in r, then (according to the freiheitssatz) the subgroup of G generated by "x"2, ..., "x""n" is a free group, freely generated by "x"2, ..., "x""n". In other words, the only relations involving "x"2, ..., "x""n" are the trivial ones. History. The result was proposed by the German mathematician Max Dehn and proved by his student, Wilhelm Magnus, in his doctoral thesis. Although Dehn expected Magnus to find a topological proof, Magnus instead found a proof based on mathematical induction and amalgamated products of groups. Different induction-based proofs were given later by and . Significance. The freiheitssatz has become "the cornerstone of one-relator group theory", and motivated the development of the theory of amalgamated products. It also provides an analogue, in non-commutative group theory, of certain results on vector spaces and other commutative groups. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G = \\langle x_{1}, \\dots, x_{n} | r = 1 \\rangle" } ]
https://en.wikipedia.org/wiki?curid=12756944
1275978
Seifert conjecture
In mathematics, the Seifert conjecture states that every nonsingular, continuous vector field on the 3-sphere has a closed orbit. It is named after Herbert Seifert. In a 1950 paper, Seifert asked if such a vector field exists, but did not phrase non-existence as a conjecture. He also established the conjecture for perturbations of the Hopf fibration. The conjecture was disproven in 1974 by Paul Schweitzer, who exhibited a formula_0 counterexample. Schweitzer's construction was then modified by Jenny Harrison in 1988 to make a formula_1 counterexample for some formula_2. The existence of smoother counterexamples remained an open question until 1993 when Krystyna Kuperberg constructed a very different formula_3 counterexample. Later this construction was shown to have real analytic and piecewise linear versions. In 1997 for the particular case of incompressible fluids it was shown that all formula_4 steady state flows on formula_5 possess closed flowlines based on similar results for Beltrami flows on the Weinstein conjecture. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C^1" }, { "math_id": 1, "text": "C^{2+\\delta}" }, { "math_id": 2, "text": "\\delta > 0" }, { "math_id": 3, "text": "C^\\infty" }, { "math_id": 4, "text": "C^\\omega" }, { "math_id": 5, "text": "S^3" } ]
https://en.wikipedia.org/wiki?curid=1275978
12761741
Robinson's joint consistency theorem
Theorem of mathematical logic Robinson's joint consistency theorem is an important theorem of mathematical logic. It is related to Craig interpolation and Beth definability. The classical formulation of Robinson's joint consistency theorem is as follows: Let formula_0 and formula_1 be first-order theories. If formula_0 and formula_1 are consistent and the intersection formula_2 is complete (in the common language of formula_0 and formula_1), then the union formula_3 is consistent. A theory formula_4 is called complete if it decides every formula, meaning that for every sentence formula_5 the theory contains the sentence or its negation but not both (that is, either formula_6 or formula_7). Since the completeness assumption is quite hard to fulfill, there is a variant of the theorem: Let formula_0 and formula_1 be first-order theories. If formula_0 and formula_1 are consistent and if there is no formula formula_8 in the common language of formula_0 and formula_1 such that formula_9 and formula_10 then the union formula_11 is consistent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
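The theorem concerns first-order theories, but the shape of the statement can be illustrated in a toy propositional setting, where consistency is just satisfiability and can be checked mechanically. The sketch below uses SymPy's satisfiability checker on two small "theories" sharing the single atom q; this is only an analogy under simplifying assumptions, not the first-order theorem itself, and the names are illustrative.

```python
from sympy import symbols, And, Implies, Not
from sympy.logic.inference import satisfiable

# T1 uses {p, q}, T2 uses {q, r}; the shared vocabulary is {q}.
p, q, r = symbols('p q r')
T1 = And(p, Implies(p, q))        # proves q
T2 = And(q, Implies(q, r))        # also proves q

def consistent(theory):
    return bool(satisfiable(theory))

def proves(theory, sentence):
    # theory entails sentence iff theory & ~sentence is unsatisfiable
    return not satisfiable(And(theory, Not(sentence)))

print(consistent(T1), consistent(T2))      # True True: both theories are consistent
print(proves(T1, q), proves(T2, Not(q)))   # True False: T2 does not refute the shared
                                           # sentence q that T1 proves
print(consistent(And(T1, T2)))             # True: the union is consistent
```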
[ { "math_id": 0, "text": "T_1" }, { "math_id": 1, "text": "T_2" }, { "math_id": 2, "text": "T_1 \\cap T_2" }, { "math_id": 3, "text": "T_1 \\cup T_2" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\varphi," }, { "math_id": 6, "text": "T \\vdash \\varphi" }, { "math_id": 7, "text": "T \\vdash \\neg \\varphi" }, { "math_id": 8, "text": "\\varphi" }, { "math_id": 9, "text": "T_1 \\vdash \\varphi" }, { "math_id": 10, "text": "T_2 \\vdash \\neg \\varphi," }, { "math_id": 11, "text": "T_1\\cup T_2" } ]
https://en.wikipedia.org/wiki?curid=12761741
1276320
Proper transfer function
In control theory, a proper transfer function is a transfer function in which the degree of the numerator does not exceed the degree of the denominator. A strictly proper transfer function is a transfer function where the degree of the numerator is less than the degree of the denominator. The difference between the degree of the denominator (number of poles) and degree of the numerator (number of zeros) is the "relative degree" of the transfer function. Example. The following transfer function: formula_0 is proper, because formula_1. It is biproper (the degrees of numerator and denominator are equal), because formula_2, but it is not strictly proper, because formula_3. The following transfer function is not proper (and hence not strictly proper): formula_4 because formula_5. A transfer function that is not proper can be made proper by using the method of long division. The following transfer function is strictly proper: formula_6 because formula_7. Implications. A proper transfer function will never grow unbounded as the frequency approaches infinity: formula_8 A strictly proper transfer function will approach zero as the frequency approaches infinity (which is true for all physical processes): formula_9 Also, the integral of the real part of a strictly proper transfer function is zero.
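The definitions above amount to comparing polynomial degrees, which is easy to automate. The sketch below classifies a transfer function from its numerator and denominator coefficient lists (highest power first); the function name and the coefficient convention are illustrative choices.

```python
def classify_transfer_function(num, den):
    """Classify G(s) = N(s)/D(s) given coefficient lists, highest power first.

    Returns one of 'strictly proper', 'biproper', 'not proper', together with
    the relative degree deg(D) - deg(N).
    """
    def degree(coeffs):
        # Ignore leading zeros, so e.g. [0, 1, 2] is treated as degree 1.
        for i, c in enumerate(coeffs):
            if c != 0:
                return len(coeffs) - 1 - i
        return 0
    rel_deg = degree(den) - degree(num)
    if rel_deg > 0:
        kind = 'strictly proper'
    elif rel_deg == 0:
        kind = 'biproper'
    else:
        kind = 'not proper'
    return kind, rel_deg

# Quartic over quartic: proper and biproper, relative degree 0.
print(classify_transfer_function([1, 2, 3, 4, 5], [1, 1, 1, 1, 1]))
# Quartic numerator over cubic denominator: not proper.
print(classify_transfer_function([1, 2, 3, 4, 5], [1, 1, 1, 1]))
# Cubic numerator over quartic denominator: strictly proper, relative degree 1.
print(classify_transfer_function([1, 2, 3, 4], [1, 1, 1, 1, 1]))
```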
[ { "math_id": 0, "text": " \\textbf{G}(s) = \\frac{\\textbf{N}(s)}{\\textbf{D}(s)} = \\frac{s^{4} + n_{1}s^{3} + n_{2}s^{2} + n_{3}s + n_{4}}{s^{4} + d_{1}s^{3} + d_{2}s^{2} + d_{3}s + d_{4}}" }, { "math_id": 1, "text": " \\deg(\\textbf{N}(s)) = 4 \\leq \\deg(\\textbf{D}(s)) = 4 " }, { "math_id": 2, "text": " \\deg(\\textbf{N}(s)) = 4 = \\deg(\\textbf{D}(s)) = 4 " }, { "math_id": 3, "text": " \\deg(\\textbf{N}(s)) = 4 \\nless \\deg(\\textbf{D}(s)) = 4 " }, { "math_id": 4, "text": " \\textbf{G}(s) = \\frac{\\textbf{N}(s)}{\\textbf{D}(s)} = \\frac{s^{4} + n_{1}s^{3} + n_{2}s^{2} + n_{3}s + n_{4}}{d_{1}s^{3} + d_{2}s^{2} + d_{3}s + d_{4}}" }, { "math_id": 5, "text": " \\deg(\\textbf{N}(s)) = 4 \\nleq \\deg(\\textbf{D}(s)) = 3 " }, { "math_id": 6, "text": " \\textbf{G}(s) = \\frac{\\textbf{N}(s)}{\\textbf{D}(s)} = \\frac{n_{1}s^{3} + n_{2}s^{2} + n_{3}s + n_{4}}{s^{4} + d_{1}s^{3} + d_{2}s^{2} + d_{3}s + d_{4}}" }, { "math_id": 7, "text": " \\deg(\\textbf{N}(s)) = 3 < \\deg(\\textbf{D}(s)) = 4 " }, { "math_id": 8, "text": " |\\textbf{G}(\\pm j\\infty)| < \\infty " }, { "math_id": 9, "text": " \\textbf{G}(\\pm j\\infty) = 0 " } ]
https://en.wikipedia.org/wiki?curid=1276320
1276437
Soil mechanics
Branch of soil physics and applied mechanics that describes the behavior of soils Soil mechanics is a branch of soil physics and applied mechanics that describes the behavior of soils. It differs from fluid mechanics and solid mechanics in the sense that soils consist of a heterogeneous mixture of fluids (usually air and water) and particles (usually clay, silt, sand, and gravel) but soil may also contain organic solids and other matter. Along with rock mechanics, soil mechanics provides the theoretical basis for analysis in geotechnical engineering, a subdiscipline of civil engineering, and engineering geology, a subdiscipline of geology. Soil mechanics is used to analyze the deformations of and flow of fluids within natural and man-made structures that are supported on or made of soil, or structures that are buried in soils. Example applications are building and bridge foundations, retaining walls, dams, and buried pipeline systems. Principles of soil mechanics are also used in related disciplines such as geophysical engineering, coastal engineering, agricultural engineering, hydrology and soil physics. This article describes the genesis and composition of soil, the distinction between "pore water pressure" and inter-granular "effective stress", capillary action of fluids in the soil pore spaces, "soil classification", "seepage" and "permeability", time dependent change of volume due to squeezing water out of tiny pore spaces, also known as "consolidation", "shear strength" and stiffness of soils. The shear strength of soils is primarily derived from friction between the particles and interlocking, which are very sensitive to the effective stress. The article concludes with some examples of applications of the principles of soil mechanics such as slope stability, lateral earth pressure on retaining walls, and bearing capacity of foundations. Genesis and composition of soils. Genesis. The primary mechanism of soil creation is the weathering of rock. All rock types (igneous rock, metamorphic rock and sedimentary rock) may be broken down into small particles to create soil. Weathering mechanisms are physical weathering, chemical weathering, and biological weathering Human activities such as excavation, blasting, and waste disposal, may also create soil. Over geologic time, deeply buried soils may be altered by pressure and temperature to become metamorphic or sedimentary rock, and if melted and solidified again, they would complete the geologic cycle by becoming igneous rock. Physical weathering includes temperature effects, freeze and thaw of water in cracks, rain, wind, impact and other mechanisms. Chemical weathering includes dissolution of matter composing a rock and precipitation in the form of another mineral. Clay minerals, for example can be formed by weathering of feldspar, which is the most common mineral present in igneous rock. The most common mineral constituent of silt and sand is quartz, also called silica, which has the chemical name silicon dioxide. The reason that feldspar is most common in rocks but silica is more prevalent in soils is that feldspar is much more soluble than silica. Silt, Sand, and Gravel are basically little pieces of broken rocks. According to the Unified Soil Classification System, silt particle sizes are in the range of 0.002 mm to 0.075 mm and sand particles have sizes in the range of 0.075 mm to 4.75 mm. Gravel particles are broken pieces of rock in the size range 4.75 mm to 100 mm. Particles larger than gravel are called cobbles and boulders. 
Transport. Soil deposits are affected by the mechanism of transport and deposition to their location. Soils that are not transported are called residual soils—they exist at the same location as the rock from which they were generated. Decomposed granite is a common example of a residual soil. The common mechanisms of transport are the actions of gravity, ice, water, and wind. Wind blown soils include dune sands and loess. Water carries particles of different size depending on the speed of the water, thus soils transported by water are graded according to their size. Silt and clay may settle out in a lake, and gravel and sand collect at the bottom of a river bed. Wind blown soil deposits (aeolian soils) also tend to be sorted according to their grain size. Erosion at the base of glaciers is powerful enough to pick up large rocks and boulders as well as soil; soils dropped by melting ice can be a well graded mixture of widely varying particle sizes. Gravity on its own may also carry particles down from the top of a mountain to make a pile of soil and boulders at the base; soil deposits transported by gravity are called colluvium. The mechanism of transport also has a major effect on the particle shape. For example, low velocity grinding in a river bed will produce rounded particles. Freshly fractured colluvium particles often have a very angular shape. Soil composition. Soil mineralogy. Silts, sands and gravels are classified by their size, and hence they may consist of a variety of minerals. Owing to the stability of quartz compared to other rock minerals, quartz is the most common constituent of sand and silt. Mica, and feldspar are other common minerals present in sands and silts. The mineral constituents of gravel may be more similar to that of the parent rock. The common clay minerals are montmorillonite or smectite, illite, and kaolinite or kaolin. These minerals tend to form in sheet or plate like structures, with length typically ranging between 10−7 m and 4x10−6 m and thickness typically ranging between 10−9 m and 2x10−6 m, and they have a relatively large specific surface area. The specific surface area (SSA) is defined as the ratio of the surface area of particles to the mass of the particles. Clay minerals typically have specific surface areas in the range of 10 to 1,000 square meters per gram of solid. Due to the large surface area available for chemical, electrostatic, and van der Waals interaction, the mechanical behavior of clay minerals is very sensitive to the amount of pore fluid available and the type and amount of dissolved ions in the pore fluid. The minerals of soils are predominantly formed by atoms of oxygen, silicon, hydrogen, and aluminum, organized in various crystalline forms. These elements along with calcium, sodium, potassium, magnesium, and carbon constitute over 99 per cent of the solid mass of soils. Grain size distribution. Soils consist of a mixture of particles of different size, shape and mineralogy. Because the size of the particles obviously has a significant effect on the soil behavior, the grain size and grain size distribution are used to classify soils. The grain size distribution describes the relative proportions of particles of various sizes. The grain size is often visualized in a cumulative distribution graph which, for example, plots the percentage of particles finer than a given size as a function of size. The median grain size, formula_0, is the size for which 50% of the particle mass consists of finer particles. 
Soil behavior, especially the hydraulic conductivity, tends to be dominated by the smaller particles, hence, the term "effective size", denoted by formula_1, is defined as the size for which 10% of the particle mass consists of finer particles. Sands and gravels that possess a wide range of particle sizes with a smooth distribution of particle sizes are called "well graded" soils. If the soil particles in a sample are predominantly in a relatively narrow range of sizes, the sample is "uniformly graded". If a soil sample has distinct gaps in the gradation curve, e.g., a mixture of gravel and fine sand, with no coarse sand, the sample may be "gap graded". "Uniformly graded" and "gap graded" soils are both considered to be "poorly graded". There are many methods for measuring particle-size distribution. The two traditional methods are sieve analysis and hydrometer analysis. Sieve analysis. The size distribution of gravel and sand particles is typically measured using sieve analysis. The formal procedure is described in ASTM D6913-04(2009). A stack of sieves with accurately dimensioned holes between a mesh of wires is used to separate the particles into size bins. A known volume of dried soil, with clods broken down to individual particles, is put into the top of a stack of sieves arranged from coarse to fine. The stack of sieves is shaken for a standard period of time so that the particles are sorted into size bins. This method works reasonably well for particles in the sand and gravel size range. Fine particles tend to stick to each other, and hence the sieving process is not an effective method for them. If there are a lot of fines (silt and clay) present in the soil, it may be necessary to run water through the sieves to wash the coarse particles and clods through. A variety of sieve sizes are available. The boundary between sand and silt is arbitrary. According to the Unified Soil Classification System, a #4 sieve (4 openings per inch) having a 4.75 mm opening size separates sand from gravel and a #200 sieve with a 0.075 mm opening separates sand from silt and clay. According to the British standard, 0.063 mm is the boundary between sand and silt, and 2 mm is the boundary between sand and gravel. Hydrometer analysis. The classification of fine-grained soils, i.e., soils that are finer than sand, is determined primarily by their Atterberg limits, not by their grain size. If it is important to determine the grain size distribution of fine-grained soils, the hydrometer test may be performed. In the hydrometer test, the soil particles are mixed with water and shaken to produce a dilute suspension in a glass cylinder, and then the cylinder is left to sit. A hydrometer is used to measure the density of the suspension as a function of time. Clay particles may take several hours to settle past the depth of measurement of the hydrometer. Sand particles may take less than a second. Stokes' law provides the theoretical basis to calculate the relationship between sedimentation velocity and particle size. ASTM provides the detailed procedures for performing the hydrometer test. Clay particles can be sufficiently small that they never settle because they are kept in suspension by Brownian motion, in which case they may be classified as colloids. Mass-volume relations. There are a variety of parameters used to describe the relative proportions of air, water and solid in a soil. This section defines these parameters and some of their interrelationships.
The basic notation is as follows: formula_2, formula_3, and formula_4 represent the volumes of air, water and solids in a soil mixture; formula_5, formula_6, and formula_7 represent the weights of air, water and solids in a soil mixture; formula_8, formula_9, and formula_10 represent the masses of air, water and solids in a soil mixture; formula_11, formula_12, and formula_13 represent the densities of the constituents (air, water and solids) in a soil mixture; Note that the weights, W, can be obtained by multiplying the mass, M, by the acceleration due to gravity, g; e.g., formula_14 Specific Gravity is the ratio of the density of one material compared to the density of pure water (formula_15). Specific gravity of solids, formula_16 Note that specific weight, conventionally denoted by the symbol formula_17 may be obtained by multiplying the density ( formula_18 ) of a material by the acceleration due to gravity, formula_19. Density, Bulk Density, or Wet Density, formula_18, are different names for the density of the mixture, i.e., the total mass of air, water, solids divided by the total volume of air water and solids (the mass of air is assumed to be zero for practical purposes): formula_20 Dry Density, formula_21, is the mass of solids divided by the total volume of air water and solids: formula_22 Buoyant Density, formula_23, defined as the density of the mixture minus the density of water is useful if the soil is submerged under water: formula_24 where formula_25 is the density of water Water Content, formula_26 is the ratio of mass of water to mass of solid. It is easily measured by weighing a sample of the soil, drying it out in an oven and re-weighing. Standard procedures are described by ASTM. formula_27 Void ratio, formula_28, is the ratio of the volume of voids to the volume of solids: formula_29 Porosity, formula_30, is the ratio of volume of voids to the total volume, and is related to the void ratio: formula_31 Degree of saturation, formula_32, is the ratio of the volume of water to the volume of voids: formula_33 From the above definitions, some useful relationships can be derived by use of basic algebra. formula_34 formula_35 formula_36 Soil classification. Geotechnical engineers classify the soil particle types by performing tests on disturbed (dried, passed through sieves, and remolded) samples of the soil. This provides information about the characteristics of the soil grains themselves. Classification of the types of grains present in a soil does not account for important effects of the "structure" or "fabric" of the soil, terms that describe compactness of the particles and patterns in the arrangement of particles in a load carrying framework as well as the pore size and pore fluid distributions. Engineering geologists also classify soils based on their genesis and depositional history. Classification of soil grains. In the US and other countries, the Unified Soil Classification System (USCS) is often used for soil classification. Other classification systems include the British Standard BS 5930 and the AASHTO soil classification system. Classification of sands and gravels. In the USCS, gravels (given the symbol "G") and sands (given the symbol "S") are classified according to their grain size distribution. For the USCS, gravels may be given the classification symbol "GW" (well-graded gravel), "GP" (poorly graded gravel), "GM" (gravel with a large amount of silt), or "GC" (gravel with a large amount of clay). 
Likewise, sands may be classified as being "SW", "SP", "SM" or "SC". Sands and gravels with a small but non-negligible amount of fines (5–12%) may be given a dual classification such as "SW-SC". Atterberg limits. Clays and silts, often called 'fine-grained soils', are classified according to their Atterberg limits; the most commonly used Atterberg limits are the Liquid Limit (denoted by "LL" or formula_37), Plastic Limit (denoted by "PL" or formula_38), and Shrinkage Limit (denoted by "SL"). The Liquid Limit is the water content at which the soil behavior transitions from a plastic solid to a liquid. The Plastic Limit is the water content at which the soil behavior transitions from that of a plastic solid to a brittle solid. The Shrinkage Limit corresponds to a water content below which the soil will not shrink as it dries. The consistency of fine-grained soil varies in proportion to the water content of the soil. As the transitions from one state to another are gradual, the tests have adopted arbitrary definitions to determine the boundaries of the states. The liquid limit is determined by measuring the water content for which a groove closes after 25 blows in a standard test. Alternatively, a fall cone test apparatus may be used to measure the liquid limit. The undrained shear strength of remolded soil at the liquid limit is approximately 2 kPa. The Plastic Limit is the water content below which it is not possible to roll by hand the soil into 3 mm diameter cylinders. The soil cracks or breaks up as it is rolled down to this diameter. Remolded soil at the plastic limit is quite stiff, having an undrained shear strength of the order of about 200 kPa. The Plasticity Index of a particular soil specimen is defined as the difference between the Liquid Limit and the Plastic Limit of the specimen; it is an indicator of how much water the soil particles in the specimen can absorb, and correlates with many engineering properties like permeability, compressibility, shear strength and others. Generally, clays with high plasticity have lower permeability and are also difficult to compact. Classification of silts and clays. According to the Unified Soil Classification System (USCS), silts and clays are classified by plotting the values of their plasticity index and liquid limit on a plasticity chart. The A-Line on the chart separates clays (given the USCS symbol "C") from silts (given the symbol "M"). LL=50% separates high plasticity soils (given the modifier symbol "H") from low plasticity soils (given the modifier symbol "L"). A soil that plots above the A-line and has LL > 50% would, for example, be classified as "CH". Other possible classifications of silts and clays are "ML", "CL" and "MH". If the Atterberg limits plot in the "hatched" region on the graph near the origin, the soils are given the dual classification 'CL-ML'. Indices related to soil strength. Liquidity index. The effects of the water content on the strength of saturated remolded soils can be quantified by the use of the "liquidity index", "LI": formula_39 When the LI is 1, remolded soil is at the liquid limit and it has an undrained shear strength of about 2 kPa. When the soil is at the plastic limit, the LI is 0 and the undrained shear strength is about 200 kPa. Relative density.
The density of sands (cohesionless soils) is often characterized by the relative density, formula_40 formula_41 where: formula_42 is the "maximum void ratio" corresponding to a very loose state, formula_43 is the "minimum void ratio" corresponding to a very dense state and formula_28 is the "in situ" void ratio. Methods used to calculate relative density are defined in ASTM D4254-00(2006). Thus if formula_44 the sand or gravel is very dense, and if formula_45 the soil is extremely loose and unstable. Seepage: steady state flow of water. If fluid pressures in a soil deposit are uniformly increasing with depth according to formula_46 then hydrostatic conditions will prevail and the fluids will not be flowing through the soil. formula_47 is the depth below the water table. However, if the water table is sloping or there is a perched water table as indicated in the accompanying sketch, then seepage will occur. For steady state seepage, the seepage velocities are not varying with time. If the water tables are changing levels with time, or if the soil is in the process of consolidation, then steady state conditions do not apply. Darcy's law. Darcy's law states that the volume of flow of the pore fluid through a porous medium per unit time is proportional to the rate of change of excess fluid pressure with distance. The constant of proportionality includes the viscosity of the fluid and the intrinsic permeability of the soil. For the simple case of a horizontal tube filled with soil formula_48 The total discharge, formula_49 (having units of volume per time, e.g., ft3/s or m3/s), is proportional to the intrinsic permeability, formula_50, the cross sectional area, formula_51, and rate of pore pressure change with distance, formula_52, and inversely proportional to the dynamic viscosity of the fluid, formula_53. The negative sign is needed because fluids flow from high pressure to low pressure. So if the change in pressure is negative (in the formula_54-direction) then the flow will be positive (in the formula_54-direction). The above equation works well for a horizontal tube, but if the tube was inclined so that point b was a different elevation than point a, the equation would not work. The effect of elevation is accounted for by replacing the pore pressure by "excess pore pressure", formula_55 defined as: formula_56 where formula_57 is the depth measured from an arbitrary elevation reference (datum). Replacing formula_58 by formula_55 we obtain a more general equation for flow: formula_59 Dividing both sides of the equation by formula_51, and expressing the rate of change of excess pore pressure as a derivative, we obtain a more general equation for the apparent velocity in the x-direction: formula_60 where formula_61 has units of velocity and is called the "Darcy velocity" (or the "specific discharge", "filtration velocity", or "superficial velocity"). The "pore" or "interstitial velocity" formula_62 is the average velocity of fluid molecules in the pores; it is related to the Darcy velocity and the porosity formula_30 through the "Dupuit-Forchheimer relationship" formula_63 Civil engineers predominantly work on problems that involve water and predominantly work on problems on earth (in earth's gravity). For this class of problems, civil engineers will often write Darcy's law in a much simpler form: formula_64 where formula_65 is the hydraulic conductivity, defined as formula_66, and formula_67 is the hydraulic gradient. The hydraulic gradient is the rate of change of total head with distance. 
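As a numerical illustration of the simplified form of Darcy's law given above, the following C++ sketch computes the hydraulic gradient, the Darcy velocity, the total discharge through a cross-sectional area, and the pore velocity from the Dupuit-Forchheimer relationship. All input values are assumed for illustration and do not describe any particular soil.

#include <iostream>

int main() {
    // Assumed illustrative values (not measurements of a real deposit):
    double k  = 1.0e-5;  // hydraulic conductivity, m/s (order of a fine sand)
    double dh = 2.0;     // loss of total head along the flow path, m
    double L  = 50.0;    // length of the flow path, m
    double A  = 10.0;    // cross-sectional area of flow, m^2
    double n  = 0.35;    // porosity (dimensionless)

    double i      = dh / L;  // hydraulic gradient
    double v      = k * i;   // Darcy velocity (specific discharge), m/s
    double Q      = v * A;   // total discharge, m^3/s
    double v_pore = v / n;   // pore (interstitial) velocity, m/s

    std::cout << "hydraulic gradient i = " << i << "\n"
              << "Darcy velocity v     = " << v << " m/s\n"
              << "discharge Q          = " << Q << " m^3/s\n"
              << "pore velocity v/n    = " << v_pore << " m/s\n";
    return 0;
}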
The total head, formula_68, at a point is defined as the height (measured relative to the datum) to which water would rise in a piezometer at that point. The total head is related to the excess water pressure by: formula_69 and the term formula_70 is zero if the datum for head measurement is chosen at the same elevation as the origin for the depth, z, used to calculate formula_71. Typical values of hydraulic conductivity. Values of hydraulic conductivity, formula_65, can vary by many orders of magnitude depending on the soil type. Clays may have hydraulic conductivity as small as about formula_72, while gravels may have hydraulic conductivity up to about formula_73. Layering, heterogeneity, and disturbance during the sampling and testing process make the accurate measurement of soil hydraulic conductivity a very difficult problem. Flownets. Darcy's Law applies in one, two or three dimensions. In two or three dimensions, steady state seepage is described by Laplace's equation. Computer programs are available to solve this equation. But traditionally two-dimensional seepage problems were solved using a graphical procedure known as a flownet. One set of lines in the flownet is in the direction of the water flow (flow lines), and the other set is in the direction of constant total head (equipotential lines). Flownets may be used to estimate the quantity of seepage under dams and sheet piling. Seepage forces and erosion. When the seepage velocity is great enough, erosion can occur because of the frictional drag exerted on the soil particles. Vertically upwards seepage is a source of danger on the downstream side of sheet piling and beneath the toe of a dam or levee. Erosion of the soil, known as "soil piping", can lead to failure of the structure and to sinkhole formation. Seeping water removes soil, starting from the exit point of the seepage, and erosion advances upgradient. The term "sand boil" is used to describe the appearance of the discharging end of an active soil pipe. Seepage pressures. Seepage in an upward direction reduces the effective stress within the soil. When the water pressure at a point in the soil is equal to the total vertical stress at that point, the effective stress is zero and the soil has no frictional resistance to deformation. For a surface layer, the vertical effective stress becomes zero within the layer when the upward hydraulic gradient is equal to the critical gradient. At zero effective stress, soil has very little strength and layers of relatively impermeable soil may heave up due to the underlying water pressures. The loss in strength due to upward seepage is a common contributor to levee failures. The condition of zero effective stress associated with upward seepage is also called liquefaction, quicksand, or a boiling condition. Quicksand was so named because the soil particles move around and appear to be 'alive' (the biblical meaning of 'quick' – as opposed to 'dead'). (Note that it is not possible to be 'sucked down' into quicksand. On the contrary, you would float with about half your body out of the water.) Effective stress and capillarity: hydrostatic conditions. To understand the mechanics of soils, it is necessary to understand how normal stresses and shear stresses are shared by the different phases. Neither gas nor liquid provides significant resistance to shear stress. The shear resistance of soil is provided by friction and interlocking of the particles. The friction depends on the intergranular contact stresses between solid particles.
The normal stresses, on the other hand, are shared by the fluid and the particles. Although the pore air is relatively compressible, and hence takes little normal stress in most geotechnical problems, liquid water is relatively incompressible and if the voids are saturated with water, the pore water must be squeezed out in order to pack the particles closer together. The principle of effective stress, introduced by Karl Terzaghi, states that the effective stress "σ"' (i.e., the average intergranular stress between solid particles) may be calculated by a simple subtraction of the pore pressure from the total stress: formula_74 where "σ" is the total stress and "u" is the pore pressure. It is not practical to measure "σ"' directly, so in practice the vertical effective stress is calculated from the pore pressure and vertical total stress. The distinction between the terms pressure and stress is also important. By definition, pressure at a point is equal in all directions but stresses at a point can be different in different directions. In soil mechanics, compressive stresses and pressures are considered to be positive and tensile stresses are considered to be negative, which is different from the solid mechanics sign convention for stress. Total stress. For level ground conditions, the total vertical stress at a point, formula_75, on average, is the weight of everything above that point per unit area. The vertical stress beneath a uniform surface layer with density formula_18, and thickness formula_76 is for example: formula_77 where formula_19 is the acceleration due to gravity, and formula_17 is the unit weight of the overlying layer. If there are multiple layers of soil or water above the point of interest, the vertical stress may be calculated by summing the product of the unit weight and thickness of all of the overlying layers. Total stress increases with increasing depth in proportion to the density of the overlying soil. It is not possible to calculate the horizontal total stress in this way. Lateral earth pressures are addressed elsewhere. Pore water pressure. Hydrostatic conditions. If there is no pore water flow occurring in the soil, the pore water pressures will be hydrostatic. The water table is located at the depth where the water pressure is equal to the atmospheric pressure. For hydrostatic conditions, the water pressure increases linearly with depth below the water table: formula_78 where formula_12 is the density of water, and formula_47 is the depth below the water table. Capillary action. Due to surface tension, water will rise up in a small capillary tube above a free surface of water. Likewise, water will rise up above the water table into the small pore spaces around the soil particles. In fact the soil may be completely saturated for some distance above the water table. Above the height of capillary saturation, the soil may be wet but the water content will decrease with elevation. If the water in the capillary zone is not moving, the water pressure obeys the equation of hydrostatic equilibrium, formula_78, but note that formula_79, is negative above the water table. Hence, hydrostatic water pressures are negative above the water table. The thickness of the zone of capillary saturation depends on the pore size, but typically, the heights vary between a centimeter or so for coarse sand to tens of meters for a silt or clay. In fact the pore space of soil is a uniform fractal e.g. a set of uniformly distributed D-dimensional fractals of average linear size L. 
For clay soil, it has been found that L=0.15 mm and D=2.7. The surface tension of water explains why the water does not drain out of a wet sand castle or a moist ball of clay. Negative water pressures make the water stick to the particles and pull the particles toward each other; friction at the particle contacts makes a sand castle stable. But as soon as a wet sand castle is submerged below a free water surface, the negative pressures are lost and the castle collapses. Considering the effective stress equation, formula_80, if the water pressure is negative, the effective stress may be positive, even on a free surface (a surface where the total normal stress is zero). The negative pore pressure pulls the particles together and causes compressive particle-to-particle contact forces. Negative pore pressures in clayey soil can be much more powerful than those in sand. Negative pore pressures explain why clay soils shrink when they dry and swell as they are wetted. The swelling and shrinkage can cause major distress, especially to light structures and roads. Pore water pressures are addressed further in later sections of this article. Consolidation: transient flow of water. Consolidation is a process by which soils decrease in volume. It occurs when stress is applied to a soil that causes the soil particles to pack together more tightly, therefore reducing volume. When this occurs in a soil that is saturated with water, water will be squeezed out of the soil. The time required to squeeze the water out of a thick deposit of clayey soil might be years. For a layer of sand, the water may be squeezed out in a matter of seconds. A building foundation or construction of a new embankment will cause the soil below to consolidate and this will cause settlement which in turn may cause distress to the building or embankment. Karl Terzaghi developed the theory of one-dimensional consolidation which enables prediction of the amount of settlement and the time required for the settlement to occur. Afterwards, Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of Poroelasticity. Soils are tested with an oedometer test to determine their compression index and coefficient of consolidation. When stress is removed from a consolidated soil, the soil will rebound, drawing water back into the pores and regaining some of the volume it had lost in the consolidation process. If the stress is reapplied, the soil will reconsolidate along a recompression curve, defined by the recompression index. Soil that has been consolidated to a large pressure and has been subsequently unloaded is considered to be "overconsolidated". The maximum past vertical effective stress is termed the "preconsolidation stress". A soil which is currently experiencing the maximum past vertical effective stress is said to be "normally consolidated". The "overconsolidation ratio" (OCR) is the ratio of the maximum past vertical effective stress to the current vertical effective stress.
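A minimal sketch, assuming an invented one-layer profile, that ties together the effective stress principle introduced earlier and the overconsolidation ratio just defined: the vertical total stress is taken as unit weight times depth, the pore pressure as hydrostatic below the water table, and the preconsolidation stress is simply assumed. None of the numbers describe a real soil.

#include <iostream>

int main() {
    // Assumed illustrative values:
    double gamma   = 18000.0;   // unit weight of the soil, N/m^3
    double gamma_w = 9810.0;    // unit weight of water, N/m^3
    double z       = 10.0;      // depth of the point of interest, m
    double z_w     = 6.0;       // depth of that point below the water table, m
    double sigma_p = 250000.0;  // assumed preconsolidation stress, Pa

    double sigma_v   = gamma * z;           // vertical total stress, Pa
    double u         = gamma_w * z_w;       // hydrostatic pore water pressure, Pa
    double sigma_eff = sigma_v - u;         // vertical effective stress (Terzaghi), Pa
    double OCR       = sigma_p / sigma_eff; // overconsolidation ratio

    std::cout << "total stress     = " << sigma_v   / 1000.0 << " kPa\n"
              << "pore pressure    = " << u         / 1000.0 << " kPa\n"
              << "effective stress = " << sigma_eff / 1000.0 << " kPa\n"
              << "OCR              = " << OCR << "\n";
    return 0;
}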
The OCR is significant for two reasons: firstly, the compressibility of normally consolidated soil is significantly larger than that of overconsolidated soil; and secondly, the shear behavior and dilatancy of clayey soil are related to the OCR through critical state soil mechanics; highly overconsolidated clayey soils are dilatant, while normally consolidated soils tend to be contractive. Shear behavior: stiffness and strength. The shear strength and stiffness of soil determine whether or not soil will be stable and how much it will deform. Knowledge of the strength is necessary to determine if a slope will be stable, if a building or bridge might settle too far into the ground, and the limiting pressures on a retaining wall. It is important to distinguish between failure of a soil element and the failure of a geotechnical structure (e.g., a building foundation, slope or retaining wall); some soil elements may reach their peak strength prior to failure of the structure. Different criteria can be used to define the "shear strength" and the "yield point" for a soil element from a stress–strain curve. One may define the peak shear strength as the peak of a stress–strain curve, or the shear strength at critical state as the value after large strains when the shear resistance levels off. If the stress–strain curve does not stabilize before the end of the shear strength test, the "strength" is sometimes considered to be the shear resistance at 15–20% strain. The shear strength of soil depends on many factors including the effective stress and the void ratio. The shear stiffness is important, for example, for evaluation of the magnitude of deformations of foundations and slopes prior to failure and because it is related to the shear wave velocity. The slope of the initial, nearly linear, portion of a plot of shear stress as a function of shear strain is called the shear modulus. Friction, interlocking and dilation. Soil is an assemblage of particles that have little to no cementation while rock (such as sandstone) may consist of an assembly of particles that are strongly cemented together by chemical bonds. The shear strength of soil is primarily due to interparticle friction and therefore the shear resistance on a plane is approximately proportional to the effective normal stress on that plane. The angle of internal friction is thus closely related to the maximum stable slope angle, often called the angle of repose. But in addition to friction, soil derives significant shear resistance from interlocking of grains. If the grains are densely packed, the grains tend to spread apart from each other as they are subject to shear strain. The expansion of the particle matrix due to shearing was called dilatancy by Osborne Reynolds. If one considers the energy required to shear an assembly of particles, there is energy input by the shear force, T, moving a distance, x, and there is also energy input by the normal force, N, as the sample expands a distance, y. Due to the extra energy required for the particles to dilate against the confining pressures, dilatant soils have a greater peak strength than contractive soils. Furthermore, as dilative soil grains dilate, they become looser (their void ratio increases), and their rate of dilation decreases until they reach a critical void ratio. Contractive soils become denser as they shear, and their rate of contraction decreases until they reach a critical void ratio.
The tendency for a soil to dilate or contract depends primarily on the confining pressure and the void ratio of the soil. The rate of dilation is high if the confining pressure is small and the void ratio is small. The rate of contraction is high if the confining pressure is large and the void ratio is large. As a first approximation, the regions of contraction and dilation are separated by the critical state line. Failure criteria. After a soil reaches the critical state, it is no longer contracting or dilating and the shear stress on the failure plane formula_81 is determined by the effective normal stress on the failure plane formula_82 and the critical state friction angle formula_83: formula_84 The peak strength of the soil may be greater, however, due to the interlocking (dilatancy) contribution. This may be stated: formula_85 where formula_86. However, use of a friction angle greater than the critical state value for design requires care. The peak strength will not be mobilized everywhere at the same time in a practical problem such as a foundation, slope or retaining wall. The critical state friction angle is not nearly as variable as the peak friction angle and hence it can be relied upon with confidence. Not recognizing the significance of dilatancy, Coulomb proposed that the shear strength of soil may be expressed as a combination of adhesion and friction components: formula_87 It is now known that the formula_88 and formula_89 parameters in the last equation are not fundamental soil properties. In particular, formula_90 and formula_91 are different depending on the magnitude of effective stress. According to Schofield (2006), the longstanding use of formula_92 in practice has led many engineers to wrongly believe that formula_93 is a fundamental parameter. This assumption that formula_90 and formula_91 are constant can lead to overestimation of peak strengths. Structure, fabric, and chemistry. In addition to the friction and interlocking (dilatancy) components of strength, the structure and fabric also play a significant role in the soil behavior. The structure and fabric include factors such as the spacing and arrangement of the solid particles or the amount and spatial distribution of pore water; in some cases cementitious material accumulates at particle-particle contacts. The mechanical behavior of soil is affected by the density of the particles and their structure or arrangement, as well as the amount and spatial distribution of fluids present (e.g., water and air voids). Other factors include the electrical charge of the particles, the chemistry of the pore water, and chemical bonds (i.e. cementation: particles connected through a solid substance such as recrystallized calcium carbonate). Drained and undrained shear. The presence of nearly incompressible fluids such as water in the pore spaces affects the ability of the pores to dilate or contract. If the pores are saturated with water, water must be sucked into the dilating pore spaces to fill the expanding pores (this phenomenon is visible at the beach when apparently dry spots form around feet that press into the wet sand). Similarly, for contractive soil, water must be squeezed out of the pore spaces to allow contraction to take place. Dilation of the voids causes negative water pressures that draw fluid into the pores, and contraction of the voids causes positive pore pressures to push the water out of the pores.
If the rate of shearing is very large compared to the rate that water can be sucked into or squeezed out of the dilating or contracting pore spaces, then the shearing is called "undrained shear"; if the shearing is slow enough that the water pressures are negligible, the shearing is called "drained shear". During undrained shear, the water pressure u changes depending on volume change tendencies. From the effective stress equation, the change in u directly affects the effective stress by the equation: formula_74 and the strength is very sensitive to the effective stress. It follows then that the undrained shear strength of a soil may be smaller or larger than the drained shear strength depending upon whether the soil is contractive or dilative. Shear tests. Strength parameters can be measured in the laboratory using the direct shear test, triaxial shear test, simple shear test, fall cone test and (hand) shear vane test; there are numerous other devices and variations on these devices used in practice today. Tests conducted to characterize the strength and stiffness of the soils in the ground include the Cone penetration test and the Standard penetration test. Other factors. The stress–strain relationship of soils, and therefore the shearing strength, is affected by a number of factors. Applications. Lateral earth pressure. Lateral earth stress theory is used to estimate the amount of stress soil can exert perpendicular to gravity. This is the stress exerted on retaining walls. A lateral earth stress coefficient, K, is defined as the ratio of lateral (horizontal) effective stress to vertical effective stress for cohesionless soils (K=σ'h/σ'v). There are three coefficients: at-rest, active, and passive. At-rest stress is the lateral stress in the ground before any disturbance takes place. The active stress state is reached when a wall moves away from the soil under the influence of lateral stress, and results from shear failure due to reduction of lateral stress. The passive stress state is reached when a wall is pushed into the soil far enough to cause shear failure within the mass due to increase of lateral stress. There are many theories for estimating lateral earth stress; some are empirically based, and some are analytically derived. Bearing capacity. The bearing capacity of soil is the average contact stress between a foundation and the soil which will cause shear failure in the soil. Allowable bearing stress is the bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing stress is determined with regard to the maximum allowable settlement. It is important during the construction and design stages of a project to evaluate the subgrade strength. The California Bearing Ratio (CBR) test is commonly used to determine the suitability of a soil as a subgrade for design and construction. The field Plate Load Test is commonly used to predict the deformations and failure characteristics of the soil/subgrade and modulus of subgrade reaction (ks). The Modulus of subgrade reaction (ks) is used in foundation design, soil-structure interaction studies and design of highway pavements. Slope stability. The field of slope stability encompasses the analysis of static and dynamic stability of slopes of earth and rock-fill dams, slopes of other types of embankments, excavated slopes, and natural slopes in soil and soft rock.
Earthen slopes can develop a cut-spherical weakness zone. The probability of this happening can be calculated in advance using a simple 2-D circular analysis package. A primary difficulty with analysis is locating the most-probable slip plane for any given situation. Many landslides have been analyzed only after the fact. Landslides and rock strength are two factors for consideration. Recent developments. A recent finding in soil mechanics is that soil deformation can be described as the behavior of a dynamical system. This approach to soil mechanics is referred to as Dynamical Systems based Soil Mechanics (DSSM). DSSM holds simply that soil deformation is a Poisson process in which particles move to their final position at random shear strains. The basis of DSSM is that soils (including sands) can be sheared until they reach a steady-state condition at which, under conditions of constant strain-rate, there is no change in shear stress, effective confining stress, and void ratio. The steady-state was formally defined by Steve J. Poulos, an associate professor at the Soil Mechanics Department of Harvard University, who built off a hypothesis that Arthur Casagrande was formulating towards the end of his career. The steady state condition is not the same as the "critical state" condition. It differs from the critical state in that it specifies a statistically constant structure at the steady state. The steady-state values are also very slightly dependent on the strain-rate. Many systems in nature reach steady-states and dynamical systems theory is used to describe such systems. Soil shear can also be described as a dynamical system. The physical basis of the soil shear dynamical system is a Poisson process in which particles move to the steady-state at random shear strains. Joseph generalized this: particles move to their final position (not just the steady-state) at random shear strains. Because of its origins in the steady state concept, DSSM is sometimes informally called "Harvard soil mechanics." DSSM provides for very close fits to stress–strain curves, including for sands. Because it tracks conditions on the failure plane, it also provides close fits for the post-failure region of sensitive clays and silts, something that other theories are not able to do. Additionally, DSSM explains key relationships in soil mechanics that to date have simply been taken for granted, for example, why normalized undrained peak shear strengths vary with the log of the overconsolidation ratio and why stress–strain curves normalize with the initial effective confining stress; and why in one-dimensional consolidation the void ratio must vary with the log of the effective vertical stress, why the end-of-primary curve is unique for static load increments, and why the ratio of the creep value Cα to the compression index Cc must be approximately constant for a wide range of soils.
[ { "math_id": 0, "text": "D_{50} " }, { "math_id": 1, "text": "D_{10} " }, { "math_id": 2, "text": "V_a" }, { "math_id": 3, "text": "V_w" }, { "math_id": 4, "text": "V_s" }, { "math_id": 5, "text": "W_a" }, { "math_id": 6, "text": "W_w" }, { "math_id": 7, "text": "W_s" }, { "math_id": 8, "text": "M_a" }, { "math_id": 9, "text": "M_w" }, { "math_id": 10, "text": "M_s" }, { "math_id": 11, "text": "\\rho_a" }, { "math_id": 12, "text": "\\rho_w" }, { "math_id": 13, "text": "\\rho_s" }, { "math_id": 14, "text": "W_s = M_s g" }, { "math_id": 15, "text": "\\rho_w = 1 g/cm^3" }, { "math_id": 16, "text": "G_s = \\frac{\\rho_s} {\\rho_w}" }, { "math_id": 17, "text": "\\gamma" }, { "math_id": 18, "text": "\\rho" }, { "math_id": 19, "text": "g" }, { "math_id": 20, "text": "\\rho = \\frac{M_s + M_w}{V_s + V_w + V_a}= \\frac{M_t}{V_t}" }, { "math_id": 21, "text": "\\rho _d" }, { "math_id": 22, "text": "\\rho_d = \\frac{M_s}{V_s + V_w + V_a}= \\frac{M_s}{V_t}" }, { "math_id": 23, "text": "\\rho '" }, { "math_id": 24, "text": "\\rho ' = \\rho\\ - \\rho _w" }, { "math_id": 25, "text": "\\rho _w" }, { "math_id": 26, "text": "w" }, { "math_id": 27, "text": "w = \\frac{M_w}{M_s} = \\frac{W_w}{W_s}" }, { "math_id": 28, "text": "e" }, { "math_id": 29, "text": "e = \\frac{V_v}{V_s} = \\frac{V_v}{V_T - V_V} = \\frac{n}{1 - n}" }, { "math_id": 30, "text": "n" }, { "math_id": 31, "text": "n = \\frac{V_v}{V_t} = \\frac{V_v}{V_s + V_v}= \\frac{e}{1 + e}" }, { "math_id": 32, "text": "S" }, { "math_id": 33, "text": "S = \\frac{V_w}{V_v} " }, { "math_id": 34, "text": "\\rho = \\frac{(G_s+Se)\\rho_w}{1+e}" }, { "math_id": 35, "text": "\\rho = \\frac{(1+w)G_s\\rho_w}{1+e}" }, { "math_id": 36, "text": "w = \\frac{S e}{G_s}" }, { "math_id": 37, "text": "w_l" }, { "math_id": 38, "text": "w_p" }, { "math_id": 39, "text": " LI = \\frac{w-PL}{LL-PL}" }, { "math_id": 40, "text": " D_r" }, { "math_id": 41, "text": " D_r= \\frac{e_{max} - e}{e_{max} - e_{min}} 100\\%" }, { "math_id": 42, "text": "e_{max}" }, { "math_id": 43, "text": "e_{min}" }, { "math_id": 44, "text": " D_r = 100\\%" }, { "math_id": 45, "text": " D_r = 0\\%" }, { "math_id": 46, "text": "u = \\rho_w g z_w" }, { "math_id": 47, "text": "z_w" }, { "math_id": 48, "text": "Q=\\frac{-K A}{\\mu}\\frac{(u_b-u_a)}{L}" }, { "math_id": 49, "text": "Q" }, { "math_id": 50, "text": "K" }, { "math_id": 51, "text": "A" }, { "math_id": 52, "text": "\\frac{u_b-u_a}{L}" }, { "math_id": 53, "text": "\\mu" }, { "math_id": 54, "text": "x" }, { "math_id": 55, "text": "u_e" }, { "math_id": 56, "text": "u_e = u - \\rho_w g z" }, { "math_id": 57, "text": "z" }, { "math_id": 58, "text": "u" }, { "math_id": 59, "text": "Q=\\frac{-K A}{\\mu}\\frac{(u_{e,b}-u_{e,a})}{L}" }, { "math_id": 60, "text": "v_x=\\frac{-K }{\\mu}\\frac{d u_e}{d x}" }, { "math_id": 61, "text": "v_x = Q/A" }, { "math_id": 62, "text": "v_{px}" }, { "math_id": 63, "text": "v_{px}=\\frac{v_x}{n}" }, { "math_id": 64, "text": "v = k i" }, { "math_id": 65, "text": "k" }, { "math_id": 66, "text": "k = \\frac {K \\rho_w g} {\\mu_w}" }, { "math_id": 67, "text": "i" }, { "math_id": 68, "text": "h" }, { "math_id": 69, "text": "u_e = \\rho_w g h + Constant " }, { "math_id": 70, "text": " Constant " }, { "math_id": 71, "text": " u_e " }, { "math_id": 72, "text": "10^{-12}\\frac{m}{s}" }, { "math_id": 73, "text": "10^{-1}\\frac{m}{s}" }, { "math_id": 74, "text": "\\sigma' = \\sigma - u\\," }, { "math_id": 75, "text": "\\sigma_v" }, { "math_id": 76, "text": "H" }, { "math_id": 77, "text": "\\sigma_v = \\rho g H = \\gamma H" }, { 
"math_id": 78, "text": "u = \\rho_w g z_w " }, { "math_id": 79, "text": "z_w " }, { "math_id": 80, "text": "\\sigma' = \\sigma - u," }, { "math_id": 81, "text": " \\tau_{crit}" }, { "math_id": 82, "text": " \\sigma_n ' " }, { "math_id": 83, "text": " \\phi_{crit} '\\ " }, { "math_id": 84, "text": " \\tau_{crit} = \\sigma_n ' \\tan \\phi_{crit} '\\ " }, { "math_id": 85, "text": " \\tau_{peak} = \\sigma_n ' \\tan \\phi_{peak} '\\ " }, { "math_id": 86, "text": " \\phi_{peak}' > \\phi_{crit}' " }, { "math_id": 87, "text": " \\tau_f = c' + \\sigma_f ' \\tan \\phi '\\," }, { "math_id": 88, "text": " c ' " }, { "math_id": 89, "text": " \\phi'" }, { "math_id": 90, "text": " c '" }, { "math_id": 91, "text": " \\phi '" }, { "math_id": 92, "text": " c' " }, { "math_id": 93, "text": " c'" } ]
https://en.wikipedia.org/wiki?curid=1276437
12765269
Decision Linear assumption
Computational hardness assumption The Decision Linear (DLIN) assumption is a computational hardness assumption used in elliptic curve cryptography. In particular, the DLIN assumption is useful in settings where the decisional Diffie–Hellman assumption does not hold (as is often the case in pairing-based cryptography). The Decision Linear assumption was introduced by Boneh, Boyen, and Shacham. Informally the DLIN assumption states that given formula_0, with formula_1 random group elements and formula_2 random exponents, it is hard to distinguish formula_3 from an independent random group element formula_4. Motivation. In symmetric pairing-based cryptography the group formula_5 is equipped with a pairing formula_6 which is bilinear. This map gives an efficient algorithm to solve the decisional Diffie-Hellman problem. Given input formula_7, it is easy to check if formula_8 is equal to formula_9. This follows by using the pairing: note that formula_10 Thus, if formula_11, then the values formula_12 and formula_13 will be equal. Since this cryptographic assumption, essential to building ElGamal encryption and signatures, does not hold in this case, new assumptions are needed to build cryptography in symmetric bilinear groups. The DLIN assumption is a modification of Diffie-Hellman type assumptions to thwart the above attack. Formal definition. Let formula_5 be a cyclic group of prime order formula_14. Let formula_15, formula_16, and formula_8 be uniformly random generators of formula_5. Let formula_17 be uniformly random elements of formula_18. Define a distribution formula_19 Let formula_4 be another uniformly random element of formula_5. Define another distribution formula_20 The Decision Linear assumption states that formula_21 and formula_22 are computationally indistinguishable. Applications. Linear encryption. Boneh, Boyen, and Shacham define a public key encryption scheme by analogy to ElGamal encryption. In this scheme, a public key is the generators formula_23. The private key is two exponents such that formula_24. Encryption combines a message formula_25 with the public key to create a ciphertext formula_26. To decrypt the ciphertext, the private key can be used to compute formula_27 To check that this encryption scheme is correct, i.e. formula_28 when both parties follow the protocol, note that formula_29 Then using the fact that formula_24 yields formula_30 Further, this scheme is IND-CPA secure assuming that the DLIN assumption holds. Short group signatures. Boneh, Boyen, and Shacham also use DLIN in a scheme for group signatures. The signatures are called "short group signatures" because, with a standard security level, they can be represented in only 250 bytes. Their protocol first uses linear encryption in order to define a special type of zero-knowledge proof. Then the Fiat–Shamir heuristic is applied to transform the proof system into a digital signature. They prove this signature fulfills the additional requirements of unforgeability, anonymity, and traceability required of a group signature. Their proof relies on not only the DLIN assumption but also another assumption called the formula_31-strong Diffie-Hellman assumption. It is proven in the random oracle model. Other applications. Since its definition in 2004, the Decision Linear assumption has seen a variety of other applications. 
These include the construction of a pseudorandom function that generalizes the Naor-Reingold construction, an attribute-based encryption scheme, and a special class of non-interactive zero-knowledge proofs.
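As an illustration of the linear encryption scheme described above, the following C++ sketch instantiates it in the subgroup of quadratic residues modulo a tiny safe prime, purely so that the arithmetic is visible. The parameters, helper names, and the use of such a small group are assumptions for illustration only; a group this size offers no security, and the pairing-friendly elliptic curve setting used by Boneh, Boyen, and Shacham is not modeled here.

#include <cstdint>
#include <iostream>
#include <random>

// Toy parameters: p = 2q + 1 is a small safe prime, so the quadratic residues
// modulo p form a cyclic group G of prime order q, generated by g = 4.
const int64_t p = 1019, q = 509, g = 4;

int64_t modpow(int64_t base, int64_t exp, int64_t mod) {
    int64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

// Modular inverse modulo a prime, via Fermat's little theorem.
int64_t modinv(int64_t a, int64_t prime) { return modpow(a, prime - 2, prime); }

int main() {
    std::mt19937_64 rng(12345);  // fixed seed so the demo is repeatable
    std::uniform_int_distribution<int64_t> expo(1, q - 1);

    // Key generation: private exponents x, y; public u, v, h with u^x = v^y = h.
    int64_t x = expo(rng), y = expo(rng);
    int64_t h = modpow(g, expo(rng), p);      // a random generator of G
    int64_t u = modpow(h, modinv(x, q), p);   // u = h^(1/x)
    int64_t v = modpow(h, modinv(y, q), p);   // v = h^(1/y)

    // The message must itself be a group element; pick one for the demo.
    int64_t m = modpow(g, 123, p);

    // Encryption: c = (u^a, v^b, m * h^(a+b)) for random exponents a, b.
    int64_t a = expo(rng), b = expo(rng);
    int64_t c1 = modpow(u, a, p);
    int64_t c2 = modpow(v, b, p);
    int64_t c3 = m * modpow(h, a + b, p) % p;

    // Decryption: m' = c3 * (c1^x * c2^y)^(-1).
    int64_t mask = modpow(c1, x, p) * modpow(c2, y, p) % p;
    int64_t recovered = c3 * modinv(mask, p) % p;

    std::cout << "message   = " << m << "\n"
              << "recovered = " << recovered << "\n";
    return 0;
}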
[ { "math_id": 0, "text": "(u, \\, v, \\, h, \\, u^x, \\, v^y)" }, { "math_id": 1, "text": "u, \\, v, \\, h" }, { "math_id": 2, "text": "x, \\, y" }, { "math_id": 3, "text": "h^{x+y}" }, { "math_id": 4, "text": "\\eta" }, { "math_id": 5, "text": "G" }, { "math_id": 6, "text": "e :G \\times G \\to T" }, { "math_id": 7, "text": "(g, \\, g^a, \\, g^b, \\, h)" }, { "math_id": 8, "text": "h" }, { "math_id": 9, "text": "g^{ab}" }, { "math_id": 10, "text": "e(g^a, g^b) = e(g,g)^{ab} = e(g,g^{ab})." }, { "math_id": 11, "text": "h = g^{ab}" }, { "math_id": 12, "text": "e(g^a, g^b)" }, { "math_id": 13, "text": "e(g,h)" }, { "math_id": 14, "text": "p" }, { "math_id": 15, "text": "u" }, { "math_id": 16, "text": "v" }, { "math_id": 17, "text": "a,b" }, { "math_id": 18, "text": "\\{1, \\, 2, \\, \\dots, \\, p-1\\}" }, { "math_id": 19, "text": "D_1 = (u, \\, v, \\, h, \\, u^a, \\, v^b, \\, h^{a+b})." }, { "math_id": 20, "text": "D_2 = (u, \\, v, \\, h, \\, u^a, \\, v^b, \\, \\eta)." }, { "math_id": 21, "text": "D_1" }, { "math_id": 22, "text": "D_2" }, { "math_id": 23, "text": "u,v,h" }, { "math_id": 24, "text": "u^x = v^y = h" }, { "math_id": 25, "text": "m \\in G" }, { "math_id": 26, "text": "c := (c_1, \\, c_2, \\, c_3) = (u^a, \\, v^b, \\, m \\cdot h^{a+b})" }, { "math_id": 27, "text": "m' := c_3 \\cdot (c_1^x \\cdot c_2^y)^{-1}." }, { "math_id": 28, "text": "m' = m" }, { "math_id": 29, "text": "m' = c_3 \\cdot (c_1^x \\cdot c_2^y)^{-1} = m \\cdot h^{a+b} \\cdot ((u^a)^x \\cdot (v^b)^y)^{-1} = m \\cdot h^{a+b} \\cdot ((u^x)^a \\cdot (v^y)^b)^{-1}." }, { "math_id": 30, "text": "m' = m \\cdot h^{a+b} \\cdot (h^a \\cdot h^b)^{-1} = m \\cdot (h^{a+b} \\cdot h^{-a-b}) = m." }, { "math_id": 31, "text": "q" } ]
https://en.wikipedia.org/wiki?curid=12765269
12767224
Digital differential analyzer (graphics algorithm)
Hardware or software used for interpolation of variables over an interval In computer graphics, a digital differential analyzer (DDA) is hardware or software used for interpolation of variables over an interval between start and end point. DDAs are used for rasterization of lines, triangles and polygons. They can be extended to non-linear functions, such as perspective correct texture mapping, quadratic curves, and traversing voxels. In its simplest implementation for linear cases such as lines, the DDA algorithm interpolates values over an interval by computing for each xi the equations xi = xi−1 + 1, yi = yi−1 + m, where m is the slope of the line. This slope can be expressed in DDA as follows: formula_0 In fact any two consecutive points lying on this line segment should satisfy the equation. Performance. The DDA method can be implemented using floating-point or integer arithmetic. The native floating-point implementation requires one addition and one rounding operation per interpolated value (e.g. coordinate x, y, depth, color component etc.) and output result. This process is only efficient when an FPU with fast addition and rounding operations is available. The fixed-point integer operation requires two additions per output cycle, and in case of fractional part overflow, one additional increment and subtraction. The probability of fractional part overflows is proportional to the ratio m of the interpolated start/end values. DDAs are well suited for hardware implementation and can be pipelined for maximized throughput. Algorithm. A linear DDA starts by calculating the smaller of dy or dx for a unit increment of the other. A line is then sampled at unit intervals in one coordinate and corresponding integer values nearest the line path are determined for the other coordinate. Considering a line with positive slope, if the slope is less than or equal to 1, we sample at unit x intervals (dx=1) and compute successive y values as formula_1 formula_2 Subscript k takes integer values starting from 0 for the first point and increases by 1 until the endpoint is reached. The y value is rounded off to the nearest integer to correspond to a screen pixel. For lines with slope greater than 1, we reverse the roles of x and y, i.e., we sample at dy=1 and calculate consecutive x values as formula_3 formula_4 Similar calculations are carried out to determine pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1, we set dx=1 if formula_5, i.e., the starting extreme point is at the left. Program. DDA algorithm program in C++ (Turbo C++ / BGI graphics):

#include <graphics.h>
#include <iostream.h>
#include <conio.h>
#include <math.h>
#include <dos.h>

int main()
{
    float x, y, x1, y1, x2, y2, dx, dy, step;
    int i, gd = DETECT, gm;
    initgraph(&gd, &gm, "C:\\TURBOC3\\BGI");
    cout << "Enter the value of x1 and y1: ";
    cin >> x1 >> y1;
    cout << "Enter the value of x2 and y2: ";
    cin >> x2 >> y2;
    dx = x2 - x1;
    dy = y2 - y1;
    /* the larger of |dx| and |dy| gives the number of steps */
    if (fabs(dx) >= fabs(dy))
        step = fabs(dx);
    else
        step = fabs(dy);
    dx = dx / step;   /* increment in x per step */
    dy = dy / step;   /* increment in y per step */
    x = x1;
    y = y1;
    i = 0;
    while (i <= step)
    {
        putpixel((int)(x + 0.5), (int)(y + 0.5), 5);   /* round to the nearest pixel */
        x = x + dx;
        y = y + dy;
        i = i + 1;
        delay(100);
    }
    getch();
    closegraph();
    return 0;
}

References. http://www.museth.org/Ken/Publications_files/Museth_SIG14.pdf
[ { "math_id": 0, "text": "m = \\frac{y_{\\rm end} -y_{\\rm start}}{x_{\\rm end}-x_{\\rm start}}" }, { "math_id": 1, "text": "y_{k+1} = y_k + m" }, { "math_id": 2, "text": "x_{k+1} = x_k + 1" }, { "math_id": 3, "text": "x_{k+1} = x_k + \\frac{1}{m}" }, { "math_id": 4, "text": "y_{k+1} = y_k + 1" }, { "math_id": 5, "text": " x_{\\rm start}<x_{\\rm end}" } ]
https://en.wikipedia.org/wiki?curid=12767224
12769596
Sparse language
In computational complexity theory, a sparse language is a formal language (a set of strings) such that the complexity function, counting the number of strings of length "n" in the language, is bounded by a polynomial function of "n". They are used primarily in the study of the relationship of the complexity class NP with other classes. The complexity class of all sparse languages is called SPARSE. Sparse languages are called "sparse" because there are a total of 2^"n" strings of length "n", and if a language only contains polynomially many of these, then the proportion of strings of length "n" that it contains rapidly goes to zero as "n" grows. All unary languages are sparse. An example of a nontrivial sparse language is the set of binary strings containing exactly "k" 1 bits for some fixed "k"; for each "n", there are only formula_0 strings in the language, which is bounded by "n"^"k".
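To make the polynomial census bound concrete, here is a small C++ sketch (an illustration only) that counts the strings of each length in the example language, i.e., computes formula_0, and compares it with the bound "n"^"k" and with the total number of length-"n" strings.

#include <cstdint>
#include <iostream>

// Census of the example sparse language {x in {0,1}^n : x has exactly k ones}:
// the number of member strings of length n is C(n, k), which is at most n^k.
uint64_t binomial(uint64_t n, uint64_t k) {
    if (k > n) return 0;
    uint64_t result = 1;
    for (uint64_t i = 1; i <= k; ++i)
        result = result * (n - k + i) / i;  // exact: each prefix is itself a binomial coefficient
    return result;
}

uint64_t ipow(uint64_t base, uint64_t exp) {
    uint64_t r = 1;
    while (exp-- > 0) r *= base;
    return r;
}

int main() {
    const uint64_t k = 3;
    for (uint64_t n = 1; n <= 10; ++n)
        std::cout << "n = " << n
                  << ": strings in language = " << binomial(n, k)
                  << ", polynomial bound n^k = " << ipow(n, k)
                  << ", all strings 2^n = " << (uint64_t(1) << n) << "\n";
    return 0;
}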
[ { "math_id": 0, "text": "\\binom{n}{k}" }, { "math_id": 1, "text": "\\textbf{NP}\\subseteq \\textbf{P}/\\text{poly}" } ]
https://en.wikipedia.org/wiki?curid=12769596
12769975
Aggregate Level Simulation Protocol
The Aggregate Level Simulation Protocol (ALSP) is a protocol and supporting software that enables simulations to interoperate with one another. Replaced by the High Level Architecture (simulation) (HLA), it was used by the US military to link analytic and training simulations. ALSP consists of the protocol itself and the supporting infrastructure software described below. History. In 1990, the Defense Advanced Research Projects Agency (DARPA) employed The MITRE Corporation to study the application of distributed interactive simulation principles employed in SIMNET to aggregate-level constructive training simulations. Based on prototype efforts, a community-based experiment was conducted in 1991 to extend SIMNET to link the US Army's Corps Battle Simulation (CBS) and the US Air Force's Air Warfare Simulation (AWSIM). The success of the prototype and users' recognition of the value of this technology to the training community led to development of production software. The first ALSP confederation, providing air-ground interactions between CBS and AWSIM, supported three major exercises in 1992. By 1995, ALSP had transitioned to a multi-Service program with simulations representing the US Army (CBS), the US Air Force (AWSIM), the US Navy (RESA), the US Marine Corps (MTWS), electronic warfare (JECEWSI), logistics (CSSTSS), and intelligence (TACSIM). The program had also transitioned from DARPA's research and development emphasis to mainstream management by the US Army's Program Executive Office for Simulation, Training, and Instrumentation (PEO STRI). Contributions. ALSP developed and demonstrated key aspects of distributed simulation, many of which were applied in the development of HLA. Motivation. In 1989, the Warrior Preparation Center (WPC) in Einsiedlerhof, Germany, hosted the computerized military exercise ACE-89. The Defense Advanced Research Projects Agency (DARPA) used ACE-89 as a technology insertion opportunity by funding deployment of the Defense Simulation Internet (DSI). Its packetized video teleconferencing brought general officers of NATO nations face-to-face during a military exercise for the first time; this was well received. But the software application of DSI, distribution of Ground Warfare Simulation (GRWSIM), was less successful. The GRWSIM simulation was unreliable and its distributed database was inconsistent, degrading the effectiveness of the exercise. DARPA was funding development of a distributed tank trainer system called SIMNET where individual, computerized, tank-crew trainers were connected over local area networks and the DSI to cooperate in a single, virtual battlefield. The success of SIMNET, the disappointment of ACE-89, and the desire to combine existing combat simulations prompted DARPA to initiate research that led to ALSP. Basic Tenets. DARPA sponsored the design of a general interface between large, existing, aggregate-level combat simulations. Aggregate-level combat simulations use Lanchestrian models of combat rather than individual physical weapon models and are typically used for high-level training. Despite representational differences, several principles of SIMNET applied to aggregate-level simulations. The ALSP challenge had requirements beyond those of SIMNET. Conceptual Framework. A conceptual framework is an organizing structure of concepts that facilitates simulation model development. Common conceptual frameworks include event scheduling, activity scanning and process interaction.
The ALSP conceptual framework is object-based where a model is composed of objects that are characterized by attributes to which values are assigned. Object classes are organized hierarchically in much the same manner as with object-oriented programming languages. ALSP supports a confederation of simulations that coordinate using a common model. To design a mechanism that permits existing simulations to interact, two strategies are possible: (1) define an infrastructure that translates between the representations in each simulation, or (2) define a common representational scheme and require all simulations to map to that scheme. The first strategy requires few perturbations to existing simulations; interaction is facilitated entirely through the interconnection infrastructure. However, this solution does not scale well. Because of an underlying requirement for scalability, the ALSP design adopted the second strategy. ALSP prescribes that each simulation maps between the representational scheme of the confederation and its own representational scheme. This mapping represents one of the three ways in which a simulation must be altered to participate in an ALSP confederation. The remaining modifications are: In stand-alone simulations, objects come into (and go out of) existence with the passage of simulation time and the disposition of these objects is solely the purview of the simulation. When acting within a confederation, the simulation-object relationship is more complicated. The simulation-object ownership property is dynamic, i.e. during its lifetime an object may be owned by more than one simulation. In fact, for any value of simulation time, several simulations may own different attributes of a given object. By convention, a simulation owns an object if it owns the "identifying" attribute of the object. Owning an object's attribute means that a simulation is responsible for calculating and reporting changes to the value of the attribute. Objects not owned by a particular simulation but within the area of perception for the simulation are known as ghosts. Ghosts are local copies of objects owned by other simulations. When a simulation creates an object, it reports this fact to the confederation to let other simulations create ghosts. Likewise, when a simulation deletes an object, it reports this fact to enable ghost deletion. Whenever a simulation takes an action between one of its objects and a ghost, the simulation must report this to the confederation. In the parlance of ALSP, this is an interaction. These fundamental concepts provide the basis for the remainder of the presentation. The term confederation model describes the object hierarchy, attributes and interactions supported by a confederation. ALSP Infrastructure Software (AIS). The object-based conceptual framework adopted by ALSP defines classes of information that must be distributed. The ALSP Infrastructure Software (AIS) provides data distribution and process coordination. Principal components of AIS are the ALSP Common Module (ACM) and the ALSP Broadcast Emulator (ABE). ALSP Common Module (ACM). The ALSP Common Module (ACM) provides a common interface for all simulations and contains the essential functionality for ALSP. One ACM instance exists for each simulation in a confederation. ACM services require time management and object management; they include: Time management. Joining and departing a confederation is an integral part of time management process. 
When a simulation joins a confederation, all other ACMs in the confederation create input message queues for the new simulation. Conversely, when a simulation departs a confederation, the other ACMs delete input message queues for that simulation. ALSP time management facilities support discrete event simulation using either asynchronous (next-event) or synchronous (time-stepped) time advance mechanisms. The mechanism to support next-event simulations is The mechanism to support time-stepped simulation is: AIS includes a deadlock avoidance mechanism using null messages. The mechanism requires that the processes have exploitable lookahead characteristics. Object management. The ACM administers attribute database and filter information. The attribute database maintains objects known to the simulation, either owned or ghosted, and attributes of those objects that the simulation currently owns. For any object class, attributes may be members of Information flow across the network can be further restricted through filters. Filtering provides discrimination by (1) object class, (2) attribute value or range, and (3) geographic location. Filters also define the interactions relevant to a simulation. The ownership and filtering information maintained by the ACM provides the information necessary to coordinate the transfer of attribute ownership between simulations. ALSP Broadcast Emulator (ABE). An ALSP Broadcast Emulator (ABE) facilitates the distribution of ALSP information. It receives a message on one of its communications paths and retransmits the message on all of its remaining communications paths. This permits configurations where all ALSP components are local to one another (on the same computer or on a local area network). It also permits configurations where sets of ACMs communicate with their own local ABE, with inter-ABE communication over wide area networks. Communication Scheme. The ALSP communication scheme consists of (1) an inter-component communications model that defines the transport layer interface that connects ALSP components, (2) a layered protocol for simulation-to-simulation communication, object management, and time management, (3) a message filtering scheme to define the information of interest to a simulation, and (4) a mechanism for intelligent message distribution. Inter-component Communications Model. AIS employs a persistent connection communications model to provide the inter-component communications. The transport layer interface used to provide inter-component communications was dictated by simulation requirements and the transport layer interfaces on AIS-supporting operating systems: local VMS platforms used shared mailboxes; non-local VMS platforms used either Transparent DECnet or TCP/IP; and UNIX-like platforms used TCP/IP. ALSP Protocol. The ALSP protocol is based on a set of orthogonal issues that comprise ALSP's problem space: simulation-to-simulation communication, object management, and time management. These issues are addressed by a layered protocol that has at the top a simulation protocol with underlying simulation/ACM, object management, time management, and event distribution protocols. Simulation Protocol. The simulation protocol is the main level of the ALSP protocol. It consists of four message types: The simulation protocol is text-based. It is defined by an LALR(1) context-free grammar. The semantics of the protocol are confederation-dependent, where the set of classes, class attributes, interactions, and interaction parameters are variable. 
Therefore, the syntactical representation of the simulation protocol may be defined without a priori knowledge of the semantics of the objects and interactions of any particular confederation. Simulation/ACM Connection Protocol. The simulation/ACM connection protocol provides services for managing the connection between a simulation and its ACM and a method of information exchange between a simulation and its ACM. Two services control distribution of simulation protocol messages: events and dispatches. Event messages are time-stamped and delivered in a temporally consistent order. Dispatch messages are delivered as soon as possible, without regard for simulation time. Additional protocol messages provide connection state, filter registration, attribute lock control, confederation save control, object resource control, and time control services. Object Management Protocol. The object management protocol is a peer-level protocol that sits below the simulation protocol and provides object management services. It is used solely by ACMs for object attribute creation, acquisition, release, and verification (of the consistency of the distributed object database). These services allow AIS to manage distributed object ownership. Distributed object ownership presumes that no single simulation must own all objects in a confederation, but many simulations require knowledge of some objects. A simulation uses simulation protocol update messages to discover objects owned by other simulations. If this simulation is interested in the objects, it can ghost them (track their locations and state) and model interactions to them from owned objects. Locks implement attribute ownership. A primary function of the object management protocol is to ensure that a simulation only updates attributes for which it has acquired a lock. The object manager in the ACM manages the objects and object attributes of the owned and ghosted objects known to the ACM. Services provided by the simulation/ACM protocol are used by the simulations to interact with the ACM's attribute locking mechanism. The coordination of the status, request, acquisition, and release of object attributes between ACMs uses the object management protocol. Each attribute of each object known to a given ACM has a status that assumes one of three values: From the ACM's perspective, objects come into existence through the registration process performed by its simulation or through the discovery of objects registered by other simulations. The initial state of attribute locks for registered objects and discovered objects is as follows: Time Management Protocol. The time management protocol is also a peer-level protocol that sits below the simulation protocol. It provides time management services for synchronizing simulation time among ACMs. The protocol provides services for the distributed coordination of a simulation's entrance into the confederation, time progression, and confederation saves. The join/resign services and time synchronization mechanisms are described in the Time management section above. The save mechanism provides fault tolerance. Coordination is required to produce a consistent snapshot of all ACMs, translators and simulations for a particular value of simulation time. Message Filtering. The ACM uses simulation message filtering to evaluate the content of a message received from the confederation. The ACM delivers to its simulation those messages that are of interest and pass the filtering criteria, and discards those that are not. 
The ACM filters two types of messages: update messages and interaction messages. "Update messages." The ACM evaluates update messages based on the update message filtering criteria that the simulation provides. As discussed earlier, when an ACM receives an update message there are four possible outcomes: (1) the ACM discards the message, (2) the ACM sends the simulation a create message, (3) the ACM sends the simulation the update message, or (4) the ACM sends the simulation a delete message. "Interaction messages." An ACM may discard interaction messages because of the kind parameter. The kind parameter has a hierarchical structure similar to the object class structure. The simulation informs its ACM of the interaction kinds that should pass or fail the interaction filter. Message Distribution. To minimize message traffic between components in an ALSP confederation, AIS employs a form of intelligent message routing that uses the Event Distribution Protocol (EDP). The EDP allows ACMs to inform the other AIS components about the update and interaction filters registered by their simulations. In the case of update messages, distribution of this information allows ACMs to distribute data only on classes (and attributes of classes) that are of interest to the confederation. The ABE also uses this information to send only information that is of interest to the components it serves. For interaction messages, the process is similar, except that the kind parameter in the interaction message determines where the message is sent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
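The four update-message outcomes listed above can be summarised in a short sketch. The class below is not part of the ALSP software, and its names (GhostingAcm, the "side" attribute) are invented for illustration; it only shows one plausible way a component in the ACM's position could combine a filter predicate with its ghost bookkeeping.

```python
class GhostingAcm:
    """Illustrative sketch of the four update-message outcomes described above."""

    def __init__(self, passes_filter):
        self.ghosts = set()                   # object ids currently ghosted locally
        self.passes_filter = passes_filter    # predicate: does this update interest us?

    def handle_update(self, object_id, update):
        interested = self.passes_filter(update)
        if not interested and object_id not in self.ghosts:
            return ("discard", object_id)     # outcome 1: never of interest
        if interested and object_id not in self.ghosts:
            self.ghosts.add(object_id)
            return ("create", object_id)      # outcome 2: start ghosting the object
        if interested:
            return ("update", object_id)      # outcome 3: refresh an existing ghost
        self.ghosts.discard(object_id)
        return ("delete", object_id)          # outcome 4: object has left our filters

# Example: filter on a hypothetical 'side' attribute
acm = GhostingAcm(lambda upd: upd.get("side") == "blue")
print(acm.handle_update(17, {"side": "blue"}))   # ('create', 17)
print(acm.handle_update(17, {"side": "blue"}))   # ('update', 17)
print(acm.handle_update(17, {"side": "red"}))    # ('delete', 17)
```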
[ { "math_id": 0, "text": "(T, T + ?T]" }, { "math_id": 1, "text": "T+?T" } ]
https://en.wikipedia.org/wiki?curid=12769975
1277015
Steering law
The steering law in human–computer interaction and ergonomics is a predictive model of human movement that describes the time required to navigate, or "steer", through a 2-dimensional tunnel. The tunnel can be thought of as a path or trajectory on a plane that has an associated thickness or width, where the width can vary along the tunnel. The goal of a steering task is to navigate from one end of the tunnel to the other as quickly as possible, without touching the boundaries of the tunnel. A real-world example that approximates this task is driving a car down a road that may have twists and turns, where the car must navigate the road as quickly as possible without touching the sides of the road. The steering law predicts both the instantaneous speed at which we may navigate the tunnel, and the total time required to navigate the entire tunnel. The steering law has been independently discovered and studied three times (Rashevsky, 1959; Drury, 1971; Accot and Zhai, 1997). Its most recent discovery has been within the human–computer interaction community, which has resulted in the most general mathematical formulation of the law. The steering law in human–computer interaction. Within human–computer interaction, the law was rediscovered by Johnny Accot and Shumin Zhai, who mathematically derived it in a novel way from Fitts's law using integral calculus, experimentally verified it for a class of tasks, and developed the most general mathematical statement of it. Some researchers within this community have sometimes referred to the law as the Accot–Zhai steering law or Accot's law (Accot is pronounced "ah-cot" in English and "ah-koh" in French). In this context, the steering law is a predictive model of human movement, concerning the speed and total time with which a user may steer a pointing device (such as a mouse or stylus) through a 2D tunnel presented on a screen (i.e. with a bird's eye view of the tunnel), where the user must travel from one end of the path to the other as quickly as possible, while staying within the confines of the path. One potential practical application of this law is in modelling a user's performance in navigating a hierarchical cascading menu. Many researchers in human–computer interaction, including Accot himself, find it surprising or even amazing that the steering law model predicts performance as well as it does, given the almost purely mathematical way in which it was derived. Some consider this a testament to the robustness of Fitts's law. In its general form, the steering law can be expressed as formula_0 where "T" is the average time to navigate through the path, "C" is the path parameterized by "s", "W(s)" is the width of the path at "s", and "a" and "b" are experimentally fitted constants. In general, the path may have a complicated curvilinear shape (such as a spiral) with variable thickness "W(s)". Simpler paths allow for mathematical simplifications of the general form of the law. For example, if the path is a straight tunnel of constant width "W", the equation reduces to formula_1 where "A" is the length of the path. We see, especially in this simplified form, a "speed–accuracy" tradeoff, somewhat similar to that in Fitts's law. We can also differentiate both sides of the integral equation with respect to "s" to obtain the local, or instantaneous, form of the law: formula_2 which says that the instantaneous speed of the user is proportional to the width of the tunnel. 
This makes intuitive sense if we consider the analogous task of driving a car down a road: the wider the road, the faster we can drive and still stay on the road, even if there are curves in the road. Derivation of the model from Fitts's law. This derivation is only meant as a high-level sketch. It lacks the illustrations of, and may differ in detail from, the derivation given by Accot and Zhai (1997). Assume that the time required for goal passing (i.e. passing a pointer through a goal at distance "A" and of width "W", oriented perpendicular to the axis of motion) can be modeled with this form of Fitts's law: formula_3 Then, a straight tunnel of length "A" and constant width "W" can be approximated as a sequence of "N" evenly spaced goals, each separated from its neighbours by a distance of "A/N". We can let "N" grow arbitrarily large, making the distance between successive goals become infinitesimal. The total time to navigate through all the goals, and thus through the tunnel, is the limit, as "N" grows, of the sum of the "N" individual goal-passing times. Note that "b" is an experimentally fitted constant and let formula_4. Therefore, "T"straight tunnel = formula_5. Next, consider a curved tunnel of total length "A", parameterized by "s" varying from 0 to "A". Let "W(s)" be the variable width of the tunnel. The tunnel can be approximated as a sequence of "N" straight tunnels, numbered 1 through "N", each located at "si" where "i" = 1 to "N", and each of length "s""i"+1 − "s""i" and of width "W"("s""i"). We can let "N" grow arbitrarily large, making the length of successive straight tunnels become infinitesimal. The total time to navigate through the curved tunnel is then the limit of the sum of the times for the individual straight tunnels, yielding the general form of the steering law. Modeling steering in layers. The steering law has been extended to predict movement time for steering in layers of thickness "t" (Kattinakere et al., 2007). The relation is given by formula_6
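To make the model concrete, the following sketch evaluates the steering-law integral numerically for an arbitrary width profile and checks it against the closed form for a straight tunnel of constant width. The constants "a" and "b", the pixel units, and the width profiles are illustrative values rather than fitted experimental data.

```python
import numpy as np

def steering_time(width, length, a=0.2, b=0.1, samples=100_000):
    """Predicted steering time T = a + b * integral of ds / W(s) over the tunnel.

    width   -- function returning the tunnel width W(s) at arc length s
    length  -- total arc length A of the tunnel
    a, b    -- illustrative regression constants (seconds); real values are fitted per task
    """
    s = np.linspace(0.0, length, samples)
    ds = s[1] - s[0]
    return a + b * np.sum(ds / width(s))   # simple Riemann sum of ds / W(s)

# Straight tunnel of constant width: the integral reduces to A / W
A, W = 800.0, 40.0   # illustrative lengths in pixels
print(steering_time(lambda s: np.full_like(s, W), A))   # ~= 0.2 + 0.1 * 800 / 40 = 2.2
print(0.2 + 0.1 * A / W)

# Tunnel narrowing linearly from 60 px to 20 px: narrow sections dominate the integral
print(steering_time(lambda s: 60.0 - 40.0 * s / A, A))  # larger than the constant-width case
```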
[ { "math_id": 0, "text": "T=a + b \\int_{C} \\frac{ds}{W(s)}" }, { "math_id": 1, "text": "T=a + b \\frac{A}{W}" }, { "math_id": 2, "text": "\\frac{ds}{dT} = \\frac{W(s)}{b}" }, { "math_id": 3, "text": "T_\\text{goal} = b \\log_2 \\left( \\frac{A}{W} + 1 \\right)" }, { "math_id": 4, "text": "\\tilde{b} := \\frac{b}{\\ln(2)}" }, { "math_id": 5, "text": "\\tilde{b} \\cdot \\frac{A}{W}" }, { "math_id": 6, "text": "T = a+b\\sqrt{(A/W)^2+(A/t)^2}." } ]
https://en.wikipedia.org/wiki?curid=1277015
12770444
Cooling flow
A cooling flow should occur when the intracluster medium (ICM) in the centre of a galaxy cluster is cooling rapidly, at a rate of tens to thousands of solar masses per year. This should happen because the ICM (a plasma) quickly loses its energy by the emission of X-rays. The X-ray brightness of the ICM is proportional to the square of its density, which rises steeply towards the centres of many clusters. The temperature also falls to typically a third or a half of the temperature in the outskirts of the cluster. The typical predicted timescale for the ICM to cool is relatively short, less than a billion years. As material in the centre of the cluster "cools out", the pressure of the overlying ICM should cause more material to flow inwards (the cooling flow). In a steady state, the rate of "mass deposition", i.e. the rate at which the plasma cools, is given by formula_0 where "L" is the bolometric (i.e. over the entire spectrum) luminosity of the cooling region, "T" is its temperature, "k" is the Boltzmann constant and "μm" is the mean molecular mass. Cooling flow problem. It is currently thought that the very large amounts of expected cooling are in reality much smaller, as there is little evidence for cool X-ray emitting gas in many of these systems. This is the "cooling flow problem". Several mechanisms have been proposed to explain why so little cooling is seen. Heating by AGN is the most popular explanation, as AGN emit large amounts of energy over their lifetimes, and some of the alternative proposals have theoretical problems.
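As a rough illustration of the mass-deposition formula, the snippet below evaluates it in SI units for round-number inputs (a cooling-region luminosity of 10^44 erg/s, a temperature of 3×10^7 K, and a mean molecular mass of about 0.6 hydrogen masses); these are representative orders of magnitude only, not measurements of any particular cluster.

```python
# Illustrative evaluation of the steady-state mass deposition rate
# Mdot = (2/5) * L * mu * m_H / (k * T), in SI units.

k_B   = 1.380649e-23      # Boltzmann constant, J/K
m_H   = 1.6726e-27        # hydrogen (proton) mass, kg
M_sun = 1.989e30          # solar mass, kg
year  = 3.156e7           # seconds per year

L  = 1e44 * 1e-7          # cooling-region luminosity: 1e44 erg/s converted to W
T  = 3e7                  # ICM temperature in the cooling region, K
mu = 0.6                  # mean molecular mass in units of m_H (fully ionized plasma)

mdot = (2.0 / 5.0) * L * mu * m_H / (k_B * T)                 # kg/s
print(f"{mdot * year / M_sun:.0f} solar masses per year")     # roughly 150
```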
[ { "math_id": 0, "text": "\n\\dot{M} = \\frac{2}{5} \\frac{L \\mu m}{kT},\n" } ]
https://en.wikipedia.org/wiki?curid=12770444
12772382
Banach–Stone theorem
In mathematics, the Banach–Stone theorem is a classical result in the theory of continuous functions on topological spaces, named after the mathematicians Stefan Banach and Marshall Stone. In brief, the Banach–Stone theorem allows one to recover a compact Hausdorff space "X" from the Banach space structure of the space "C"("X") of continuous real- or complex-valued functions on "X". If one is allowed to invoke the algebra structure of "C"("X") this is easy – we can identify "X" with the spectrum of "C"("X"), the set of algebra homomorphisms into the scalar field, equipped with the weak*-topology inherited from the dual space "C"("X")*. The Banach-Stone theorem avoids reference to multiplicative structure by recovering "X" from the extreme points of the unit ball of "C"("X")*. Statement. For a compact Hausdorff space "X", let "C"("X") denote the Banach space of continuous real- or complex-valued functions on "X", equipped with the supremum norm ‖·‖∞. Given compact Hausdorff spaces "X" and "Y", suppose "T" : "C"("X") → "C"("Y") is a surjective linear isometry. Then there exists a homeomorphism "φ" : "Y" → "X" and a function "g" ∈ "C"("Y") with formula_0 such that formula_1 The case where "X" and "Y" are compact metric spaces is due to Banach, while the extension to compact Hausdorff spaces is due to Stone. In fact, they both prove a slight generalization—they do not assume that "T" is linear, only that it is an isometry in the sense of metric spaces, and use the Mazur–Ulam theorem to show that "T" is affine, and so formula_2 is a linear isometry. Generalizations. The Banach–Stone theorem has some generalizations for vector-valued continuous functions on compact, Hausdorff topological spaces. For example, if "E" is a Banach space with trivial centralizer and "X" and "Y" are compact, then every linear isometry of "C"("X"; "E") onto "C"("Y"; "E") is a strong Banach–Stone map. A similar technique has also been used to recover a space "X" from the extreme points of the duals of some other spaces of functions on "X". The noncommutative analog of the Banach-Stone theorem is the folklore theorem that two unital C*-algebras are isomorphic if and only if they are completely isometric (i.e., isometric at all matrix levels). Mere isometry is not enough, as shown by the existence of a C*-algebra that is not isomorphic to its opposite algebra (which trivially has the same Banach space structure). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "| g(y) | = 1 \\mbox{ for all } y \\in Y" }, { "math_id": 1, "text": "(T f) (y) = g(y) f(\\varphi(y)) \\mbox{ for all } y \\in Y, f \\in C(X)." }, { "math_id": 2, "text": "T - T(0)" } ]
https://en.wikipedia.org/wiki?curid=12772382
12772705
Multipliers and centralizers (Banach spaces)
In mathematics, multipliers and centralizers are algebraic objects in the study of Banach spaces. They are used, for example, in generalizations of the Banach–Stone theorem. Definitions. Let ("X", ‖·‖) be a Banach space over a field K (either the real or complex numbers), and let Ext("X") be the set of extreme points of the closed unit ball of the continuous dual space "X"∗. A continuous linear operator "T" : "X" → "X" is said to be a multiplier if every point "p" in Ext("X") is an eigenvector for the adjoint operator "T"∗ : "X"∗ → "X"∗. That is, there exists a function "a""T" : Ext("X") → K such that formula_0 making formula_1 the eigenvalue corresponding to "p". Given two multipliers "S" and "T" on "X", "S" is said to be an adjoint for "T" if formula_2 i.e. "a""S" agrees with "a""T" in the real case, and with the complex conjugate of "a""T" in the complex case. The centralizer (or commutant) of "X", denoted "Z"("X"), is the set of all multipliers on "X" for which an adjoint exists.
[ { "math_id": 0, "text": "p \\circ T = a_{T} (p) p \\; \\mbox{ for all } p \\in \\mathrm{Ext} (X)," }, { "math_id": 1, "text": "a_{T} (p)" }, { "math_id": 2, "text": "a_{S} = \\overline{a_{T}}," } ]
https://en.wikipedia.org/wiki?curid=12772705
1277363
Branched surface
In mathematics, a branched surface is a generalization of both surfaces and train tracks. Definition. A surface is a space that locally looks like formula_0 (a Euclidean space, up to homeomorphism). Consider, however, the space obtained by taking the quotient of two copies A, B of formula_0 under the identification of a closed half-space of each with a closed half-space of the other. This will be a surface except along a single line. Now, pick another copy C of formula_0 and glue it and A together along half-spaces so that the singular line of this gluing is transverse in A to the previous singular line. Call this complicated space K. A branched surface is a space that is locally modeled on K. Weight. A branched manifold can have a weight assigned to various of its subspaces; if this is done, the space is often called a weighted branched manifold. Weights are non-negative real numbers and are assigned to subspaces "N" that satisfy the following: That is, "N" is a component of the branched surface minus its branching set. Weights are assigned so that if a component branches into two other components, then the sum of the weights of the two unidentified halfplanes of that neighborhood is the weight of the identified halfplane. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}^2" }, { "math_id": 1, "text": "\\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=1277363
12774077
Aronszajn tree
In set theory, an Aronszajn tree is a tree of uncountable height with no uncountable branches and no uncountable levels. For example, every Suslin tree is an Aronszajn tree. More generally, for a cardinal "κ", a "κ"-Aronszajn tree is a tree of height "κ" in which all levels have size less than "κ" and all branches have height less than "κ" (so Aronszajn trees are the same as formula_0-Aronszajn trees). They are named for Nachman Aronszajn, who constructed an Aronszajn tree in 1934; his construction was described by . A cardinal "κ" for which no "κ"-Aronszajn trees exist is said to have the tree property (sometimes the condition that "κ" is regular and uncountable is included). Existence of κ-Aronszajn trees. Kőnig's lemma states that formula_1-Aronszajn trees do not exist. The existence of Aronszajn trees (formula_2-Aronszajn trees) was proven by Nachman Aronszajn, and implies that the analogue of Kőnig's lemma does not hold for uncountable trees. The existence of formula_3-Aronszajn trees is undecidable in ZFC: more precisely, the continuum hypothesis implies the existence of an formula_3-Aronszajn tree, and Mitchell and Silver showed that it is consistent (relative to the existence of a weakly compact cardinal) that no formula_3-Aronszajn trees exist. Jensen proved that V = L implies that there is a "κ"-Aronszajn tree (in fact a "κ"-Suslin tree) for every infinite successor cardinal "κ". showed (using a large cardinal axiom) that it is consistent that no formula_4-Aronszajn trees exist for any finite "n" other than 1. If "κ" is weakly compact then no "κ"-Aronszajn trees exist. Conversely, if "κ" is inaccessible and no "κ"-Aronszajn trees exist, then "κ" is weakly compact. Special Aronszajn trees. An Aronszajn tree is called special if there is a function "f" from the tree to the rationals so that "f"("x") &lt; "f"("y") whenever "x" &lt; "y". Martin's axiom MA(formula_0) implies that all Aronszajn trees are special, a proposition sometimes abbreviated by EATS. The stronger proper forcing axiom implies the stronger statement that for any two Aronszajn trees there is a club set of levels such that the restrictions of the trees to this set of levels are isomorphic, which says that in some sense any two Aronszajn trees are essentially isomorphic . On the other hand, it is consistent that non-special Aronszajn trees exist, and this is also consistent with the generalized continuum hypothesis plus Suslin's hypothesis . Construction of a special Aronszajn tree. A special Aronszajn tree can be constructed as follows. The elements of the tree are certain well-ordered sets of rational numbers with supremum that is rational or −∞. If "x" and "y" are two of these sets then we define "x" ≤ "y" (in the tree order) to mean that "x" is an initial segment of the ordered set "y". For each countable ordinal α we write "U""α" for the elements of the tree of level α, so that the elements of "U""α" are certain sets of rationals with order type α. The special Aronszajn tree "T" is the union of the sets "U""α" for all countable α. We construct the countable levels "U""α" by transfinite induction on α as follows starting with the empty set as "U""0": The function "f"("x") = sup "x" is rational or −∞, and has the property that if "x" &lt; "y" then "f"("x") &lt; "f"("y"). Any branch in "T" is countable as "f" maps branches injectively to −∞ and the rationals. "T" is uncountable as it has a non-empty level "U""α" for each countable ordinal "α" which make up the first uncountable ordinal. 
This proves that "T" is a special Aronszajn tree. This construction can be used to construct "κ"-Aronszajn trees whenever "κ" is a successor of a regular cardinal and the generalized continuum hypothesis holds, by replacing the rational numbers by a more general "η" set.
[ { "math_id": 0, "text": "\\aleph_1" }, { "math_id": 1, "text": "\\aleph_0" }, { "math_id": 2, "text": "=\\aleph_1" }, { "math_id": 3, "text": "\\aleph_2" }, { "math_id": 4, "text": "\\aleph_n" } ]
https://en.wikipedia.org/wiki?curid=12774077
1277585
Dynamometer car
A dynamometer car is a railroad maintenance of way car used for measuring various aspects of a locomotive's performance. Measurements include tractive effort (pulling force), power, top speed, etc. History. The first dynamometer car was probably one built in about 1838 by the "Father of Computing" Charles Babbage. Working for the Great Western Railway of Great Britain, he equipped a passenger carriage to be placed between an engine and train and record data on a continuously moving roll of paper. The recorded data included the pulling force of the engine, a plot of the path of the carriage and the vertical shake of the carriage. The work was undertaken to help support the position of the Great Western Railway in the controversy over standardizing the British track gauge. In the United States, the Pennsylvania Railroad began using dynamometer cars in the 1860s. The first modern dynamometer car in the United States was built in 1874 by P. H. Dudley for the New York Central Railroad. The early cars used a system of springs and mechanical linkages to effectively use the front coupler on the car as a scale and directly measure the force on the coupler. The car would also have a means to measure the speed of the train. Later versions used a hydraulic cylinder and line to transmit the force to the recording device. Modern dynamometer cars typically use electronic solid state measuring devices and instrumentation such as strain gauges. An LNER dynamometer car was used to record No 4468 Mallard's speed record in 1938, and has been preserved at the National Railway Museum in York, England. This was also used for the British Railways 1948 Locomotive Exchange Trials along with two other dynamometer cars, both of which have also survived into preservation. A car originally belonging to the Chicago, Burlington and Quincy Railroad is preserved at the National Railroad Museum located in Green Bay, Wisconsin. A car built for the Chicago, Milwaukee, St. Paul and Pacific Railroad is preserved at the Illinois Railway Museum. Usage. While the principal purpose of the dynamometer car was to measure the power output of the locomotive, other data were typically collected, such as smoke box data, throttle settings and valve cut offs, fuel burn rates, and water usage to determine the overall performance and efficiency of the locomotive. Data would typically be recorded on time-indexed continuous paper recording rolls for the pull and velocity. Power would later be manually calculated from these data on early cars. Some later cars were equipped with a mechanical integrator to directly record the power. A separate use for the car was to test a particular rail route to rate it for tonnage, based on a run with a dynamometer car and recording the effect of the grades and curvature on the capacity and resulting power requirements for that line. Power calculations. The operating principle of the dynamometer car is based on the basic equation for power being equal to force times distance over time. formula_0 This equation can be reduced to power equals force times velocity: formula_1 In other words, the instantaneous power output of the locomotive can be calculated by measuring the pull on the coupler and multiplying by the current speed. For example, a drawbar pull of 50,000 lbf at 30 mph gives formula_2 Converting to horsepower gives: formula_3 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
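The arithmetic in the worked figures above is easy to reproduce in a few lines; the 50,000 lbf pull and 30 mph speed are simply the example values used in the formulas, not data from any particular test run.

```python
def drawbar_horsepower(pull_lbf, speed_mph):
    """Instantaneous power at the coupler: P = F * V, converted to horsepower."""
    FT_PER_MILE = 5280.0
    S_PER_HOUR = 3600.0
    HP_PER_FTLBF_S = 1.0 / 550.0
    speed_ft_per_s = speed_mph * FT_PER_MILE / S_PER_HOUR   # 30 mph -> 44 ft/s
    power_ftlbf_s = pull_lbf * speed_ft_per_s               # ft·lbf per second
    return power_ftlbf_s * HP_PER_FTLBF_S

print(drawbar_horsepower(50_000, 30))   # 4000.0 hp, as in the example above
```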
[ { "math_id": 0, "text": "P = \\frac{F\\cdot d}{t}" }, { "math_id": 1, "text": "P = F\\cdot\\frac{d}{t}=F\\cdot V" }, { "math_id": 2, "text": "P =50,000 ~ \\text{lbf} \\cdot \\frac{30 ~ \\text{mi}} {\\text{h}} \\cdot \\frac{5280 ~ \\text{ft}}{\\text{mi}} \\cdot \\frac{\\text{h}}{3600 ~ \\text{s}} = 2,200,000 ~ \\frac{\\text{ft} \\cdot \\text{lbf}}{\\text{s}}" }, { "math_id": 3, "text": "P = 2,200,000 ~ \\frac{\\text{ft} \\cdot \\text{lbf}} {\\text{s}} \\cdot \\frac{1 ~ \\text{hp}}{550 ~ \\text{ft} \\cdot \\text{lbf} / \\text{s}} = 4,000 ~ \\text{hp}" } ]
https://en.wikipedia.org/wiki?curid=1277585
1277699
G-structure on a manifold
Structure group sub-bundle on a tangent frame bundle In differential geometry, a "G"-structure on an "n"-manifold "M", for a given structure group "G", is a principal "G"-subbundle of the tangent frame bundle F"M" (or GL("M")) of "M". The notion of "G"-structures includes various classical structures that can be defined on manifolds, which in some cases are tensor fields. For example, for the orthogonal group, an O("n")-structure defines a Riemannian metric, and for the special linear group an SL("n",R)-structure is the same as a volume form. For the trivial group, an {"e"}-structure consists of an absolute parallelism of the manifold. Generalising this idea to arbitrary principal bundles on topological spaces, one can ask if a principal formula_0-bundle over a group formula_0 "comes from" a subgroup formula_1 of formula_0. This is called reduction of the structure group (to formula_1). Several structures on manifolds, such as a complex structure, a symplectic structure, or a Kähler structure, are "G"-structures with an additional integrability condition. Reduction of the structure group. One can ask if a principal formula_0-bundle over a group formula_0 "comes from" a subgroup formula_1 of formula_0. This is called reduction of the structure group (to formula_1), and makes sense for any map formula_2, which need not be an inclusion map (despite the terminology). Definition. In the following, let formula_3 be a topological space, formula_4 topological groups and a group homomorphism formula_5. In terms of concrete bundles. Given a principal formula_0-bundle formula_6 over formula_3, a "reduction of the structure group" (from formula_0 to formula_1) is a "formula_1"-bundle formula_7 and an isomorphism formula_8 of the associated bundle to the original bundle. In terms of classifying spaces. Given a map formula_9, where formula_10 is the classifying space for formula_0-bundles, a "reduction of the structure group" is a map formula_11 and a homotopy formula_12. Properties and examples. Reductions of the structure group do not always exist. If they exist, they are usually not essentially unique, since the isomorphism formula_13 is an important part of the data. As a concrete example, every even-dimensional real vector space is isomorphic to the underlying real space of a complex vector space: it admits a linear complex structure. A real vector bundle admits an almost complex structure if and only if it is isomorphic to the underlying real bundle of a complex vector bundle. This is then a reduction along the inclusion "GL"("n",C) → "GL"(2"n",R) In terms of transition maps, a "G"-bundle can be reduced if and only if the transition maps can be taken to have values in "H". Note that the term "reduction" is misleading: it suggests that "H" is a subgroup of "G", which is often the case, but need not be (for example for spin structures): it's properly called a lifting. More abstractly, ""G"-bundles over "X"" is a functor in "G": Given a Lie group homomorphism "H" → "G", one gets a map from "H"-bundles to "G"-bundles by inducing (as above). Reduction of the structure group of a "G"-bundle "B" is choosing an "H"-bundle whose image is "B". The inducing map from "H"-bundles to "G"-bundles is in general neither onto nor one-to-one, so the structure group cannot always be reduced, and when it can, this reduction need not be unique. For example, not every manifold is orientable, and those that are orientable admit exactly two orientations. 
If "H" is a closed subgroup of "G", then there is a natural one-to-one correspondence between reductions of a "G"-bundle "B" to "H" and global sections of the fiber bundle "B"/"H" obtained by quotienting "B" by the right action of "H". Specifically, the fibration "B" → "B"/"H" is a principal "H"-bundle over "B"/"H". If σ : "X" → "B"/"H" is a section, then the pullback bundle "B"H = σ−1"B" is a reduction of "B". "G"-structures. Every vector bundle of dimension formula_14 has a canonical formula_15-bundle, the frame bundle. In particular, every smooth manifold has a canonical vector bundle, the tangent bundle. For a Lie group formula_0 and a group homomorphism formula_16, a formula_0-structure is a reduction of the structure group of the frame bundle to formula_0. Examples. The following examples are defined for real vector bundles, particularly the tangent bundle of a smooth manifold. Some formula_0-structures are defined in terms of others: Given a Riemannian metric on an oriented manifold, a formula_0-structure for the 2-fold cover formula_17 is a spin structure. (Note that the group homomorphism here is "not" an inclusion.) Principal bundles. Although the theory of principal bundles plays an important role in the study of "G"-structures, the two notions are different. A "G"-structure is a principal subbundle of the tangent frame bundle, but the fact that the "G"-structure bundle "consists of tangent frames" is regarded as part of the data. For example, consider two Riemannian metrics on R"n". The associated O("n")-structures are isomorphic if and only if the metrics are isometric. But, since R"n" is contractible, the underlying O("n")-bundles are always going to be isomorphic as principal bundles because the only bundles over contractible spaces are trivial bundles. This fundamental difference between the two theories can be captured by giving an additional piece of data on the underlying "G"-bundle of a "G"-structure: the solder form. The solder form is what ties the underlying principal bundle of the "G"-structure to the local geometry of the manifold itself by specifying a canonical isomorphism of the tangent bundle of "M" to an associated vector bundle. Although the solder form is not a connection form, it can sometimes be regarded as a precursor to one. In detail, suppose that "Q" is the principal bundle of a "G"-structure. If "Q" is realized as a reduction of the frame bundle of "M", then the solder form is given by the pullback of the tautological form of the frame bundle along the inclusion. Abstractly, if one regards "Q" as a principal bundle independently of its realization as a reduction of the frame bundle, then the solder form consists of a representation ρ of "G" on Rn and an isomorphism of bundles θ : "TM" → "Q" ×ρ Rn. Integrability conditions and flat "G"-structures. Several structures on manifolds, such as a complex structure, a symplectic structure, or a Kähler structure, are "G"-structures (and thus can be obstructed), but need to satisfy an additional integrability condition. Without the corresponding integrability condition, the structure is instead called an "almost" structure, as in an almost complex structure, an almost symplectic structure, or an almost Kähler structure. Specifically, a symplectic manifold structure is a stronger concept than a "G"-structure for the symplectic group. 
A symplectic structure on a manifold is a 2-form "ω" on "M" that is non-degenerate (which is an formula_18-structure, or almost symplectic structure), "together with" the extra condition that d"ω" = 0; this latter is called an integrability condition. Similarly, foliations correspond to "G"-structures coming from block matrices, together with integrability conditions so that the Frobenius theorem applies. A flat "G"-structure is a "G"-structure "P" having a global section ("V"1...,"V"n) consisting of commuting vector fields. A "G"-structure is integrable (or "locally flat") if it is locally isomorphic to a flat "G"-structure. Isomorphism of "G"-structures. The set of diffeomorphisms of "M" that preserve a "G"-structure is called the "automorphism group" of that structure. For an O("n")-structure they are the group of isometries of the Riemannian metric and for an SL("n",R)-structure volume preserving maps. Let "P" be a "G"-structure on a manifold "M", and "Q" a "G"-structure on a manifold "N". Then an isomorphism of the "G"-structures is a diffeomorphism "f" : "M" → "N" such that the pushforward of linear frames "f"* : "FM" → "FN" restricts to give a mapping of "P" into "Q". (Note that it is sufficient that "Q" be contained within the image of "f"*.) The "G"-structures "P" and "Q" are locally isomorphic if "M" admits a covering by open sets "U" and a family of diffeomorphisms "f"U : "U" → "f"("U") ⊂ "N" such that "f"U induces an isomorphism of "P"|U → "Q"|"f"("U"). An automorphism of a "G"-structure is an isomorphism of a "G"-structure "P" with itself. Automorphisms arise frequently in the study of transformation groups of geometric structures, since many of the important geometric structures on a manifold can be realized as "G"-structures. A wide class of equivalence problems can be formulated in the language of "G"-structures. For example, a pair of Riemannian manifolds are (locally) equivalent if and only if their bundles of orthonormal frames are (locally) isomorphic "G"-structures. In this view, the general procedure for solving an equivalence problem is to construct a system of invariants for the "G"-structure which are then sufficient to determine whether a pair of "G"-structures are locally isomorphic or not. Connections on "G"-structures. Let "Q" be a "G"-structure on "M". A principal connection on the principal bundle "Q" induces a connection on any associated vector bundle: in particular on the tangent bundle. A linear connection ∇ on "TM" arising in this way is said to be compatible with "Q". Connections compatible with "Q" are also called adapted connections. Concretely speaking, adapted connections can be understood in terms of a moving frame. Suppose that "V"i is a basis of local sections of "TM" (i.e., a frame on "M") which defines a section of "Q". Any connection ∇ determines a system of basis-dependent 1-forms ω via ∇X Vi = ωij(X)Vj where, as a matrix of 1-forms, ω ∈ Ω1(M)⊗gl("n"). An adapted connection is one for which ω takes its values in the Lie algebra g of "G". Torsion of a "G"-structure. Associated to any "G"-structure is a notion of torsion, related to the torsion of a connection. Note that a given "G"-structure may admit many different compatible connections which in turn can have different torsions, but in spite of this it is possible to give an independent notion of torsion "of the G-structure" as follows. The difference of two adapted connections is a 1-form on "M" with values in the adjoint bundle Ad"Q". 
That is to say, the space "A""Q" of adapted connections is an affine space for Ω1(Ad"Q"). The torsion of an adapted connection defines a map formula_19 to 2-forms with coefficients in "TM". This map is linear; its linearization formula_20 is called the algebraic torsion map. Given two adapted connections ∇ and ∇′, their torsion tensors "T"∇, "T"∇′ differ by τ(∇−∇′). Therefore, the image of "T"∇ in coker(τ) is independent from the choice of ∇. The image of "T"∇ in coker(τ) for any adapted connection ∇ is called the torsion of the "G"-structure. A "G"-structure is said to be torsion-free if its torsion vanishes. This happens precisely when "Q" admits a torsion-free adapted connection. Example: Torsion for almost complex structures. An example of a "G"-structure is an almost complex structure, that is, a reduction of a structure group of an even-dimensional manifold to GL("n",C). Such a reduction is uniquely determined by a "C"∞-linear endomorphism "J" ∈ End("TM") such that "J"2 = −1. In this situation, the torsion can be computed explicitly as follows. An easy dimension count shows that formula_21, where Ω2,0("TM") is a space of forms "B" ∈ Ω2("TM") which satisfy formula_22 Therefore, the torsion of an almost complex structure can be considered as an element in Ω2,0("TM"). It is easy to check that the torsion of an almost complex structure is equal to its Nijenhuis tensor. Higher order "G"-structures. Imposing integrability conditions on a particular "G"-structure (for instance, with the case of a symplectic form) can be dealt with via the process of prolongation. In such cases, the prolonged "G"-structure cannot be identified with a "G"-subbundle of the bundle of linear frames. In many cases, however, the prolongation is a principal bundle in its own right, and its structure group can be identified with a subgroup of a higher-order jet group. In which case, it is called a higher order "G"-structure [Kobayashi]. In general, Cartan's equivalence method applies to such cases.
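For reference, the Nijenhuis tensor invoked in the almost complex structure example above is commonly written as follows (sign and normalization conventions vary between authors); "J" is integrable, i.e. comes from a genuine complex structure, exactly when this tensor vanishes (the Newlander–Nirenberg theorem).

```latex
% One common convention for the Nijenhuis tensor of an almost complex structure J
N_J(X, Y) = [JX, JY] - J[JX, Y] - J[X, JY] - [X, Y],
\qquad X, Y \in \Gamma(TM).
```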
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "H \\to G" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "G, H" }, { "math_id": 5, "text": "\\phi\\colon H \\to G" }, { "math_id": 6, "text": "P" }, { "math_id": 7, "text": "Q" }, { "math_id": 8, "text": "\\phi_Q\\colon Q \\times_H G \\to P" }, { "math_id": 9, "text": "\\pi\\colon X \\to BG" }, { "math_id": 10, "text": "BG" }, { "math_id": 11, "text": "\\pi_Q\\colon X \\to BH" }, { "math_id": 12, "text": "\\phi_Q\\colon B\\phi \\circ \\pi_Q \\to \\pi" }, { "math_id": 13, "text": "\\phi" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "GL(n)" }, { "math_id": 16, "text": "\\phi\\colon G \\to GL(n)" }, { "math_id": 17, "text": "\\mbox{Spin}(n) \\to \\mbox{SO}(n)" }, { "math_id": 18, "text": "Sp" }, { "math_id": 19, "text": "A^Q \\to \\Omega^2 (TM)\\," }, { "math_id": 20, "text": "\\tau:\\Omega^1(\\mathrm{Ad}_Q)\\to \\Omega^2(TM)\\," }, { "math_id": 21, "text": "\\Omega^2(TM)= \\Omega^{2,0}(TM)\\oplus \\mathrm{im}(\\tau)" }, { "math_id": 22, "text": "B(JX,Y) = B(X, JY) = - J B(X,Y).\\," } ]
https://en.wikipedia.org/wiki?curid=1277699
12778
Group velocity
Physical quantity The group velocity of a wave is the velocity with which the overall envelope shape of the wave's amplitudes—known as the "modulation" or "envelope" of the wave—propagates through space. For example, if a stone is thrown into the middle of a very still pond, a circular pattern of waves with a quiescent center appears in the water, also known as a capillary wave. The expanding ring of waves is the wave group or wave packet, within which one can discern individual waves that travel faster than the group as a whole. The amplitudes of the individual waves grow as they emerge from the trailing edge of the group and diminish as they approach the leading edge of the group. History. The idea of a group velocity distinct from a wave's phase velocity was first proposed by W.R. Hamilton in 1839, and the first full treatment was by Rayleigh in his "Theory of Sound" in 1877. Definition and interpretation. The group velocity "v"g is defined by the equation: formula_0 where "ω" is the wave's angular frequency (usually expressed in radians per second), and "k" is the angular wavenumber (usually expressed in radians per meter). The phase velocity is: "v"p = "ω"/"k". The function "ω"("k"), which gives "ω" as a function of "k", is known as the dispersion relation. If "ω" is directly proportional to "k", then the group velocity is exactly equal to the phase velocity. If "ω" is a linear function of "k" but not directly proportional ("ω" = "ak" + "b"), then the group velocity and phase velocity are different. The envelope of a wave packet (see figure on right) will travel at the group velocity, while the individual peaks and troughs within the envelope will move at the phase velocity. If "ω" is not a linear function of "k", the envelope of a wave packet becomes distorted as it travels; for example, for deep-water gravity waves, formula_1, and the group velocity is "v"p /2. This underlies the "Kelvin wake pattern" for the bow wave of all ships and swimming objects. Regardless of how fast they are moving, as long as their velocity is constant, on each side the wake forms an angle of 19.47° = arcsin(1/3) with the line of travel. Derivation. One derivation of the formula for group velocity is as follows. Consider a wave packet as a function of position "x" and time "t": "α"("x","t"). Let "A"("k") be its Fourier transform at time "t" = 0, formula_2 By the superposition principle, the wavepacket at any time "t" is formula_3 where "ω" is implicitly a function of "k". Assume that the wave packet "α" is almost monochromatic, so that "A"("k") is sharply peaked around a central wavenumber "k"0. Then, linearization gives formula_4 where formula_5 and formula_6 (see next section for discussion of this step). Then, after some algebra, formula_7 There are two factors in this expression. The first factor, formula_8, describes a perfect monochromatic wave with wavevector "k"0, with peaks and troughs moving at the phase velocity formula_9 within the envelope of the wavepacket. The other factor, formula_10, gives the envelope of the wavepacket. This envelope function depends on position and time "only" through the combination formula_11. Therefore, the envelope of the wavepacket travels at velocity formula_12 which explains the group velocity formula. Other expressions. For light, the refractive index "n", vacuum wavelength "λ0", and wavelength in the medium "λ", are related by formula_13 with "v"p = "ω"/"k" the phase velocity. The group velocity, therefore, can be calculated by any of the following formulas, formula_14 Dispersion. 
Part of the previous derivation is the Taylor series approximation that: formula_15 If the wavepacket has a relatively large frequency spread, or if the dispersion "ω(k)" has sharp variations (such as due to a resonance), or if the packet travels over very long distances, this assumption is not valid, and higher-order terms in the Taylor expansion become important. As a result, the envelope of the wave packet not only moves, but also "distorts," in a manner that can be described by the material's group velocity dispersion. Loosely speaking, different frequency components of the wavepacket travel at different speeds, with the faster components moving towards the front of the wavepacket and the slower moving towards the back. Eventually, the wave packet gets stretched out. This is an important effect in the propagation of signals through optical fibers and in the design of high-power, short-pulse lasers. In three dimensions. For waves traveling through three dimensions, such as light waves, sound waves, and matter waves, the formulas for phase and group velocity are generalized in a straightforward way: in one dimension, formula_16 and in three dimensions, formula_17 where formula_18 means the gradient of the angular frequency ω as a function of the wave vector formula_19, and formula_20 is the unit vector in direction k. If the waves are propagating through an anisotropic (i.e., not rotationally symmetric) medium, for example a crystal, then the phase velocity vector and group velocity vector may point in different directions. In lossy or gainful media. The group velocity is often thought of as the velocity at which energy or information is conveyed along a wave. In most cases this is accurate, and the group velocity can be thought of as the signal velocity of the waveform. However, if the wave is travelling through an absorptive or gainful medium, this does not always hold. In these cases the group velocity may not be a well-defined quantity, or may not be a meaningful quantity. In his text "Wave Propagation in Periodic Structures", Brillouin argued that in a lossy medium the group velocity ceases to have a clear physical meaning. An example concerning the transmission of electromagnetic waves through an atomic gas is given by Loudon. Another example is mechanical waves in the solar photosphere: the waves are damped (by radiative heat flow from the peaks to the troughs), and related to that, the energy velocity is often substantially lower than the waves' group velocity. Despite this ambiguity, a common way to extend the concept of group velocity to complex media is to consider spatially damped plane wave solutions inside the medium, which are characterized by a "complex-valued" wavevector. Then, the imaginary part of the wavevector is arbitrarily discarded and the usual formula for group velocity is applied to the real part of the wavevector, i.e., formula_21 Or, equivalently, writing the complex refractive index as "n" + "iκ" and using its real part "n", one has formula_22 It can be shown that this generalization of group velocity continues to be related to the apparent speed of the peak of a wavepacket. The above definition is not universal, however: alternatively one may consider the time damping of standing waves (real "k", complex "ω"), or allow group velocity to be a complex-valued quantity. Different considerations yield distinct velocities, yet all definitions agree for the case of a lossless, gainless medium. 
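To illustrate the complex-media definition just given, the sketch below evaluates "v"g = (d(Re "k")/d"ω")^(-1) numerically for a single-resonance Lorentz model of the refractive index; the oscillator parameters are arbitrary illustrative values, not those of any real material. Near the resonance the computed group velocity exceeds "c" and changes sign, which is the anomalous-dispersion behaviour discussed next.

```python
import numpy as np

c = 2.998e8                      # speed of light in vacuum, m/s

def n_complex(omega, omega0=1.0e15, omega_p=5.0e14, gamma=1.0e13):
    """Single-resonance Lorentz model for the complex refractive index (illustrative)."""
    return np.sqrt(1.0 + omega_p**2 / (omega0**2 - omega**2 - 1j * gamma * omega))

omega = np.linspace(0.8e15, 1.2e15, 2001)
re_k = np.real(n_complex(omega) * omega / c)        # Re k(omega)
v_g = 1.0 / np.gradient(re_k, omega)                # v_g = (d(Re k)/d omega)^(-1)

# Inside the absorption line the dispersion is anomalous: |v_g| can exceed c
# and v_g can even become negative, without any signal travelling faster than c.
print(np.nanmax(v_g) > c, np.nanmin(v_g) < 0)       # expected: True True
```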
The above generalization of group velocity for complex media can behave strangely, and the example of anomalous dispersion serves as a good illustration. At the edges of a region of anomalous dispersion, formula_23 becomes infinite (surpassing even the speed of light in vacuum), and formula_23 may easily become negative (its sign opposes Re "k") inside the band of anomalous dispersion. Superluminal group velocities. Since the 1980s, various experiments have verified that it is possible for the group velocity (as defined above) of laser light pulses sent through lossy materials, or gainful materials, to significantly exceed the speed of light in vacuum c. The peaks of wavepackets were also seen to move faster than c. In all these cases, however, there is no possibility that signals could be carried faster than the speed of light in vacuum, since the high value of "v"g does not help to speed up the true motion of the sharp wavefront that would occur at the start of any real signal. Essentially the seemingly superluminal transmission is an artifact of the narrow-band approximation used above to define group velocity and happens because of resonance phenomena in the intervening medium. In a wide-band analysis it is seen that the apparently paradoxical speed of propagation of the signal envelope is actually the result of local interference of a wider band of frequencies over many cycles, all of which propagate perfectly causally and at phase velocity. The result is akin to the fact that shadows can travel faster than light, even if the light causing them always propagates at light speed; since the phenomenon being measured is only loosely connected with causality, it does not necessarily respect the rules of causal propagation, even if under normal circumstances it does so and leads to a common intuition. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "v_{\\rm g} \\ \\equiv\\ \\frac{\\partial \\omega}{\\partial k}\\," }, { "math_id": 1, "text": "\\omega = \\sqrt{gk}" }, { "math_id": 2, "text": " \\alpha(x, 0) = \\int_{-\\infty}^\\infty dk \\, A(k) e^{ikx}." }, { "math_id": 3, "text": " \\alpha(x, t) = \\int_{-\\infty}^\\infty dk \\, A(k) e^{i(kx - \\omega t)}," }, { "math_id": 4, "text": "\\omega(k) \\approx \\omega_0 + \\left(k - k_0\\right)\\omega'_0" }, { "math_id": 5, "text": "\\omega_0 = \\omega(k_0)" }, { "math_id": 6, "text": "\\omega'_0 = \\left.\\frac{\\partial \\omega(k)}{\\partial k}\\right|_{k=k_0}" }, { "math_id": 7, "text": " \\alpha(x,t) = e^{i\\left(k_0 x - \\omega_0 t\\right)}\\int_{-\\infty}^\\infty dk \\, A(k) e^{i(k - k_0)\\left(x - \\omega'_0 t\\right)}." }, { "math_id": 8, "text": "e^{i\\left(k_0 x - \\omega_0 t\\right)}" }, { "math_id": 9, "text": "\\omega_0/k_0" }, { "math_id": 10, "text": "\\int_{-\\infty}^\\infty dk \\, A(k) e^{i(k - k_0)\\left(x - \\omega'_0 t\\right)}" }, { "math_id": 11, "text": "(x - \\omega'_0 t)" }, { "math_id": 12, "text": "\\omega'_0 = \\left.\\frac{d\\omega}{dk}\\right|_{k=k_0}~," }, { "math_id": 13, "text": "\\lambda_0 = \\frac{2\\pi c}{\\omega}, \\;\\; \\lambda = \\frac{2\\pi}{k} = \\frac{2\\pi v_{\\rm p}}{\\omega}, \\;\\; n = \\frac{c}{v_{\\rm p}} = \\frac{\\lambda_0}{\\lambda}," }, { "math_id": 14, "text": " \\begin{align}\n v_{\\rm g} &= \\frac{c}{n + \\omega \\frac{\\partial n}{\\partial \\omega}}\n = \\frac{c}{n - \\lambda_0 \\frac{\\partial n}{\\partial \\lambda_0}}\\\\\n &= v_{\\rm p} \\left(1 + \\frac{\\lambda}{n} \\frac{\\partial n}{\\partial \\lambda}\\right)\n = v_{\\rm p} - \\lambda \\frac{\\partial v_{\\rm p}}{\\partial \\lambda} = v_{\\rm p} + k \\frac{\\partial v_{\\rm p}}{\\partial k}.\n\\end{align}" }, { "math_id": 15, "text": "\\omega(k) \\approx \\omega_0 + (k - k_0)\\omega'_0(k_0)" }, { "math_id": 16, "text": "v_{\\rm p} = \\omega/k, \\quad v_{\\rm g} = \\frac{\\partial \\omega}{\\partial k}, \\," }, { "math_id": 17, "text": "(v_{\\rm p})_i = \\frac{\\omega}{{k}_i}, \\quad \\mathbf{v}_{\\rm g} = \\vec{\\nabla}_{\\mathbf{k}} \\, \\omega \\," }, { "math_id": 18, "text": "\\vec{\\nabla}_{\\mathbf{k}} \\, \\omega" }, { "math_id": 19, "text": "\\mathbf{k}" }, { "math_id": 20, "text": "\\hat{\\mathbf{k}}" }, { "math_id": 21, "text": "v_{\\rm g} = \\left(\\frac{\\partial \\left(\\operatorname{Re} k\\right)}{\\partial \\omega}\\right)^{-1} ." }, { "math_id": 22, "text": "\\frac{c}{v_{\\rm g}} = n + \\omega \\frac{\\partial n}{\\partial \\omega} ." }, { "math_id": 23, "text": "v_{\\rm g}" } ]
https://en.wikipedia.org/wiki?curid=12778
1277825
Angular momentum coupling
Coupling in quantum physics In quantum mechanics, angular momentum coupling is the procedure of constructing eigenstates of total angular momentum out of eigenstates of separate angular momenta. For instance, the orbit and spin of a single particle can interact through spin–orbit interaction, in which case the complete physical picture must include spin–orbit coupling. Or two charged particles, each with a well-defined angular momentum, may interact by Coulomb forces, in which case coupling of the two one-particle angular momenta to a total angular momentum is a useful step in the solution of the two-particle Schrödinger equation. In both cases the separate angular momenta are no longer constants of motion, but the sum of the two angular momenta usually still is. Angular momentum coupling in atoms is of importance in atomic spectroscopy. Angular momentum coupling of electron spins is of importance in quantum chemistry. Also in the nuclear shell model angular momentum coupling is ubiquitous. In astronomy, spin–orbit coupling reflects the general law of conservation of angular momentum, which holds for celestial systems as well. In simple cases, the direction of the angular momentum vector is neglected, and the spin–orbit coupling is the ratio of the frequency with which a planet or other celestial body spins about its own axis to that with which it orbits another body. This is more commonly known as orbital resonance. Often, the underlying physical effects are tidal forces. General theory and detailed origin. Angular momentum conservation. Conservation of angular momentum is the principle that the total angular momentum of a system has a constant magnitude and direction if the system is subjected to no external torque. Angular momentum is a property of a physical system that is a constant of motion (also referred to as a "conserved" property, time-independent and well-defined) in two situations: when the system experiences a spherically symmetric potential field, and when the system moves (in the quantum mechanical sense) in isotropic space. In both cases the angular momentum operator commutes with the Hamiltonian of the system. By Heisenberg's uncertainty relation this means that the angular momentum and the energy (eigenvalue of the Hamiltonian) can be measured at the same time. An example of the first situation is an atom whose electrons only experience the Coulomb force of its atomic nucleus. If we ignore the electron–electron interaction (and other small interactions such as spin–orbit coupling), the "orbital angular momentum" l of each electron commutes with the total Hamiltonian. In this model the atomic Hamiltonian is a sum of kinetic energies of the electrons and the spherically symmetric electron–nucleus interactions. The individual electron angular momenta li commute with this Hamiltonian. That is, they are conserved properties of this approximate model of the atom. An example of the second situation is a rigid rotor moving in field-free space. A rigid rotor has a well-defined, time-independent, angular momentum. These two situations originate in classical mechanics. The third kind of conserved angular momentum, associated with spin, does not have a classical counterpart. However, all rules of angular momentum coupling apply to spin as well. In general the conservation of angular momentum implies full rotational symmetry (described by the groups SO(3) and SU(2)) and, conversely, spherical symmetry implies conservation of angular momentum. 
If two or more physical systems have conserved angular momenta, it can be useful to combine these momenta to a total angular momentum of the combined system—a conserved property of the total system. The building of eigenstates of the total conserved angular momentum from the angular momentum eigenstates of the individual subsystems is referred to as "angular momentum coupling". Application of angular momentum coupling is useful when there is an interaction between subsystems that, without interaction, would have conserved angular momentum. By the very interaction the spherical symmetry of the subsystems is broken, but the angular momentum of the total system remains a constant of motion. Use of the latter fact is helpful in the solution of the Schrödinger equation. Examples. As an example we consider two electrons, in an atom (say the helium atom) labeled with i = 1 and 2. If there is no electron–electron interaction, but only electron–nucleus interaction, then the two electrons can be rotated around the nucleus independently of each other; nothing happens to their energy. The expectation values of both operators, l1 and l2, are conserved. However, if we switch on the electron–electron interaction that depends on the distance d(1,2) between the electrons, then only a simultaneous and equal rotation of the two electrons will leave d(1,2) invariant. In such a case the expectation value of neither l1 nor l2 is a constant of motion in general, but the expectation value of the total orbital angular momentum operator L = l1 + l2 is. Given the eigenstates of l1 and l2, the construction of eigenstates of L (which still is conserved) is the "coupling of the angular momenta of electrons" 1 "and" 2. The total orbital angular momentum quantum number L is restricted to integer values and must satisfy the triangular condition that formula_0, such that the three nonnegative integer values could correspond to the three sides of a triangle. In quantum mechanics, coupling also exists between angular momenta belonging to different Hilbert spaces of a single object, e.g. its spin and its orbital angular momentum. If the spin has half-integer values, such as for an electron, then the total (orbital plus spin) angular momentum will also be restricted to half-integer values. Reiterating slightly differently the above: one expands the quantum states of composed systems (i.e. made of subunits like two hydrogen atoms or two electrons) in basis sets which are made of tensor products of quantum states which in turn describe the subsystems individually. We assume that the states of the subsystems can be chosen as eigenstates of their angular momentum operators (and of their component along any arbitrary z axis). The subsystems are therefore correctly described by a pair of ℓ, m quantum numbers (see angular momentum for details). When there is interaction among the subsystems, the total Hamiltonian contains terms that do not commute with the angular operators acting on the subsystems only. However, these terms "do" commute with the "total" angular momentum operator. Sometimes one refers to the non-commuting interaction terms in the Hamiltonian as "angular momentum coupling terms", because they necessitate the angular momentum coupling. Spin–orbit coupling. The behavior of atoms and smaller particles is well described by the theory of quantum mechanics, in which each particle has an intrinsic angular momentum called spin and specific configurations (of e.g. 
electrons in an atom) are described by a set of quantum numbers. Collections of particles also have angular momenta and corresponding quantum numbers, and under different circumstances the angular momenta of the parts couple in different ways to form the angular momentum of the whole. Angular momentum coupling is a category including some of the ways that subatomic particles can interact with each other. In atomic physics, spin–orbit coupling, also known as spin-pairing, describes a weak magnetic interaction, or coupling, of the particle spin and the orbital motion of this particle, e.g. the electron spin and its motion around an atomic nucleus. One of its effects is to separate the energy of internal states of the atom, e.g. spin-aligned and spin-antialigned states that would otherwise be identical in energy. This interaction is responsible for many of the details of atomic structure. In solid-state physics, the spin coupling with the orbital motion can lead to splitting of energy bands due to Dresselhaus or Rashba effects. In the macroscopic world of orbital mechanics, the term "spin–orbit coupling" is sometimes used in the same sense as spin–orbit resonance. LS coupling. In light atoms (generally "Z" ≤ 30), electron spins s"i" interact among themselves so they combine to form a total spin angular momentum S. The same happens with orbital angular momenta ℓ"i", forming a total orbital angular momentum L. The interaction between the quantum numbers L and S is called Russell–Saunders coupling (after Henry Norris Russell and Frederick Saunders) or LS coupling. Then S and L couple together and form a total angular momentum J: formula_1 where L and S are the totals: formula_2 This is an approximation which is good as long as any external magnetic fields are weak. In larger magnetic fields, these two momenta decouple, giving rise to a different splitting pattern in the energy levels (the Paschen–Back effect), and the size of the LS coupling term becomes small. For an extensive example on how LS-coupling is practically applied, see the article on term symbols. jj coupling. In heavier atoms the situation is different. In atoms with bigger nuclear charges, spin–orbit interactions are frequently as large as or larger than spin–spin interactions or orbit–orbit interactions. In this situation, each orbital angular momentum ℓ"i" tends to combine with the corresponding individual spin angular momentum s"i", originating an individual total angular momentum j"i". These then couple up to form the total angular momentum J: formula_3 This description, facilitating calculation of this kind of interaction, is known as "jj coupling". Spin–spin coupling. Spin–spin coupling is the coupling of the intrinsic angular momentum (spin) of different particles. J-coupling between pairs of nuclear spins is an important feature of nuclear magnetic resonance (NMR) spectroscopy as it can provide detailed information about the structure and conformation of molecules. Spin–spin coupling between nuclear spin and electronic spin is responsible for hyperfine structure in atomic spectra. Term symbols. Term symbols are used to represent the states and spectral transitions of atoms; they are found from the coupling of angular momenta mentioned above. When the state of an atom has been specified with a term symbol, the allowed transitions can be found through selection rules by considering which transitions would conserve angular momentum. 
A photon has spin 1, and when there is a transition with emission or absorption of a photon the atom will need to change state to conserve angular momentum. The term symbol selection rules are: Δ"S" = 0; Δ"L" = 0, ±1; Δ"l" = ±1; Δ"J" = 0, ±1. The expression "term symbol" is derived from the "term series" associated with the Rydberg states of an atom and their energy levels. In the Rydberg formula the frequency or wave number of the light emitted by a hydrogen-like atom is proportional to the difference between the two terms of a transition. The series known to early spectroscopy were designated "sharp", "principal", "diffuse", and "fundamental" and consequently the letters S, P, D, and F were used to represent the orbital angular momentum states of an atom. Relativistic effects. In very heavy atoms, relativistic shifting of the energies of the electron energy levels accentuates the spin–orbit coupling effect. Thus, for example, uranium molecular orbital diagrams must directly incorporate relativistic symbols when considering interactions with other atoms. Nuclear coupling. In atomic nuclei, the spin–orbit interaction is much stronger than for atomic electrons, and is incorporated directly into the nuclear shell model. In addition, unlike atomic–electron term symbols, the lowest energy state is not ℓ − s, but rather, ℓ + s. All nuclear levels whose ℓ value (orbital angular momentum) is greater than zero are thus split in the shell model to create states designated by ℓ + s and ℓ − s. Due to the nature of the shell model, which assumes an average potential rather than a central Coulombic potential, the nucleons that go into the ℓ + s and ℓ − s nuclear states are considered degenerate within each orbital (e.g. the 2p3/2 level contains four nucleons, all of the same energy; higher in energy is the 2p1/2 level, which contains two equal-energy nucleons).
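As an illustration of the coupling rules described above, the following minimal sketch enumerates the totals allowed by the triangle rule (from |j1 − j2| to j1 + j2 in unit steps) and applies it in the LS-coupling order, first combining the orbital and spin totals and then forming J. The function name is an illustrative choice, and the Pauli restrictions on equivalent electrons are deliberately ignored.

```python
# Minimal sketch of the triangle rule for coupling two angular momenta:
# allowed totals run from |j1 - j2| to j1 + j2 in integer steps.
from fractions import Fraction

def allowed_totals(j1, j2):
    j1, j2 = Fraction(j1), Fraction(j2)
    j = abs(j1 - j2)
    out = []
    while j <= j1 + j2:
        out.append(j)
        j += 1
    return out

# LS coupling for two p electrons (l = 1, s = 1/2 each), ignoring the
# restrictions the Pauli principle imposes on equivalent electrons:
L_values = allowed_totals(1, 1)                             # [0, 1, 2]
S_values = allowed_totals(Fraction(1, 2), Fraction(1, 2))   # [0, 1]
for L in L_values:
    for S in S_values:
        print(f"L={L}, S={S} -> allowed J: {allowed_totals(L, S)}")
```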
[ { "math_id": 0, "text": "|l_1 - l_2| \\leq L \\leq l_1 + l_2" }, { "math_id": 1, "text": "\\mathbf J = \\mathbf L + \\mathbf S, \\, " }, { "math_id": 2, "text": "\\mathbf L = \\sum_i \\boldsymbol{\\ell}_i, \\ \\mathbf S = \\sum_i \\mathbf{s}_i. \\, " }, { "math_id": 3, "text": "\\mathbf J = \\sum_i \\mathbf j_i = \\sum_i (\\boldsymbol{\\ell}_i + \\mathbf{s}_i)." } ]
https://en.wikipedia.org/wiki?curid=1277825
12779248
Multiplicative cascade
Fractal distribution of random points In mathematics, a multiplicative cascade is a fractal/multifractal distribution of points produced via an iterative and multiplicative random process. Definition. The plots above are examples of multiplicative cascade multifractals. To create these distributions there are a few steps to take. Firstly, we must create a lattice of cells which will be our underlying probability density field. Secondly, an iterative process is followed to create multiple levels of the lattice: at each iteration the cells are split into four equal parts (cells). Each new cell is then assigned a probability randomly from the set formula_0 without replacement, where formula_1. This process is continued to the "N"th level. For example, in constructing such a model down to level 8 we produce a 4^8 array of cells. Thirdly, the cells are filled as follows: We take the probability of a cell being occupied as the product of the cell's own "p""i" and those of all its parents (up to level 1). A Monte Carlo rejection scheme is used repeatedly until the desired cell population is obtained, as follows: "x" and "y" cell coordinates are chosen randomly, and a random number between 0 and 1 is assigned; the ("x", "y") cell is then populated if the assigned number is less than the cell's occupation probability, and left unpopulated otherwise. Examples. To produce the plots above we filled the probability density field with 5,000 points in a space of 256 × 256. An example of the probability density field is shown above. The fractals are generally not scale-invariant and therefore cannot be considered "standard" fractals. They can however be considered multifractals. The Rényi (generalized) dimensions can be theoretically predicted. It can be shown that as formula_2, formula_3 where N is the level of the grid refinement and formula_4 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
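The construction described above is straightforward to implement. The sketch below follows the steps in the Definition section; the particular probability set, the number of levels and the 5,000-point target mirror the example values mentioned in the text, while the random seed is an arbitrary choice.

```python
# A minimal sketch of the multiplicative cascade construction described above.
import numpy as np

rng = np.random.default_rng(0)
p_set = np.array([1.0, 0.75, 0.5, 0.25])   # assumed {p1, p2, p3, p4}
levels = 8                                 # 8 levels -> a 4^8 = 256 x 256 field

field = np.ones((1, 1))
for _ in range(levels):
    n = field.shape[0]
    new = np.empty((2 * n, 2 * n))
    for i in range(n):
        for j in range(n):
            # split the cell into four and assign the probabilities
            # randomly without replacement
            probs = rng.permutation(p_set)
            new[2*i,     2*j]     = field[i, j] * probs[0]
            new[2*i,     2*j + 1] = field[i, j] * probs[1]
            new[2*i + 1, 2*j]     = field[i, j] * probs[2]
            new[2*i + 1, 2*j + 1] = field[i, j] * probs[3]
    field = new   # each entry is now the product of its own p_i and its parents'

# Monte Carlo rejection: populate a randomly chosen cell when a uniform
# random number falls below its occupation probability.
points = []
while len(points) < 5000:
    x, y = rng.integers(0, field.shape[0], size=2)
    if rng.random() < field[x, y]:
        points.append((x, y))
```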
[ { "math_id": 0, "text": "\\lbrace p_1,p_2,p_3,p_4 \\rbrace" }, { "math_id": 1, "text": "p_i \\in [0,1]" }, { "math_id": 2, "text": "N \\rightarrow \\infty" }, { "math_id": 3, "text": "D_q=\\frac{\\log_2\\left( f^q_1+f^q_2+f^q_3+f^q_4\\right)}{1-q}," }, { "math_id": 4, "text": "f_i=\\frac{p_i}{\\sum_i p_i}." } ]
https://en.wikipedia.org/wiki?curid=12779248
1277926
Pacific decadal oscillation
Recurring pattern of climate variability The Pacific decadal oscillation (PDO) is a robust, recurring pattern of ocean-atmosphere climate variability centered over the mid-latitude Pacific basin. The PDO is detected as warm or cool surface waters in the Pacific Ocean, north of 20°N. Over the past century, the amplitude of this climate pattern has varied irregularly at interannual-to-interdecadal time scales (meaning time periods of a few years to as much as time periods of multiple decades). There is evidence of reversals in the prevailing polarity (meaning changes in cool surface waters versus warm surface waters within the region) of the oscillation occurring around 1925, 1947, and 1977; the last two reversals corresponded with dramatic shifts in salmon production regimes in the North Pacific Ocean. This climate pattern also affects coastal sea and continental surface air temperatures from Alaska to California. During a "warm", or "positive", phase, the west Pacific becomes cooler and part of the eastern ocean warms; during a "cool", or "negative", phase, the opposite pattern occurs. The Pacific decadal oscillation was named by Steven R. Hare, who noticed it while studying salmon production pattern results in 1997. The Pacific decadal oscillation index is the leading empirical orthogonal function (EOF) of monthly sea surface temperature anomalies (SST-A) over the North Pacific (poleward of 20°N) after the global average sea surface temperature has been removed. This PDO index is the standardized principal component time series. A PDO 'signal' has been reconstructed as far back as 1661 through tree-ring chronologies in the Baja California area. Mechanisms. Several studies have indicated that the PDO index can be reconstructed as the superimposition of tropical forcing and extra-tropical processes. Thus, unlike El Niño–Southern Oscillation (ENSO), the PDO is not a single physical mode of ocean variability, but rather the sum of several processes with different dynamic origins. At inter-annual time scales the PDO index is reconstructed as the sum of random and ENSO induced variability in the Aleutian Low, whereas on decadal timescales ENSO teleconnections, stochastic atmospheric forcing and changes in the North Pacific oceanic gyre circulation contribute approximately equally. Additionally sea surface temperature anomalies have some winter to winter persistence due to the reemergence mechanism. ENSO can influence the global circulation pattern thousands of kilometers away from the equatorial Pacific through the "atmospheric bridge". During El Niño events, deep convection and heat transfer to the troposphere is enhanced over the anomalously warm sea surface temperature, this ENSO-related tropical forcing generates Rossby waves that propagate poleward and eastward and are subsequently refracted back from the pole to the tropics. The planetary waves form at preferred locations both in the North and South Pacific Ocean, and the teleconnection pattern is established within 2–6 weeks. ENSO driven patterns modify surface temperature, humidity, wind, and the distribution of clouds over the North Pacific that alter surface heat, momentum, and freshwater fluxes and thus induce sea surface temperature, salinity, and mixed layer depth (MLD) anomalies. 
The atmospheric bridge is more effective during boreal winter, when the deepened Aleutian Low results in stronger, cold northwesterly winds over the central Pacific and warm/humid southerly winds along the North American west coast; the associated changes in the surface heat fluxes and, to a lesser extent, Ekman transport create negative sea surface temperature anomalies and a deepened MLD in the central Pacific and warm the ocean from Hawaii to the Bering Sea. Midlatitude SST anomaly patterns tend to recur from one winter to the next but not during the intervening summer; this recurrence occurs because of the strong mixed layer seasonal cycle. The mixed layer depth over the North Pacific is deeper, typically 100–200 m, in winter than it is in summer, and thus SST anomalies that form during winter and extend to the base of the mixed layer are sequestered beneath the shallow summer mixed layer when it reforms in late spring and are effectively insulated from the air-sea heat flux. When the mixed layer deepens again in the following autumn/early winter the anomalies may again influence the surface. This process has been named the "reemergence mechanism" by Alexander and Deser and is observed over much of the North Pacific Ocean, although it is more effective in the west where the winter mixed layer is deeper and the seasonal cycle greater. Long-term sea surface temperature variation may be induced by random atmospheric forcings that are integrated and reddened into the ocean mixed layer. The stochastic climate model paradigm was proposed by Frankignoul and Hasselmann; in this model a stochastic forcing, represented by the passage of storms, alters the ocean mixed layer temperature via surface energy fluxes and Ekman currents, and the system is damped due to the enhanced (reduced) heat loss to the atmosphere over the anomalously warm (cold) SST via turbulent energy and longwave radiative fluxes. In the simple case of a linear negative feedback the model can be written as the separable ordinary differential equation: formula_0 where v is the random atmospheric forcing, λ is the damping rate (positive and constant) and y is the response. The variance spectrum of y is: formula_1 where F is the variance of the white noise forcing and w is the frequency. An implication of this equation is that at short time scales (w»λ) the variance of the ocean temperature increases with the square of the period, while at longer timescales (w«λ, ~150 months) the damping process dominates and limits sea surface temperature anomalies, so that the spectrum becomes white. Thus an atmospheric white noise generates SST anomalies at much longer timescales but without spectral peaks. Modeling studies suggest that this process contributes as much as 1/3 of the PDO variability at decadal timescales. Several dynamic oceanic mechanisms and SST-air feedback may contribute to the observed decadal variability in the North Pacific Ocean. SST variability is stronger in the Kuroshio–Oyashio extension (KOE) region and is associated with changes in the KOE axis and strength, which generate decadal and longer time scale SST variance but without the observed magnitude of the spectral peak at ~10 years, and with SST-air feedback. Remote reemergence occurs in regions of strong current such as the Kuroshio extension, and the anomalies created near Japan may reemerge the next winter in the central Pacific. 
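The stochastic climate model just described is easy to simulate. The following rough sketch integrates formula_0 with a forward-Euler step driven by white noise; the time step and record length are illustrative assumptions, and the damping rate is set near the ~150-month value quoted above so that the periodogram approaches the red spectrum formula_1.

```python
# A rough numerical sketch of the Frankignoul-Hasselmann stochastic model
# dy/dt = v(t) - lambda*y driven by white noise; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0                 # one month per step (assumed)
lam = 1.0 / 150.0        # damping rate ~ (150 months)^-1, per the text
n_steps = 200_000

y = np.zeros(n_steps)
forcing = rng.standard_normal(n_steps)
for t in range(1, n_steps):
    # forward-Euler step of dy/dt = v - lam*y
    y[t] = y[t - 1] + dt * (forcing[t - 1] - lam * y[t - 1])

# The periodogram should follow the red spectrum F / (w^2 + lam^2):
# flat ("white") at frequencies well below lam, falling roughly as w^-2 above it.
spec = np.abs(np.fft.rfft(y))**2 / n_steps
freq = np.fft.rfftfreq(n_steps, d=dt) * 2 * np.pi   # angular frequency w
print("low-frequency mean power :", spec[(freq > 0) & (freq < lam / 3)].mean())
print("high-frequency mean power:", spec[freq > 10 * lam].mean())
```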
Saravanan and McWilliams have demonstrated that the interaction between spatially coherent atmospheric forcing patterns and an advective ocean shows periodicities at preferred time scales when non-local advective effects dominate over the local sea surface temperature damping. This "advective resonance" mechanism may generate decadal SST variability in the Eastern North Pacific associated with the anomalous Ekman advection and surface heat flux. Dynamic gyre adjustments are essential to generate decadal SST peaks in the North Pacific; the process occurs via westward-propagating oceanic Rossby waves that are forced by wind anomalies in the central and eastern Pacific Ocean. The quasi-geostrophic equation for long non-dispersive Rossby waves forced by large-scale wind stress can be written as the linear partial differential equation: formula_2 where h is the upper-layer thickness anomaly, τ is the wind stress, c is the Rossby wave speed that depends on latitude, ρ0 is the density of sea water and f0 is the Coriolis parameter at a reference latitude. The response time scale is set by the Rossby wave speed, the location of the wind forcing and the basin width; at the latitude of the Kuroshio Extension c is 2.5 cm s−1 and the dynamic gyre adjustment timescale is ~5 (10) years if the Rossby wave was initiated in the central (eastern) Pacific Ocean. If the white-noise wind forcing is zonally uniform it should generate a red spectrum in which the h variance increases with the period and reaches a constant amplitude at lower frequencies, without decadal and interdecadal peaks; however, low-frequency atmospheric circulation tends to be dominated by fixed spatial patterns, so that the wind forcing is not zonally uniform. If the wind forcing is zonally sinusoidal then decadal peaks occur due to resonance of the forced basin-scale Rossby waves. The propagation of h anomalies in the western Pacific changes the KOE axis and strength and impacts SST due to the anomalous geostrophic heat transport. Recent studies suggest that Rossby waves excited by the Aleutian Low propagate the PDO signal from the North Pacific to the KOE through changes in the KOE axis, while Rossby waves associated with the NPO propagate the North Pacific Gyre oscillation signal through changes in the KOE strength. Impacts. Temperature and precipitation The PDO spatial pattern and impacts are similar to those associated with ENSO events. During the positive phase the wintertime Aleutian Low is deepened and shifted southward, warm/humid air is advected along the North American west coast, and temperatures are higher than usual from the Pacific Northwest to Alaska but below normal in Mexico and the Southeastern United States. &lt;br&gt; Winter precipitation is higher than usual in the Alaska Coast Range, Mexico and the Southwestern United States but reduced over Canada, Eastern Siberia and Australia. &lt;br&gt; McCabe et al. showed that the PDO along with the AMO strongly influences multidecadal drought patterns in the United States; drought frequency is enhanced over much of the Northern United States during the positive PDO phase and over the Southwest United States during the negative PDO phase, in both cases if the PDO is associated with a positive AMO.&lt;br&gt; The Asian Monsoon is also affected: increased rainfall and decreased summer temperature are observed over the Indian subcontinent during the negative phase. Reconstructions and regime shifts. 
The PDO index has been reconstructed using tree rings and other hydrologically sensitive proxies from west North America and Asia. MacDonald and Case reconstructed the PDO back to 993 using tree rings from California and Alberta. The index shows a 50–70 year periodicity but is a strong mode of variability only after 1800, with a persistent negative phase occurring during medieval times (993–1300), which is consistent with La Niña conditions reconstructed in the tropical Pacific and multi-century droughts in the South-West United States. Several regime shifts are apparent both in the reconstructions and in instrumental data; during the 20th century, regime shifts associated with concurrent changes in SST, SLP, land precipitation and ocean cloud cover occurred in 1924/1925, 1945/1946, and 1976/1977. Predictability. The NOAA Earth System Research Laboratory produces official ENSO forecasts and experimental statistical forecasts of the PDO using a linear inverse modeling (LIM) method; LIM assumes that the PDO can be separated into a linear deterministic component and a non-linear component represented by random fluctuations. Much of the LIM PDO predictability arises from ENSO and the global trend rather than extra-tropical processes and is thus limited to ~4 seasons. The prediction is consistent with the seasonal footprinting mechanism in which an optimal SST structure evolves into the ENSO mature phase 6–10 months later, which subsequently impacts the North Pacific Ocean SST via the atmospheric bridge. Skill in predicting decadal PDO variability could arise from taking into account the impact of the externally forced and internally generated Pacific variability. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{\\operatorname{d}y\\over\\operatorname{d}t}= v(t)- \\lambda y" }, { "math_id": 1, "text": "{G(w) = \\frac{F}{w^2 + \\lambda ^2 }}" }, { "math_id": 2, "text": "{\\partial h\\over\\partial t} -c{\\partial h\\over\\partial x} = \\frac{-\\nabla \\times \\vec{\\tau}}{\\rho_0f_0}" } ]
https://en.wikipedia.org/wiki?curid=1277926
12781
Group action
Transformations induced by a mathematical group In mathematics, many sets of transformations form a group under function composition; for example, the rotations around a point in the plane. It is often useful to consider the group as an abstract group, and to say that one has a group action of the abstract group that consists of performing the transformations of the group of transformations. The reason for distinguishing the group from the transformations is that, generally, a group of transformations of a structure acts also on various related structures; for example, the above rotation group acts also on triangles by transforming triangles into triangles. Formally, a group action of a group "G" on a set "S" is a group homomorphism from "G" to some group (under function composition) of functions from "S" to itself. If a group acts on a structure, it will usually also act on objects built from that structure. For example, the group of Euclidean isometries acts on Euclidean space and also on the figures drawn in it; in particular, it acts on the set of all triangles. Similarly, the group of symmetries of a polyhedron acts on the vertices, the edges, and the faces of the polyhedron. A group action on a vector space is called a representation of the group. In the case of a finite-dimensional vector space, it allows one to identify many groups with subgroups of the general linear group GL("n", "K"), the group of the invertible matrices of dimension "n" over a field "K". The symmetric group "S""n" acts on any set with "n" elements by permuting the elements of the set. Although the group of all permutations of a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the same cardinality. Definition. Left group action. If G is a group with identity element e, and X is a set, then a ("left") "group action" α of G on X is a function formula_0 that satisfies the following two axioms: identity, α(e, x) = x, and compatibility, α(g, α(h, x)) = α(gh, x), for all g and h in G and all x in X. The group G is then said to act on X (from the left). A set X together with an action of G is called a ("left") G-"set". It can be notationally convenient to curry the action "α", so that, instead, one has a collection of transformations "α""g" : "X" → "X", with one transformation "α""g" for each group element "g" ∈ "G". The identity and compatibility relations then read formula_1 and formula_2 with ∘ being function composition. The second axiom then states that the function composition is compatible with the group multiplication; they form a commutative diagram. This axiom can be shortened even further, and written as "α""g" ∘ "α""h" = "α""gh". With the above understanding, it is very common to avoid writing "α" entirely, and to replace it with either a dot, or with nothing at all. Thus, "α"("g", "x") can be shortened to "g"⋅"x" or "gx", especially when the action is clear from context. The axioms are then formula_3 formula_4 From these two axioms, it follows that for any fixed g in G, the function from X to itself which maps x to "g"⋅"x" is a bijection, with inverse bijection the corresponding map for "g"−1. Therefore, one may equivalently define a group action of G on X as a group homomorphism from G into the symmetric group Sym("X") of all bijections from X to itself. Right group action. Likewise, a "right group action" of G on X is a function formula_5 that satisfies the analogous axioms: identity, α(x, e) = x, and compatibility, α(α(x, g), h) = α(x, gh), for all g and h in G and all x in X. 
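As a concrete check of the two axioms, the following small sketch (an illustrative example, not part of the article) lets the cyclic group Z/4Z act on the four vertices of a square by rotation and verifies identity and compatibility exhaustively.

```python
# A small sketch checking the two axioms of a left group action: the cyclic
# group Z/4Z acting on the vertices {0, 1, 2, 3} of a square by rotation.
# The group operation is addition mod 4; alpha(g, x) = (g + x) mod 4.
G = range(4)
X = range(4)

def op(g, h):            # group law of Z/4Z
    return (g + h) % 4

def alpha(g, x):         # the action: rotate vertex x by g quarter-turns
    return (g + x) % 4

e = 0
assert all(alpha(e, x) == x for x in X)                   # identity axiom
assert all(alpha(g, alpha(h, x)) == alpha(op(g, h), x)    # compatibility axiom
           for g in G for h in G for x in X)
```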
The difference between left and right actions is in the order in which a product "gh" acts on x. For a left action, h acts first, followed by g second. For a right action, g acts first, followed by h second. Because of the formula ("gh")−1 = "h"−1"g"−1, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a group G on X can be considered as a left action of its opposite group "G"op on X. Thus, for establishing general properties of group actions, it suffices to consider only left actions. However, there are cases where this is not possible. For example, the multiplication of a group induces both a left action and a right action on the group itself—multiplication on the left and on the right, respectively. Notable properties of actions. Let "G" be a group acting on a set "X". The action is called "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;faithful" or "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;effective" if "g"⋅"x" = "x" for all "x" ∈ "X" implies that "g" = "e""G". Equivalently, the homomorphism from "G" to the group of bijections of "X" corresponding to the action is injective. The action is called "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;free" (or "semiregular" or "fixed-point free") if the statement that "g"⋅"x" = "x" for some "x" ∈ "X" already implies that "g" = "e""G". In other words, no non-trivial element of "G" fixes a point of "X". This is a much stronger property than faithfulness. For example, the action of any group on itself by left multiplication is free. This observation implies Cayley's theorem that any group can be embedded in a symmetric group (which is infinite when the group is). A finite group may act faithfully on a set of size much smaller than its cardinality (however such an action cannot be free). For instance the abelian 2-group (Z / 2Z)"n" (of cardinality 2^"n") acts faithfully on a set of size 2"n". This is not always the case; for example, the cyclic group Z / 2^"n"Z cannot act faithfully on a set of size less than 2^"n". In general the smallest set on which a faithful action can be defined can vary greatly for groups of the same size. For example, three groups of size 120 are the symmetric group S5, the icosahedral group A5 × Z / 2Z and the cyclic group Z / 120Z. The smallest sets on which faithful actions can be defined for these groups are of size 5, 7, and 16 respectively. Transitivity properties. The action of "G" on "X" is called "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;transitive" if for any two points "x", "y" ∈ "X" there exists a "g" ∈ "G" so that "g" ⋅ "x" = "y". The action is "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;simply transitive" (or "sharply transitive", or "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;regular") if it is both transitive and free. This means that given "x", "y" ∈ "X" the element "g" in the definition of transitivity is unique. If "X" is acted upon simply transitively by a group "G" then it is called a principal homogeneous space for "G" or a "G"-torsor. 
For an integer "n" ≥ 1, the action is &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;"n-transitive" if "X" has at least "n" elements, and for any pair of "n"-tuples ("x"1, ..., "x""n"), ("y"1, ..., "y""n") ∈ "X""n" with pairwise distinct entries (that is "x""i" ≠ "x""j", "y""i" ≠ "y""j" when "i" ≠ "j") there exists a "g" ∈ "G" such that "g"⋅"x""i" = "y""i" for "i" = 1, ..., "n". In other words the action on the subset of "X""n" of tuples without repeated entries is transitive. For "n" = 2, 3 this is often called double, respectively triple, transitivity. The class of 2-transitive groups (that is, subgroups of a finite symmetric group whose action is 2-transitive) and more generally multiply transitive groups is well-studied in finite group theory. An action is &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;"sharply n-transitive" when the action on tuples without repeated entries in "X""n" is sharply transitive. Examples. The action of the symmetric group of "X" is transitive, in fact "n"-transitive for any "n" up to the cardinality of "X". If "X" has cardinality "n", the action of the alternating group is ("n" − 2)-transitive but not ("n" − 1)-transitive. The action of the general linear group of a vector space "V" on the set "V" &amp;setminus; {0} of non-zero vectors is transitive, but not 2-transitive (similarly for the action of the special linear group if the dimension of "V" is at least 2). The action of the orthogonal group of a Euclidean space is not transitive on nonzero vectors but it is on the unit sphere. Primitive actions. The action of "G" on "X" is called "primitive" if there is no partition of "X" preserved by all elements of "G" apart from the trivial partitions (the partition in a single piece and its dual, the partition into singletons). Topological properties. Assume that "X" is a topological space and the action of "G" is by homeomorphisms. The action is "wandering" if every "x" ∈ "X" has a neighbourhood "U" such that there are only finitely many "g" ∈ "G" with "g"⋅"U" ∩ "U" ≠ ∅. More generally, a point "x" ∈ "X" is called a point of discontinuity for the action of "G" if there is an open subset "U" ∋ "x" such that there are only finitely many "g" ∈ "G" with "g"⋅"U" ∩ "U" ≠ ∅. The "domain of discontinuity" of the action is the set of all points of discontinuity. Equivalently it is the largest "G"-stable open subset Ω ⊂ "X" such that the action of "G" on Ω is wandering. In a dynamical context this is also called a "wandering set". The action is "properly discontinuous" if for every compact subset "K" ⊂ "X" there are only finitely many "g" ∈ "G" such that "g"⋅"K" ∩ "K" ≠ ∅. This is strictly stronger than wandering; for instance the action of Z on R2 &amp;setminus; {(0, 0)} given by "n"⋅("x", "y") = (2^"n""x", 2^−"n""y") is wandering and free but not properly discontinuous. The action by deck transformations of the fundamental group of a locally simply connected space on a covering space is wandering and free. Such actions can be characterized by the following property: every "x" ∈ "X" has a neighbourhood "U" such that "g"⋅"U" ∩ "U" = ∅ for every "g" ∈ "G" &amp;setminus; {"e""G"}. Actions with this property are sometimes called "freely discontinuous", and the largest subset on which the action is freely discontinuous is then called the "free regular set". An action of a group "G" on a locally compact space "X" is called "cocompact" if there exists a compact subset "A" ⊂ "X" such that "X" = "G" ⋅ "A". 
For a properly discontinuous action, cocompactness is equivalent to compactness of the quotient space "G" \ "X". Actions of topological groups. Now assume "G" is a topological group and "X" a topological space on which it acts by homeomorphisms. The action is said to be "continuous" if the map "G" × "X" → "X" is continuous for the product topology. The action is said to be "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;proper" if the map "G" × "X" → "X" × "X" defined by ("g", "x") ↦ ("x", "g"⋅"x") is proper. This means that given compact sets "K", "K"′ the set of "g" ∈ "G" such that "g"⋅"K" ∩ "K"′ ≠ ∅ is compact. In particular, this is equivalent to proper discontinuity when "G" is a discrete group. It is said to be "locally free" if there exists a neighbourhood "U" of "e""G" such that "g"⋅"x" ≠ "x" for all "x" ∈ "X" and "g" ∈ "U" &amp;setminus; {"e""G"}. The action is said to be "strongly continuous" if the orbital map "g" ↦ "g"⋅"x" is continuous for every "x" ∈ "X". Contrary to what the name suggests, this is a weaker property than continuity of the action. If "G" is a Lie group and "X" a differentiable manifold, then the subspace of "smooth points" for the action is the set of points "x" ∈ "X" such that the map "g" ↦ "g"⋅"x" is smooth. There is a well-developed theory of Lie group actions, i.e. actions which are smooth on the whole space. Linear actions. If "G" acts by linear transformations on a module over a commutative ring, the action is said to be irreducible if there are no proper nonzero "G"-invariant submodules. It is said to be "semisimple" if it decomposes as a direct sum of irreducible actions. Orbits and stabilizers. Consider a group "G" acting on a set "X". The "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;orbit" of an element "x" in "X" is the set of elements in "X" to which "x" can be moved by the elements of "G". The orbit of "x" is denoted by "G"⋅"x": formula_6 The defining properties of a group guarantee that the set of orbits of (points "x" in) "X" under the action of "G" form a partition of "X". The associated equivalence relation is defined by saying "x" ~ "y" if and only if there exists a "g" in "G" with "g"⋅"x" = "y". The orbits are then the equivalence classes under this relation; two elements "x" and "y" are equivalent if and only if their orbits are the same, that is, "G"⋅"x" = "G"⋅"y". The group action is transitive if and only if it has exactly one orbit, that is, if there exists "x" in "X" with "G"⋅"x" = "X". This is the case if and only if "G"⋅"x" = "X" for all "x" in "X" (given that "X" is non-empty). The set of all orbits of "X" under the action of "G" is written as "X" / "G" (or, less frequently, as "G" \ "X"), and is called the "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;quotient" of the action. In geometric situations it may be called the "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;orbit space", while in algebraic situations it may be called the space of "&lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;coinvariants", and written "X""G", by contrast with the invariants (fixed points), denoted "X""G": the coinvariants are a quotient while the invariants are a subset. The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention. Invariant subsets. If "Y" is a subset of "X", then "G"⋅"Y" denotes the set {"g"⋅"y" : "g" ∈ "G" and "y" ∈ "Y"}. 
The subset "Y" is said to be "invariant under ""G" if "G"⋅"Y" = "Y" (which is equivalent "G"⋅"Y" ⊆ "Y"). In that case, "G" also operates on "Y" by restricting the action to "Y". The subset "Y" is called "fixed under ""G" if "g"⋅"y" = "y" for all "g" in "G" and all "y" in "Y". Every subset that is fixed under "G" is also invariant under "G", but not conversely. Every orbit is an invariant subset of "X" on which "G" acts transitively. Conversely, any invariant subset of "X" is a union of orbits. The action of "G" on "X" is "transitive" if and only if all elements are equivalent, meaning that there is only one orbit. A "G""-invariant" element of "X" is "x" ∈ "X" such that "g"⋅"x" = "x" for all "g" ∈ "G". The set of all such "x" is denoted "X""G" and called the "G""-invariants" of "X". When "X" is a "G"-module, "X""G" is the zeroth cohomology group of "G" with coefficients in "X", and the higher cohomology groups are the derived functors of the functor of "G"-invariants. Fixed points and stabilizer subgroups. Given "g" in "G" and "x" in "X" with "g"⋅"x" = "x", it is said that ""x" is a fixed point of "g"" or that ""g" fixes "x"". For every "x" in "X", the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;stabilizer subgroup of "G" with respect to "x" (also called the isotropy group or little group) is the set of all elements in "G" that fix "x": formula_7 This is a subgroup of "G", though typically not a normal one. The action of "G" on "X" is free if and only if all stabilizers are trivial. The kernel "N" of the homomorphism with the symmetric group, "G" → Sym("X"), is given by the intersection of the stabilizers "G""x" for all "x" in "X". If "N" is trivial, the action is said to be faithful (or effective). Let "x" and "y" be two elements in "X", and let "g" be a group element such that "y" = "g"⋅"x". Then the two stabilizer groups "G""x" and "G""y" are related by "G""y" = "gG""x""g"−1. Proof: by definition, "h" ∈ "G""y" if and only if "h"⋅("g"⋅"x") = "g"⋅"x". Applying "g"−1 to both sides of this equality yields ("g"−1"hg")⋅"x" = "x"; that is, "g"−1"hg" ∈ "G""x". An opposite inclusion follows similarly by taking "h" ∈ "G""x" and "x" = "g"−1⋅"y". The above says that the stabilizers of elements in the same orbit are conjugate to each other. Thus, to each orbit, we can associate a conjugacy class of a subgroup of "G" (that is, the set of all conjugates of the subgroup). Let ("H") denote the conjugacy class of "H". Then the orbit "O" has type ("H") if the stabilizer "G""x" of some/any "x" in "O" belongs to ("H"). A maximal orbit type is often called a principal orbit type. &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Orbit-stabilizer theorem and Burnside's lemma. Orbits and stabilizers are closely related. For a fixed "x" in "X", consider the map "f" : "G" → "X" given by "g" ↦ "g"⋅"x". By definition the image "f"("G") of this map is the orbit "G"⋅"x". The condition for two elements to have the same image is formula_8 In other words, "f"("g") = "f"("h") "if and only if" "g" and "h" lie in the same coset for the stabilizer subgroup "G""x". Thus, the fiber "f"−1({"y"}) of "f" over any "y" in "G"⋅"x" is contained in such a coset, and every such coset also occurs as a fiber. Therefore "f" induces a bijection between the set "G" / "G""x" of cosets for the stabilizer subgroup and the orbit "G"⋅"x", which sends "gG""x" ↦ "g"⋅"x". This result is known as the "orbit-stabilizer theorem". 
If "G" is finite then the orbit-stabilizer theorem, together with Lagrange's theorem, gives formula_9 in other words the length of the orbit of "x" times the order of its stabilizer is the order of the group. In particular that implies that the orbit length is a divisor of the group order. Example: Let "G" be a group of prime order "p" acting on a set "X" with "k" elements. Since each orbit has either 1 or "p" elements, there are at most "k" mod "p" orbits of length 1 which are "G"-invariant elements. This result is especially useful since it can be employed for counting arguments (typically in situations where "X" is finite as well). Example: We can use the orbit-stabilizer theorem to count the automorphisms of a graph. Consider the cubical graph as pictured, and let "G" denote its automorphism group. Then "G" acts on the set of vertices {1, 2, ..., 8}, and this action is transitive as can be seen by composing rotations about the center of the cube. Thus, by the orbit-stabilizer theorem, |"G"| = |"G" ⋅ 1| |"G"1| = 8 |"G"1|. Applying the theorem now to the stabilizer "G"1, we can obtain |"G"1| = |("G"1) ⋅ 2| |("G"1)2|. Any element of "G" that fixes 1 must send 2 to either 2, 4, or 5. As an example of such automorphisms consider the rotation around the diagonal axis through 1 and 7 by 2"π"/3, which permutes 2, 4, 5 and 3, 6, 8, and fixes 1 and 7. Thus, |("G"1) ⋅ 2| = 3. Applying the theorem a third time gives |("G"1)2| = |(("G"1)2) ⋅ 3| |(("G"1)2)3|. Any element of "G" that fixes 1 and 2 must send 3 to either 3 or 6. Reflecting the cube at the plane through 1, 2, 7 and 8 is such an automorphism sending 3 to 6, thus |(("G"1)2) ⋅ 3| = 2. One also sees that (("G"1)2)3 consists only of the identity automorphism, as any element of "G" fixing 1, 2 and 3 must also fix all other vertices, since they are determined by their adjacency to 1, 2 and 3. Combining the preceding calculations, we can now obtain |G| = 8 ⋅ 3 ⋅ 2 ⋅ 1 = 48. A result closely related to the orbit-stabilizer theorem is Burnside's lemma: formula_10 where "X""g" is the set of points fixed by "g". This result is mainly of use when "G" and "X" are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element. Fixing a group "G", the set of formal differences of finite "G"-sets forms a ring called the Burnside ring of "G", where addition corresponds to disjoint union, and multiplication to Cartesian product. Group actions and groupoids. The notion of group action can be encoded by the "action groupoid" "G"′ = "G" ⋉ "X" associated to the group action. The stabilizers of the action are the vertex groups of the groupoid and the orbits of the action are its components. Morphisms and isomorphisms between "G"-sets. If "X" and "Y" are two "G"-sets, a "morphism" from "X" to "Y" is a function "f" : "X" → "Y" such that "f"("g"⋅"x") = "g"⋅"f"("x") for all "g" in "G" and all "x" in "X". Morphisms of "G"-sets are also called "equivariant maps" or "G"-"maps". The composition of two morphisms is again a morphism. If a morphism "f" is bijective, then its inverse is also a morphism. In this case "f" is called an "isomorphism", and the two "G"-sets "X" and "Y" are called "isomorphic"; for all practical purposes, isomorphic "G"-sets are indistinguishable. Some example isomorphisms: With this notion of morphism, the collection of all "G"-sets forms a category; this category is a Grothendieck topos (in fact, assuming a classical metalogic, this topos will even be Boolean). 
Variants and generalizations. We can also consider actions of monoids on sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however. See semigroup action. Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an object "X" of some category, and then define an action on "X" as a monoid homomorphism into the monoid of endomorphisms of "X". If "X" has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion. We can view a group "G" as a category with a single object in which every morphism is invertible. A (left) group action is then nothing but a (covariant) functor from "G" to the category of sets, and a group representation is a functor from "G" to the category of vector spaces. A morphism between "G"-sets is then a natural transformation between the group action functors. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category. In addition to continuous actions of topological groups on topological spaces, one also often considers smooth actions of Lie groups on smooth manifolds, regular actions of algebraic groups on algebraic varieties, and actions of group schemes on schemes. All of these are examples of group objects acting on objects of their respective category. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha\\colon G \\times X \\to X," }, { "math_id": 1, "text": "\\alpha_e(x) = x" }, { "math_id": 2, "text": "\\alpha_g(\\alpha_h(x)) = (\\alpha_g \\circ \\alpha_h)(x) = \\alpha_{gh}(x)" }, { "math_id": 3, "text": "e{\\cdot}x = x" }, { "math_id": 4, "text": "g{\\cdot}(h{\\cdot}x) = (gh){\\cdot}x" }, { "math_id": 5, "text": "\\alpha\\colon X \\times G \\to X," }, { "math_id": 6, "text": "G{\\cdot}x = \\{ g{\\cdot}x : g \\in G \\}." }, { "math_id": 7, "text": "G_x = \\{g \\in G : g{\\cdot}x = x\\}." }, { "math_id": 8, "text": "f(g)=f(h) \\iff g{\\cdot}x = h{\\cdot}x \\iff g^{-1}h{\\cdot}x = x \\iff g^{-1}h \\in G_x \\iff h \\in gG_x." }, { "math_id": 9, "text": "|G \\cdot x| = [G\\,:\\,G_x] = |G| / |G_x|," }, { "math_id": 10, "text": "|X/G|=\\frac{1}{|G|}\\sum_{g\\in G} |X^g|," } ]
https://en.wikipedia.org/wiki?curid=12781
12783357
Yamabe invariant
In mathematics, in the field of differential geometry, the Yamabe invariant, also referred to as the sigma constant, is a real number invariant associated to a smooth manifold that is preserved under diffeomorphisms. It was first written down independently by O. Kobayashi and R. Schoen and takes its name from H. Yamabe. It was used by Vincent Moncrief and Arthur Fischer to study the reduced Hamiltonian for Einstein's equations. Definition. Let formula_0 be a compact smooth manifold (without boundary) of dimension formula_1. The normalized Einstein–Hilbert functional formula_2 assigns to each Riemannian metric formula_3 on formula_0 a real number as follows: formula_4 where formula_5 is the scalar curvature of formula_3 and formula_6 is the volume density associated to the metric formula_3. The exponent in the denominator is chosen so that the functional is scale-invariant: for every positive real constant formula_7, it satisfies formula_8. We may think of formula_9 as measuring the average scalar curvature of formula_3 over formula_0. It was conjectured by Yamabe that every conformal class of metrics contains a metric of constant scalar curvature (the so-called Yamabe problem); it was proven by Yamabe, Trudinger, Aubin, and Schoen that a minimum value of formula_9 is attained in each conformal class of metrics, and in particular this minimum is achieved by a metric of constant scalar curvature. We define formula_10 where the infimum is taken over the smooth real-valued functions formula_11 on formula_0. This infimum is finite (not formula_12): Hölder's inequality implies formula_13. The number formula_14 is sometimes called the conformal Yamabe energy of formula_3 (and is constant on conformal classes). A comparison argument due to Aubin shows that for any metric formula_3, formula_14 is bounded above by formula_15, where formula_16 is the standard metric on the formula_17-sphere formula_18. It follows that if we define formula_19 where the supremum is taken over all metrics on formula_0, then formula_20 (and is in particular finite). The real number formula_21 is called the Yamabe invariant of formula_0. The Yamabe invariant in two dimensions. In the case that formula_22 (so that "M" is a closed surface), the Einstein–Hilbert functional is given by formula_23 where formula_24 is the Gauss curvature of "g". However, by the Gauss–Bonnet theorem, the integral of the Gauss curvature is given by formula_25, where formula_26 is the Euler characteristic of "M". In particular, this number does not depend on the choice of metric. Therefore, for surfaces, we conclude that formula_27 For example, the 2-sphere has Yamabe invariant equal to formula_28, and the 2-torus has Yamabe invariant equal to zero. Examples. In the late 1990s, the Yamabe invariant was computed for large classes of 4-manifolds by Claude LeBrun and his collaborators. In particular, it was shown that most compact complex surfaces have negative, exactly computable Yamabe invariant, and that any Kähler–Einstein metric of negative scalar curvature realizes the Yamabe invariant in dimension 4. It was also shown that the Yamabe invariant of formula_29 is realized by the Fubini–Study metric, and so is less than that of the 4-sphere. Most of these arguments involve Seiberg–Witten theory, and so are specific to dimension 4. An important result due to Petean states that if formula_0 is simply connected and has dimension formula_30, then formula_31. 
In light of Perelman's solution of the Poincaré conjecture, it follows that a simply connected formula_17-manifold can have negative Yamabe invariant only if formula_32. On the other hand, as has already been indicated, simply connected formula_33-manifolds do in fact often have negative Yamabe invariants. Below is a table of some smooth manifolds of dimension three with known Yamabe invariant. In dimension 3, the number formula_15 is equal to formula_34 and is often denoted formula_35. By an argument due to Anderson, Perelman's results on the Ricci flow imply that the constant-curvature metric on any hyperbolic 3-manifold realizes the Yamabe invariant. This provides us with infinitely many examples of 3-manifolds for which the invariant is both negative and exactly computable. Topological significance. The sign of the Yamabe invariant of formula_0 holds important topological information. For example, formula_21 is positive if and only if formula_0 admits a metric of positive scalar curvature. The significance of this fact is that much is known about the topology of manifolds with metrics of positive scalar curvature. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
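For the two-dimensional case discussed above, the invariant reduces to simple arithmetic; the following toy snippet tabulates formula_27 for closed orientable surfaces, using the standard fact that the Euler characteristic of a genus-"g" surface is 2 − 2"g".

```python
# Toy calculation of the two-dimensional Yamabe invariant sigma = 4*pi*chi
# for closed orientable surfaces of genus g (chi = 2 - 2g).
import math

for genus in range(4):
    chi = 2 - 2 * genus
    sigma = 4 * math.pi * chi
    print(f"genus {genus}: chi = {chi:+d}, Yamabe invariant = {sigma:+.3f}")
# genus 0 (sphere): +8*pi, genus 1 (torus): 0, genus >= 2: negative
```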
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "n\\geq 2" }, { "math_id": 2, "text": "\\mathcal{E}" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "\\mathcal{E}(g) = \\frac{\\int_M R_g \\, dV_g}{\\left(\\int_M \\, dV_g\\right)^{\\frac{n-2}{n}}}," }, { "math_id": 5, "text": "R_g" }, { "math_id": 6, "text": "dV_g" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "\\mathcal{E}(cg) = \\mathcal{E}(g)" }, { "math_id": 9, "text": "\\mathcal{E}(g)" }, { "math_id": 10, "text": "Y(g) = \\inf_f \\mathcal{E}(e^{2f} g)," }, { "math_id": 11, "text": "f" }, { "math_id": 12, "text": "-\\infty" }, { "math_id": 13, "text": "Y(g) \\geq -\\left(\\textstyle\\int_M |R_g|^{n/2} \\,dV_g\\right)^{2/n}" }, { "math_id": 14, "text": "Y(g)" }, { "math_id": 15, "text": "\\mathcal{E}(g_0)" }, { "math_id": 16, "text": "g_0" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "S^n" }, { "math_id": 19, "text": "\\sigma(M) = \\sup_g Y(g)," }, { "math_id": 20, "text": "\\sigma(M) \\leq \\mathcal{E}(g_0)" }, { "math_id": 21, "text": "\\sigma(M)" }, { "math_id": 22, "text": "n=2" }, { "math_id": 23, "text": "\\mathcal{E}(g) = \\int_M R_g \\, dV_g = \\int_M 2K_g \\, dV_g, " }, { "math_id": 24, "text": "K_g" }, { "math_id": 25, "text": "2\\pi \\chi(M)" }, { "math_id": 26, "text": "\\chi(M)" }, { "math_id": 27, "text": "\\sigma(M) = 4\\pi \\chi(M)." }, { "math_id": 28, "text": "8\\pi" }, { "math_id": 29, "text": "CP^2" }, { "math_id": 30, "text": "n \\geq 5" }, { "math_id": 31, "text": "\\sigma (M) \\geq 0" }, { "math_id": 32, "text": "n=4" }, { "math_id": 33, "text": "4" }, { "math_id": 34, "text": "6(2\\pi^2)^{2/3}" }, { "math_id": 35, "text": "\\sigma_1" }, { "math_id": 36, "text": "\\mathbb{RP}^3" }, { "math_id": 37, "text": "Spin^c" } ]
https://en.wikipedia.org/wiki?curid=12783357
1278374
Euler brick
Cuboid whose edges and face diagonals have integer lengths In mathematics, an Euler brick, named after Leonhard Euler, is a rectangular cuboid whose edges and face diagonals all have integer lengths. A primitive Euler brick is an Euler brick whose edge lengths are relatively prime. A perfect Euler brick is one whose space diagonal is also an integer, but such a brick has not yet been found. Definition. The definition of an Euler brick in geometric terms is equivalent to a solution to the following system of Diophantine equations: formula_0 where "a", "b", "c" are the edges and "d", "e", "f" are the diagonals. Examples. The smallest Euler brick, discovered by Paul Halcke in 1719, has edges ("a", "b", "c") = (44, 117, 240) and face diagonals ("d", "e", "f" ) = (125, 244, 267). Some other small primitive solutions, given as edges ("a", "b", "c") — face diagonals ("d", "e", "f"), are below: Generating formula. Euler found at least two parametric solutions to the problem, but neither gives all solutions. An infinitude of Euler bricks can be generated with Saunderson's parametric formula. Let ("u", "v", "w") be a Pythagorean triple (that is, "u"2 + "v"2 = "w"2.) Then the edges formula_1 give face diagonals formula_2 There are many Euler bricks which are not parametrized as above, for instance the Euler brick with edges ("a", "b", "c") = (240, 252, 275) and face diagonals ("d", "e", "f" ) = (348, 365, 373). Perfect cuboid. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Does a perfect cuboid exist? A perfect cuboid (also called a perfect Euler brick or perfect box) is an Euler brick whose space diagonal also has integer length. In other words, the following equation is added to the system of Diophantine equations defining an Euler brick: formula_3 where "g" is the space diagonal. As of May 2023, no example of a perfect cuboid had been found and no one has proven that none exist. Exhaustive computer searches show that, if a perfect cuboid exists, Some facts are known about properties that must be satisfied by a "primitive" perfect cuboid, if one exists, based on modular arithmetic: In addition: If a perfect cuboid exists and formula_4 are its edges, formula_5 — the corresponding face diagonals and the space diagonal formula_6, then Cuboid conjectures. Three cuboid conjectures are three mathematical propositions claiming irreducibility of three univariate polynomials with integer coefficients depending on several integer parameters. The conjectures are related to the perfect cuboid problem. Though they are not equivalent to the perfect cuboid problem, if all three of these conjectures are valid, then no perfect cuboids exist. They are neither proved nor disproved. Cuboid conjecture 1. "For any two positive coprime integer numbers formula_12 the eighth degree polynomial" "is irreducible over the ring of integers formula_13". Cuboid conjecture 2. "For any two positive coprime integer numbers formula_14 the tenth-degree polynomial" "is irreducible over the ring of integers formula_13". Cuboid conjecture 3. "For any three positive coprime integer numbers formula_15, formula_16, formula_17 such that none of the conditions" "are fulfilled, the twelfth-degree polynomial" "is irreducible over the ring of integers formula_13". Almost-perfect cuboids. An almost-perfect cuboid has 6 out of the 7 lengths as rational. Such cuboids can be sorted into three types, called "body", "edge", and "face" cuboids.
In the case of the body cuboid, the body (space) diagonal "g" is irrational. For the edge cuboid, one of the edges "a", "b", "c" is irrational. The face cuboid has one of the face diagonals "d", "e", "f" irrational. The body cuboid is commonly referred to as the "Euler cuboid" in honor of Leonhard Euler, who discussed this type of cuboid. He was also aware of face cuboids, and provided the (104, 153, 672) example. The three integer cuboid edge lengths and three integer diagonal lengths of a face cuboid can also be interpreted as the edge lengths of a Heronian tetrahedron that is also a Schläfli orthoscheme. There are infinitely many face cuboids, and infinitely many Heronian orthoschemes. The smallest solutions for each type of almost-perfect cuboids, given as edges, face diagonals and the space diagonal ("a", "b", "c", "d", "e", "f", "g"), are as follows: , there are 167,043 found cuboids with the smallest integer edge less than 200,000,000,027: 61,042 are Euler (body) cuboids, 16,612 are edge cuboids with a complex number edge length, 32,286 were edge cuboids, and 57,103 were face cuboids. , an exhaustive search counted all edge and face cuboids with the smallest integer space diagonal less than 1,125,899,906,842,624: 194,652 were edge cuboids, 350,778 were face cuboids. Perfect parallelepiped. A perfect parallelepiped is a parallelepiped with integer-length edges, face diagonals, and body diagonals, but not necessarily with all right angles; a perfect cuboid is a special case of a perfect parallelepiped. In 2009, dozens of perfect parallelepipeds were shown to exist, answering an open question of Richard Guy. Some of these perfect parallelepipeds have two rectangular faces. The smallest perfect parallelepiped has edges 271, 106, and 103; short face diagonals 101, 266, and 255; long face diagonals 183, 312, and 323; and body diagonals 374, 300, 278, and 272. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
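To illustrate Saunderson's parametric formula described above, the following short C program (a sketch added for illustration; it is not taken from any of the article's sources) builds an Euler brick from a Pythagorean triple and verifies the three face-diagonal equations. With the triple (u, v, w) = (3, 4, 5) it reproduces Halcke's brick: edges (117, 44, 240) with face diagonals (125, 267, 244).

#include <stdio.h>
#include <stdlib.h>

/* Build an Euler brick from a Pythagorean triple (u, v, w), u^2 + v^2 = w^2,
   via Saunderson's parametrization, and check the face-diagonal equations. */
int main(void)
{
    long long u = 3, v = 4, w = 5;                /* example triple; any one works */

    long long a = u * llabs(4 * v * v - w * w);   /* a = u|4v^2 - w^2| */
    long long b = v * llabs(4 * u * u - w * w);   /* b = v|4u^2 - w^2| */
    long long c = 4 * u * v * w;                  /* c = 4uvw          */

    long long d = w * w * w;                      /* diagonal of the (a, b) face */
    long long e = u * (4 * v * v + w * w);        /* diagonal of the (a, c) face */
    long long f = v * (4 * u * u + w * w);        /* diagonal of the (b, c) face */

    printf("edges (%lld, %lld, %lld), face diagonals (%lld, %lld, %lld)\n",
           a, b, c, d, e, f);
    printf("a^2+b^2=d^2: %d   a^2+c^2=e^2: %d   b^2+c^2=f^2: %d\n",
           a * a + b * b == d * d, a * a + c * c == e * e, b * b + c * c == f * f);
    return 0;
}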
[ { "math_id": 0, "text": "\\begin{cases} a^2 + b^2 = d^2\\\\ a^2 + c^2 = e^2\\\\ b^2 + c^2 = f^2\\end{cases}" }, { "math_id": 1, "text": " a=u|4v^2-w^2| ,\\quad b=v|4u^2-w^2|, \\quad c=4uvw " }, { "math_id": 2, "text": "d=w^3, \\quad e=u(4v^2+w^2), \\quad f=v(4u^2+w^2)." }, { "math_id": 3, "text": "a^2 + b^2 + c^2 = g^2," }, { "math_id": 4, "text": "a, b, c" }, { "math_id": 5, "text": "d, e, f" }, { "math_id": 6, "text": "g" }, { "math_id": 7, "text": "(d^2, e^2, f^2)" }, { "math_id": 8, "text": "abcg" }, { "math_id": 9, "text": "(af, be, cd)" }, { "math_id": 10, "text": "(bf, ae, gd), (ad, cf, ge), (ce, bd, gf)" }, { "math_id": 11, "text": "\\frac{abcg}{2}" }, { "math_id": 12, "text": "a \\neq u" }, { "math_id": 13, "text": "\\mathbb Z" }, { "math_id": 14, "text": "p \\neq q" }, { "math_id": 15, "text": "a" }, { "math_id": 16, "text": "b" }, { "math_id": 17, "text": "u" } ]
https://en.wikipedia.org/wiki?curid=1278374
1278389
Related rates
Problems that make use of the relations to rates of change In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule, since most problems involve several variables. Fundamentally, if a function formula_0 is defined such that formula_1, then the derivative of the function formula_0 can be taken with respect to another variable. We assume formula_2 is a function of formula_3, i.e. formula_4. Then formula_5, so formula_6 Written in Leibniz notation, this is: formula_7 Thus, if it is known how formula_2 changes with respect to formula_3, then we can determine how formula_0 changes with respect to formula_3 and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc. For example, if formula_8 then formula_9 Procedure. The most common way to approach related rates problems is the following: Errors in this procedure are often caused by plugging in the known values for the variables "before" (rather than after) finding the derivative with respect to time. Doing so will yield an incorrect result, since if those values are substituted for the variables before differentiation, those variables will become constants; and when the equation is differentiated, zeroes appear in places of all variables for which the values were plugged in. Example. A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall? The distance between the base of the ladder and the wall, "x", and the height of the ladder on the wall, "y", represent the sides of a right triangle with the ladder as the hypotenuse, "h". The objective is to find "dy"/"dt", the rate of change of "y" with respect to time, "t", when "h", "x" and "dx"/"dt", the rate of change of "x", are known. Step 1: The known quantities and the sought rate are formula_10, formula_11, formula_12, formula_13, formula_14. Step 2: From the Pythagorean theorem, the equation formula_15 describes the relationship between "x", "y" and "h", for a right triangle. Differentiating both sides of this equation with respect to time, "t", yields formula_16 Step 3: Solving for the wanted rate of change, "dy"/"dt", gives us formula_17 formula_18 formula_19 formula_20 Step 4 & 5: Using the variables from step 1 gives us: formula_20 formula_21 Solving for "y" using the Pythagorean theorem gives: formula_22 formula_23 formula_24 Plugging "y" = 8 into the equation: formula_25 It is generally assumed that negative values represent the downward direction. With this convention, the top of the ladder is sliding down the wall at a rate of 9/4 meters per second. Physics examples. Because one physical quantity often depends on another, which, in turn depends on others, such as time, related-rates methods have broad applications in physics. This section presents examples of related rates in kinematics and electromagnetic induction. Relative kinematics of two vehicles.
For example, one can consider the kinematics problem where one vehicle is heading West toward an intersection at 80 miles per hour while another is heading North away from the intersection at 60 miles per hour. One can ask whether the vehicles are getting closer or further apart and at what rate at the moment when the North bound vehicle is 3 miles North of the intersection and the West bound vehicle is 4 miles East of the intersection. Big idea: use chain rule to compute rate of change of distance between two vehicles. Plan: Choose coordinate system: Let the "y"-axis point North and the "x"-axis point East. Identify variables: Define "y"("t") to be the distance of the vehicle heading North from the origin and "x"("t") to be the distance of the vehicle heading West from the origin. Express "c" in terms of "x" and "y" via the Pythagorean theorem: formula_26 Express "dc"/"dt" using chain rule in terms of "dx"/"dt" and "dy/dt:" Substitute in "x" = 4 mi, "y" = 3 mi, "dx"/"dt" = −80 mi/hr, "dy"/"dt" = 60 mi/hr and simplify formula_27 Consequently, the two vehicles are getting closer together at a rate of 28 mi/hr. Electromagnetic induction of conducting loop spinning in magnetic field. The magnetic flux through a loop of area "A" whose normal is at an angle "θ" to a magnetic field of strength "B" is formula_28 Faraday's law of electromagnetic induction states that the induced electromotive force formula_29 is the negative rate of change of magnetic flux formula_30 through a conducting loop. formula_31 If the loop area "A" and magnetic field "B" are held constant, but the loop is rotated so that the angle "θ" is a known function of time, the rate of change of "θ" can be related to the rate of change of formula_30 (and therefore the electromotive force) by taking the time derivative of the flux relation formula_32 If for example, the loop is rotating at a constant angular velocity "ω", so that "θ" = "ωt", then formula_33 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
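The two-vehicle computation above is easy to check numerically. The following C sketch (an illustration added here, with the article's numbers hard-coded) evaluates the chain-rule expression dc/dt = (x*dx/dt + y*dy/dt)/sqrt(x^2 + y^2), obtained by differentiating c = (x^2 + y^2)^(1/2) with respect to time, and reproduces the closing rate of −28 mi/hr.

#include <stdio.h>
#include <math.h>

/* Rate of change of the distance c = sqrt(x^2 + y^2) between the two vehicles,
   obtained by differentiating with the chain rule:
   dc/dt = (x*dx/dt + y*dy/dt) / sqrt(x^2 + y^2). */
int main(void)
{
    double x = 4.0, y = 3.0;        /* miles East and North of the intersection */
    double dxdt = -80.0;            /* westbound vehicle: x is decreasing       */
    double dydt = 60.0;             /* northbound vehicle: y is increasing      */

    double dcdt = (x * dxdt + y * dydt) / sqrt(x * x + y * y);
    printf("dc/dt = %.1f mi/hr\n", dcdt);   /* prints -28.0: the gap is closing */
    return 0;
}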
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "F = f(x)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "x=g(t)" }, { "math_id": 5, "text": "F=f(g(t))" }, { "math_id": 6, "text": "F'(t) =f'(g(t)) \\cdot g'(t)" }, { "math_id": 7, "text": "\\frac{dF}{dt} = \\frac{df}{dx} \\cdot \\frac{dx}{dt}." }, { "math_id": 8, "text": "F(x)= G(y)+ H(z)" }, { "math_id": 9, "text": "\\frac{dF}{dx}\\cdot \\frac{dx}{dt} =\\frac{dG}{dy} \\cdot \\frac{dy}{dt}+ \\frac{dH}{dz} \\cdot \\frac{dz}{dt}." }, { "math_id": 10, "text": "x=6" }, { "math_id": 11, "text": "h=10" }, { "math_id": 12, "text": "\\frac{dx}{dt}=3" }, { "math_id": 13, "text": "\\frac{dh}{dt}=0" }, { "math_id": 14, "text": "\\frac{dy}{dt}=\\text{?}" }, { "math_id": 15, "text": "x^2+y^2=h^2," }, { "math_id": 16, "text": "\\frac{d}{dt}\\left(x^2+y^2\\right)=\\frac{d}{dt}\\left(h^2\\right)" }, { "math_id": 17, "text": "\\frac{d}{dt}\\left(x^2\\right)+\\frac{d}{dt}\\left(y^2\\right)=\\frac{d}{dt}\\left(h^2\\right)" }, { "math_id": 18, "text": "(2x)\\frac{dx}{dt}+(2y)\\frac{dy}{dt}=(2h)\\frac{dh}{dt}" }, { "math_id": 19, "text": "x\\frac{dx}{dt}+y\\frac{dy}{dt}=h\\frac{dh}{dt}" }, { "math_id": 20, "text": "\\frac{dy}{dt}=\\frac{h\\frac{dh}{dt}-x\\frac{dx}{dt}}{y}." }, { "math_id": 21, "text": "\\frac{dy}{dt}=\\frac{10\\times0-6\\times3}{y}=-\\frac{18}{y}." }, { "math_id": 22, "text": "x^2+y^2=h^2" }, { "math_id": 23, "text": "6^2+y^2=10^2" }, { "math_id": 24, "text": "y=8" }, { "math_id": 25, "text": "-\\frac{18}{y}=-\\frac{18}{8}=-\\frac{9}{4}" }, { "math_id": 26, "text": "c = \\left(x^2 + y^2\\right)^{1/2}" }, { "math_id": 27, "text": "\n\\begin{align}\n\\frac{dc}{dt} & = \\frac{4 \\text{ mi} \\cdot (-80 \\text{ mi}/\\text{hr}) + 3 \\text{ mi} \\cdot (60) \\text{mi}/\\text{hr}}{\\sqrt{(4 \\text{ mi})^2 + (3 \\text{ mi})^2}}\\\\\n& = \\frac{-320 \\text{ mi}^2/\\text{hr} + 180 \\text{ mi}^2/\\text{hr}}{5\\text{ mi}}\\\\\n&= \\frac{-140 \\text{ mi}^2/\\text{hr}}{5\\text{ mi}}\\\\\n& = -28 \\text{ mi}/\\text{hr}\n\\end{align}\n" }, { "math_id": 28, "text": " \\Phi_B = B A \\cos(\\theta)," }, { "math_id": 29, "text": "\\mathcal{E}" }, { "math_id": 30, "text": "\\Phi_B" }, { "math_id": 31, "text": " \\mathcal{E} = -\\frac{d\\Phi_B}{dt}," }, { "math_id": 32, "text": "\\mathcal{E} = -\\frac{d\\Phi_B}{dt} = B A \\sin\\theta \\frac{d\\theta}{dt} " }, { "math_id": 33, "text": "\\mathcal{E}= \\omega B A \\sin\\omega t " } ]
https://en.wikipedia.org/wiki?curid=1278389
12784536
Buffon's noodle
Variation of Buffon's needle In geometric probability, the problem of Buffon's noodle is a variation on the well-known problem of Buffon's needle, named after Georges-Louis Leclerc, Comte de Buffon who lived in the 18th century. This approach to the problem was published by Joseph-Émile Barbier in 1860. Buffon's needle. Suppose there exist infinitely many equally spaced parallel, horizontal lines, and we were to randomly toss a needle whose length is less than or equal to the distance between adjacent lines. What is the probability that the needle will lie across a line upon landing? To solve this problem, let formula_0 be the length of the needle and formula_1 be the distance between two adjacent lines. Then, let formula_2 be the acute angle the needle makes with the horizontal, and let formula_3 be the distance from the center of the needle to the nearest line. The needle lies across the nearest line if and only if formula_4. We see this condition from the right triangle formed by the needle, the nearest line, and the line of length formula_3 when the needle lies across the nearest line. Now, we assume that the values of formula_5 are randomly determined when they land, where formula_6, since formula_7, and formula_8. The sample space for formula_5 is thus a rectangle of side lengths formula_9 and formula_10. The probability of the event that the needle lies across the nearest line is the fraction of the sample space that intersects with formula_11. Since formula_7, the area of this intersection is given by formula_12 Now, the area of the sample space is formula_13 Hence, the probability formula_14 of the event is formula_15 Bending the needle. The formula stays the same even when the needle is bent in any way (subject to the constraint that it must lie in a plane), making it a "noodle"—a rigid plane curve. We drop the assumption that the length of the noodle is no more than the distance between the parallel lines. The probability distribution of the number of crossings depends on the shape of the noodle, but the expected number of crossings does not; it depends only on the length "L" of the noodle and the distance "D" between the parallel lines (observe that a curved noodle may cross a single line multiple times). This fact may be proved as follows (see Klain and Rota). First suppose the noodle is piecewise linear, i.e. consists of "n" straight pieces. Let "X""i" be the number of times the "i"th piece crosses one of the parallel lines. These random variables are not independent, but the expectations are still additive due to the linearity of expectation: formula_16 Regarding a curved noodle as the limit of a sequence of piecewise linear noodles, we conclude that the expected number of crossings per toss is proportional to the length; it is some constant times the length "L". Then the problem is to find the constant. In case the noodle is a circle of diameter equal to the distance "D" between the parallel lines, then "L" = π"D" and the number of crossings is exactly 2, with probability 1. So when "L" = π"D" then the expected number of crossings is 2. Therefore, the expected number of crossings must be 2"L"/(π"D"). Barbier's theorem. Extending this argument slightly, if formula_17 is a convex compact subset of formula_18, then the expected number of lines intersecting formula_17 is equal to half the expected number of lines intersecting the perimeter of formula_17, which is formula_19. 
In particular, if the noodle is any closed curve of constant width D, then the number of crossings is also exactly 2. This means the perimeter has length formula_20, the same as that of a circle, proving Barbier's theorem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
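The needle-crossing probability derived above, 2ℓ/(πD), can be checked by simulation. The following C program is a minimal Monte Carlo sketch (added for illustration, not part of the article): it draws the centre distance x uniformly from (0, D/2) and the acute angle θ uniformly from (0, π/2), counts how often x ≤ (ℓ/2) sin θ, and compares the frequency with the exact value. The C library's rand() is a crude generator, but it is adequate for this purpose.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Monte Carlo estimate of the probability that a needle of length l <= D
   crosses a line, to be compared with the exact value 2*l / (pi*D). */
int main(void)
{
    const double PI = 3.14159265358979323846;
    const double D = 2.0, l = 1.0;            /* line spacing and needle length */
    const long trials = 1000000;
    long hits = 0;

    srand(12345);                             /* fixed seed for reproducibility */
    for (long i = 0; i < trials; i++) {
        double x     = (rand() / (double)RAND_MAX) * (D / 2.0);   /* centre-to-line distance */
        double theta = (rand() / (double)RAND_MAX) * (PI / 2.0);  /* acute angle with lines  */
        if (x <= (l / 2.0) * sin(theta))
            hits++;
    }
    printf("estimate %f, exact %f\n", (double)hits / trials, 2.0 * l / (PI * D));
    return 0;
}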
[ { "math_id": 0, "text": " \\ell " }, { "math_id": 1, "text": " D " }, { "math_id": 2, "text": " \\theta " }, { "math_id": 3, "text": " x " }, { "math_id": 4, "text": " x \\le \\frac{\\ell \\sin \\theta}{2} " }, { "math_id": 5, "text": " x, \\theta " }, { "math_id": 6, "text": " 0 < x < \\frac{D}{2} " }, { "math_id": 7, "text": " 0 < \\ell < D " }, { "math_id": 8, "text": " 0 < \\theta < \\frac{\\pi}{2} " }, { "math_id": 9, "text": " \\frac{D}{2} " }, { "math_id": 10, "text": " \\frac{\\pi}{2} " }, { "math_id": 11, "text": " x \\le \\frac \\ell 2 \\sin \\theta " }, { "math_id": 12, "text": " \\text{Area (event)} = \\int^{\\pi/2}_0 \\frac\\ell2 \\sin \\theta \\, d \\theta = -\\frac\\ell2 \\cos \\frac\\pi2 + \\frac\\ell 2 \\cos 0 = \\frac\\ell 2." }, { "math_id": 13, "text": " \\text{Area (sample space)} = \\frac{D}{2} \\times \\frac{\\pi}{2} = \\frac{D \\pi}{4}. " }, { "math_id": 14, "text": " P " }, { "math_id": 15, "text": " P = \\frac{\\text{Area (event)}}{\\text{Area (sample space)}} = \\frac\\ell 2 \\cdot \\frac 4 {D \\pi} = \\frac{2\\ell}{\\pi D}. " }, { "math_id": 16, "text": " E(X_1+\\cdots+X_n) = E(X_1)+\\cdots+E(X_n). " }, { "math_id": 17, "text": "C" }, { "math_id": 18, "text": "\\R^2" }, { "math_id": 19, "text": "\\frac{|\\partial C|}{\\pi D}" }, { "math_id": 20, "text": "\\pi D" } ]
https://en.wikipedia.org/wiki?curid=12784536
12791220
Singular integral
In mathematics, singular integrals are central to harmonic analysis and are intimately connected with the study of partial differential equations. Broadly speaking, a singular integral is an integral operator formula_0 whose kernel function "K" : R"n"×R"n" → R is singular along the diagonal "x" = "y". Specifically, the singularity is such that |"K"("x", "y")| is of size |"x" − "y"|−"n" asymptotically as |"x" − "y"| → 0. Since such integrals may not in general be absolutely integrable, a rigorous definition must define them as the limit of the integral over |"y" − "x"| > ε as ε → 0, but in practice this is a technicality. Usually further assumptions are required to obtain results such as their boundedness on "L""p"(R"n"). The Hilbert transform. The archetypal singular integral operator is the Hilbert transform "H". It is given by convolution against the kernel "K"("x") = 1/(π"x") for "x" in R. More precisely, formula_1 The most straightforward higher dimension analogues of these are the Riesz transforms, which replace "K"("x") = 1/"x" with formula_2 where "i" = 1, ..., "n" and formula_3 is the "i"-th component of "x" in R"n". All of these operators are bounded on "L""p" and satisfy weak-type (1, 1) estimates. Singular integrals of convolution type. A singular integral of convolution type is an operator "T" defined by convolution with a kernel "K" that is locally integrable on R"n"\{0}. Suppose that the kernel satisfies: 1. formula_4 2. formula_5 Then it can be shown that "T" is bounded on "L""p"(R"n") and satisfies a weak-type (1, 1) estimate. Property 1. is needed to ensure that convolution with the tempered distribution p.v. "K" given by the principal value integral formula_6 is a well-defined Fourier multiplier on "L"2. Neither of the properties 1. or 2. is necessarily easy to verify, and a variety of sufficient conditions exist. Typically in applications, one also has a "cancellation" condition formula_7 which is quite easy to check. It is automatic, for instance, if "K" is an odd function. If, in addition, one assumes 2. and the following size condition formula_8 then it can be shown that 1. follows. The smoothness condition 2. is also often difficult to check in principle; in that case, the following sufficient condition on a kernel "K" can be used: formula_9 formula_10 Observe that these conditions are satisfied for the Hilbert and Riesz transforms, so this result is an extension of those results. Singular integrals of non-convolution type. These are even more general operators. However, since our assumptions are so weak, it is not necessarily the case that these operators are bounded on "L""p". Calderón–Zygmund kernels. A function "K" : R"n"×R"n" → R is said to be a "Calderón–Zygmund kernel" if it satisfies the following conditions for some constants "C" > 0 and "δ" > 0. Singular integrals of non-convolution type. "T" is said to be a "singular integral operator of non-convolution type" associated to the Calderón–Zygmund kernel "K" if formula_11 whenever "f" and "g" are smooth and have disjoint support. Such operators need not be bounded on "L""p". Calderón–Zygmund operators. A singular integral of non-convolution type "T" associated to a Calderón–Zygmund kernel "K" is called a "Calderón–Zygmund operator" when it is bounded on "L"2, that is, there is a "C" > 0 such that formula_12 for all smooth compactly supported "f". It can be proved that such operators are, in fact, also bounded on all "L""p" with 1 < "p" < ∞. The "T"("b") theorem.
The "T"("b") theorem provides sufficient conditions for a singular integral operator to be a Calderón–Zygmund operator, that is for a singular integral operator associated to a Calderón–Zygmund kernel to be bounded on "L"2. In order to state the result we must first define some terms. A "normalised bump" is a smooth function "φ" on R"n" supported in a ball of radius 1 and centred at the origin such that |"∂""α" "φ"("x")| ≤ 1, for all multi-indices |"α"| ≤ "n" + 2. Denote by "τ""x"("φ")("y") = "φ"("y" − "x") and "φ""r"("x") = "r"−"n""φ"("x"/"r") for all "x" in R"n" and "r" &gt; 0. An operator is said to be "weakly bounded" if there is a constant "C" such that formula_13 for all normalised bumps "φ" and "ψ". A function is said to be "accretive" if there is a constant "c" &gt; 0 such that Re("b")("x") ≥ "c" for all "x" in R. Denote by "M""b" the operator given by multiplication by a function "b". The "T"("b") theorem states that a singular integral operator "T" associated to a Calderón–Zygmund kernel is bounded on "L"2 if it satisfies all of the following three conditions for some bounded accretive functions "b"1 and "b"2:
[ { "math_id": 0, "text": "T(f)(x) = \\int K(x,y)f(y) \\, dy, " }, { "math_id": 1, "text": "H(f)(x) = \\frac{1}{\\pi}\\lim_{\\varepsilon \\to 0} \\int_{|x-y|>\\varepsilon} \\frac{1}{x-y}f(y) \\, dy. " }, { "math_id": 2, "text": "K_i(x) = \\frac{x_i}{|x|^{n+1}}" }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "\\hat{K}\\in L^\\infty(\\mathbf{R}^n)" }, { "math_id": 5, "text": "\\sup_{y \\neq 0} \\int_{|x|>2|y|} |K(x-y) - K(x)| \\, dx \\leq C." }, { "math_id": 6, "text": "\\operatorname{p.v.}\\,\\, K[\\phi] = \\lim_{\\epsilon\\to 0^+} \\int_{|x|>\\epsilon}\\phi(x)K(x)\\,dx" }, { "math_id": 7, "text": "\\int_{R_1<|x|<R_2} K(x) \\, dx = 0 ,\\ \\forall R_1,R_2 > 0" }, { "math_id": 8, "text": "\\sup_{R>0} \\int_{R<|x|<2R} |K(x)| \\, dx \\leq C," }, { "math_id": 9, "text": "K\\in C^1(\\mathbf{R}^n\\setminus\\{0\\})" }, { "math_id": 10, "text": "|\\nabla K(x)|\\le\\frac{C}{|x|^{n+1}}" }, { "math_id": 11, "text": "\\int g(x) T(f)(x) \\, dx = \\iint g(x) K(x,y) f(y) \\, dy \\, dx," }, { "math_id": 12, "text": "\\|T(f)\\|_{L^2} \\leq C\\|f\\|_{L^2}," }, { "math_id": 13, "text": " \\left|\\int T\\bigl(\\tau^x(\\varphi_r)\\bigr)(y) \\tau^x(\\psi_r)(y) \\, dy\\right| \\leq Cr^{-n}" } ]
https://en.wikipedia.org/wiki?curid=12791220
12792326
Word (group theory)
In group theory, a word is any written product of group elements and their inverses. For example, if "x", "y" and "z" are elements of a group "G", then "xy", "z"−1"xzz" and "y"−1"zxx"−1"yz"−1 are words in the set {"x", "y", "z"}. Two different words may evaluate to the same value in "G", or even in every group. Words play an important role in the theory of free groups and presentations, and are central objects of study in combinatorial group theory. Definitions. Let "G" be a group, and let "S" be a subset of "G". A word in "S" is any expression of the form formula_0 where "s"1...,"sn" are elements of "S", called generators, and each "εi" is ±1. The number "n" is known as the length of the word. Each word in "S" represents an element of "G", namely the product of the expression. By convention, the unique identity element can be represented by the empty word, which is the unique word of length zero. Notation. When writing words, it is common to use exponential notation as an abbreviation. For example, the word formula_1 could be written as formula_2 This latter expression is not a word itself—it is simply a shorter notation for the original. When dealing with long words, it can be helpful to use an overline to denote inverses of elements of "S". Using overline notation, the above word would be written as follows: formula_3 Reduced words. Any word in which a generator appears next to its own inverse ("xx"−1 or "x"−1"x") can be simplified by omitting the redundant pair: formula_4 This operation is known as reduction, and it does not change the group element represented by the word. Reductions can be thought of as relations (defined below) that follow from the group axioms. A reduced word is a word that contains no redundant pairs. Any word can be simplified to a reduced word by performing a sequence of reductions: formula_5 The result does not depend on the order in which the reductions are performed. A word is cyclically reduced if and only if every cyclic permutation of the word is reduced. Operations on words. The product of two words is obtained by concatenation: formula_6 Even if the two words are reduced, the product may not be. The inverse of a word is obtained by inverting each generator, and reversing the order of the elements: formula_7 The product of a word with its inverse can be reduced to the empty word: formula_8 You can move a generator from the beginning to the end of a word by conjugation: formula_9 Generating set of a group. A subset "S" of a group "G" is called a generating set if every element of "G" can be represented by a word in "S". When "S" is not a generating set for "G", the set of elements represented by words in "S" is a subgroup of "G", known as the subgroup of "G" generated by "S" and usually denoted formula_10. It is the smallest subgroup of "G" that contains the elements of "S". Normal forms. A normal form for a group "G" with generating set "S" is a choice of one reduced word in "S" for each element of "G". For example: Relations and presentations. If "S" is a generating set for a group "G", a relation is a pair of words in "S" that represent the same element of "G". These are usually written as equations, e.g. formula_11 A set formula_12 of relations defines "G" if every relation in "G" follows logically from those in formula_12 using the axioms for a group. A presentation for "G" is a pair formula_13, where "S" is a generating set for "G" and formula_12 is a defining set of relations. 
For example, the Klein four-group can be defined by the presentation formula_14 Here 1 denotes the empty word, which represents the identity element. Free groups. If "S" is any set, the free group over "S" is the group with presentation formula_15. That is, the free group over "S" is the group generated by the elements of "S", with no extra relations. Every element of the free group can be written uniquely as a reduced word in "S". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
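Because every element of a free group is represented by a unique reduced word, reduction is easy to mechanize. The following C function is a small sketch (the string encoding is an assumption made here, not part of the article): a word is stored as a string in which a lowercase letter denotes a generator and the corresponding uppercase letter denotes its inverse, and a stack is used so that pairs which become adjacent only after earlier cancellations are also removed.

#include <stdio.h>
#include <ctype.h>

/* Reduce a word in a free group.  Encoding (an assumption of this sketch):
   a lowercase letter is a generator, the matching uppercase letter its inverse. */
static void reduce(const char *word, char *out)
{
    size_t top = 0;                                   /* out[0..top) is a stack */
    for (const char *p = word; *p; ++p) {
        if (top > 0 && out[top - 1] != *p &&
            tolower((unsigned char)out[top - 1]) == tolower((unsigned char)*p))
            --top;                                    /* cancel an adjacent x x^-1 pair */
        else
            out[top++] = *p;                          /* otherwise keep the letter */
    }
    out[top] = '\0';
}

int main(void)
{
    char out[64];
    reduce("xzYxXyZzZyz", out);   /* encodes x z y^-1 x x^-1 y z^-1 z z^-1 y z */
    printf("%s\n", out);          /* prints "xyz", as in the reduction example above */
    return 0;
}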
[ { "math_id": 0, "text": "s_1^{\\varepsilon_1} s_2^{\\varepsilon_2} \\cdots s_n^{\\varepsilon_n}" }, { "math_id": 1, "text": "x x y^{-1} z y z z z x^{-1} x^{-1} \\," }, { "math_id": 2, "text": "x^2 y^{-1} z y z^3 x^{-2}. \\," }, { "math_id": 3, "text": "x^2\\overline{y}zyz^3\\overline{x}^2. \\," }, { "math_id": 4, "text": "y^{-1}zxx^{-1}y\\;\\;\\longrightarrow\\;\\;y^{-1}zy." }, { "math_id": 5, "text": "xzy^{-1}xx^{-1}yz^{-1}zz^{-1}yz\\;\\;\\longrightarrow\\;\\;xyz." }, { "math_id": 6, "text": "\\left(xzyz^{-1}\\right)\\left(zy^{-1}x^{-1}y\\right) = xzyz^{-1}zy^{-1}x^{-1}y." }, { "math_id": 7, "text": "\\left(zy^{-1}x^{-1}y\\right)^{-1}=y^{-1}xyz^{-1}." }, { "math_id": 8, "text": "zy^{-1}x^{-1}y \\; y^{-1}xyz^{-1} = 1." }, { "math_id": 9, "text": "x^{-1}\\left(xy^{-1}z^{-1}yz\\right)x = y^{-1}z^{-1}yzx." }, { "math_id": 10, "text": "\\langle S\\rangle" }, { "math_id": 11, "text": "x^{-1} y x = y^2.\\," }, { "math_id": 12, "text": "\\mathcal{R}" }, { "math_id": 13, "text": "\\langle S \\mid \\mathcal{R}\\rangle" }, { "math_id": 14, "text": "\\langle i,j \\mid i^2 = 1,\\,j^2 = 1,\\,ij=ji\\rangle." }, { "math_id": 15, "text": "\\langle S\\mid\\;\\rangle" } ]
https://en.wikipedia.org/wiki?curid=12792326
12792547
Proof that π is irrational
Mathematical proof In the 1760s, Johann Heinrich Lambert was the first to prove that the number π is irrational, meaning it cannot be expressed as a fraction formula_0, where formula_1 and formula_2 are both integers. In the 19th century, Charles Hermite found a proof that requires no prerequisite knowledge beyond basic calculus. Three simplifications of Hermite's proof are due to Mary Cartwright, Ivan Niven, and Nicolas Bourbaki. Another proof, which is a simplification of Lambert's proof, is due to Miklós Laczkovich. Many of these are proofs by contradiction. In 1882, Ferdinand von Lindemann proved that formula_3 is not just irrational, but transcendental as well. Lambert's proof. In 1761, Johann Heinrich Lambert proved that formula_3 is irrational by first showing that this continued fraction expansion holds: formula_4 Then Lambert proved that if formula_5 is non-zero and rational, then this expression must be irrational. Since formula_6, it follows that formula_7 is irrational, and thus formula_3 is also irrational. A simplification of Lambert's proof is given below. Hermite's proof. Written in 1873, this proof uses the characterization of formula_3 as the smallest positive number whose half is a zero of the cosine function and it actually proves that formula_8 is irrational. As in many proofs of irrationality, it is a proof by contradiction. Consider the sequences of real functions formula_9 and formula_10 for formula_11 defined by: formula_12 Using induction we can prove that formula_13 and therefore we have: formula_14 So formula_15 which is equivalent to formula_16 Using the definition of the sequence and employing induction we can show that formula_17 where formula_18 and formula_19 are polynomial functions with integer coefficients and the degree of formula_18 is smaller than or equal to formula_20 In particular, formula_21 Hermite also gave a closed expression for the function formula_22 namely formula_23 He did not justify this assertion, but it can be proved easily. First of all, this assertion is equivalent to formula_24 Proceeding by induction, take formula_25 formula_26 and, for the inductive step, consider any natural number formula_27 If formula_28 then, using integration by parts and Leibniz's rule, one gets formula_29 If formula_30 with formula_31 and formula_32 in formula_33, then, since the coefficients of formula_18 are integers and its degree is smaller than or equal to formula_34 formula_35 is some integer formula_36 In other words, formula_37 But this number is clearly greater than formula_38 On the other hand, the limit of this quantity as formula_39 goes to infinity is zero, and so, if formula_39 is large enough, formula_40 Thereby, a contradiction is reached. Hermite did not present his proof as an end in itself but as an afterthought within his search for a proof of the transcendence of formula_41 He discussed the recurrence relations to motivate and to obtain a convenient integral representation. Once this integral representation is obtained, there are various ways to present a succinct and self-contained proof starting from the integral (as in Cartwright's, Bourbaki's or Niven's presentations), which Hermite could easily see (as he did in his proof of the transcendence of formula_42). Moreover, Hermite's proof is closer to Lambert's proof than it seems. In fact, formula_43 is the "residue" (or "remainder") of Lambert's continued fraction for formula_44 Cartwright's proof. 
Harold Jeffreys wrote that this proof was set as an example in an exam at Cambridge University in 1945 by Mary Cartwright, but that she had not traced its origin. It still remains on the 4th problem sheet today for the Analysis IA course at Cambridge University. Consider the integrals formula_45 where formula_39 is a non-negative integer. Two integrations by parts give the recurrence relation formula_46 If formula_47 then this becomes formula_48 Furthermore, formula_49 and formula_50 Hence for all formula_51 formula_52 where formula_53 and formula_54 are polynomials of degree formula_55 and with integer coefficients (depending on formula_39). Take formula_56 and suppose if possible that formula_57 where formula_1 and formula_2 are natural numbers (i.e., assume that formula_3 is rational). Then formula_58 The right side is an integer. But formula_59 since the interval formula_60 has length formula_61 and the function being integrated takes only values between formula_62 and formula_63 On the other hand, formula_64 Hence, for sufficiently large formula_39 formula_65 that is, we could find an integer between formula_62 and formula_63 That is the contradiction that follows from the assumption that formula_3 is rational. This proof is similar to Hermite's proof. Indeed, formula_66 However, it is clearly simpler. This is achieved by omitting the inductive definition of the functions formula_9 and taking as a starting point their expression as an integral. Niven's proof. This proof uses the characterization of formula_3 as the smallest positive zero of the sine function. Suppose that formula_3 is rational, i.e. formula_67 for some integers formula_1 and formula_2 which may be taken without loss of generality to both be positive. Given any positive integer formula_68 we define the polynomial function: formula_69 and, for each formula_70 let formula_71 Claim 1: formula_72 is an integer. Proof: Expanding formula_73 as a sum of monomials, the coefficient of formula_74 is a number of the form formula_75 where formula_76 is an integer, which is formula_62 if formula_77 Therefore, formula_78 is formula_62 when formula_79 and it is equal to formula_80 if formula_81; in each case, formula_78 is an integer and therefore formula_82 is an integer. On the other hand, formula_83 and so formula_84 for each non-negative integer formula_85 In particular, formula_86 Therefore, formula_87 is also an integer and so formula_88 is an integer (in fact, it is easy to see that formula_89). Since formula_82 and formula_88 are integers, so is their sum. Claim 2: formula_90 Proof: Since formula_91 is the zero polynomial, we have formula_92 The derivatives of the sine and cosine function are given by sin' = cos and cos' = −sin. Hence the product rule implies formula_93 By the fundamental theorem of calculus formula_94 Since formula_95 and formula_96 (here we use the above-mentioned characterization of formula_3 as a zero of the sine function), Claim 2 follows. Conclusion: Since formula_97 and formula_98 for formula_99 (because formula_3 is the "smallest" positive zero of the sine function), Claims 1 and 2 show that formula_72 is a "positive" integer. Since formula_100 and formula_101 for formula_102 we have, by the original definition of formula_103 formula_104 which is smaller than formula_105 for large formula_68 hence formula_106 for these formula_68 by Claim 2. 
This is impossible for the positive integer formula_107 This shows that the original assumption that formula_3 is rational leads to a contradiction, which concludes the proof. The above proof is a polished version, which is kept as simple as possible concerning the prerequisites, of an analysis of the formula formula_108 which is obtained by formula_109 integrations by parts. Claim 2 essentially establishes this formula, where the use of formula_110 hides the iterated integration by parts. The last integral vanishes because formula_111 is the zero polynomial. Claim 1 shows that the remaining sum is an integer. Niven's proof is closer to Cartwright's (and therefore Hermite's) proof than it appears at first sight. In fact, formula_112 Therefore, the substitution formula_113 turns this integral into formula_114 In particular, formula_115 Another connection between the proofs lies in the fact that Hermite already mentions that if formula_73 is a polynomial function and formula_116 then formula_117 from which it follows that formula_118 Bourbaki's proof. Bourbaki's proof is outlined as an exercise in his calculus treatise. For each natural number "b" and each non-negative integer formula_68 define formula_119 Since formula_120 is the integral of a function defined on formula_121 that takes the value formula_62 at formula_62 and formula_3 and which is greater than formula_62 otherwise, formula_122 Besides, for each natural number formula_123 formula_124 if formula_39 is large enough, because formula_125 and therefore formula_126 On the other hand, repeated integration by parts allows us to deduce that, if formula_1 and formula_2 are natural numbers such that formula_67 and formula_73 is the polynomial function from formula_127 into formula_128 defined by formula_129 then: formula_130 This last integral is formula_131 since formula_132 is the null function (because formula_73 is a polynomial function of degree formula_133). Since each function formula_134 (with formula_135) takes integer values at formula_62 and formula_3 and since the same thing happens with the sine and the cosine functions, this proves that formula_120 is an integer. Since it is also greater than formula_131 it must be a natural number. But it was also proved that formula_124 if formula_39 is large enough, thereby reaching a contradiction. This proof is quite close to Niven's proof, the main difference between them being the way of proving that the numbers formula_120 are integers. Laczkovich's proof. Miklós Laczkovich's proof is a simplification of Lambert's original proof. 
He considers the functions formula_136 These functions are clearly defined for any real number formula_137 Besides formula_138 formula_139 Claim 1: The following recurrence relation holds for any real number formula_5: formula_140 Proof: This can be proved by comparing the coefficients of the powers of formula_137 Claim 2: For each real number formula_141 formula_142 Proof: In fact, the sequence formula_143 is bounded (since it converges to formula_62) and if formula_144 is an upper bound and if formula_145 then formula_146 Claim 3: If formula_147 formula_148 is rational, and formula_149 then formula_150 Proof: Otherwise, there would be a number formula_151 and integers formula_1 and formula_2 such that formula_152 and formula_153 To see why, take formula_154 formula_155 and formula_156 if formula_157; otherwise, choose integers formula_1 and formula_2 such that formula_158 and define formula_159 In each case, formula_160 cannot be formula_131 because otherwise it would follow from claim 1 that each formula_161 (formula_162) would be formula_131 which would contradict claim 2. Now, take a natural number formula_163 such that all three numbers formula_164 formula_165 and formula_166 are integers and consider the sequence formula_167 Then formula_168 On the other hand, it follows from claim 1 that formula_169 which is a linear combination of formula_170 and formula_171 with integer coefficients. Therefore, each formula_171 is an integer multiple of formula_172 Besides, it follows from claim 2 that each formula_171 is greater than formula_62 (and therefore that formula_173) if formula_39 is large enough and that the sequence of all formula_171 converges to formula_38 But a sequence of numbers greater than or equal to formula_174 cannot converge to formula_38 Since formula_175 it follows from claim 3 that formula_176 is irrational and therefore that formula_3 is irrational. On the other hand, since formula_177 another consequence of Claim 3 is that, if formula_178 then formula_179 is irrational. Laczkovich's proof is really about the hypergeometric function. In fact, formula_180 and Gauss found a continued fraction expansion of the hypergeometric function using its functional equation. This allowed Laczkovich to find a new and simpler proof of the fact that the tangent function has the continued fraction expansion that Lambert had discovered. Laczkovich's result can also be expressed in . In fact, formula_181 (where formula_182 is the gamma function). So Laczkovich's result is equivalent to: If formula_147 formula_148 is rational, and formula_183 then formula_184 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
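The analytic half of Niven's argument, namely that the integral of f(x) sin(x) over [0, π] eventually drops below 1, can be watched numerically. The following C sketch (an illustration added here; a = 22 and b = 7 merely stand in for the hypothetical integers with π = a/b, which do not actually exist) evaluates the integral of x^n (a − bx)^n / n! · sin x over [0, π] by Simpson's rule for increasing n. The printed values fall rapidly toward 0, in agreement with the bound π(πa)^n/n!, whereas under the assumption π = a/b the same quantity would have to be a positive integer.

#include <stdio.h>
#include <math.h>

/* Integrand of Niven's proof: f(x)*sin(x) with f(x) = x^n (a - b x)^n / n!.
   Here a = 22, b = 7 only play the role of the hypothetical integers with
   pi = a/b; the point is to watch the integral shrink as n grows. */
static double integrand(double x, int n, double a, double b)
{
    double lognfact = lgamma(n + 1.0);               /* log(n!) avoids overflow */
    double base = x * (a - b * x);
    if (base <= 0.0) return 0.0;
    return exp(n * log(base) - lognfact) * sin(x);
}

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double a = 22.0, b = 7.0;
    const int steps = 20000;                          /* Simpson panels (even) */

    for (int n = 20; n <= 80; n += 10) {
        double h = PI / steps;
        double sum = integrand(0.0, n, a, b) + integrand(PI, n, a, b);
        for (int i = 1; i < steps; i++)
            sum += (i % 2 ? 4.0 : 2.0) * integrand(i * h, n, a, b);
        printf("n = %2d   integral = %.6e\n", n, sum * h / 3.0);
    }
    return 0;
}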
[ { "math_id": 0, "text": "a/b" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "b" }, { "math_id": 3, "text": "\\pi" }, { "math_id": 4, "text": "\\tan(x) = \\cfrac{x}{1 - \\cfrac{x^2}{3 - \\cfrac{x^2}{5 - \\cfrac{x^2}{7 - {}\\ddots}}}}." }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "\\tan\\tfrac\\pi4 =1" }, { "math_id": 7, "text": "\\tfrac\\pi4" }, { "math_id": 8, "text": "\\pi^2" }, { "math_id": 9, "text": "A_n" }, { "math_id": 10, "text": "U_n" }, { "math_id": 11, "text": "n \\in \\N_0" }, { "math_id": 12, "text": "\\begin{align}\nA_0(x) &= \\sin(x), && A_{n+1}(x) =\\int_0^xyA_n(y)\\,dy \\\\[4pt]\nU_0(x) &= \\frac{\\sin(x)}x, && U_{n+1}(x) =-\\frac{U_n'(x)}x\n\\end{align}" }, { "math_id": 13, "text": "\\begin{align}\nA_n(x) &=\\frac{x^{2n+1}}{(2n+1)!!}-\\frac{x^{2n+3}}{2\\times(2n+3)!!}+\\frac{x^{2n+5}}{2\\times4\\times(2n+5)!!}\\mp\\cdots \\\\[4pt]\nU_n(x) &=\\frac1{(2n+1)!!}-\\frac{x^2}{2\\times(2n+3)!!}+\\frac{x^4}{2\\times4\\times(2n+5)!!}\\mp\\cdots\n\\end{align}" }, { "math_id": 14, "text": "U_n(x)=\\frac{A_n(x)}{x^{2n+1}}.\\," }, { "math_id": 15, "text": "\n\\begin{align}\n\\frac{A_{n+1}(x)}{x^{2n+3}} & =U_{n+1}(x)=-\\frac{U_n'(x)}x=-\\frac1x\\frac {\\mathrm{d}}{\\mathrm{d}x}\\left(\\frac{A_n(x)}{x^{2n+1}}\\right) \\\\[6pt]\n& = -\\frac{1}{x} \\left( \\frac{A_n'(x) \\cdot x^{2n+1} - (2n+1) x^{2n} A_n(x)}{x^{2(2n+1)}} \\right ) \\\\[6pt]\n& = \\frac{(2n+1)A_n(x)-xA_n'(x)}{x^{2n+3}}\n\\end{align}\n" }, { "math_id": 16, "text": "A_{n+1}(x)=(2n+1)A_n(x)-x^2A_{n-1}(x).\\," }, { "math_id": 17, "text": "A_n(x) = P_n(x^2) \\sin(x) + x Q_n(x^2) \\cos(x),\\," }, { "math_id": 18, "text": "P_n" }, { "math_id": 19, "text": "Q_n" }, { "math_id": 20, "text": "\\bigl\\lfloor \\tfrac12n\\bigr\\rfloor." }, { "math_id": 21, "text": "A_n\\bigl(\\tfrac12\\pi\\bigr) = P_n\\bigl(\\tfrac14\\pi^2\\bigr)." }, { "math_id": 22, "text": "A_n," }, { "math_id": 23, "text": "A_n(x)=\\frac{x^{2n+1}}{2^n n!}\\int_0^1(1-z^2)^n\\cos(xz)\\,\\mathrm{d}z.\\," }, { "math_id": 24, "text": "\\frac{1}{2^n n!}\\int_0^1(1-z^2)^n\\cos(x z)\\,\\mathrm{d}z=\\frac{A_n(x)}{x^{2n+1}}=U_n(x)." }, { "math_id": 25, "text": "n = 0." }, { "math_id": 26, "text": "\\int_0^1\\cos(xz)\\,\\mathrm{d}z=\\frac{\\sin(x)}x=U_0(x)" }, { "math_id": 27, "text": "n." }, { "math_id": 28, "text": "\\frac{1}{2^nn!}\\int_0^1(1-z^2)^n\\cos(xz)\\,\\mathrm{d}z=U_n(x)," }, { "math_id": 29, "text": "\\begin{align}\n&\\frac{1}{2^{n+1}(n+1)!} \\int_0^1\\left(1-z^2\\right)^{n+1}\\cos(xz)\\,\\mathrm{d}z \\\\\n&\\qquad=\\frac{1}{2^{n+1}(n+1)!}\\Biggl(\\,\\overbrace{\\left.(1-z^2)^{n+1}\\frac{\\sin(xz)}x\\right|_{z=0}^{z=1}}^{=\\,0} \\ +\\, \\int_0^12(n+1)\\left(1-z^2\\right)^nz \\frac{\\sin(xz)}x\\,\\mathrm{d}z\\Biggr)\\\\[8pt]\n&\\qquad= \\frac1x\\cdot\\frac1{2^n n!}\\int_0^1\\left(1-z^2\\right)^nz\\sin(xz)\\,\\mathrm{d}z\\\\[8pt]\n&\\qquad= -\\frac1x\\cdot\\frac{\\mathrm{d}}{\\mathrm{d}x}\\left(\\frac1{2^nn!}\\int_0^1(1-z^2)^n\\cos(xz)\\,\\mathrm{d}z\\right) \\\\[8pt]\n&\\qquad= -\\frac{U_n'(x)}x \\\\[4pt]\n&\\qquad= U_{n+1}(x).\n\\end{align}" }, { "math_id": 30, "text": "\\tfrac14\\pi^2 = p/q," }, { "math_id": 31, "text": "p" }, { "math_id": 32, "text": "q" }, { "math_id": 33, "text": "\\N" }, { "math_id": 34, "text": "\\bigl\\lfloor \\tfrac12n\\bigr\\rfloor," }, { "math_id": 35, "text": "q^{\\lfloor n/2 \\rfloor}P_n\\bigl(\\tfrac14\\pi^2\\bigr)" }, { "math_id": 36, "text": "N." 
}, { "math_id": 37, "text": "N=q^{\\lfloor n/2\\rfloor}{A_n}\\bigl(\\tfrac12\\pi\\bigr) =q^{\\lfloor n/2\\rfloor}\\frac{1}{2^nn!}\\left(\\dfrac pq \\right)^{n+\\frac 12}\\int_0^1(1-z^2)^n \\cos \\left(\\tfrac12\\pi z \\right)\\,\\mathrm{d}z." }, { "math_id": 38, "text": "0." }, { "math_id": 39, "text": "n" }, { "math_id": 40, "text": "N < 1." }, { "math_id": 41, "text": "\\pi." }, { "math_id": 42, "text": "e" }, { "math_id": 43, "text": "A_n(x)" }, { "math_id": 44, "text": "\\tan x." }, { "math_id": 45, "text": "I_n(x)=\\int_{-1}^1(1 - z^2)^n\\cos(xz)\\,dz, " }, { "math_id": 46, "text": "x^2I_n(x)=2n(2n-1)I_{n-1}(x)-4n(n-1)I_{n-2}(x). \\qquad (n \\geq 2)" }, { "math_id": 47, "text": "J_n(x)=x^{2n+1}I_n(x)," }, { "math_id": 48, "text": "J_n(x)=2n(2n-1)J_{n-1}(x)-4n(n-1)x^2J_{n-2}(x)." }, { "math_id": 49, "text": "J_0(x) = 2 \\sin x" }, { "math_id": 50, "text": "J_1(x) = -4x\\cos x + 4\\sin x." }, { "math_id": 51, "text": "n \\in \\Z_+," }, { "math_id": 52, "text": "J_n(x)=x^{2n+1}I_n(x)=n!\\bigl(P_n(x)\\sin(x)+Q_n(x)\\cos(x)\\bigr)," }, { "math_id": 53, "text": "P_n(x)" }, { "math_id": 54, "text": "Q_n(x)" }, { "math_id": 55, "text": "\\leq n," }, { "math_id": 56, "text": "x = \\tfrac12\\pi," }, { "math_id": 57, "text": "\\tfrac12\\pi = a/b" }, { "math_id": 58, "text": " \\frac{a^{2n+1}}{n!}I_n\\bigl(\\tfrac12\\pi\\bigr) = P_n\\bigl(\\tfrac12\\pi\\bigr)b^{2n+1}. " }, { "math_id": 59, "text": "0 < I_n \\bigl(\\tfrac12\\pi\\bigr) < 2" }, { "math_id": 60, "text": "[-1,1]" }, { "math_id": 61, "text": "2" }, { "math_id": 62, "text": "0" }, { "math_id": 63, "text": "1." }, { "math_id": 64, "text": " \\frac{a^{2n+1}}{n!} \\to 0 \\quad \\text{ as }n \\to \\infty. " }, { "math_id": 65, "text": " 0 < \\frac{a^{2n+1}I_n\\left(\\frac\\pi2\\right)}{n!} < 1, " }, { "math_id": 66, "text": "\\begin{align}\nJ_n(x)&=x^{2n+1}\\int_{-1}^1 (1 - z^2)^n \\cos(xz)\\,dz\\\\[5pt]\n &=2x^{2n+1}\\int_0^1 (1 - z^2)^n \\cos(xz)\\,dz\\\\[5pt]\n &=2^{n+1}n!A_n(x).\n\\end{align}" }, { "math_id": 67, "text": "\\pi = a/b" }, { "math_id": 68, "text": "n," }, { "math_id": 69, "text": " f(x) = \\frac{x^n(a - bx)^n}{n!}" }, { "math_id": 70, "text": "x \\in \\R" }, { "math_id": 71, "text": "F(x) = f(x)-f''(x)+f^{(4)}(x)+\\cdots+(-1)^n f^{(2n)}(x)." }, { "math_id": 72, "text": "F(0) + F(\\pi)" }, { "math_id": 73, "text": "f" }, { "math_id": 74, "text": "x^k" }, { "math_id": 75, "text": "c_k /n!" }, { "math_id": 76, "text": "c_k" }, { "math_id": 77, "text": "k < n." }, { "math_id": 78, "text": "f^{(k)}(0)" }, { "math_id": 79, "text": "k < n" }, { "math_id": 80, "text": "(k! / n!) c_k" }, { "math_id": 81, "text": "n \\leq k \\leq 2n" }, { "math_id": 82, "text": "F(0)" }, { "math_id": 83, "text": "f(\\pi-x) = f(x)" }, { "math_id": 84, "text": "(-1)^kf^{(k)}(\\pi-x) = f^{(k)}(x)" }, { "math_id": 85, "text": "k." }, { "math_id": 86, "text": "(-1)^kf^{(k)}(\\pi) = f^{(k)}(0)." }, { "math_id": 87, "text": "f^{(k)}(\\pi)" }, { "math_id": 88, "text": "F(\\pi)" }, { "math_id": 89, "text": "F(\\pi) = F(0)" }, { "math_id": 90, "text": " \\int_0^\\pi f(x)\\sin(x)\\,dx=F(0)+F(\\pi)" }, { "math_id": 91, "text": "f^{(2n + 2)}" }, { "math_id": 92, "text": " F'' + F = f." }, { "math_id": 93, "text": " (F'\\cdot\\sin{} - F\\cdot\\cos{})' = f\\cdot\\sin" }, { "math_id": 94, "text": " \\left. \\int_0^\\pi f(x)\\sin(x)\\,dx= \\bigl(F'(x)\\sin x - F(x)\\cos x\\bigr) \\right|_0^\\pi. 
" }, { "math_id": 95, "text": "\\sin 0 = \\sin \\pi = 0" }, { "math_id": 96, "text": "\\cos 0 = - \\cos \\pi = 1" }, { "math_id": 97, "text": "f(x) > 0" }, { "math_id": 98, "text": "\\sin x > 0" }, { "math_id": 99, "text": "0 < x < \\pi" }, { "math_id": 100, "text": "0 \\leq x(a - bx) \\leq \\pi a" }, { "math_id": 101, "text": "0 \\leq \\sin x \\leq 1" }, { "math_id": 102, "text": "0 \\leq x \\leq \\pi," }, { "math_id": 103, "text": "f," }, { "math_id": 104, "text": "\\int_0^\\pi f(x)\\sin(x)\\,dx\\le\\pi\\frac{(\\pi a)^n}{n!}" }, { "math_id": 105, "text": "1" }, { "math_id": 106, "text": "F(0) + F(\\pi) < 1" }, { "math_id": 107, "text": "F(0) + F(\\pi)." }, { "math_id": 108, "text": "\\int_0^\\pi f(x)\\sin(x)\\,dx = \\sum_{j=0}^n (-1)^j \\left (f^{(2j)}(\\pi)+f^{(2j)}(0)\\right )+(-1)^{n+1}\\int_0^\\pi f^{(2n+2)}(x)\\sin(x)\\,dx," }, { "math_id": 109, "text": "2n + 2" }, { "math_id": 110, "text": "F" }, { "math_id": 111, "text": "f^{(2n+2)}" }, { "math_id": 112, "text": "\\begin{align}\nJ_n(x)&=x^{2n+1}\\int_{-1}^1(1-z^2)^n\\cos(xz)\\,dz\\\\\n&=\\int_{-1}^1\\left (x^2-(xz)^2\\right )^nx\\cos(xz)\\,dz.\n\\end{align}" }, { "math_id": 113, "text": "xz = y" }, { "math_id": 114, "text": "\\int_{-x}^x(x^2-y^2)^n\\cos(y)\\,dy." }, { "math_id": 115, "text": "\\begin{align}\nJ_n\\left(\\frac\\pi2\\right)&=\\int_{-\\pi/2}^{\\pi/2}\\left(\\frac{\\pi^2}4-y^2\\right)^n\\cos(y)\\,dy\\\\[5pt]\n&=\\int_0^\\pi\\left(\\frac{\\pi^2}4-\\left(y-\\frac\\pi2\\right)^2\\right)^n\\cos\\left(y-\\frac\\pi2\\right)\\,dy\\\\[5pt]\n&=\\int_0^\\pi y^n(\\pi-y)^n\\sin(y)\\,dy\\\\[5pt]\n&=\\frac{n!}{b^n}\\int_0^\\pi f(x)\\sin(x)\\,dx.\n\\end{align}" }, { "math_id": 116, "text": "F=f-f^{(2)}+f^{(4)}\\mp\\cdots," }, { "math_id": 117, "text": "\\int f(x)\\sin(x)\\,dx=F'(x)\\sin(x)-F(x)\\cos(x)+C," }, { "math_id": 118, "text": "\\int_0^\\pi f(x)\\sin(x)\\,dx=F(\\pi)+F(0)." }, { "math_id": 119, "text": "A_n(b)=b^n\\int_0^\\pi\\frac{x^n(\\pi-x)^n}{n!}\\sin(x)\\,dx." }, { "math_id": 120, "text": "A_n(b)" }, { "math_id": 121, "text": "[0,\\pi]" }, { "math_id": 122, "text": "A_n(b) > 0." }, { "math_id": 123, "text": "b," }, { "math_id": 124, "text": "A_n(b) < 1" }, { "math_id": 125, "text": " x(\\pi-x) \\le \\left(\\frac\\pi2\\right)^2" }, { "math_id": 126, "text": "A_n(b)\\le\\pi b^n \\frac{1}{n!} \\left(\\frac\\pi2\\right)^{2n} = \\pi \\frac{(b\\pi^2/4)^n}{n!}." }, { "math_id": 127, "text": "[0, \\pi]" }, { "math_id": 128, "text": "\\R" }, { "math_id": 129, "text": "f(x)=\\frac{x^n(a-bx)^n}{n!}," }, { "math_id": 130, "text": "\\begin{align}\nA_n(b) &= \\int_0^\\pi f(x)\\sin(x)\\,dx \\\\[5pt]\n &= \\Big[{-f(x)\\cos(x)}\\Big]_{x=0}^{x=\\pi} \\,- \\Big[{-f'(x) \\sin(x)} \\Big]_{x=0}^{x=\\pi} + \\cdots \\\\[5pt]\n &\\ \\qquad \\pm \\Big[ f^{(2n)}(x) \\cos(x) \\Big]_{x=0}^{x=\\pi} \\,\\pm \\int_0^\\pi f^{(2n+1)}(x)\\cos(x)\\,dx.\n\\end{align}" }, { "math_id": 131, "text": "0," }, { "math_id": 132, "text": "f^{(2n+1)}" }, { "math_id": 133, "text": "2n" }, { "math_id": 134, "text": "f^{(k)}" }, { "math_id": 135, "text": "0 \\leq k \\leq 2n" }, { "math_id": 136, "text": "f_k(x) = 1 - \\frac{x^2}k+\\frac{x^4}{2! k(k+1)}-\\frac{x^6}{3! k(k+1)(k+2)} + \\cdots \\quad (k\\notin\\{0,-1,-2,\\ldots\\})." }, { "math_id": 137, "text": "x." }, { "math_id": 138, "text": "f_{1/2}(x) = \\cos(2x)," }, { "math_id": 139, "text": "f_{3/2}(x) = \\frac{\\sin(2x)}{2x}." }, { "math_id": 140, "text": "\\frac{x^2}{k(k+1)}f_{k+2}(x)=f_{k+1}(x)-f_k(x)." }, { "math_id": 141, "text": "x," }, { "math_id": 142, "text": "\\lim_{k\\to+\\infty}f_k(x)=1." 
}, { "math_id": 143, "text": "x^{2n}/n!" }, { "math_id": 144, "text": "C" }, { "math_id": 145, "text": "k > 1," }, { "math_id": 146, "text": "\\left|f_k(x)-1\\right|\\leqslant\\sum_{n=1}^\\infty\\frac C{k^n}=C\\frac{1/k}{1-1/k}=\\frac C{k-1}." }, { "math_id": 147, "text": "x \\neq 0," }, { "math_id": 148, "text": "x^2" }, { "math_id": 149, "text": " k\\in\\Q\\smallsetminus\\{0,-1,-2,\\ldots\\}" }, { "math_id": 150, "text": "f_k(x)\\neq0 \\quad \\text{ and } \\quad \\frac{f_{k+1}(x)}{f_k(x)}\\notin\\Q." }, { "math_id": 151, "text": "y \\neq 0" }, { "math_id": 152, "text": "f_k(x) = ay" }, { "math_id": 153, "text": "f_{k+1}(x) = by." }, { "math_id": 154, "text": "y = f_{k+1}(x)," }, { "math_id": 155, "text": "a = 0," }, { "math_id": 156, "text": "b = 1" }, { "math_id": 157, "text": "f_k(x) = 0" }, { "math_id": 158, "text": "f_{k+1}(x) / f_k(x) = b/a" }, { "math_id": 159, "text": "y = f_k(x)/a = f_{k+1}(x)/b." }, { "math_id": 160, "text": "y" }, { "math_id": 161, "text": "f_{k+n}(x)" }, { "math_id": 162, "text": "n \\in \\N" }, { "math_id": 163, "text": "c" }, { "math_id": 164, "text": "bc/k," }, { "math_id": 165, "text": "ck/x^2," }, { "math_id": 166, "text": "c/x^2" }, { "math_id": 167, "text": "g_n=\\begin{cases}f_k(x) & n=0\\\\ \\dfrac{c^n}{k(k+1)\\cdots(k+n-1)}f_{k+n}(x) & n \\neq 0 \\end{cases}" }, { "math_id": 168, "text": "g_0=f_k(x)=ay\\in\\Z y \\quad \\text{ and } \\quad g_1=\\frac ckf_{k+1}(x)=\\frac{bc}ky\\in\\Z y." }, { "math_id": 169, "text": "\\begin{align}\ng_{n+2}&=\\frac{c^{n+2}}{x^2k(k+1)\\cdots(k+n-1)}\\cdot\\frac{x^2}{(k+n)(k+n+1)}f_{k+n+2}(x)\\\\[5pt]\n& =\\frac{c^{n+2}}{x^2k(k+1)\\cdots(k+n-1)}f_{k+n+1}(x)-\\frac{c^{n+2}}{x^2k(k+1)\\cdots(k+n-1)}f_{k+n}(x)\\\\[5pt]\n&=\\frac{c(k+n)}{x^2}g_{n+1}-\\frac{c^2}{x^2}g_n\\\\[5pt]\n&=\\left(\\frac{ck}{x^2}+\\frac c{x^2}n\\right)g_{n+1}-\\frac{c^2}{x^2}g_n,\n\\end{align}" }, { "math_id": 170, "text": "g_{n+1}" }, { "math_id": 171, "text": "g_n" }, { "math_id": 172, "text": "y." }, { "math_id": 173, "text": "g_n \\geq |y|" }, { "math_id": 174, "text": "|y|" }, { "math_id": 175, "text": "f_{1/2}(\\tfrac14\\pi) = \\cos \\tfrac12\\pi = 0," }, { "math_id": 176, "text": "\\tfrac1{16}\\pi^2" }, { "math_id": 177, "text": "\\tan x=\\frac{\\sin x}{\\cos x}=x\\frac{f_{3/2}(x/2)}{f_{1/2}(x/2)}," }, { "math_id": 178, "text": "x \\in \\Q \\smallsetminus \\{0\\}," }, { "math_id": 179, "text": "\\tan x" }, { "math_id": 180, "text": "f_k(x) = {}_0F_1 (k - x^2)" }, { "math_id": 181, "text": "\\Gamma(k)J_{k-1}(2x) = x^{k-1}f_k(x)" }, { "math_id": 182, "text": "\\Gamma" }, { "math_id": 183, "text": "k\\in\\Q\\smallsetminus\\{0,-1,-2,\\ldots\\}" }, { "math_id": 184, "text": "\\frac{x J_k(x)}{J_{k-1}(x)}\\notin\\Q." } ]
https://en.wikipedia.org/wiki?curid=12792547
12795419
Laplace principle (large deviations theory)
In mathematics, Laplace's principle is a basic theorem in large deviations theory which is similar to Varadhan's lemma. It gives an asymptotic expression for the Lebesgue integral of exp(−"θφ"("x")) over a fixed set "A" as "θ" becomes large. Such expressions can be used, for example, in statistical mechanics to determine the limiting behaviour of a system as the temperature tends to absolute zero. Statement of the result. Let "A" be a Lebesgue-measurable subset of "d"-dimensional Euclidean space R"d" and let "φ" : R"d" → R be a measurable function with formula_0 Then formula_1 where ess inf denotes the essential infimum. Heuristically, this may be read as saying that for large "θ", formula_2 Application. The Laplace principle can be applied to the family of probability measures P"θ" given by formula_3 to give an asymptotic expression for the probability of some event "A" as "θ" becomes large. For example, if "X" is a standard normally distributed random variable on R, then formula_4 for every measurable set "A".
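The statement can be illustrated numerically. In the following C sketch (an example chosen here for illustration, not taken from the article) we take φ(x) = x^2/2 and A = [1, 2], so the essential infimum of φ over A is 1/2; the program evaluates (1/θ) log of the integral of exp(−θφ(x)) over A by the trapezoid rule, and the printed values approach −1/2 as θ grows. The convergence is slow, roughly of order (log θ)/θ.

#include <stdio.h>
#include <math.h>

/* Illustration of the Laplace principle with phi(x) = x^2/2 on A = [1, 2]:
   (1/theta) * log( integral over A of exp(-theta*phi(x)) dx )  ->  -1/2. */
static double phi(double x) { return 0.5 * x * x; }

int main(void)
{
    const double lo = 1.0, hi = 2.0;
    const int steps = 200000;                       /* trapezoid panels */

    for (double theta = 1.0; theta <= 1000.0; theta *= 10.0) {
        double h = (hi - lo) / steps;
        double sum = 0.5 * (exp(-theta * phi(lo)) + exp(-theta * phi(hi)));
        for (int i = 1; i < steps; i++)
            sum += exp(-theta * phi(lo + i * h));
        double integral = sum * h;
        printf("theta = %7.1f   (1/theta) log I = %.4f\n", theta, log(integral) / theta);
    }
    return 0;
}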
[ { "math_id": 0, "text": "\\int_A e^{-\\varphi(x)} \\,dx < \\infty." }, { "math_id": 1, "text": "\\lim_{\\theta \\to \\infty} \\frac1{\\theta} \\log \\int_A e^{-\\theta \\varphi(x)} \\, dx = - \\mathop{\\mathrm{ess \\, inf}}_{x \\in A} \\varphi(x)," }, { "math_id": 2, "text": "\\int_A e^{-\\theta \\varphi(x)} \\, dx \\approx \\exp \\left(-\\theta \\mathop{\\mathrm{ess \\, inf}}_{x \\in A} \\varphi(x) \\right)." }, { "math_id": 3, "text": "\\mathbf{P}_\\theta (A) = \\left( \\int_A e^{-\\theta \\varphi(x)} \\, dx \\right) \\bigg/ \\left( \\int_{\\mathbf{R}^{d}} e^{-\\theta \\varphi(y)} \\, dy \\right)" }, { "math_id": 4, "text": "\\lim_{\\varepsilon \\downarrow 0} \\varepsilon \\log \\mathbf{P} \\big[ \\sqrt{\\varepsilon} X \\in A \\big] = - \\mathop{\\mathrm{ess \\, inf}}_{x \\in A} \\frac{x^2}{2}" } ]
https://en.wikipedia.org/wiki?curid=12795419
12795886
Alternating step generator
Form of pseudorandom number generator In cryptography, an alternating step generator (ASG) is a cryptographic pseudorandom number generator used in stream ciphers, based on three linear-feedback shift registers. Its output is a combination of two LFSRs which are stepped (clocked) in an alternating fashion, depending on the output of a third LFSR. The design was published in 1987 and patented in 1989 by C. G. Günther. Overview. Linear-feedback shift registers (LFSRs) are, statistically speaking, excellent pseudorandom generators, with good distribution and simple implementation. However, they cannot be used as-is because their output can be predicted easily. An ASG comprises three linear-feedback shift registers, which we will call LFSR0, LFSR1 and LFSR2 for convenience. The output of one of the registers decides which of the other two is to be used; for instance if LFSR2 outputs a 0, LFSR0 is clocked, and if it outputs a 1, LFSR1 is clocked instead. The output is the exclusive OR of the last bit produced by LFSR0 and LFSR1. The initial state of the three LFSRs is the key. Customarily, the LFSRs use primitive polynomials of distinct but close degree, preset to non-zero state, so that each LFSR generates a maximum length sequence. Under these assumptions, the ASG's output demonstrably has long period, high linear complexity, and even distribution of short subsequences. Example code in C:

/* 16-bit toy ASG (much too small for practical usage); returns 0 or 1. */
unsigned ASG16toy(void)
{
    static unsigned      /* unsigned type with at least 16 bits */
        lfsr2 = 0x8102,  /* initial state, 16 bits, must not be 0 */
        lfsr1 = 0x4210,  /* initial state, 15 bits, must not be 0 */
        lfsr0 = 0x2492;  /* initial state, 14 bits, must not be 0 */

    /* LFSR2 uses x^16 + x^14 + x^13 + x^11 + 1; it is clocked on every call
       and its output bit selects which of the other two LFSRs is clocked */
    lfsr2 = ((-(lfsr2 & 1)) & 0x8016) ^ (lfsr2 >> 1);
    if (lfsr2 & 1)
        /* LFSR1 uses x^15 + x^14 + 1 */
        lfsr1 = ((-(lfsr1 & 1)) & 0x4001) ^ (lfsr1 >> 1);
    else
        /* LFSR0 uses x^14 + x^13 + x^3 + x^2 + 1 */
        lfsr0 = ((-(lfsr0 & 1)) & 0x2C01) ^ (lfsr0 >> 1);
    /* output is the XOR of the last bits of LFSR0 and LFSR1 */
    return (lfsr0 ^ lfsr1) & 1;
}

An ASG is very simple to implement in hardware. In particular, contrary to the shrinking generator and self-shrinking generator, an output bit is produced at each clock, ensuring consistent performance and resistance to timing attacks. Security. Shahram Khazaei, Simon Fischer, and Willi Meier give a cryptanalysis of the ASG allowing various tradeoffs between time complexity and the amount of output needed to mount the attack, e.g. with asymptotic complexity formula_0 and formula_1 bits, where formula_2 is the size of the shortest of the three LFSRs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O(L^2.2^{2L/3})" }, { "math_id": 1, "text": "O(2^{2L/3})" }, { "math_id": 2, "text": "L" } ]
https://en.wikipedia.org/wiki?curid=12795886
12796
Genotype
Part of the genetic makeup of a cell which determines one of its characteristics The genotype of an organism is its complete set of genetic material. Genotype can also be used to refer to the alleles or variants an individual carries in a particular gene or genetic location. The number of alleles an individual can have in a specific gene depends on the number of copies of each chromosome found in that species, also referred to as ploidy. In diploid species like humans, two full sets of chromosomes are present, meaning each individual has two alleles for any given gene. If both alleles are the same, the genotype is referred to as homozygous. If the alleles are different, the genotype is referred to as heterozygous. Genotype contributes to phenotype, the observable traits and characteristics in an individual or organism. The degree to which genotype affects phenotype depends on the trait. For example, the petal color in a pea plant is exclusively determined by genotype. The petals can be purple or white depending on the alleles present in the pea plant. However, other traits are only partially influenced by genotype. These traits are often called complex traits because they are influenced by additional factors, such as environmental and epigenetic factors. Not all individuals with the same genotype look or act the same way because appearance and behavior are modified by environmental and growing conditions. Likewise, not all organisms that look alike necessarily have the same genotype. The term "genotype" was coined by the Danish botanist Wilhelm Johannsen in 1903. Phenotype. Any given gene will usually cause an observable change in an organism, known as the phenotype. The terms genotype and phenotype are distinct for at least two reasons: A simple example to illustrate genotype as distinct from phenotype is the flower colour in pea plants (see Gregor Mendel). There are three available genotypes, PP (homozygous dominant), Pp (heterozygous), and pp (homozygous recessive). All three have different genotypes but the first two have the same phenotype (purple) as distinct from the third (white). A more technical example to illustrate genotype is the single-nucleotide polymorphism or SNP. A SNP occurs when corresponding sequences of DNA from different individuals differ at one DNA base, for example where the sequence AAGCCTA changes to AAGCTTA. This contains two alleles : C and T. SNPs typically have three genotypes, denoted generically AA Aa and aa. In the example above, the three genotypes would be CC, CT and TT. Other types of genetic marker, such as microsatellites, can have more than two alleles, and thus many different genotypes. Penetrance is the proportion of individuals showing a specified genotype in their phenotype under a given set of environmental conditions. Mendelian inheritance. Traits that are determined exclusively by genotype are typically inherited in a Mendelian pattern. These laws of inheritance were described extensively by Gregor Mendel, who performed experiments with pea plants to determine how traits were passed on from generation to generation. He studied phenotypes that were easily observed, such as plant height, petal color, or seed shape. He was able to observe that if he crossed two true-breeding plants with distinct phenotypes, all the offspring would have the same phenotype. For example, when he crossed a tall plant with a short plant, all the resulting plants would be tall. 
However, when he self-fertilized the plants that resulted, about 1/4 of the second generation would be short. He concluded that some traits were dominant, such as tall height, and others were recessive, like short height. Though Mendel was not aware at the time, each phenotype he studied was controlled by a single gene with two alleles. In the case of plant height, one allele caused the plants to be tall, and the other caused plants to be short. When the tall allele was present, the plant would be tall, even if the plant was heterozygous. In order for the plant to be short, it had to be homozygous for the recessive allele. One way this can be illustrated is using a Punnett square. In a Punnett square, the genotypes of the parents are placed on the outside. An uppercase letter is typically used to represent the dominant allele, and a lowercase letter is used to represent the recessive allele. The possible genotypes of the offspring can then be determined by combining the parent genotypes. In the example on the right, both parents are heterozygous, with a genotype of Bb. The offspring can inherit a dominant allele from each parent, making them homozygous with a genotype of BB. The offspring can inherit a dominant allele from one parent and a recessive allele from the other parent, making them heterozygous with a genotype of Bb. Finally, the offspring could inherit a recessive allele from each parent, making them homozygous with a genotype of bb. Plants with the BB and Bb genotypes will look the same, since the B allele is dominant. The plant with the bb genotype will have the recessive trait. These inheritance patterns can also be applied to hereditary diseases or conditions in humans or animals. Some conditions are inherited in an autosomal dominant pattern, meaning individuals with the condition typically have an affected parent as well. A classic pedigree for an autosomal dominant condition shows affected individuals in every generation. Other conditions are inherited in an autosomal recessive pattern, where affected individuals do not typically have an affected parent. Since each parent must have a copy of the recessive allele in order to have an affected offspring, the parents are referred to as carriers of the condition. In autosomal conditions, the sex of the offspring does not play a role in their risk of being affected. In sex-linked conditions, the sex of the offspring affects their chances of having the condition. In humans, females inherit two X chromosomes, one from each parent, while males inherit an X chromosome from their mother and a Y chromosome from their father. X-linked dominant conditions can be distinguished from autosomal dominant conditions in pedigrees by the lack of transmission from fathers to sons, since affected fathers only pass their X chromosome to their daughters. In X-linked recessive conditions, males are typically affected more commonly because they are hemizygous, with only one X chromosome. In females, the presence of a second X chromosome will prevent the condition from appearing. Females are therefore carriers of the condition and can pass the trait on to their sons. Mendelian patterns of inheritance can be complicated by additional factors. Some diseases show incomplete penetrance, meaning not all individuals with the disease-causing allele develop signs or symptoms of the disease. Penetrance can also be age-dependent, meaning signs or symptoms of disease are not visible until later in life. 
For example, Huntington disease is an autosomal dominant condition, but up to 25% of individuals with the affected genotype will not develop symptoms until after age 50. Another factor that can complicate Mendelian inheritance patterns is variable expressivity, in which individuals with the same genotype show different signs or symptoms of disease. For example, individuals with polydactyly can have a variable number of extra digits. Non-Mendelian inheritance. Many traits are not inherited in a Mendelian fashion, but have more complex patterns of inheritance. Incomplete dominance. For some traits, neither allele is completely dominant. Heterozygotes often have an appearance somewhere in between those of homozygotes. For example, a cross between true-breeding red and white "Mirabilis jalapa" results in pink flowers. Codominance. Codominance refers to traits in which both alleles are expressed in the offspring in approximately equal amounts. A classic example is the ABO blood group system in humans, where both the A and B alleles are expressed when they are present. Individuals with the AB genotype have both A and B proteins expressed on their red blood cells. Epistasis. Epistasis is when the phenotype of one gene is affected by one or more other genes. This is often through some sort of masking effect of one gene on the other. For example, the "A" gene codes for hair color, a dominant "A" allele codes for brown hair, and a recessive "a" allele codes for blonde hair, but a separate "B" gene controls hair growth, and a recessive "b" allele causes baldness. If the individual has the BB or Bb genotype, then they produce hair and the hair color phenotype can be observed, but if the individual has a bb genotype, then the person is bald which masks the A gene entirely. Polygenic traits. A polygenic trait is one whose phenotype is dependent on the additive effects of multiple genes. The contributions of each of these genes are typically small and add up to a final phenotype with a large amount of variation. A well studied example of this is the number of sensory bristles on a fly. These types of additive effects is also the explanation for the amount of variation in human eye color. Genotyping. Genotyping refers to the method used to determine an individual's genotype. There are a variety of techniques that can be used to assess genotype. The genotyping method typically depends on what information is being sought. Many techniques initially require amplification of the DNA sample, which is commonly done using PCR. Some techniques are designed to investigate specific SNPs or alleles in a particular gene or set of genes, such as whether an individual is a carrier for a particular condition. This can be done via a variety of techniques, including allele specific oligonucleotide (ASO) probes or DNA sequencing. Tools such as multiplex ligation-dependent probe amplification can also be used to look for duplications or deletions of genes or gene sections. Other techniques are meant to assess a large number of SNPs across the genome, such as SNP arrays. This type of technology is commonly used for genome-wide association studies. Large-scale techniques to assess the entire genome are also available. This includes karyotyping to determine the number of chromosomes an individual has and chromosomal microarrays to assess for large duplications or deletions in the chromosome. 
More detailed information can be determined using exome sequencing, which provides the specific sequence of all DNA in the coding region of the genome, or whole genome sequencing, which sequences the entire genome including non-coding regions. Genotype encoding. In linear models, the genotypes can be encoded in different manners. Let us consider a biallelic locus with two possible alleles, encoded by formula_0 and formula_1. We consider formula_1 to correspond to the dominant allele and formula_0 to the reference allele. The encodings in common use are the following: in the additive encoding, the genotypes "AA", "Aa" and "aa" are coded as 0, 1 and 2 (the number of copies of "a"); in the dominant encoding (for "a") they are coded as 0, 1 and 1; and in the recessive encoding as 0, 0 and 1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=12796
1279756
Bond order
Difference between the number of bonds and anti-bonds in a molecule In chemistry, bond order is a formal measure of the multiplicity of a covalent bond between two atoms. As introduced by Linus Pauling, bond order is defined as the difference between the numbers of electron pairs in bonding and antibonding molecular orbitals. Bond order gives a rough indication of the stability of a bond. Isoelectronic species have the same bond order. Examples. The bond order itself is the number of electron pairs (covalent bonds) between two atoms. For example, in diatomic nitrogen N≡N, the bond order between the two nitrogen atoms is 3 (triple bond). In acetylene H–C≡C–H, the bond order between the two carbon atoms is also 3, and the C–H bond order is 1 (single bond). In carbon monoxide, CO, the bond order between carbon and oxygen is 3. In thiazyl trifluoride, NSF3, the bond order between sulfur and nitrogen is 3, and between sulfur and fluorine is 1. In diatomic oxygen O=O the bond order is 2 (double bond). In ethylene the bond order between the two carbon atoms is also 2. The bond order between carbon and oxygen in carbon dioxide O=C=O is also 2. In phosgene, COCl2, the bond order between carbon and oxygen is 2, and between carbon and chlorine is 1. In some molecules, bond orders can be 4 (quadruple bond), 5 (quintuple bond) or even 6 (sextuple bond). For example, potassium octachlorodimolybdate salt (K4[Mo2Cl8]) contains the [Mo2Cl8]4− anion, in which the two Mo atoms are linked to each other by a bond with order of 4. Each Mo atom is linked to four ligands by a bond with order of 1. The compound (terphenyl)–CrCr–(terphenyl) contains two chromium atoms linked to each other by a bond with order of 5, and each chromium atom is linked to one terphenyl ligand by a single bond. A bond of order 6 is detected in ditungsten molecules, W2, which exist only in a gaseous phase. Non-integer bond orders. In molecules which have resonance or nonclassical bonding, bond order may not be an integer. In benzene, the delocalized molecular orbitals contain 6 pi electrons over six carbons, essentially yielding half a pi bond together with the sigma bond for each pair of carbon atoms, giving a calculated bond order of 1.5 (one and a half bond). Furthermore, bond orders of 1.1 (eleven tenths bond), 4/3 (or 1.333333..., four thirds bond) or 0.5 (half bond), for example, can occur in some molecules and essentially refer to bond strength relative to bonds with order 1. In the nitrate anion (NO3−), the bond order for each bond between nitrogen and oxygen is 4/3 (or 1.333333...). Bonding in the dihydrogen cation H2+ can be described as a covalent one-electron bond, thus the bonding between the two hydrogen atoms has bond order of 0.5. Bond order in molecular orbital theory. In molecular orbital theory, bond order is defined as half the difference between the number of bonding electrons and the number of antibonding electrons as per the equation below. This often but not always yields similar results for bonds near their equilibrium lengths, but it does not work for stretched bonds. Bond order is also an index of bond strength and is also used extensively in valence bond theory. "bond order" = (number of bonding electrons − number of antibonding electrons)/2. Generally, the higher the bond order, the stronger the bond. Bond orders of one-half may be stable, as shown by the stability of H2+ (bond length 106 pm, bond energy 269 kJ/mol) and He2+ (bond length 108 pm, bond energy 251 kJ/mol).
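As a worked illustration of the molecular-orbital counting above (a standard textbook example rather than one taken from this article): in O2 ten electrons occupy bonding molecular orbitals and six occupy antibonding ones, giving a bond order of (10 − 6)/2 = 2, in agreement with the double bond described earlier. Removing one antibonding electron to form the O2+ cation raises the bond order to (10 − 5)/2 = 2.5, consistent with its shorter and stronger bond.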
Hückel molecular orbital theory offers another approach for defining bond orders based on molecular orbital coefficients, for planar molecules with delocalized π bonding. The theory divides bonding into a sigma framework and a pi system. The π-bond order between atoms "r" and "s" derived from Hückel theory was defined by Charles Coulson by using the orbital coefficients of the Hückel MOs: formula_0, Here the sum extends over π molecular orbitals only, and "ni" is the number of electrons occupying orbital "i" with coefficients "cri" and "csi" on atoms "r" and "s" respectively. Assuming a bond order contribution of 1 from the sigma component this gives a total bond order (σ + π) of 5/3 = 1.67 for benzene, rather than the commonly cited bond order of 1.5, showing some degree of ambiguity in how the concept of bond order is defined. For more elaborate forms of molecular orbital theory involving larger basis sets, still other definitions have been proposed. A standard quantum mechanical definition for bond order has been debated for a long time. A comprehensive method to compute bond orders from quantum chemistry calculations was published in 2017. Other definitions. The bond order concept is used in molecular dynamics and bond order potentials. The magnitude of the bond order is associated with the bond length. According to Linus Pauling in 1947, the bond order between atoms "i" and "j" is experimentally described as formula_1 where "d"1 is the single bond length, "dij" is the bond length experimentally measured, and "b" is a constant, depending on the atoms. Pauling suggested a value of 0.353 Å for "b", for carbon-carbon bonds in the original equation: formula_2 The value of the constant "b" depends on the atoms. This definition of bond order is somewhat "ad hoc" and only easy to apply for diatomic molecules. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p_{rs} = \\sum_i n_ic_{ri}c_{si}" }, { "math_id": 1, "text": "s_{ij} = \\exp{\\left[\\frac{d_{1} - d_{ij}}{b}\\right]}" }, { "math_id": 2, "text": "d_{1} - d_{ij} = 0.353~\\text{ln}(s_{ij})" } ]
https://en.wikipedia.org/wiki?curid=1279756
1280086
Ion-propelled aircraft
Electrohydrodynamic aircraft propulsion An ion-propelled aircraft or ionocraft is an aircraft that uses electrohydrodynamics (EHD) to provide lift or thrust in the air without requiring combustion or moving parts. Current designs do not produce sufficient thrust for manned flight or useful loads. History. Origins. The principle of ionic wind propulsion with corona-generated charged particles was discovered soon after the discovery of electricity with references dating to 1709 in a book titled "Physico-Mechanical Experiments on Various Subjects" by Francis Hauksbee. VTOL "lifter" experiments. American experimenter Thomas Townsend Brown spent much of his life working on the principle, under the mistaken impression that it was an anti-gravity effect, which he named the Biefeld–Brown effect. Since his devices produced thrust in the direction of the field gradient, regardless of the direction of gravity, and did not work in a vacuum, other workers realized that the effect was due to EHD. VTOL ion-propelled aircraft are sometimes called "lifters". Early examples were able to lift about a gram of weight per watt, This was insufficient to lift the heavy high-voltage power supply necessary, which remained on the ground and supplied the craft via long, thin and flexible wires. The use of EHD propulsion for lift was studied by American aircraft designer Major Alexander Prokofieff de Seversky in the 1950s and 1960s. He filed a patent for an "ionocraft" in 1959. He built and flew a model VTOL ionocraft capable of sideways manoeuvring by varying the voltages applied in different areas, although the heavy power supply remained external. The 2008 Wingless Electromagnetic Air Vehicle (WEAV), a saucer-shaped EHD lifter with electrodes embedded throughout its surface, was studied by a team of researchers led by Subrata Roy at the University of Florida in the early part of the twenty-first century. The propulsion system employed many innovations, including the use of magnetic fields to enhance the ionisation efficiency. A model with an external supply achieved minimal lift-off and hover. Onboard power. Twenty-first century power supplies are lighter and more efficient. The first ion-propelled aircraft to take off and fly using its own onboard power supply was a VTOL craft developed by Ethan Krauss of Electron Air in 2006. His patent application was filed in 2014, and he was awarded a microgrant to support his project by Stardust Startups in 2017. The craft developed enough thrust to rise rapidly or to fly horizontally for several minutes. In November 2018 the first self-contained ion-propelled fixed-wing airplane, the MIT EAD Airframe Version 2 flew 60 meters. It was developed by a team of students led by Steven Barrett from the Massachusetts Institute of Technology. It had a 5-meter wingspan and weighed 2.45 kg. The craft was catapult-launched using an elastic band, with the EAD system sustaining the aircraft in flight at low level. Principles of operation. Ionic air propulsion is a technique for creating a flow of air through electrical energy, without any moving parts. Because of this it is sometimes described as a "solid-state" drive. It is based on the principle of electrohydrodynamics. In its basic form, it consists of two parallel conductive electrodes, a leading emitter wire and a downstream collector. When such an arrangement is powered by high voltage (in the range of kilovolts per mm), the emitter ionizes molecules in the air that accelerate backwards to the collector, producing thrust in reaction. 
Along the way, these ions collide with electrically neutral air molecules and accelerate them in turn. The effect is not directly dependent on electrical polarity, as the ions may be positively or negatively charged. Reversing the polarity of the electrodes does not alter the direction of motion, as it also reverses the polarity of the ions carrying charge. Thrust is produced in the same direction, either way. For positive corona, nitrogen ions are created initially, while for negative polarity, oxygen ions are the major primary ions. Both these types of ion immediately attract a variety of air molecules to create molecular cluster-ions of either sign, which act as charge carriers. Current EHD thrusters are far less efficient than conventional engines. An MIT researcher noted that ion thrusters have the potential to be far more efficient than conventional jet engines. Unlike pure ion thruster rockets, the electrohydrodynamic principle does not apply in the vacuum of space. Electrohydrodynamics. The thrust generated by an EHD device is an example of the Biefeld–Brown effect and can be derived through a modified use of the Child–Langmuir equation. A generalized one-dimensional treatment gives the equation: formula_0 where "F" is the resulting force, "I" is the electric current, "d" is the air gap distance, and "k" is the ion mobility coefficient of the air (nominally about 2×10⁻⁴ m²·V⁻¹·s⁻¹). As applied to a gas such as air, the principle is also referred to as electroaerodynamics (EAD). When the ionocraft is turned on, the corona wire becomes charged with high voltage, usually between 20 and 50 kV. When the corona wire reaches approximately 30 kV, it causes the air molecules nearby to become ionised by stripping their electrons from them. As this happens, the ions are repelled from the anode and attracted towards the collector, causing the majority of the ions to accelerate toward the collector. These ions travel at a constant average velocity termed the drift velocity. Such velocity depends on the mean free path between collisions, the strength of the external electric field, and the mass of ions and neutral air molecules. The fact that the current is carried by a corona discharge (and not a tightly confined arc) means that the moving particles diffuse into an expanding ion cloud, and collide frequently with neutral air molecules. It is these collisions that create thrust. The momentum of the ion cloud is partially imparted onto the neutral air molecules that it collides with, which, because they are neutral, do not migrate back to the second electrode. Instead they continue to travel in the same direction, creating a neutral wind. As these neutral molecules are ejected from the ionocraft, there are, in agreement with Newton's Third Law of Motion, equal and opposite forces, so the ionocraft moves in the opposite direction with an equal force. The force exerted is comparable to a gentle breeze. The resulting thrust depends on other external factors including air pressure and temperature, gas composition, voltage, humidity, and air gap distance. The air mass in the gap between the electrodes is impacted repeatedly by excited particles moving at high drift velocity. This creates electrical resistance, which must be overcome. The result of the neutral air caught in the process is to effectively cause an exchange in momentum and thus generate thrust. The heavier and denser the air, the higher the resulting thrust. Aircraft configuration. As with conventional reaction thrust, EAD thrust may be directed either horizontally to power a fixed-wing airplane or vertically to support a powered lift craft, sometimes referred to as a "lifter". Design. 
The thrust generating components of an ion propulsion system consist of three parts; a corona or emitter wire, an air gap and a collector wire or strip downstream from the emitter. A lightweight insulating frame supports the arrangement. The emitter and collector should be as close to each other as possible, i.e. with a narrow air gap, to achieve a saturated corona current condition that produces maximum thrust. However, if the emitter is too close to the collector it tends to arc across the gap. Ion propulsion systems require many safety precautions due to the required high voltage. Emitter. The emitter wire is typically connected to the positive terminal of the high voltage power supply. In general, it is made from a small gauge bare conductive wire. While copper wire can be used, it does not work as well as stainless steel. Similarly, thinner wire such as 44 or 50 gauge tends to outperform more common, larger sizes such as 30 gauge, as the stronger electric field around the smaller diameter wire results in lower ionisation onset voltage and a larger corona current as described by Peek's law. The emitter is sometimes referred to as the "corona wire" because of its tendency to emit a purple corona discharge glow while in use. This is simply a side effect of ionization. Air gap. The air gap insulates the two electrodes and allows the ions generated at the emitter to accelerate and transfer momentum to neutral air molecules, before losing their charge at the collector. The width of the air gap is typically 1 mm / kV.&lt;ref name="K. Meesters/ W. Terpstra"&gt;&lt;/ref&gt; Collector. The collector is shaped to provide a smooth equipotential surface underneath the corona wire. Variations of this include a wire mesh, parallel conductive tubes, or a foil skirt with a smooth, round edge. Sharp edges on the skirt degrade performance, as it generates ions of opposite polarity to those within the thrust mechanism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F = \\frac{Id}{k} " } ]
https://en.wikipedia.org/wiki?curid=1280086
12800904
Production (computer science)
Method of symbol substitution A production or production rule in computer science is a rewrite rule specifying a symbol substitution that can be recursively performed to generate new symbol sequences. A finite set of productions formula_0 is the main component in the specification of a formal grammar (specifically a generative grammar). The other components are a finite set formula_1 of nonterminal symbols, a finite set (known as an alphabet) formula_2 of terminal symbols that is disjoint from formula_1 and a distinguished symbol formula_3 that is the "start symbol". In an unrestricted grammar, a production is of the form formula_4, where formula_5 and formula_6 are arbitrary strings of terminals and nonterminals, and formula_5 may not be the empty string. If formula_6 is the empty string, this is denoted by the symbol formula_7, or formula_8 (rather than leaving the right-hand side blank). So productions are members of the cartesian product formula_9, where formula_10 is the "vocabulary", formula_11 is the Kleene star operator, formula_12 indicates concatenation, formula_13 denotes set union, and formula_14 denotes set minus or set difference. If we do not allow the start symbol to occur in formula_6 (the word on the right side), we have to replace formula_15 by formula_16 on the right side of the cartesian product symbol. The other types of formal grammar in the Chomsky hierarchy impose additional restrictions on what constitutes a production. Notably in a context-free grammar, the left-hand side of a production must be a single nonterminal symbol. So productions are of the form: formula_17 Grammar generation. To generate a string in the language, one begins with a string consisting of only a single "start symbol", and then successively applies the rules (any number of times, in any order) to rewrite this string. This stops when a string containing only terminals is obtained. The language consists of all the strings that can be generated in this manner. Any particular sequence of legal choices taken during this rewriting process yields one particular string in the language. If there are multiple different ways of generating this single string, then the grammar is said to be ambiguous. For example, assume the alphabet consists of formula_18 and formula_19, with the start symbol formula_20, and we have the following rules: 1. formula_21 2. formula_22 then we start with formula_20, and can choose a rule to apply to it. If we choose rule 1, we replace formula_20 with formula_23 and obtain the string formula_23. If we choose rule 1 again, we replace formula_20 with formula_23 and obtain the string formula_24. This process is repeated until we only have symbols from the alphabet (i.e., formula_18 and formula_19). If we now choose rule 2, we replace formula_20 with formula_25 and obtain the string formula_26, and are done. We can write this series of choices more briefly, using symbols: formula_27. The language of the grammar is the set of all the strings that can be generated using this process: formula_28.
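The derivation described above can be mirrored in a short program. The following C sketch (an illustration, not part of the original article; the helper name "derive" is arbitrary) builds the string obtained by applying rule 1 "n" times and then rule 2 once, i.e. "n" copies of "a", then "ba", then "n" copies of "b".

/* Illustrative sketch (not from the article): build the string obtained by
   applying rule 1 (S -> aSb) n times and then rule 2 (S -> ba) once,
   i.e. n copies of 'a', then "ba", then n copies of 'b'. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *derive(unsigned n)
{
    size_t len = 2 * (size_t)n + 2;       /* n 'a's + "ba" + n 'b's */
    char *w = malloc(len + 1);
    if (!w)
        return NULL;
    memset(w, 'a', n);                    /* prefix from n uses of rule 1 */
    w[n] = 'b';                           /* "ba" from the single use of rule 2 */
    w[n + 1] = 'a';
    memset(w + n + 2, 'b', n);            /* suffix from n uses of rule 1 */
    w[len] = '\0';
    return w;
}

int main(void)
{
    for (unsigned n = 0; n <= 3; n++) {   /* prints ba, abab, aababb, aaababbb */
        char *w = derive(n);
        if (!w)
            return 1;
        printf("n = %u : %s\n", n, w);
        free(w);
    }
    return 0;
}

Each printed word corresponds to one particular sequence of rule choices, matching the derivations described above, and is an element of the language formula_28.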
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "\\Sigma" }, { "math_id": 3, "text": "S \\in N" }, { "math_id": 4, "text": "u \\to v" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "v" }, { "math_id": 7, "text": "\\epsilon" }, { "math_id": 8, "text": "\\lambda" }, { "math_id": 9, "text": "V^*NV^* \\times V^* = (V^*\\setminus\\Sigma^*) \\times V^*" }, { "math_id": 10, "text": "V := N \\cup \\Sigma" }, { "math_id": 11, "text": "{}^{*}" }, { "math_id": 12, "text": "V^*NV^*" }, { "math_id": 13, "text": "\\cup" }, { "math_id": 14, "text": "\\setminus" }, { "math_id": 15, "text": "V^*" }, { "math_id": 16, "text": "(V \\setminus \\{S\\})^*" }, { "math_id": 17, "text": "N \\to (N \\cup \\Sigma)^*" }, { "math_id": 18, "text": "a" }, { "math_id": 19, "text": "b" }, { "math_id": 20, "text": "S" }, { "math_id": 21, "text": "S \\rightarrow aSb" }, { "math_id": 22, "text": "S \\rightarrow ba" }, { "math_id": 23, "text": "aSb" }, { "math_id": 24, "text": "aaSbb" }, { "math_id": 25, "text": "ba" }, { "math_id": 26, "text": "aababb" }, { "math_id": 27, "text": "S \\Rightarrow aSb \\Rightarrow aaSbb \\Rightarrow aababb" }, { "math_id": 28, "text": "\\{ba, abab, aababb, aaababbb, \\dotsc\\}" } ]
https://en.wikipedia.org/wiki?curid=12800904
12802352
Hadwiger–Finsler inequality
In mathematics, the Hadwiger–Finsler inequality is a result on the geometry of triangles in the Euclidean plane. It states that if a triangle in the plane has side lengths "a", "b" and "c" and area "T", then formula_0 Weitzenböck's inequality is a straightforward corollary: if a triangle in the plane has side lengths "a", "b" and "c" and area "T", then formula_1 Related inequalities. The Hadwiger–Finsler inequality is actually equivalent to Weitzenböck's inequality. Applying (W) to the circummidarc triangle gives (HF). Weitzenböck's inequality can also be proved using Heron's formula, by which route it can be seen that equality holds in (W) if and only if the triangle is an equilateral triangle, i.e. "a" = "b" = "c". A version of the inequality also holds for quadrilaterals: if a convex quadrilateral has side lengths "a", "b", "c" and "d" and area "T", then formula_2 with equality only for a square, where formula_3 Proof. From the law of cosines we have: formula_4 α being the angle between "b" and "c". This can be transformed into: formula_5 Since "A" = 1/2 "bc" sin α (where "A" denotes the area of the triangle) we have: formula_6 Now remember that formula_7 and formula_8 Using these we get: formula_9 Doing this for all sides of the triangle and adding up we get: formula_10 β and γ being the other angles of the triangle. Now, since the halves of the triangle's angles are less than π/2 and the function tan is convex on that interval, Jensen's inequality gives: formula_11 Using this we get: formula_12 This is the Hadwiger–Finsler inequality. History. The Hadwiger–Finsler inequality is named after Paul Finsler and Hugo Hadwiger (1937), who also published in the same paper the Finsler–Hadwiger theorem on a square derived from two other squares that share a vertex. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "a^{2} + b^{2} + c^{2} \\geq (a - b)^{2} + (b - c)^{2} + (c - a)^{2} + 4 \\sqrt{3} T \\quad \\mbox{(HF)}." }, { "math_id": 1, "text": "a^{2} + b^{2} + c^{2} \\geq 4 \\sqrt{3} T \\quad \\mbox{(W)}." }, { "math_id": 2, "text": " a^2+b^2+c^2+d^2 \\ge 4T + \\frac{\\sqrt{3}-1}{\\sqrt{3}}\\sum{(a-b)^2} " }, { "math_id": 3, "text": " \\sum{(a-b)^2}=(a-b)^2+(a-c)^2+(a-d)^2+(b-c)^2+(b-d)^2+(c-d)^2 " }, { "math_id": 4, "text": "a^2=b^2+c^2-2bc\\cos\\alpha" }, { "math_id": 5, "text": "a^2=(b-c)^2+2bc(1-\\cos\\alpha)" }, { "math_id": 6, "text": "a^2=(b-c)^2+4A\\frac{(1-\\cos\\alpha)}{\\sin\\alpha}" }, { "math_id": 7, "text": "1-\\cos\\alpha=2\\sin^2\\frac{\\alpha}{2}" }, { "math_id": 8, "text": "\\sin\\alpha=2\\sin\\frac{\\alpha}{2}\\cos\\frac{\\alpha}{2}" }, { "math_id": 9, "text": "a^2=(b-c)^2+4A\\tan\\frac{\\alpha}{2}" }, { "math_id": 10, "text": "a^2+b^2+c^2=(a-b)^2+(b-c)^2+(c-a)^2+4A(\\tan\\frac{\\alpha}{2}+\\tan\\frac{\\beta}{2}+\\tan\\frac{\\gamma}{2})" }, { "math_id": 11, "text": "\\tan\\frac{\\alpha}{2}+\\tan\\frac{\\beta}{2}+\\tan\\frac{\\gamma}{2}\\geq3\\tan\\frac{\\alpha+\\beta+\\gamma}{6}=3\\tan\\frac{\\pi}{6}=\\sqrt{3}" }, { "math_id": 12, "text": "a^2 + b^2 + c^2 \\geq (a-b)^2+(b-c)^2+(c-a)^2+ 4\\sqrt{3}\\, A" } ]
https://en.wikipedia.org/wiki?curid=12802352
12802742
Jaffard ring
In mathematics, a Jaffard ring is a type of ring, more general than a Noetherian ring, for which Krull dimension behaves as expected in polynomial extensions. They are named for Paul Jaffard who first studied them in 1960. Formally, a Jaffard ring is a ring "R" such that the polynomial rings over it satisfy formula_0 where "dim" denotes Krull dimension. A Jaffard ring that is also an integral domain is called a Jaffard domain. The Jaffard property is satisfied by any Noetherian ring "R", and examples of non-Noetherian rings might appear to be quite difficult to find; however, they do arise naturally. Examples include the ring of (all) algebraic integers and, more generally, any Prüfer domain. Another example is obtained by "pinching" formal power series at the origin along a subfield of infinite extension degree, such as the example given in 1953 by Abraham Seidenberg: the subring of formula_1 consisting of those formal power series whose constant term is rational. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\dim R[T_1,\\ldots,T_n] = n + \\dim R, \\," }, { "math_id": 1, "text": "\\overline{\\mathbf{Q}} [[T]]" } ]
https://en.wikipedia.org/wiki?curid=12802742
1280458
Marginal revenue
Additional total revenue generated by increasing product sales by 1 unit Marginal revenue (or marginal benefit) is a central concept in microeconomics that describes the additional total revenue generated by increasing product sales by 1 unit. Marginal revenue is the increase in revenue from the sale of one additional unit of product, i.e., the revenue from the sale of the last unit of product. It can be positive or negative. Marginal revenue is an important concept in vendor analysis. To derive the value of marginal revenue, it is required to examine the difference between the aggregate benefits a firm received from the quantity of a good and service produced last period and the current period with one extra unit increase in the rate of production. Marginal revenue is a fundamental tool for economic decision making within a firm's setting, together with marginal cost to be considered. In a perfectly competitive market, the incremental revenue generated by selling an additional unit of a good is equal to the price the firm is able to charge the buyer of the good. This is because a firm in a competitive market will always get the same price for every unit it sells regardless of the number of units the firm sells since the firm's sales can never impact the industry's price. Therefore, in a perfectly competitive market, firms set the price level equal to their marginal revenue formula_0. In imperfect competition, a monopoly firm is a large producer in the market and changes in its output levels impact market prices, determining the whole industry's sales. Therefore, a monopoly firm lowers its price on all units sold in order to increase output (quantity) by 1 unit. Since a reduction in price leads to a decline in revenue on each good sold by the firm, the marginal revenue generated is always lower than the price level charged formula_1. The marginal revenue (the increase in total revenue) is the price the firm gets on the additional unit sold, less the revenue lost by reducing the price on all other units that were sold prior to the decrease in price. Marginal revenue is the concept of a firm sacrificing the opportunity to sell the current output at a certain price, in order to sell a higher quantity at a reduced price. Profit maximization occurs at the point where marginal revenue (MR) equals marginal cost (MC). If formula_2 then a profit-maximizing firm will increase output to generate more profit, while if formula_3 then the firm will decrease output to gain additional profit. Thus the firm will choose the profit-maximizing level of output for which formula_4. Definition. Marginal revenue is equal to the ratio of the change in revenue for some change in quantity sold to that change in quantity sold. This can be formulated as: formula_5 This can also be represented as a derivative when the change in quantity sold becomes arbitrarily small. Define the revenue function to be formula_6 where "Q" is output and "P"("Q") is the inverse demand function of customers. By the product rule, marginal revenue is then given by formula_7 where the prime sign indicates a derivative. For a firm facing perfect competition, price does not change with quantity sold (formula_8), so marginal revenue is equal to price. For a monopoly, the price decreases with quantity sold (formula_9), so marginal revenue is less than price for positive formula_10 (see Example 1). 
Example 1: If a firm sells 20 units of books (quantity) for $50 each (price), this earns total revenue: P*Q = $50*20 = $1000. Then, if the firm increases quantity sold to 21 units of books at $49 each, this earns total revenue: P*Q = $49*21 = $1029. Therefore, using the marginal revenue formula, MR = formula_11 Example 2: If a firm's total revenue function is written as formula_6 formula_12 formula_13 then, by taking the first-order derivative, marginal revenue is expressed as formula_14 Therefore, if Q = 40, MR = 200 − 2(40) = $120. Marginal revenue curve. The marginal revenue curve is affected by the same factors as the demand curve – changes in income, changes in the prices of complements and substitutes, changes in populations, etc. These factors can cause the MR curve to shift and rotate. The marginal revenue curve differs under perfect competition and imperfect competition (monopoly). Under perfect competition, there are multiple firms present in the market. Changes in the supply level of a single firm do not have an impact on the total price in the market. Firms follow the price determined by market equilibrium of supply and demand and are price takers. The marginal revenue curve is a horizontal line at the market price, implying perfectly elastic demand, and is equal to the demand curve. Under monopoly, one firm is a sole seller in the market with a differentiated product. The supply level (output) and price are determined by the monopolist in order to maximise profits, making a monopolist a price maker. The marginal revenue for a monopolist is the private gain of selling an additional unit of output. The marginal revenue curve is downward sloping and below the demand curve, and the additional gain from increasing the quantity sold is lower than the chosen market price. Under monopoly, the price of all units lowers each time a firm increases its output sold; this causes the firm to face diminishing marginal revenue. Marginal revenue curve and marginal cost curve. A company will stop producing a product/service when marginal revenue (the money the company earns from each additional sale) equals marginal cost (the cost to the company of producing an additional unit). Therefore, a company is making money when MR is greater than marginal cost (MC), and the point where MC = MR is called profit maximization. After this point, the company can no longer make additional profit, so it is in its interest to stop expanding production. Relationship between marginal revenue and elasticity. The relationship between marginal revenue and the elasticity of demand by the firm's customers can be derived as follows: formula_15 Taking the first order derivative of total revenue: formula_16 formula_17 where "R" is total revenue, "P"("Q") is the inverse of the demand function, and "e" &lt; 0 is the price elasticity of demand written as formula_18. A monopolist firm, as a price maker in the market, has an incentive to lower prices to boost quantities sold. The price effect occurs when a firm raises its products' prices and thereby earns more revenue on each unit sold. The quantity effect, on the other hand, describes how the quantity demanded by consumers falls when prices increase. A firm's pricing decision, therefore, is based on the tradeoff between these two effects, which depends on elasticity. When a monopolist firm is facing an inelastic demand curve (|e| &lt; 1), this implies that a percentage change in quantity is less than the percentage change in price. 
By increasing quantity sold, the firm is forced to accept a reduction of price for all the current and previous production units, resulting in a negative marginal revenue (MR). Because consumers are less sensitive and responsive to downward price movements, the expected boost in product sales is unlikely to materialise, and the firm loses profit owing to the reduction in marginal revenue. A rational firm will instead have to maintain its current price level, or increase the price, to expand profit. Increased consumer responsiveness to small changes in prices corresponds to an elastic demand curve (|e| &gt; 1), resulting in a positive marginal revenue (MR) under monopoly competition. This signifies that a percentage change in quantity outweighs the percentage change in price. Firms in an imperfectly competitive market that lower prices by a small amount benefit from a large percentage increase in quantity sold, and this generates greater marginal revenue. With that, a rational firm will recognize the strength of the quantity effect under an elastic demand function for its products and will avoid increasing prices, as the quantity (demand) lost would be amplified by the elastic demand curve. If the firm is a perfect competitor, where quantity produced and sold has no effect on the market price, then the price elasticity of demand is negative infinity and marginal revenue simply equals the (market-determined) price formula_0. Therefore, it is essential to be aware of the elasticity of demand. A monopolist prefers to be on the more elastic end of the demand curve in order to gain a positive marginal revenue. This shows that a monopolist restricts the output produced to the range where marginal revenue is positive. Marginal revenue and marginal benefit. Example 1: Suppose consumers want to buy an additional lipstick. If the consumer is willing to pay $50 for this extra lipstick, the marginal benefit of the purchase is $50. However, the more lipsticks consumers have, the less they are willing to pay for the next lipstick. This is because as consumers accumulate more and more lipsticks, the benefit of having an additional lipstick will be reduced. Example 2: Suppose customers are considering buying 10 computers. If the marginal benefit of an 11th computer to the customers is $2, and the computer company is willing to sell the 11th computer to maximize its consumers' interest, then the company's marginal revenue is $2 and the consumers' marginal benefit is $2. Law of diminishing marginal returns. In microeconomics, for every unit of input added to a firm, the return received decreases. When a variable factor of production is put into a firm at a constant level of technology, the initial increase in this factor of production will increase output, but when it exceeds a certain limit, the increase in output will diminish and will eventually reduce output in absolute terms. Law of increasing marginal returns. In contrast to the law of diminishing marginal returns, in a knowledge-dependent economy, as knowledge and technological inputs increase, the output increases and the producer's returns tend to increase. This is an example of increasing marginal revenue; suppose a company produces toy airplanes. After some production, the company spends $10 in materials and labor to build the 1st toy airplane. The 1st toy airplane sells for $15, which means the profit on that toy is $5. Now, suppose that the 2nd toy airplane also costs $10, but this time it can be sold for $17. The profit on the 2nd toy airplane is $7, which is $2 greater than the profit on the 1st toy airplane. 
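The definitions and the elasticity relationship above can be checked numerically. The following is a small illustrative C sketch (not part of the original article; the helper names are arbitrary) that uses the inverse demand function of Example 2, P(Q) = 200 − Q, and compares the marginal revenue of one extra unit, ΔTR/ΔQ, with the derivative 200 − 2Q and with the elasticity form P(1 + 1/e).

/* Small numerical sketch (illustrative only) using the inverse demand of
   Example 2 above, P(Q) = 200 - Q.  It compares the discrete definition
   MR = dTR/dQ with the derivative 200 - 2Q and with MR = P(1 + 1/e). */
#include <stdio.h>

static double price(double q)   { return 200.0 - q; }
static double revenue(double q) { return price(q) * q; }

int main(void)
{
    for (double q = 10.0; q <= 90.0; q += 20.0) {
        double mr_discrete   = revenue(q + 1.0) - revenue(q);  /* one extra unit */
        double mr_derivative = 200.0 - 2.0 * q;
        double e = -price(q) / q;              /* e = (dQ/dP)(P/Q), dQ/dP = -1 */
        double mr_elasticity = price(q) * (1.0 + 1.0 / e);
        printf("Q = %4.0f  P = %6.2f  MR(discrete) = %7.2f  "
               "MR(derivative) = %7.2f  MR(elasticity) = %7.2f\n",
               q, price(q), mr_discrete, mr_derivative, mr_elasticity);
    }
    return 0;
}

All three columns agree closely, and marginal revenue stays below the price P at every positive quantity, as described above for a monopolist.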
Marginal revenue and markup pricing. Profit maximization requires that a firm produce where marginal revenue equals marginal cost. Firm managers are unlikely to have complete information concerning their marginal revenue function or their marginal costs. However, the profit maximization conditions can be expressed in a “more easily applicable form”: since MR = MC and MR = P(1 + 1/e), it follows that MC = P(1 + 1/e) = P + P/e, and hence (P − MC)/P = −1/e. Markup is the difference between price and marginal cost. The formula states that markup as a percentage of price equals the negative (and hence the absolute value) of the inverse of the elasticity of demand. A lower elasticity of demand implies a higher markup at the profit maximising equilibrium. (P − MC)/P = −1/e is called the Lerner index after economist Abba Lerner. The Lerner index is a measure of market power — the ability of a firm to charge a price that exceeds marginal cost. The index varies from zero (when demand is infinitely elastic, as in a perfectly competitive market) to 1 (when demand has an elasticity of −1). The closer the index value is to 1, the greater the difference between price and marginal cost. The Lerner index increases as demand becomes less elastic. Alternatively, the relationship can be expressed as: P = MC/(1 + 1/e). Thus, for example, if "e" is −2 and MC is $5.00 then price is $10.00. Example: If a company can sell 10 units at $20 each or 11 units at $19 each, then the marginal revenue from the eleventh unit is (11 × 19) − (10 × 20) = $9. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(MR = P)" }, { "math_id": 1, "text": "(MR < P) " }, { "math_id": 2, "text": "MR > MC" }, { "math_id": 3, "text": "MR < MC" }, { "math_id": 4, "text": "MR = MC" }, { "math_id": 5, "text": "MR = \\frac{\\Delta TR}{\\Delta Q}" }, { "math_id": 6, "text": "R(Q)=P(Q)\\cdot Q ," }, { "math_id": 7, "text": "R'(Q)=P(Q) + P'(Q)\\cdot Q," }, { "math_id": 8, "text": "P'(Q)=0" }, { "math_id": 9, "text": "P'(Q)<0" }, { "math_id": 10, "text": "Q" }, { "math_id": 11, "text": "\\frac{\\Delta TR}{\\Delta Q} = \\left ( \\frac{\\$1029 - \\$1000}{21 - 20} \\right ) = \\$29" }, { "math_id": 12, "text": " R(Q)=(Q)\\cdot (200 - Q) " }, { "math_id": 13, "text": " R(Q)=200Q - Q^2 " }, { "math_id": 14, "text": " MR = R'(Q)=200- 2Q" }, { "math_id": 15, "text": "R=P(Q)\\cdot Q," }, { "math_id": 16, "text": "\\left ( \\frac{dR}{dQ} \\right )= \\left ( \\frac{dQ}{dQ} \\right )\\cdot P + \\left ( \\frac{dP}{dQ} \\right ) \\cdot Q" }, { "math_id": 17, "text": "MR = dR/dQ = P + \\frac{dP}{dQ} \\cdot Q = P + \\left(\\frac{dP}{dQ} \\frac{Q}{P}\\right) \\cdot P = P \\cdot \\left(1 + \\frac{1}{e} \\right)," }, { "math_id": 18, "text": "e = \\left(\\frac{dQ}{dP}\\frac{P}{Q}\\right)" } ]
https://en.wikipedia.org/wiki?curid=1280458
12807714
Mu problem
In theoretical physics, the μ problem is a problem of supersymmetric theories, concerned with understanding the parameters of the theory. Background. The supersymmetric Higgs mass parameter μ appears as the following term in the superpotential: μHuHd. It is necessary to provide a mass for the fermionic superpartners of the Higgs bosons, i.e. the higgsinos, and it enters as well the scalar potential of the Higgs bosons. To ensure that Hu and Hd get a non-zero vacuum expectation value after electroweak symmetry breaking, μ should be of the order of magnitude of the electroweak scale, many orders of magnitude smaller than the Planck scale (Mpl), which is the natural cutoff scale. This brings about a problem of naturalness: Why is that scale so much smaller than the cutoff scale? And why, if the μ term and the supersymmetry-breaking terms have different physical origins, do the corresponding scales happen to fall so close to each other? Before the LHC, it was thought that the soft supersymmetry breaking terms should also be of the same order of magnitude as the electroweak scale. This was negated by the Higgs mass measurements and limits on supersymmetry models. One proposed solution, known as the Giudice–Masiero mechanism, is that this term does not appear explicitly in the Lagrangian, because it violates some global symmetry, and can therefore be created only via spontaneous breaking of this symmetry. This is proposed to happen together with F-term supersymmetry breaking, with a spurion field X that parameterizes the hidden supersymmetry-breaking sector of the theory (meaning that FX is the non-zero F-term). Let us assume that the Kähler potential includes a term of the form formula_0 times some dimensionless coefficient, which is naturally of order one, and where Mpl is the Planck mass. Then, as supersymmetry breaks, FX gets a non-zero vacuum expectation value ⟨FX⟩ and the following effective term is added to the superpotential: formula_1 which gives a measured formula_2 On the other hand, soft supersymmetry breaking terms are similarly created and also have a natural scale of formula_3 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ \\frac{X}{\\ M_\\mathsf{pl}\\ }\\ H_\\mathsf{u}\\ H_\\mathsf{d}\\ " }, { "math_id": 1, "text": "\\ \\frac{\\ \\langle F_\\mathsf{X} \\rangle\\ }{\\ M_\\mathsf{pl}\\ }\\ H_\\mathsf{u}\\ H_\\mathsf{d}\\ ," }, { "math_id": 2, "text": "\\ \\mu = \\frac{\\ \\langle F_\\mathsf{X} \\rangle\\ }{\\ M_\\mathsf{pl}\\ }\\ ." }, { "math_id": 3, "text": "\\ \\frac{\\ \\langle F_\\mathsf{X} \\rangle\\ }{\\ M_\\mathsf{pl}\\ }\\ ." } ]
https://en.wikipedia.org/wiki?curid=12807714
12807841
Hurwitz surface
In Riemann surface theory and hyperbolic geometry, a Hurwitz surface, named after Adolf Hurwitz, is a compact Riemann surface with precisely 84("g" − 1) automorphisms, where "g" is the genus of the surface. This number is maximal by virtue of Hurwitz's theorem on automorphisms . They are also referred to as Hurwitz curves, interpreting them as complex algebraic curves (complex dimension 1 = real dimension 2). The Fuchsian group of a Hurwitz surface is a finite index torsionfree normal subgroup of the (ordinary) (2,3,7) triangle group. The finite quotient group is precisely the automorphism group. Automorphisms of complex algebraic curves are "orientation-preserving" automorphisms of the underlying real surface; if one allows orientation-"reversing" isometries, this yields a group twice as large, of order 168("g" − 1), which is sometimes of interest. A note on terminology – in this and other contexts, the "(2,3,7) triangle group" most often refers, not to the "full" triangle group Δ(2,3,7) (the Coxeter group with Schwarz triangle (2,3,7) or a realization as a hyperbolic reflection group), but rather to the "ordinary" triangle group (the von Dyck group) "D"(2,3,7) of orientation-preserving maps (the rotation group), which is index 2. The group of complex automorphisms is a quotient of the "ordinary" (orientation-preserving) triangle group, while the group of (possibly orientation-reversing) isometries is a quotient of the "full" triangle group. Classification by genus. Only finitely many Hurwitz surfaces occur with each genus. The function formula_0 mapping the genus to the number of Hurwitz surfaces with that genus is unbounded, even though most of its values are zero. The sum formula_1 converges for formula_2, implying in an approximate sense that the genus of the formula_3th Hurwitz surface grows at least as a cubic function of formula_3 . The Hurwitz surface of least genus is the Klein quartic of genus 3, with automorphism group the projective special linear group PSL(2,7), of order 84(3 − 1) = 168 = 23·3·7, which is a simple group; (or order 336 if one allows orientation-reversing isometries). The next possible genus is 7, possessed by the Macbeath surface, with automorphism group PSL(2,8), which is the simple group of order 84(7 − 1) = 504 = 23·32·7; if one includes orientation-reversing isometries, the group is of order 1,008. An interesting phenomenon occurs in the next possible genus, namely 14. Here there is a triple of distinct Riemann surfaces with the identical automorphism group (of order 84(14 − 1) = 1092 = 22·3·7·13). The explanation for this phenomenon is arithmetic. Namely, in the ring of integers of the appropriate number field, the rational prime 13 splits as a product of three distinct prime ideals. The principal congruence subgroups defined by the triplet of primes produce Fuchsian groups corresponding to the first Hurwitz triplet. The sequence of allowable values for the genus of a Hurwitz surface begins 3, 7, 14, 17, 118, 129, 146, 385, 411, 474, 687, 769, 1009, 1025, 1459, 1537, 2091, ... (sequence in the OEIS) References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "h(g)" }, { "math_id": 1, "text": "\\sum_{i=1}^{\\infty}\\frac{h(g)}{g^s}" }, { "math_id": 2, "text": "s > 1/3" }, { "math_id": 3, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=12807841
1280843
Hilbert's syzygy theorem
Theorem about linear relations in ideals and modules over polynomial rings In mathematics, Hilbert's syzygy theorem is one of the three fundamental theorems about polynomial rings over fields, first proved by David Hilbert in 1890, that were introduced for solving important open questions in invariant theory, and are at the basis of modern algebraic geometry. The two other theorems are Hilbert's basis theorem, which asserts that all ideals of polynomial rings over a field are finitely generated, and Hilbert's Nullstellensatz, which establishes a bijective correspondence between affine algebraic varieties and prime ideals of polynomial rings. Hilbert's syzygy theorem concerns the "relations", or syzygies in Hilbert's terminology, between the generators of an ideal, or, more generally, a module. As the relations form a module, one may consider the relations between the relations; the theorem asserts that, if one continues in this way, starting with a module over a polynomial ring in "n" indeterminates over a field, one eventually finds a zero module of relations, after at most "n" steps. Hilbert's syzygy theorem is now considered to be an early result of homological algebra. It is the starting point of the use of homological methods in commutative algebra and algebraic geometry. History. The syzygy theorem first appeared in Hilbert's seminal paper "Über die Theorie der algebraischen Formen" (1890). The paper is split into five parts: part I proves Hilbert's basis theorem over a field, while part II proves it over the integers. Part III contains the syzygy theorem (Theorem III), which is used in part IV to discuss the Hilbert polynomial. The last part, part V, proves finite generation of certain rings of invariants. Incidentally part III also contains a special case of the Hilbert–Burch theorem. Syzygies (relations). Originally, Hilbert defined syzygies for ideals in polynomial rings, but the concept generalizes trivially to (left) modules over any ring. Given a generating set formula_0 of a module "M" over a ring "R", a relation or first syzygy between the generators is a "k"-tuple formula_1 of elements of "R" such that formula_2 Let formula_3 be a free module with basis formula_4 The k-tuple formula_1 may be identified with the element formula_5 and the relations form the kernel formula_6 of the linear map formula_7 defined by formula_8 In other words, one has an exact sequence formula_9 This first syzygy module formula_6 depends on the choice of a generating set, but, if formula_10 is the module that is obtained with another generating set, there exist two free modules formula_11 and formula_12 such that formula_13 where formula_14 denote the direct sum of modules. The "second syzygy" module is the module of the relations between generators of the first syzygy module. By continuing in this way, one may define the "kth syzygy module" for every positive integer "k". If the "k"th syzygy module is free for some "k", then by taking a basis as a generating set, the next syzygy module (and every subsequent one) is the zero module. If one does not take a basis as a generating set, then all subsequent syzygy modules are free. Let "n" be the smallest integer, if any, such that the "n"th syzygy module of a module "M" is free or projective. The above property of invariance, up to the sum direct with free modules, implies that "n" does not depend on the choice of generating sets. The projective dimension of "M" is this integer, if it exists, or ∞ if not. 
This is equivalent with the existence of an exact sequence formula_15 where the modules formula_16 are free and formula_17 is projective. It can be shown that one may always choose the generating sets for formula_17 being free, that is for the above exact sequence to be a free resolution. Statement. Hilbert's syzygy theorem states that, if "M" is a finitely generated module over a polynomial ring formula_18 in "n" indeterminates over a field "k", then the "n"th syzygy module of "M" is always a free module. In modern language, this implies that the projective dimension of "M" is at most "n", and thus that there exists a free resolution formula_19 of length "k" ≤ "n". This upper bound on the projective dimension is sharp, that is, there are modules of projective dimension exactly "n". The standard example is the field "k", which may be considered as a formula_18-module by setting formula_20 for every "i" and every "c" ∈ "k". For this module, the "n"th syzygy module is free, but not the ("n" − 1)th one (for a proof, see , below). The theorem is also true for modules that are not finitely generated. As the global dimension of a ring is the supremum of the projective dimensions of all modules, Hilbert's syzygy theorem may be restated as: "the global dimension of formula_18 is n". Low dimension. In the case of zero indeterminates, Hilbert's syzygy theorem is simply the fact that every vector space has a basis. In the case of a single indeterminate, Hilbert's syzygy theorem is an instance of the theorem asserting that over a principal ideal ring, every submodule of a free module is itself free. Koszul complex. The Koszul complex, also called "complex of exterior algebra", allows, in some cases, an explicit description of all syzygy modules. Let formula_0 be a generating system of an ideal "I" in a polynomial ring formula_21, and let formula_22 be a free module of basis formula_23 The exterior algebra of formula_22 is the direct sum formula_24 where formula_25 is the free module, which has, as a basis, the exterior products formula_26 such that formula_27 In particular, one has formula_28 (because of the definition of the empty product), the two definitions of formula_22 coincide, and formula_29 for "t" &gt; "k". For every positive "t", one may define a linear map formula_30 by formula_31 where the hat means that the factor is omitted. A straightforward computation shows that the composition of two consecutive such maps is zero, and thus that one has a complex formula_32 This is the "Koszul complex". In general the Koszul complex is not an exact sequence, but "it is an exact sequence if one works with a polynomial ring" formula_21 "and an ideal generated by a regular sequence of homogeneous polynomials." In particular, the sequence formula_33 is regular, and the Koszul complex is thus a projective resolution of formula_34formula_35 In this case, the "n"th syzygy module is free of dimension one (generated by the product of all formula_36); the ("n" − 1)th syzygy module is thus the quotient of a free module of dimension "n" by the submodule generated by formula_37 This quotient may not be a projective module, as otherwise, there would exist polynomials formula_38 such that formula_39 which is impossible (substituting 0 for the formula_40 in the latter equality provides 1 = 0). This proves that the projective dimension of formula_41 is exactly "n". 
The same proof shows that the projective dimension of formula_42 is exactly "t" if the formula_43 form a regular sequence of homogeneous polynomials. Computation. At Hilbert's time, there was no method available for computing syzygies. It was only known that an algorithm may be deduced from any upper bound of the degree of the generators of the module of syzygies. In fact, the coefficients of the syzygies are unknown polynomials. If the degree of these polynomials is bounded, the number of their monomials is also bounded. Expressing that one has a syzygy provides a system of linear equations whose unknowns are the coefficients of these monomials. Therefore, any algorithm for linear systems implies an algorithm for syzygies, as soon as a bound of the degrees is known. The first bound for syzygies (as well as for the ideal membership problem) was given in 1926 by Grete Hermann: Let "M" be a submodule of a free module "L" of dimension "t" over formula_44 if the coefficients over a basis of "L" of a generating system of "M" have a total degree at most "d", then there is a constant "c" such that the degrees occurring in a generating system of the first syzygy module are at most formula_45 The same bound applies for testing the membership to "M" of an element of "L". On the other hand, there are examples where a double exponential degree necessarily occurs. However, such examples are extremely rare, and this raises the question of an algorithm that is efficient when the output is not too large. At the present time, the best algorithms for computing syzygies are Gröbner basis algorithms. They allow the computation of the first syzygy module, and also, with almost no extra cost, all syzygy modules. Syzygies and regularity. One might wonder which ring-theoretic property of formula_46 causes the Hilbert syzygy theorem to hold. It turns out that this is regularity, which is an algebraic formulation of the fact that affine "n"-space is a variety without singularities. In fact the following generalization holds: Let formula_47 be a Noetherian ring. Then formula_47 has finite global dimension if and only if formula_47 is regular and the Krull dimension of formula_47 is finite; in that case the global dimension of formula_47 is equal to the Krull dimension. This result may be proven using Serre's theorem on regular local rings. References.
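As a concrete illustration of the notions above in the smallest nontrivial case (a standard example, not taken from Hilbert's paper), take "R" = "k"["x", "y"] and the generators "x" and "y" of the ideal ("x", "y"). The only relations "a"1"x" + "a"2"y" = 0 are the multiples of the obvious one, "y"·"x" − "x"·"y" = 0, so the first syzygy module of these generators is free of rank one, generated by ("y", −"x"). Assembling the corresponding maps gives a free resolution of "R"/("x", "y") of length 2, in agreement with the theorem for "n" = 2:

```latex
% Free resolution of k = R/(x, y) over R = k[x, y]; the left-hand copy of R records
% the single syzygy (y, -x) between the generators x and y, and it is free, so the
% projective dimension of k is 2 = n, as Hilbert's syzygy theorem predicts.
\[
0 \longrightarrow R
  \xrightarrow{\begin{pmatrix} y \\ -x \end{pmatrix}}
  R^{2}
  \xrightarrow{(x \;\; y)}
  R
  \longrightarrow R/(x, y)
  \longrightarrow 0 .
\]
```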
[ { "math_id": 0, "text": "g_1, \\ldots, g_k" }, { "math_id": 1, "text": "(a_1, \\ldots, a_k)" }, { "math_id": 2, "text": "a_1g_1 + \\cdots + a_kg_k =0." }, { "math_id": 3, "text": "L_0" }, { "math_id": 4, "text": "(G_1, \\ldots, G_k)." }, { "math_id": 5, "text": "a_1G_1 + \\cdots + a_kG_k," }, { "math_id": 6, "text": "R_1" }, { "math_id": 7, "text": "L_0 \\to M" }, { "math_id": 8, "text": "G_i \\mapsto g_i." }, { "math_id": 9, "text": "0 \\to R_1 \\to L_0 \\to M \\to 0." }, { "math_id": 10, "text": "S_1" }, { "math_id": 11, "text": "F_1" }, { "math_id": 12, "text": "F_2" }, { "math_id": 13, "text": "R_1 \\oplus F_1 \\cong S_1 \\oplus F_2" }, { "math_id": 14, "text": "\\oplus" }, { "math_id": 15, "text": "0 \\longrightarrow R_n \\longrightarrow L_{n-1} \\longrightarrow \\cdots \\longrightarrow L_0 \\longrightarrow M \\longrightarrow 0," }, { "math_id": 16, "text": "L_i" }, { "math_id": 17, "text": "R_n" }, { "math_id": 18, "text": "k[x_1,\\ldots,x_n]" }, { "math_id": 19, "text": "0 \\longrightarrow L_k \\longrightarrow L_{k-1} \\longrightarrow \\cdots \\longrightarrow L_0 \\longrightarrow M \\longrightarrow 0" }, { "math_id": 20, "text": "x_i c=0" }, { "math_id": 21, "text": "R=k[x_1,\\ldots,x_n]" }, { "math_id": 22, "text": "L_1" }, { "math_id": 23, "text": "G_1, \\ldots, G_k." }, { "math_id": 24, "text": "\\Lambda(L_1)=\\bigoplus_{t=0}^k L_t," }, { "math_id": 25, "text": "L_t" }, { "math_id": 26, "text": "G_{i_1} \\wedge \\cdots \\wedge G_{i_t}," }, { "math_id": 27, "text": "i_1< i_2<\\cdots <i_t." }, { "math_id": 28, "text": "L_0=R" }, { "math_id": 29, "text": "L_t=0" }, { "math_id": 30, "text": "L_t\\to L_{t-1}" }, { "math_id": 31, "text": "G_{i_1} \\wedge \\cdots \\wedge G_{i_t} \\mapsto \\sum_{j=1}^t (-1)^{j+1}g_{i_j}G_{i_1}\\wedge \\cdots\\wedge \\widehat{G}_{i_j} \\wedge \\cdots\\wedge G_{i_t}, " }, { "math_id": 32, "text": "0\\to L_t \\to L_{t-1} \\to \\cdots \\to L_1 \\to L_0 \\to R/I." }, { "math_id": 33, "text": "x_1,\\ldots,x_n" }, { "math_id": 34, "text": "" }, { "math_id": 35, "text": "k=R/\\langle x_1, \\ldots, x_n\\rangle." }, { "math_id": 36, "text": "G_i" }, { "math_id": 37, "text": "(x_1, -x_2, \\ldots, \\pm x_n)." }, { "math_id": 38, "text": "p_i" }, { "math_id": 39, "text": "p_1x_1 + \\cdots +p_nx_n=1," }, { "math_id": 40, "text": "x_i" }, { "math_id": 41, "text": "k=R/\\langle x_1, \\ldots, x_n\\rangle" }, { "math_id": 42, "text": "k[x_1, \\ldots, x_n]/\\langle g_1, \\ldots, g_t\\rangle" }, { "math_id": 43, "text": "g_i" }, { "math_id": 44, "text": "k[x_1, \\ldots, x_n];" }, { "math_id": 45, "text": "(td)^{2^{cn}}." }, { "math_id": 46, "text": "A=k[x_1,\\ldots,x_n]" }, { "math_id": 47, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=1280843
12809051
UNIFAC
Liquid equilibrium model in statistical thermodynamics In statistical thermodynamics, the UNIFAC method (UNIQUAC Functional-group Activity Coefficients) is a semi-empirical system for the prediction of non-electrolyte activity in non-ideal mixtures. UNIFAC uses the functional groups present on the molecules that make up the liquid mixture to calculate activity coefficients. By using interactions for each of the functional groups present on the molecules, as well as some binary interaction coefficients, the activity of each of the solutions can be calculated. This information can be used to obtain information on liquid equilibria, which is useful in many thermodynamic calculations, such as chemical reactor design, and distillation calculations. The UNIFAC model was first published in 1975 by Fredenslund, Jones and John Prausnitz, a group of chemical engineering researchers from the University of California. Subsequently they and other authors have published a wide range of UNIFAC papers, extending the capabilities of the model; this has been by the development of new or revision of existing UNIFAC model parameters. UNIFAC is an attempt by these researchers to provide a flexible liquid equilibria model for wider use in chemistry, the chemical and process engineering disciplines. Introduction. A particular problem in the area of liquid-state thermodynamics is the sourcing of reliable thermodynamic constants. These constants are necessary for the successful prediction of the free energy state of the system; without this information it is impossible to model the equilibrium phases of the system. Obtaining this free energy data is not a trivial problem, and requires careful experiments, such as calorimetry, to successfully measure the energy of the system. Even when this work is performed it is infeasible to attempt to conduct this work for every single possible class of chemicals, and the binary, or higher, mixtures thereof. To alleviate this problem, free energy prediction models, such as UNIFAC, are employed to predict the system's energy based on a few previously measured constants. It is possible to calculate some of these parameters using ab initio methods like COSMO-RS, but results should be treated with caution, because ab initio predictions can be off. Similarly, UNIFAC can be off, and for both methods it is advisable to validate the energies obtained from these calculations experimentally. UNIFAC correlation. The UNIFAC correlation attempts to break down the problem of predicting interactions between molecules by describing molecular interactions based upon the functional groups attached to the molecule. This is done in order to reduce the sheer number of binary interactions that would be needed to be measured to predict the state of the system. Chemical activity. The activity coefficient of the components in a system is a correction factor that accounts for deviations of real systems from that of an Ideal solution, which can either be measured via experiment or estimated from chemical models (such as UNIFAC). By adding a correction factor, known as the activity (formula_0, the activity of the ith component) to the liquid phase fraction of a liquid mixture, some of the effects of the real solution can be accounted for. The activity of a real chemical is a function of the thermodynamic state of the system, i.e. temperature and pressure. 
Equipped with the activity coefficients and a knowledge of the constituents and their relative amounts, phenomena such as phase separation and vapour-liquid equilibria can be calculated. UNIFAC attempts to be a general model for the successful prediction of activity coefficients. Model parameters. The UNIFAC model splits up the activity coefficient for each species in the system into two components; a combinatorial formula_1 and a residual component formula_2. For the formula_3-th molecule, the activity coefficients are broken down as per the following equation: formula_4 In the UNIFAC model, there are three main parameters required to determine the activity for each molecule in the system. Firstly there are the group surface area formula_5 and volume contributions formula_6 obtained from the Van der Waals surface area and volumes. These parameters depend purely upon the individual functional groups on the host molecules. Finally there is the binary interaction parameter formula_7, which is related to the interaction energy formula_8 of molecular pairs (equation in "residual" section). These parameters must be obtained either through experiments, via data fitting or molecular simulation. Combinatorial. The combinatorial component of the activity is contributed to by several terms in its equation (below), and is the same as for the UNIQUAC model. formula_9 where formula_10 and formula_11 are the molar weighted segment and area "fractional" components for the formula_3-th molecule in the total system and are defined by the following equation; formula_12 is a compound parameter of formula_13, formula_14 and formula_15. formula_14 is the coordination number of the system, but the model is found to be relatively insensitive to its value and is frequently quoted as a constant having the value of 10. formula_16 formula_17 and formula_18 are calculated from the group surface area and volume contributions formula_5 and formula_6 (Usually obtained via tabulated values) as well as the number of occurrences of the functional group on each molecule formula_19 such that: formula_20 Residual. The residual component of the activity formula_21 is due to interactions between groups present in the system, with the original paper referring to the concept of a "solution-of-groups". The residual component of the activity for the formula_3-th molecule containing formula_22 unique functional groups can be written as follows: formula_23 where formula_24 is the activity of an isolated group in a solution consisting only of molecules of type formula_3. The formulation of the residual activity ensures that the condition for the limiting case of a single molecule in a pure component solution, the activity is equal to 1; as by the definition of formula_24, one finds that formula_25 will be zero. The following formula is used for both formula_26 and formula_24 formula_27 In this formula formula_28 is the summation of the area fraction of group formula_29, over all the different groups and is somewhat similar in form, but not the same as formula_10. formula_30 is the group interaction parameter and is a measure of the interaction energy between groups. This is calculated using an Arrhenius equation (albeit with a pseudo-constant of value 1). formula_31 is the group mole fraction, which is the number of groups formula_22 in the solution divided by the total number of groups. 
formula_32 formula_33 formula_34 is the energy of interaction between groups "m" and "n", with SI units of joules per mole, and "R" is the ideal gas constant. Note that it is not the case that formula_35, so the interaction parameter is not symmetric in its indices. The equation for the group interaction parameter can be simplified to the following: formula_36 Thus formula_37 still represents the net energy of interaction between groups formula_29 and formula_22, but has the somewhat unusual units of absolute temperature (SI kelvins). These interaction energy values are obtained from experimental data, and are usually tabulated. References.
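Because the combinatorial part depends only on the mixture composition and the tabulated group volume and surface-area parameters, it is straightforward to evaluate. The following Python sketch (an illustration, not part of the original model description) implements only the combinatorial equations given above; the group counts and the "R" and "Q" values in the example call are placeholders rather than tabulated UNIFAC parameters, and the residual part would additionally require the group-interaction parameters "a"mn.

```python
import math

def unifac_combinatorial(x, nu, R, Q, z=10.0):
    """Combinatorial part ln(gamma_i^c) of UNIFAC for every component.

    x  : mole fractions of the components
    nu : nu[i][k] = number of occurrences of group k in molecule i
    R, Q : van der Waals volume and surface-area parameters of each group
    Implements only the combinatorial equations given above; illustrative, not validated.
    """
    n = len(x)
    r = [sum(nu[i][k] * R[k] for k in range(len(R))) for i in range(n)]
    q = [sum(nu[i][k] * Q[k] for k in range(len(Q))) for i in range(n)]
    sum_xr = sum(x[j] * r[j] for j in range(n))
    sum_xq = sum(x[j] * q[j] for j in range(n))
    phi = [x[i] * r[i] / sum_xr for i in range(n)]
    theta = [x[i] * q[i] / sum_xq for i in range(n)]
    L = [z / 2.0 * (r[i] - q[i]) - (r[i] - 1.0) for i in range(n)]
    return [
        math.log(phi[i] / x[i])
        + z / 2.0 * q[i] * math.log(theta[i] / phi[i])
        + L[i]
        - phi[i] / x[i] * sum(x[j] * L[j] for j in range(n))
        for i in range(n)
    ]

# Hypothetical two-component, two-group mixture; parameters are placeholders only.
print(unifac_combinatorial(x=[0.4, 0.6],
                           nu=[[2, 1], [1, 3]],
                           R=[0.9011, 1.6764],
                           Q=[0.848, 1.420]))
```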
[ { "math_id": 0, "text": "a_i" }, { "math_id": 1, "text": "\\gamma^c" }, { "math_id": 2, "text": "\\gamma^r" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "\\ln \\gamma_i = \\ln \\gamma_i^c + \\ln \\gamma_i^r." }, { "math_id": 5, "text": "Q" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "\\tau_{ij}" }, { "math_id": 8, "text": "U_i" }, { "math_id": 9, "text": "\\ln \\gamma_i^c = \\ln \\frac{\\phi_i}{x_i} + \\frac{z}{2} q_i \\ln\\frac{\\theta_i}{\\phi_i} + L_i - \\frac{\\phi_i}{x_i} \\displaystyle\\sum_{j=1}^{n} x_j L_j," }, { "math_id": 10, "text": "\\theta_i" }, { "math_id": 11, "text": "\\phi_i" }, { "math_id": 12, "text": "L_i" }, { "math_id": 13, "text": "r" }, { "math_id": 14, "text": "z" }, { "math_id": 15, "text": "q" }, { "math_id": 16, "text": "\\theta_i = \\frac{x_i q_i}{\\displaystyle\\sum_{j=1}^{n} x_j q_j}, \\quad \\phi_i = \\frac{x_i r_i}{\\displaystyle\\sum_{j=1}^{n} x_j r_j}, \\quad L_i = \\frac{z}{2}(r_i - q_i)-(r_i-1), \\quad z=10," }, { "math_id": 17, "text": "q_i" }, { "math_id": 18, "text": "r_i" }, { "math_id": 19, "text": "\\nu_k" }, { "math_id": 20, "text": "r_i = \\displaystyle\\sum_{k=1}^{n} \\nu_k R_k, \\quad q_i = \\displaystyle\\sum_{k=1}^{n}\\nu_k Q_k." }, { "math_id": 21, "text": "\\gamma^{r}" }, { "math_id": 22, "text": "n" }, { "math_id": 23, "text": " \\ln \\gamma_i^r = \\displaystyle\\sum_{k}^n \\nu_k^{(i)} \\left[ \\ln \\Gamma_k - \\ln \\Gamma_k^{(i)} \\right]," }, { "math_id": 24, "text": "\\Gamma_k^{(i)}" }, { "math_id": 25, "text": " \\ln \\Gamma_k - \\ln \\Gamma_k^{(i)}" }, { "math_id": 26, "text": "\\Gamma_k" }, { "math_id": 27, "text": "\\ln \\Gamma_k = Q_k \\left[1 - \\ln \\displaystyle\\sum_m \\Theta_m \\Psi_{mk} - \\displaystyle\\sum_m \\frac{\\Theta_m \\Psi_{km}}{\\displaystyle\\sum_n \\Theta_n \\Psi_{nm}}\\right]." }, { "math_id": 28, "text": "\\Theta_m" }, { "math_id": 29, "text": "m" }, { "math_id": 30, "text": "\\Psi_{mn}" }, { "math_id": 31, "text": "X_n" }, { "math_id": 32, "text": "\\Theta_m = \\frac{Q_m X_m}{\\displaystyle\\sum_{n} Q_n X_n}," }, { "math_id": 33, "text": "\\Psi_{mn} = \\exp \\left[ -\\frac{U_{mn} - U_{nm} }{RT} \\right], \\quad X_m = \\frac{ \\displaystyle\\sum_j \\nu^j_m x_j}{\\displaystyle\\sum_j \\displaystyle\\sum_n \\nu_n^j x_j}," }, { "math_id": 34, "text": "U_{mn}" }, { "math_id": 35, "text": "U_{mn} = U_{nm}" }, { "math_id": 36, "text": "\\Psi_{mn} = \\exp\\frac{-a_{mn}}{T}." }, { "math_id": 37, "text": "a_{mn}" } ]
https://en.wikipedia.org/wiki?curid=12809051
12809481
Hennessy–Milner logic
In computer science, Hennessy–Milner logic (HML) is a dynamic logic used to specify properties of a labeled transition system (LTS), a structure similar to an automaton. It was introduced in 1980 by Matthew Hennessy and Robin Milner in their paper "On observing nondeterminism and concurrency" (ICALP). Another variant of HML involves the use of recursion to extend the expressiveness of the logic, and is commonly referred to as 'Hennessy-Milner Logic with recursion'. Recursion is enabled with the use of maximum and minimum fixed points. Syntax. A formula is defined by the following BNF grammar for "Act" some set of actions: formula_0 That is, a formula can be the constant true formula_1, the constant false formula_2, a conjunction or disjunction of formulae, a formula prefixed by the box modality formula_3 (the formula holds after every "Act"-transition), or a formula prefixed by the diamond modality formula_4 (the formula holds after some "Act"-transition). Formal semantics. Let formula_5 be a labeled transition system, and let formula_6 be the set of HML formulae. The satisfiability relation formula_7 relates states of the LTS to the formulae they satisfy, and is defined as the smallest relation such that, for all states formula_8 and formulae formula_9: formula_10 always holds; formula_11 never holds; formula_15 holds if and only if there is a state formula_12 such that formula_13 and formula_14; formula_17 holds if and only if formula_14 for every formula_16 with formula_13; formula_19 holds if and only if formula_18 or formula_20; and formula_21 holds if and only if formula_18 and formula_20.
[ { "math_id": 0, "text": "\\Phi ::= \\textit{tt} \\,\\,\\, | \\,\\,\\,\\textit{ff}\\,\\,\\, | \\,\\,\\,\\Phi_1 \\land \\Phi_2 \\,\\,\\, | \\,\\,\\,\\Phi_1 \\lor \\Phi_2\\,\\,\\, | \\,\\,\\,[Act] \\Phi\\,\\,\\, | \\,\\,\\, \\langle Act \\rangle \\Phi" }, { "math_id": 1, "text": "\\textit{tt}" }, { "math_id": 2, "text": "\\textit{ff}" }, { "math_id": 3, "text": "\\scriptstyle{[Act]\\Phi}" }, { "math_id": 4, "text": "\\scriptstyle{\\langle Act \\rangle \\Phi}" }, { "math_id": 5, "text": "L = (S, \\mathsf{Act}, \\rightarrow)" }, { "math_id": 6, "text": "\\mathsf{HML}" }, { "math_id": 7, "text": "{} \\models {} \\subseteq (S \\times \\mathsf{HML})" }, { "math_id": 8, "text": "s \\in S" }, { "math_id": 9, "text": "\\phi, \\phi_1, \\phi_2 \\in \\mathsf{HML}" }, { "math_id": 10, "text": "s \\models \\textit{tt} " }, { "math_id": 11, "text": "s \\models \\textit{ff} " }, { "math_id": 12, "text": "s' \\in S" }, { "math_id": 13, "text": "s \\xrightarrow{a} s'" }, { "math_id": 14, "text": "s' \\models \\phi" }, { "math_id": 15, "text": "s \\models \\langle a \\rangle \\phi" }, { "math_id": 16, "text": "s' \\in S " }, { "math_id": 17, "text": "s \\models [ a ] \\phi" }, { "math_id": 18, "text": "s \\models \\phi_1" }, { "math_id": 19, "text": "s \\models \\phi_1 \\lor \\phi_2" }, { "math_id": 20, "text": "s \\models \\phi_2" }, { "math_id": 21, "text": "s \\models \\phi_1 \\land \\phi_2" } ]
https://en.wikipedia.org/wiki?curid=12809481
1281160
Resistance thermometer
Type of temperature sensor (thermometer) Resistance thermometers, also called resistance temperature detectors (RTDs), are sensors used to measure temperature. Many RTD elements consist of a length of fine wire wrapped around a heat-resistant ceramic or glass core but other constructions are also used. The RTD wire is a pure material, typically platinum (Pt), nickel (Ni), or copper (Cu). The material has an accurate resistance/temperature relationship which is used to provide an indication of temperature. As RTD elements are fragile, they are often housed in protective probes. RTDs, which have higher accuracy and repeatability, are slowly replacing thermocouples in industrial applications below 600 °C. Resistance/temperature relationship of metals. Common RTD sensing elements for biomedical application constructed of platinum (Pt), nickel (Ni), or copper (Cu) have a repeatable, resistance versus temperature relationship ("R" vs "T") and operating temperature range. The "R" vs "T" relationship is defined as the amount of resistance change of the sensor per degree of temperature change. The relative change in resistance (temperature coefficient of resistance) varies only slightly over the useful range of the sensor. Platinum was proposed by Sir William Siemens as an element for a resistance temperature detector at the Bakerian lecture in 1871: it is a noble metal and has the most stable resistance–temperature relationship over the largest temperature range. Nickel elements have a limited temperature range because the temperature coefficient of resistance changes at temperatures over 300 °C (572 °F). Copper has a very linear resistance–temperature relationship; however, copper oxidizes at moderate temperatures and cannot be used over 150 °C (302 °F). The significant characteristic of metals used as resistive elements is the linear approximation of the resistance versus temperature relationship between 0 and 100 °C. This temperature coefficient of resistance is denoted by α and is usually given in units of Ω/(Ω·°C): formula_0 where formula_1 is the resistance of the sensor at 0 °C, formula_2 is the resistance of the sensor at 100 °C. Pure platinum has α = 0.003925 Ω/(Ω·°C) in the 0 to 100 °C range and is used in the construction of laboratory-grade RTDs. Conversely, two widely recognized standards for industrial RTDs IEC 60751 and ASTM E-1137 specify α = 0.00385 Ω/(Ω·°C). Before these standards were widely adopted, several different α values were used. It is still possible to find older probes that are made with platinum that have α = 0.003916 Ω/(Ω·°C) and 0.003902 Ω/(Ω·°C). These different α values for platinum are achieved by doping – carefully introducing impurities, which become embedded in the lattice structure of the platinum and result in a different "R" vs. "T" curve and hence α value. Calibration. To characterize the "R" vs "T" relationship of any RTD over a temperature range that represents the planned range of use, calibration must be performed at temperatures other than 0 °C and 100 °C. This is necessary to meet calibration requirements. Although RTDs are considered to be linear in operation, it must be proven that they are accurate with regard to the temperatures with which they will actually be used (see details in Comparison calibration option). Two common calibration methods are the fixed-point method and the comparison method. Element types. The three main categories of RTD sensors are thin-film, wire-wound, and coiled elements. 
While these types are the ones most widely used in industry, other more exotic shapes are used; for example, carbon resistors are used at ultra-low temperatures (−273 °C to −173 °C). The current international standard that specifies tolerance and the temperature-to-electrical resistance relationship for platinum resistance thermometers (PRTs) is IEC 60751:2008; ASTM E1137 is also used in the United States. By far the most common devices used in industry have a nominal resistance of 100 ohms at 0 °C and are called Pt100 sensors ("Pt" is the symbol for platinum, "100" for the resistance in ohms at 0 °C). It is also possible to get Pt1000 sensors, where 1000 is for the resistance in ohms at 0 °C. The sensitivity of a standard 100 Ω sensor is a nominal 0.385 Ω/°C. RTDs with a sensitivity of 0.375 and 0.392 Ω/°C, as well as a variety of others, are also available. Function. Resistance thermometers are constructed in a number of forms and offer greater stability, accuracy and repeatability in some cases than thermocouples. While thermocouples use the Seebeck effect to generate a voltage, resistance thermometers use electrical resistance and require a power source to operate. The resistance ideally varies nearly linearly with temperature per the Callendar–Van Dusen equation. The platinum detecting wire needs to be kept free of contamination to remain stable. A platinum wire or film is supported on a former in such a way that it gets minimal differential expansion or other strains from its former, yet is reasonably resistant to vibration. RTD assemblies made from iron or copper are also used in some applications. Commercial platinum grades exhibit a temperature coefficient of resistance 0.00385/°C (0.385%/°C) (European Fundamental Interval). The sensor is usually made to have a resistance of 100 Ω at 0 °C. This is defined in BS EN 60751:1996 (taken from IEC 60751:1995). The American Fundamental Interval is 0.00392/°C, based on using a purer grade of platinum than the European standard. The American standard is from the Scientific Apparatus Manufacturers Association (SAMA), who are no longer in this standards field. As a result, the "American standard" is hardly the standard even in the US. Lead-wire resistance can also be a factor; adopting three- and four-wire, instead of two-wire, connections can eliminate connection-lead resistance effects from measurements (see below); three-wire connection is sufficient for most purposes and is an almost universal industrial practice. Four-wire connections are used for the most precise applications. Advantages and limitations. The advantages of platinum resistance thermometers include: Limitations: RTDs in industrial applications are rarely used above 660 °C. At temperatures above 660 °C it becomes increasingly difficult to prevent the platinum from becoming contaminated by impurities from the metal sheath of the thermometer. This is why laboratory standard thermometers replace the metal sheath with a glass construction. At very low temperatures, say below −270 °C (3 K), because there are very few phonons, the resistance of an RTD is mainly determined by impurities and boundary scattering and thus basically independent of temperature. As a result, the sensitivity of the RTD is essentially zero and therefore not useful. Compared to thermistors, platinum RTDs are less sensitive to small temperature changes and have a slower response time. However, thermistors have a smaller temperature range and stability. RTDs vs thermocouples. 
The two most common ways of measuring temperatures for industrial applications are with resistance temperature detectors (RTDs) and thermocouples. The choice between them is typically determined by four factors. Construction. These elements nearly always require insulated leads attached. PVC, silicone rubber or PTFE insulators are used at temperatures below about 250 °C. Above this, glass fibre or ceramic are used. The measuring point, and usually most of the leads, require a housing or protective sleeve, often made of a metal alloy that is chemically inert to the process being monitored. Selecting and designing protection sheaths can require more care than the actual sensor, as the sheath must withstand chemical or physical attack and provide convenient attachment points. The RTD construction design may be enhanced to handle shock and vibration by including compacted magnesium oxide (MgO) powder inside the sheath. MgO is used to isolate the conductors from the external sheath and from each other. MgO is used due to its dielectric constant, rounded grain structure, high-temperature capability, and its chemical inertness. Wiring configurations. Two-wire configuration. The simplest resistance-thermometer configuration uses two wires. It is only used when high accuracy is not required, as the resistance of the connecting wires is added to that of the sensor, leading to errors of measurement. This configuration allows use of 100 meters of cable. This applies equally to balanced-bridge and fixed-bridge systems. For a balanced bridge the usual setting is R2 = R1, with R3 around the middle of the range of the RTD. So, for example, if we are going to measure between 0 °C and 100 °C, the RTD resistance will range from 100 Ω to 138.5 Ω. We would choose R3 = 120 Ω. In that way we get a small measured voltage in the bridge. Three-wire configuration. In order to minimize the effects of the lead resistances, a three-wire configuration can be used. The suggested setting for the configuration shown is R1 = R2, with R3 around the middle of the range of the RTD. Looking at the Wheatstone bridge circuit shown, the voltage drop on the lower left-hand side is V_rtd + V_lead, and on the lower right-hand side is V_R3 + V_lead; therefore the bridge voltage (V_b) is the difference, V_rtd − V_R3. The voltage drop due to the lead resistance has been cancelled out. This always applies if R1 = R2, and R1, R2 » RTD, R3. R1 and R2 can also serve to limit the current through the RTD; for example, for a Pt100, limiting the current to 1 mA with a 5 V supply suggests a limiting resistance of approximately R1 = R2 = 5/0.001 = 5,000 Ω. Four-wire configuration. The four-wire resistance configuration increases the accuracy of measurement of resistance. Four-terminal sensing eliminates voltage drop in the measuring leads as a contribution to error. To increase accuracy further, any residual thermoelectric voltages generated by different wire types or screwed connections are eliminated by reversal of the direction of the 1 mA current and the leads to the DVM (digital voltmeter). The thermoelectric voltages will be produced in one direction only. By averaging the reversed measurements, the thermoelectric error voltages are cancelled out. Classifications of RTDs. The most accurate of all PRTs are the "Ultra Precise Platinum Resistance Thermometers" (UPRTs). This accuracy is achieved at the expense of durability and cost. The UPRT elements are wound from reference-grade platinum wire.
Internal lead wires are usually made from platinum, while internal supports are made from quartz or fused silica. The sheaths are usually made from quartz or sometimes Inconel, depending on temperature range. Larger-diameter platinum wire is used, which drives up the cost and results in a lower resistance for the probe (typically 25.5 Ω). UPRTs have a wide temperature range (−200 °C to 1000 °C) and are approximately accurate to ±0.001 °C over the temperature range. UPRTs are only appropriate for laboratory use. Another classification of laboratory PRTs is "Standard Platinum Resistance Thermometers" (Standard SPRTs). They are constructed like the UPRT, but the materials are more cost-effective. SPRTs commonly use reference-grade, high-purity smaller-diameter platinum wire, metal sheaths and ceramic type insulators. Internal lead wires are usually a nickel-based alloy. Standard PRTs are more limited in temperature range (−200 °C to 500 °C) and are approximately accurate to ±0.03 °C over the temperature range. "Industrial PRTs" are designed to withstand industrial environments. They can be almost as durable as a thermocouple. Depending on the application, industrial PRTs can use thin-film or coil-wound elements. The internal lead wires can range from PTFE-insulated stranded nickel-plated copper to silver wire, depending on the sensor size and application. Sheath material is typically stainless steel; higher-temperature applications may demand Inconel. Other materials are used for specialized applications. History. Contemporary to the Seebeck effect, the discovery that resistivity in metals is dependent on the temperature was announced in 1821 by Sir Humphry Davy. The practical application of the tendency of electrical conductors to increase their electrical resistance with rising temperature was first described by Sir William Siemens at the Bakerian Lecture of 1871 before the Royal Society of Great Britain, suggesting platina as a suitable element. The necessary methods of construction were established by Callendar, Griffiths, Holborn and Wein between 1885 and 1900. In 1871 Carl Wilhelm Siemens invented the Platinum Resistance Temperature Detector and presented a three-term interpolation formula. Siemens’ RTD rapidly fell out of favour due to the instability of the temperature reading. Hugh Longbourne Callendar developed the first commercially successful platinum RTD in 1885. A 1971 paper by Eriksson, Keuther, and Glatzel identified six noble metal alloys (63Pt37Rh, 37Pd63Rh, 26Pt74Ir, 10Pd90Ir, 34Pt66Au, 14Pd86Au) with approximately linear resistance temperature characteristics. The alloy 63Pt37Rh is similar to the readily available 70Pt30Rh alloy wire used in thermocouples. The Space Shuttle made extensive use of platinum resistance thermometers. The only in-flight shutdown of a Space Shuttle Main Engine – mission STS-51F – was caused by multiple failures of RTDs which had become brittle and unreliable due to multiple heat-and-cool cycles. (The failures of the sensors falsely suggested that a fuel pump was critically overheating, and the engine was automatically shut down.) Following the engine failure incident, the RTDs were replaced with thermocouples. Standard resistance thermometer data. Temperature sensors are usually supplied with thin-film elements. The resistance elements are rated in accordance with BS EN 60751:2008 as: Resistance-thermometer elements functioning up to 1000 °C can be supplied. 
The relation between temperature and resistance is given by the Callendar–Van Dusen equation: formula_3 formula_4 Here formula_5 is the resistance at temperature "T", formula_1 is the resistance at 0 °C, and the constants (for an α = 0.00385 platinum RTD) are: formula_6 formula_7 formula_8 Since the "B" and "C" coefficients are relatively small, the resistance changes almost linearly with the temperature. For positive temperature, solution of the quadratic equation yields the following relationship between temperature and resistance: formula_9 Then for a four-wire configuration with a 1 mA precision current source the relationship between temperature and measured voltage formula_10 is formula_11 Temperature-dependent resistances for various popular resistance thermometers. Notes. References.
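For temperatures at or above 0 °C the two formulas above are easy to evaluate directly. The following Python sketch is illustrative only; it uses the α = 0.00385 constants quoted above for a Pt100 element, converts between resistance and temperature, and shows how a small lead resistance appears as a temperature error in a two-wire measurement.

```python
import math

A = 3.9083e-3      # 1/degC, constants for an alpha = 0.00385 platinum RTD (as above)
B = -5.775e-7      # 1/degC^2
R0 = 100.0         # Pt100: 100 ohm at 0 degC

def resistance(t_celsius):
    """R(T) for 0 degC <= T < 850 degC (the C term only enters below 0 degC)."""
    return R0 * (1.0 + A * t_celsius + B * t_celsius ** 2)

def temperature(r_ohm):
    """Inverse of R(T) for T >= 0 degC, via the quadratic formula given above."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohm / R0))) / (2.0 * B)

print(resistance(100.0))          # ~138.5 ohm, the fundamental-interval value
print(temperature(138.51))        # ~100 degC
# Two-wire error example: 0.5 ohm of lead resistance reads as roughly 1.3 degC extra.
print(temperature(resistance(20.0) + 0.5) - 20.0)
```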
[ { "math_id": 0, "text": "\\alpha = \\frac{R_{100} - R_0}{100~^\\circ\\text{C} \\cdot R_0}," }, { "math_id": 1, "text": "R_0" }, { "math_id": 2, "text": "R_{100}" }, { "math_id": 3, "text": "R_T = R_0 \\left[ 1 + AT + BT^2 + CT^3 (T-100) \\right] \\; (-200\\;{}^{\\circ}\\mathrm{C} < T < 0\\;{}^{\\circ}\\mathrm{C})," }, { "math_id": 4, "text": "R_T = R_0 \\left[ 1 + AT + BT^2 \\right] \\; (0\\;{}^{\\circ}\\mathrm{C} \\leq T < 850\\;{}^{\\circ}\\mathrm{C})." }, { "math_id": 5, "text": "R_T" }, { "math_id": 6, "text": "A = 3.9083 \\times 10^{-3}~^\\circ\\text{C}^{-1}," }, { "math_id": 7, "text": "B = -5.775 \\times 10^{-7}~^\\circ\\text{C}^{-2}," }, { "math_id": 8, "text": "C = -4.183 \\times 10^{-12}~^\\circ\\text{C}^{-4}." }, { "math_id": 9, "text": "T = \\frac{-A + \\sqrt{A^2 - 4B\\left(1 - \\frac{R_T}{R_0}\\right)}}{2B}." }, { "math_id": 10, "text": "V_T" }, { "math_id": 11, "text": "T = \\frac{-A + \\sqrt{A^2 - 40B(0.1 - V_T)}}{2B}." } ]
https://en.wikipedia.org/wiki?curid=1281160
1281455
Peak signal-to-noise ratio
Metric used to measure signal quality Peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed as a logarithmic quantity using the decibel scale. PSNR is commonly used to quantify reconstruction quality for images and video subject to lossy compression. Definition. PSNR is most easily defined via the mean squared error ("MSE"). Given a noise-free "m"×"n" monochrome image "I" and its noisy approximation "K", "MSE" is defined as formula_0 The PSNR (in dB) is defined as formula_1 Here, "MAXI" is the maximum pixel value of the original image. Application in color images. For color images with three RGB values per pixel, the definition of PSNR is the same except that the MSE is the sum over all squared value differences (now for each color, i.e. three times as many differences as in a monochrome image) divided by image size and by three. Alternately, for color images the image is converted to a different color space and PSNR is reported against each channel of that color space, e.g., YCbCr or HSL. Quality estimation with PSNR. PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is an "approximation" to human perception of reconstruction quality. Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, provided the bit depth is 8 bits, where higher is better. The processing quality of 12-bit images is considered high when the PSNR value is 60 dB or higher. For 16-bit data typical values for the PSNR are between 60 and 80 dB. Acceptable values for wireless transmission quality loss are considered to be about 20 dB to 25 dB. In the absence of noise, the two images "I" and "K" are identical, and thus the MSE is zero. In this case the PSNR is infinite (or undefined, see Division by zero). Performance comparison. Although a higher PSNR generally correlates with a higher quality reconstruction, in many cases it may not. One has to be extremely careful with the range of validity of this metric; it is only conclusively valid when it is used to compare results from the same codec (or codec type) and same content. Generally, when it comes to estimating the quality of images and videos as perceived by humans, PSNR has been shown to perform very poorly compared to other quality metrics. Variants. PSNR-HVS is an extension of PSNR that incorporates properties of the human visual system such as contrast perception. PSNR-HVS-M improves on PSNR-HVS by additionally taking into account visual masking. In a 2007 study, it delivered better approximations of human visual quality judgements than PSNR and SSIM by large margin. It was also shown to have a distinct advantage over DCTune and PSNR-HVS. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
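Because the definition involves only the mean squared error and the peak value, PSNR is simple to compute. The following Python/NumPy sketch is illustrative and assumes 8-bit images, so that "MAXI" = 255.

```python
import numpy as np

def psnr(original, approx, max_value=255.0):
    """PSNR in dB between two equally sized arrays (e.g. 8-bit monochrome images)."""
    mse = np.mean((original.astype(np.float64) - approx.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")               # identical images: PSNR is infinite/undefined
    return 10.0 * np.log10(max_value ** 2 / mse)

a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(a + np.random.normal(0.0, 5.0, a.shape), 0, 255).astype(np.uint8)
print(psnr(a, noisy))                     # typically around 34 dB for sigma = 5 noise
```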
[ { "math_id": 0, "text": "\\mathit{MSE} = \\frac{1}{m\\,n}\\sum_{i=0}^{m-1}\\sum_{j=0}^{n-1} [I(i,j) - K(i,j)]^2." }, { "math_id": 1, "text": "\\begin{align}\n \\mathit{PSNR} &= 10 \\cdot \\log_{10} \\left( \\frac{\\mathit{MAX}_I^2}{\\mathit{MSE}} \\right) \\\\ \n &= 20 \\cdot \\log_{10} \\left( \\frac{\\mathit{MAX}_I}{\\sqrt{\\mathit{MSE}}} \\right) \\\\ \n &= 20 \\cdot \\log_{10}(\\mathit{MAX}_I) - 10 \\cdot \\log_{10} (\\mathit{MSE}).\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1281455
12814626
Trimethylsilanol
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Trimethylsilanol (TMS) is an organosilicon compound with the formula (CH3)3SiOH. The Si centre bears three methyl groups and one hydroxyl group. It is a colourless volatile liquid. Occurrence. TMS is a contaminant in the atmospheres of spacecraft, where it arises from the degradation of silicone-based materials. Specifically, it is the volatile product from the hydrolysis of polydimethylsiloxane, which are generally terminated with trimethylsilyl groups: (CH3)3SiO[Si(CH3)2O]nR + H2O → (CH3)3SiOH + HO[Si(CH3)2O]nR TMS and related volatile siloxanes are formed by hydrolysis of silicones-based containing materials, which are found in detergents and cosmetic products. Traces of trimethylsilanol, together with other volatile siloxanes, are present in biogas and landfill gas, again resulting from the degradation of silicones. As their combustion forms particles of silicates and microcrystalline quartz, which cause abrasion of combustion engine parts, they pose problems for the use of such gases in combustion engines. Production. Trimethylsilanol cannot be produced by simple hydrolysis of chlorotrimethylsilane as this reaction leads to the etherification product hexamethyldisiloxane, because of the by-product hydrochloric acid.formula_0 Trimethylsilanol is accessible by weakly basic hydrolysis of chlorotrimethylsilane, since the dimerization can thus be avoided. Trimethylsilanol can also be obtained by the basic hydrolysis of hexamethyldisiloxane. Reactions. Trimethylsilanol is a weak acid with a pKa value of 11. The acidity is comparable to that of orthosilicic acid, but much higher than the one of alcohols like "tert"-butanol (pKa 19). Deprotonation with sodium hydroxide gives sodium trimethylsiloxide. TMS reacts with the silanol groups (R3SiOH) giving silyl ethers. Structure. In terms of its structure, the molecule is tetrahedral. The compound forms monoclinic crystals. Additional properties. The heat of evaporation is 45.64 kJ·mol−1, the evaporation entropy 123 J·K−1·mol−1. The vapor pressure function according to Antoine is obtained as log10(P/1bar) = A − B/(T + C) (P in bar, T in K) with A = 5.44591, B = 1767.766K and C = −44.888K in a temperature range from 291K to 358K. Below the melting point at −4.5 °C, The 1H NMR in CDCl3 shows a singlet at δ=0.14 ppm. Bioactivity. Like other silanols, trimethylsilanol exhibits antimicrobial properties.
[ { "math_id": 0, "text": "\\ce{2ClSi(CH3)3 ->[{}\\atop\\ce{+H2O, -HCl}]} \\ce{2HOSi(CH3)3 ->[{}\\atop\\ce{-H2O}] (CH3)3Si-O-Si(CH3)3}" } ]
https://en.wikipedia.org/wiki?curid=12814626
12815034
Hypsicles
Greek mathematician and astronomer Hypsicles (Greek: ; c. 190 – c. 120 BCE) was an ancient Greek mathematician and astronomer known for authoring "On Ascensions" (Ἀναφορικός) and possibly the Book XIV of Euclid's "Elements". Hypsicles lived in Alexandria. Life and work. Although little is known about the life of Hypsicles, it is believed that he authored the astronomical work "On Ascensions". The mathematician Diophantus of Alexandria noted on a definition of polygonal numbers, due to Hypsicles: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;If there are as many numbers as we please beginning from 1 and increasing by the same common difference, then, when the common difference is 1, the sum of all the numbers is a triangular number; when 2 a square; when 3, a pentagonal number [and so on]. And the number of angles is called after the number which exceeds the common difference by 2, and the side after the number of terms including 1. On Ascensions. In "On Ascensions" (Ἀναφορικός and sometimes translated "On Rising Times"), Hypsicles proves a number of propositions on arithmetical progressions and uses the results to calculate approximate values for the times required for the signs of the zodiac to rise above the horizon. It is thought that this is the work from which the division of the circle into 360 parts may have been adopted since it divides the day into 360 parts, a division possibly suggested by Babylonian astronomy, although this is mere speculation and no actual evidence is found to support this. Heath 1921 notes, "The earliest extant Greek book in which the division of the circle into 360 degrees appears". This work by Hypsicles is believed to represent the earliest extant Greek text to use the Babylonian division of the zodiac into 12 signs of 30 degrees each. Euclid's "Elements". Hypsicles is more famously known for possibly writing the Book XIV of Euclid's "Elements". The book may have been composed on the basis of a treatise by Apollonius. The book continues Euclid's comparison of regular solids inscribed in spheres, with the chief result being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being formula_0. Heath further notes, "Hypsicles says also that Aristaeus, in a work entitled "Comparison of the five figures", proved that the same circle circumscribes both the pentagon of the dodecahedron and the triangle of the icosahedron inscribed in the same sphere; whether this Aristaeus is the same as the Aristaeus of the Solid Loci, the elder (Aristaeus the Elder) contemporary of Euclid, we do not know." Hypsicles letter. Hypsicles letter was a preface of the supplement taken from Euclid's Book XIV, part of the thirteen books of Euclid's Elements, featuring a treatise. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Basilides of Tyre, O Protarchus, when he came to Alexandria and met my father, spent the greater part of his sojourn with him on account of the bond between them due to their common interest in mathematics. And on one occasion, when looking into the tract written by Apollonius (Apollonius of Perga) about the comparison of the dodecahedron and icosahedron inscribed in one and the same sphere, that is to say, on the question what ratio they bear to one another, they came to the conclusion that Apollonius' treatment of it in this book was not correct; accordingly, as I understood from my father, they proceeded to amend and rewrite it. 
But I myself afterwards came across another book published by Apollonius, containing a demonstration of the matter in question, and I was greatly attracted by his investigation of the problem. Now the book published by Apollonius is accessible to all; for it has a large circulation in a form which seems to have been the result of later careful elaboration. For my part, I determined to dedicate to you what I deem to be necessary by way of commentary, partly because you will be able, by reason of your proficiency in all mathematics and particularly in geometry, to pass an expert judgment upon what I am about to write, and partly because, on account of your intimacy with my father and your friendly feeling towards myself, you will lend a kindly ear to my disquisition. But it is time to have done with the preamble and to begin my treatise itself. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
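The ratio quoted above can be checked numerically against the standard formulas for the circumradius and volume of the icosahedron and dodecahedron; those polyhedron formulas are textbook facts and are not part of Hypsicles' text. A short Python sketch:

```python
from math import sqrt

# Edge lengths of the icosahedron and dodecahedron inscribed in a unit sphere
# (standard circumradius formulas), then the ratio of their volumes.
a_icosa = 4.0 / sqrt(10.0 + 2.0 * sqrt(5.0))
a_dodec = 4.0 / (sqrt(3.0) * (1.0 + sqrt(5.0)))

v_icosa = (5.0 / 12.0) * (3.0 + sqrt(5.0)) * a_icosa ** 3
v_dodec = ((15.0 + 7.0 * sqrt(5.0)) / 4.0) * a_dodec ** 3

print(v_dodec / v_icosa)                        # ~1.0983
print(sqrt(10.0 / (3.0 * (5.0 - sqrt(5.0)))))   # the ratio stated in the article, ~1.0983
```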
[ { "math_id": 0, "text": "\\sqrt{\\tfrac{10}{3(5-\\sqrt{5})}}" } ]
https://en.wikipedia.org/wiki?curid=12815034
12817107
Ono's inequality
Theorem about triangles In mathematics, Ono's inequality is a theorem about triangles in the Euclidean plane. In its original form, as conjectured by Takashi Ono in 1914, the inequality is actually false; however, the statement is true for acute triangles, as shown by F. Balitrand in 1916. Statement of the inequality. Consider an acute triangle (meaning a triangle with three acute angles) in the Euclidean plane with side lengths "a", "b" and "c" and area "S". Then formula_0 This inequality fails for general triangles (to which Ono's original conjecture applied), as shown by the counterexample formula_1 The inequality holds with equality in the case of an equilateral triangle, in which up to similarity we have sides formula_2 and area formula_3 Proof. Dividing both sides of the inequality by formula_4, we obtain: formula_5 Using the formula formula_6 for the area of a triangle, and applying the law of cosines to the left side, we get: formula_7 Then, using the identity formula_8 which is true for all triangles in the Euclidean plane, we transform the inequality above into: formula_9 Since the angles of the triangle are acute, the tangent of each angle is positive, which means that the inequality above follows from the AM–GM inequality.
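Both the counterexample and the equality case are easy to verify numerically. The following Python sketch (an illustration, not part of the original argument) evaluates both sides of the inequality, computing the area with Heron's formula.

```python
import math

def ono_sides(a, b, c):
    """Return (lhs, rhs) of Ono's inequality for a triangle with sides a, b, c."""
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))      # Heron's formula
    lhs = 27.0 * ((b*b + c*c - a*a) * (c*c + a*a - b*b) * (a*a + b*b - c*c)) ** 2
    rhs = (4.0 * area) ** 6
    return lhs, rhs

print(ono_sides(1, 1, 1))   # equilateral: both sides equal 27 (up to rounding)
print(ono_sides(2, 3, 4))   # obtuse counterexample: lhs = 12966723 exceeds rhs = 2460375
```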
[ { "math_id": 0, "text": "27 (b^2 + c^2 - a^2)^2 (c^2 + a^2 - b^2)^2 (a^2 + b^2 - c^2)^2 \\leq (4 S)^6." }, { "math_id": 1, "text": "a=2, \\, \\, b=3, \\, \\, c=4, \\, \\, S=3\\sqrt{15}/4." }, { "math_id": 2, "text": "1,1,1" }, { "math_id": 3, "text": "\\sqrt{3}/4." }, { "math_id": 4, "text": "64(abc)^4" }, { "math_id": 5, "text": "27 \\frac{(b^2 + c^2 - a^2)^2}{4b^2c^2} \\frac{(c^2 + a^2 - b^2)^2}{4a^2c^2} \\frac{(a^2 + b^2 - c^2)^2}{4a^2b^2} \\leq \\frac{4S^2}{b^2c^2} \\frac{4S^2}{a^2c^2} \\frac{4S^2}{a^2b^2}" }, { "math_id": 6, "text": "S= \\tfrac12 bc\\sin{A}" }, { "math_id": 7, "text": "27 (\\cos{A} \\cos{B} \\cos{C})^2 \\leq (\\sin{A} \\sin{B} \\sin{C})^2" }, { "math_id": 8, "text": "\\tan{A} + \\tan{B} + \\tan{C} = \\tan{A} \\tan{B} \\tan{C}" }, { "math_id": 9, "text": "27 (\\tan{A} \\tan{B} \\tan{C}) \\leq (\\tan{A} + \\tan{B} + \\tan{C})^3" } ]
https://en.wikipedia.org/wiki?curid=12817107
1281745
SeaWiFS
Satellite-borne sensor designed to collect global ocean biological data SeaWiFS (Sea-Viewing Wide Field-of-View Sensor) was a satellite-borne sensor designed to collect global ocean biological data. Active from September 1997 to December 2010, its primary mission was to quantify chlorophyll produced by marine phytoplankton (microscopic plants). Instrument. SeaWiFS was the only scientific instrument on GeoEye's OrbView-2 (AKA SeaStar) satellite, and was a follow-on experiment to the Coastal Zone Color Scanner on Nimbus 7. Launched August 1, 1997 on an Orbital Sciences Pegasus small air-launched rocket, SeaWiFS began scientific operations on September 18, 1997 and stopped collecting data on December 11, 2010, far exceeding its designed operating period of 5 years. The sensor resolution is 1.1 km (LAC, "Local Area Coverage") and 4.5 km (GAC, "Global Area Coverage"). The sensor recorded information in the following optical bands: The instrument was specifically designed to monitor ocean characteristics such as chlorophyll-a concentration and water clarity. It was able to tilt up to 20 degrees to avoid sunlight from the sea surface. This feature is important at equatorial latitudes where glint from sunlight often obscures water colour. SeaWiFS had used the Marine Optical Buoy for vicarious calibration. The SeaWiFS Mission is an industry/government partnership, with NASA's Ocean Biology Processing Group at Goddard Space Flight Center having responsibility for the data collection, processing, calibration, validation, archive and distribution. The current SeaWiFS Project manager is Gene Carl Feldman. Chlorophyll estimation. Chlorophyll concentrations are derived from images of the ocean's color. Generally speaking, the greener the water, the more phytoplankton are present in the water, and the higher the chlorophyll concentrations. Chlorophyll a absorbs more blue and red light than green, with the resulting reflected light changing from blue to green as the amount of chlorophyll in the water increases. Using this knowledge, scientists were able to use ratios of different reflected colors to estimate chlorophyll concentrations. Many formulas estimate chlorophyll by comparing the ratio of blue to green light and relating those ratios to known chlorophyll concentrations from the same times and locations as the satellite observations. The color of light is defined by its wavelength, and visible light has wavelengths from 400 to 700 nanometers, progressing from violet (400 nm) to red (700 nm). A typical formula used for SeaWiFS data (termed OC4v4) divides the reflectance of the maximum of several wavelengths (443, 490, or 510 nm) by the reflectance at 550 nm. This roughly equates to a ratio of blue light to green light for two of the numerator wavelengths, and a ratio of two different green wavelengths for the other possible combination. The reflectance (R) returned by this formula is then plugged into a cubic polynomial that relates the band ratio to chlorophyll. formula_0 This formula, along with others, was derived empirically using observed chlorophyll concentrations. To facilitate these comparisons, NASA maintains a system of oceanographic and atmospheric data called SeaBASS (SeaWiFS Bio-optical Archive and Storage System). This data archive is used to develop new algorithms and validate satellite data products by matching chlorophyll concentrations measured directly with those estimated remotely from a satellite. 
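The band-ratio approach described above is simple to express in code. The following Python sketch is illustrative only: it implements a generic maximum-band-ratio estimator of the OC4 type, but the polynomial coefficients must be taken from the official algorithm documentation, and the values in the example call are placeholders rather than the operational OC4v4 coefficients. In practice such coefficients are tuned and validated against in-situ chlorophyll measurements such as those archived in SeaBASS.

```python
import math

def band_ratio_chlorophyll(r443, r490, r510, r550, coeffs):
    """OC4-style maximum-band-ratio chlorophyll estimate (mg m^-3).

    r443..r550 are reflectances in the bands named in the text;
    log10(chl) is modeled as a polynomial in R = log10(max(r443, r490, r510) / r550).
    """
    R = math.log10(max(r443, r490, r510) / r550)
    log_chl = sum(c * R ** i for i, c in enumerate(coeffs))
    return 10.0 ** log_chl

# Placeholder coefficients and reflectances, for illustration only.
print(band_ratio_chlorophyll(0.004, 0.005, 0.004, 0.003,
                             coeffs=(0.366, -3.067, 1.930, 0.649, -1.532)))
```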
The SeaBASS data can also be used to assess atmospheric correction (discussed below), which can also greatly influence chlorophyll concentration calculations. Numerous chlorophyll algorithms were tested to see which ones best matched chlorophyll globally. Various algorithms perform differently in different environments. Many algorithms estimate chlorophyll concentrations more accurately in deep clear water than in shallow water. In shallow waters reflectance from other pigments, detritus, and the ocean bottom may cause inaccuracies. The stated goals of the SeaWiFS chlorophyll estimates are "… to produce water leaving radiances with an uncertainty of 5% in clear-water regions and chlorophyll a concentrations within ±35% over the range of 0.05–50 mg m-3." When accuracy is assessed on a global scale, and all observations are grouped together, this goal is clearly met. Many satellite estimates range from one-third to three times those directly recorded at sea, though the overall relationship is still quite good. Differences arise when examined by region, though overall the values are still very useful. One pixel may not be particularly accurate, though when averages are taken over larger areas, the values average out and provide a useful and accurate view of the larger patterns. The benefits of chlorophyll data from satellites far outweigh any flaws in their accuracy simply by the spatial and temporal coverage possible. Ship-based measurements of chlorophyll cannot come close to the frequency and spatial coverage provided by satellite data. Atmospheric correction. Light reflected from the sub-surface ocean is called water-leaving radiance and is used to estimate chlorophyll concentrations. However, only about 5–10% of light at the top of the atmosphere is from water-leaving radiance. The remainder of light is reflected from the atmosphere and from aerosols within the atmosphere. In order to estimate chlorophyll concentrations, this non-water-leaving radiance must be accounted for. Some light reflected from the ocean, such as from whitecaps and sun glint, must also be removed from chlorophyll calculations since they are representative of ocean waves or the angle of the sun rather than of the subsurface ocean. The process of removing these components is called atmospheric correction. A description of the light, or radiance, observed by the satellite's sensor can be more formally expressed by the following radiative transfer equation: formula_1 where LT(λ) is total radiance at the top of the atmosphere, Lr(λ) is Rayleigh scattering by air molecules, La(λ) is scattering by aerosols in the absence of air, Lra(λ) is interactions between air molecules and aerosols, TLg(λ) is reflections from glint, tLf(λ) is reflections from foam, and tLW(λ) is reflections from the subsurface of the water, or the water-leaving radiance. Others may divide radiance into some slightly different components, though in each case the reflectance parameters must be resolved in order to estimate water-leaving radiance and thus chlorophyll concentrations. Data products. Though SeaWiFS was designed primarily to monitor ocean chlorophyll a concentrations from space, it also collected many other parameters that are freely available to the public for research and educational purposes.
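Rearranging the radiative transfer equation above gives a simple way to isolate the water-leaving radiance once the atmospheric terms have been estimated. The sketch below only restates that arithmetic; the radiance values and transmittances are hypothetical placeholders, not outputs of a real correction scheme.
 # Isolating water-leaving radiance from the equation above:
 # Lw = (Lt - Lr - La - Lra - T*Lg - t*Lf) / t
 def water_leaving_radiance(Lt, Lr, La, Lra, Lg, Lf, T_direct, t_diffuse):
     return (Lt - Lr - La - Lra - T_direct * Lg - t_diffuse * Lf) / t_diffuse
 # hypothetical values in arbitrary radiance units
 Lw = water_leaving_radiance(Lt=10.0, Lr=6.5, La=2.7, Lra=0.2,
                             Lg=0.1, Lf=0.05, T_direct=0.9, t_diffuse=0.85)
 print(Lw)  # only a small fraction of the top-of-atmosphere signal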
Aside from chlorophyll a, the parameters collected include reflectance, the diffuse attenuation coefficient, particulate organic carbon (POC) concentration, particulate inorganic carbon (PIC) concentration, colored dissolved organic matter (CDOM) index, photosynthetically active radiation (PAR), and normalized fluorescence line height (NFLH). In addition, despite being designed to measure ocean chlorophyll, SeaWiFS also estimates Normalized Difference Vegetation Index (NDVI), which is a measure of photosynthesis on land. Data access. SeaWiFS data are freely accessible from a variety of websites, most of which are government run. The primary location for SeaWiFS data is NASA's OceanColor website, which maintains the time series of the entire SeaWiFS mission. The website allows users to browse individual SeaWiFS images based on time and area selections. The website also allows for browsing of different temporal and spatial scales with spatial scales ranging from 4 km to 9 km for mapped data. Data are provided at numerous temporal scales including daily, multiple days (e.g., 3, 8), monthly, and seasonal images, all the way up to composites of the entire mission. Data are also available via FTP and bulk download. Data can be browsed and retrieved in a variety of formats and levels of processing, with four general levels from unprocessed to modeled output. Level 0 is unprocessed data that is not usually provided to users. Level 1 data are reconstructed but either unprocessed or minimally processed. Level 2 data contain derived geophysical variables, though are not on a uniform space/time grid. Level 3 data contain derived geophysical variables binned or mapped to a uniform grid. Lastly, Level 4 data contain modeled or derived variables such as ocean primary productivity. Scientists who aim to create calculations of chlorophyll or other parameters that differ from those provided on the OceanColor website would likely use Level 1 or 2 data. This might be done, for example, to calculate parameters for a specific region of the globe, whereas the standard SeaWiFS data products are designed for global accuracy with necessary tradeoffs for specific regions. Scientists who are more interested in relating the standard SeaWiFS outputs to other processes will commonly use Level 3 data, particularly if they do not have the capacity, training, or interest in working with Level 1 or 2 data. Level 4 data may be used for similar research if interested in a modeled product. Software. NASA offers free software designed specifically to work with SeaWiFS data through the ocean color website. This software, entitled SeaDAS (SeaWiFS Data Analysis System), is built for visualization and processing of satellite data and can work with Level 1, 2, and 3 data. Though it was originally designed for SeaWiFS data, its capabilities have since been expanded to work with many other satellite data sources. Other software or programming languages can also be used to read in and work with SeaWiFS data, such as MATLAB, IDL, or Python. Applications. Estimating the amount of global or regional chlorophyll, and therefore phytoplankton, has large implications for climate change and fisheries production. Phytoplankton play a huge role in the uptake of the world's carbon dioxide, a primary contributor to climate change. A percentage of these phytoplankton sink to the ocean floor, effectively taking carbon dioxide out of the atmosphere and sequestering it in the deep ocean for at least a thousand years.
Therefore, the degree of primary production from the ocean could play a large role in slowing climate change. Conversely, if primary production slows, climate change could accelerate. Some have proposed fertilizing the ocean with iron in order to promote phytoplankton blooms and remove carbon dioxide from the atmosphere. Whether these experiments are undertaken or not, estimating chlorophyll concentrations in the world's oceans and their role in the ocean's biological pump could play a key role in our ability to foresee and adapt to climate change. Phytoplankton is a key component in the base of the oceanic food chain, and oceanographers have hypothesized a link between oceanic chlorophyll and fisheries production for some time. The degree to which phytoplankton relates to marine fish production depends on the number of trophic links in the food chain, and how efficient each link is. Estimates of the number of trophic links and trophic efficiencies from phytoplankton to commercial fisheries have been widely debated, though they have been little substantiated. More recent research suggests that positive relationships between chlorophyll a and fisheries production can be modeled and can be very highly correlated when examined on the proper scale. For example, Ware and Thomson (2005) found an r2 of 0.87 between resident fish yield (metric tons km-2) and mean annual chlorophyll a concentrations (mg m-3). Others have found the Pacific's Transition Zone Chlorophyll Front (chlorophyll density of 0.2 mg m-3) to be a defining feature of loggerhead turtle distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " Chl = antilog(0.366-3.067\\mathsf{R}+1.93\\mathsf{R}^2 +0.64\\mathsf{R}^3 -1.53\\mathsf{R}^4) " }, { "math_id": 1, "text": " L_T(\\lambda) = L_r(\\lambda)+L_a(\\lambda)+L_{ra}(\\lambda)+TL_g(\\lambda)+t(L_f(\\lambda)+L_W(\\lambda)) " } ]
https://en.wikipedia.org/wiki?curid=1281745
1281850
Q-learning
Model-free reinforcement learning algorithm &lt;templatestyles src="Machine learning/styles.css"/&gt; "Q"-learning is a model-free reinforcement learning algorithm to learn the value of an action in a particular state. It does not require a model of the environment (hence "model-free"), and it can handle problems with stochastic transitions and rewards without requiring adaptations. For any finite Markov decision process, "Q"-learning finds an optimal policy in the sense of maximizing the expected value of the total reward over any and all successive steps, starting from the current state. "Q"-learning can identify an optimal action-selection policy for any given finite Markov decision process, given infinite exploration time and a partly random policy. "Q" refers to the function that the algorithm computes – the expected rewards for an action taken in a given state. Reinforcement learning. Reinforcement learning involves an agent, a set of "states" formula_0, and a set formula_1 of "actions" per state. By performing an action formula_2, the agent transitions from state to state. Executing an action in a specific state provides the agent with a "reward" (a numerical score). The goal of the agent is to maximize its total reward. It does this by adding the maximum reward attainable from future states to the reward for achieving its current state, effectively influencing the current action by the potential future reward. This potential reward is a weighted sum of expected values of the rewards of all future steps starting from the current state. As an example, consider the process of boarding a train, in which the reward is measured by the negative of the total time spent boarding (alternatively, the cost of boarding the train is equal to the boarding time). One strategy is to enter the train doors as soon as they open, minimizing the initial wait time for yourself. If the train is crowded, however, entry will be slow after this initial action, as departing passengers fight past you while you attempt to board. The total boarding time, or cost, is then the initial wait plus the time spent fighting the departing passengers. On the next day, by random chance (exploration), you decide to wait and let other people depart first. This initially results in a longer wait time. However, less time is spent fighting the departing passengers. Overall, this path has a higher reward than that of the previous day, since the total boarding time is now lower. Through exploration, despite the initial (patient) action resulting in a larger cost (or negative reward) than in the forceful strategy, the overall cost is lower, thus revealing a more rewarding strategy. Algorithm. After formula_3 steps into the future the agent will decide some next step. The weight for this step is calculated as formula_4, where formula_5 (the "discount factor") is a number between 0 and 1 (formula_6). Assuming formula_7, it has the effect of valuing rewards received earlier higher than those received later (reflecting the value of a "good start"). formula_8 may also be interpreted as the probability of succeeding (or surviving) at every step formula_3. The algorithm, therefore, has a function that calculates the quality of a state–action combination: formula_9. Before learning begins, formula_15 is initialized to a possibly arbitrary fixed value (chosen by the programmer).
Then, at each time formula_10 the agent selects an action formula_11, observes a reward formula_12, enters a new state formula_13 (that may depend on both the previous state formula_14 and the selected action), and formula_15 is updated. The core of the algorithm is a Bellman equation as a simple value iteration update, using the weighted average of the current value and the new information: formula_16 where "formula_12" is the reward received when moving from the state formula_17 to the state formula_13, and formula_18 is the learning rate formula_19. Note that formula_20 is the sum of three factors: formula_21, the current value weighted by one minus the learning rate; formula_22, the reward to be obtained if action formula_11 is taken in state formula_14, weighted by the learning rate; and formula_23, the estimate of the optimal future value obtainable from state formula_13, weighted by the learning rate and the discount factor. An episode of the algorithm ends when state formula_13 is a final or "terminal state". However, "Q"-learning can also learn in non-episodic tasks (as a result of the property of convergent infinite series). If the discount factor is lower than 1, the action values are finite even if the problem can contain infinite loops. For all final states formula_24, formula_25 is never updated, but is set to the reward value formula_26 observed for state formula_24. In most cases, formula_27 can be taken to equal zero. Influence of variables. Learning rate. The learning rate or "step size" determines to what extent newly acquired information overrides old information. A factor of 0 makes the agent learn nothing (exclusively exploiting prior knowledge), while a factor of 1 makes the agent consider only the most recent information (ignoring prior knowledge to explore possibilities). In fully deterministic environments, a learning rate of formula_28 is optimal. When the problem is stochastic, the algorithm converges under some technical conditions on the learning rate that require it to decrease to zero. In practice, often a constant learning rate is used, such as formula_29 for all formula_10. Discount factor. The discount factor formula_5 determines the importance of future rewards. A factor of 0 will make the agent "myopic" (or short-sighted) by only considering current rewards, i.e. formula_30 (in the update rule above), while a factor approaching 1 will make it strive for a long-term high reward. If the discount factor meets or exceeds 1, the action values may diverge. For a discount factor of 1, without a terminal state, or if the agent never reaches one, all environment histories become infinitely long, and utilities with additive, undiscounted rewards generally become infinite. Even with a discount factor only slightly lower than 1, "Q"-function learning leads to propagation of errors and instabilities when the value function is approximated with an artificial neural network. In that case, starting with a lower discount factor and increasing it towards its final value accelerates learning. Initial conditions ("Q"0). Since "Q"-learning is an iterative algorithm, it implicitly assumes an initial condition before the first update occurs. High initial values, also known as "optimistic initial conditions", can encourage exploration: no matter what action is selected, the update rule will cause it to have lower values than the other alternatives, thus increasing their choice probability. The first reward formula_26 can be used to reset the initial conditions. According to this idea, the first time an action is taken the reward is used to set the value of formula_15. This allows immediate learning in case of fixed deterministic rewards.
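Putting the update rule above into code, a minimal tabular sketch might look like the following. This is only an illustration: the environment object with its reset and step methods, the state and action counts, and the epsilon-greedy exploration scheme are all hypothetical placeholders rather than a specific library API.
 # Minimal tabular Q-learning sketch of the update rule above (Python).
 import random
 
 def q_learning(env, n_states, n_actions, episodes=500,
                alpha=0.1, gamma=0.9, epsilon=0.1):
     # Q0: arbitrary fixed initial values (here zero)
     Q = [[0.0] * n_actions for _ in range(n_states)]
     for _ in range(episodes):
         s, done = env.reset(), False
         while not done:
             # epsilon-greedy choice: explore at random, otherwise exploit Q
             if random.random() < epsilon:
                 a = random.randrange(n_actions)
             else:
                 a = max(range(n_actions), key=lambda x: Q[s][x])
             s_next, r, done = env.step(a)
             # terminal states contribute only their reward
             target = r if done else r + gamma * max(Q[s_next])
             # weighted average of the old value and the new information
             Q[s][a] += alpha * (target - Q[s][a])
             s = s_next
     return Q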
A model that incorporates "reset of initial conditions" (RIC) is expected to predict participants' behavior better than a model that assumes any "arbitrary initial condition" (AIC). RIC seems to be consistent with human behaviour in repeated binary choice experiments. Implementation. "Q"-learning at its simplest stores data in tables. This approach falters with increasing numbers of states/actions since the likelihood of the agent visiting a particular state and performing a particular action is increasingly small. Function approximation. "Q"-learning can be combined with function approximation. This makes it possible to apply the algorithm to larger problems, even when the state space is continuous. One solution is to use an (adapted) artificial neural network as a function approximator. Another possibility is to integrate Fuzzy Rule Interpolation (FRI) and use sparse fuzzy rule-bases instead of discrete Q-tables or ANNs, which has the advantage of being a human-readable knowledge representation form. Function approximation may speed up learning in finite problems, due to the fact that the algorithm can generalize earlier experiences to previously unseen states. Quantization. Another technique to decrease the state/action space quantizes possible values. Consider the example of learning to balance a stick on a finger. To describe a state at a certain point in time involves the position of the finger in space, its velocity, the angle of the stick and the angular velocity of the stick. This yields a four-element vector that describes one state, i.e. a snapshot of one state encoded into four values. The problem is that infinitely many possible states are present. To shrink the possible space of valid actions, multiple values can be assigned to a bucket. The exact distance of the finger from its starting position (-Infinity to Infinity) is not known, but rather whether it is far away or not (Near, Far). History. "Q"-learning was introduced by Chris Watkins in 1989. A convergence proof was presented by Watkins and Peter Dayan in 1992. Watkins was addressing “Learning from delayed rewards”, the title of his PhD thesis. Eight years earlier, in 1981, the same problem, under the name of “Delayed reinforcement learning”, was solved by Bozinovski's Crossbar Adaptive Array (CAA). The memory matrix formula_31 was the same as the Q-table of Q-learning introduced eight years later. The architecture introduced the term “state evaluation” in reinforcement learning. The crossbar learning algorithm, written in mathematical pseudocode in the paper, in each iteration performs the following computation: in state s perform action a, receive the consequence state s', compute the state evaluation v(s'), and update the crossbar memory according to formula_32. The term “secondary reinforcement” is borrowed from animal learning theory, to model state values via backpropagation: the state value v(s') of the consequence situation is backpropagated to the previously encountered situations. CAA computes state values vertically and actions horizontally (the "crossbar"). Demonstration graphs showing delayed reinforcement learning contained states (desirable, undesirable, and neutral states), which were computed by the state evaluation function. This learning system was a forerunner of the Q-learning algorithm. In 2014, Google DeepMind patented an application of Q-learning to deep learning, titled "deep reinforcement learning" or "deep Q-learning", that can play Atari 2600 games at expert human levels. Variants. Deep Q-learning. 
The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy of the agent and the data distribution, and the correlations between Q and the target values. The method can be used for stochastic search in various domains and applications. The technique used "experience replay," a biologically inspired mechanism that uses a random sample of prior actions instead of the most recent action to proceed. This removes correlations in the observation sequence and smooths changes in the data distribution. Iterative updates adjust Q towards target values that are only periodically updated, further reducing correlations with the target. Double Q-learning. Because the future maximum approximated action value in Q-learning is evaluated using the same Q function as in the current action-selection policy, in noisy environments Q-learning can sometimes overestimate the action values, slowing the learning. A variant called Double Q-learning was proposed to correct this. Double Q-learning is an off-policy reinforcement learning algorithm, where a different policy is used for value evaluation than what is used to select the next action. In practice, two separate value functions formula_33 and formula_34 are trained in a mutually symmetric fashion using separate experiences. The double Q-learning update step is then as follows: formula_35, and formula_36 Now the estimated value of the discounted future is evaluated using a different policy, which solves the overestimation issue. This algorithm was later modified in 2015 and combined with deep learning, as in the DQN algorithm, resulting in Double DQN, which outperforms the original DQN algorithm. Others. Delayed Q-learning is an alternative implementation of the online "Q"-learning algorithm, with probably approximately correct (PAC) learning. Greedy GQ is a variant of "Q"-learning to use in combination with (linear) function approximation. The advantage of Greedy GQ is that convergence is guaranteed even when function approximation is used to estimate the action values. Distributional Q-learning is a variant of "Q"-learning which seeks to model the distribution of returns rather than the expected return of each action. It has been observed to facilitate estimation by deep neural networks and can enable alternative control methods, such as risk-sensitive control. Multi-agent learning. Q-learning has been proposed in the multi-agent setting (see Section 4.1.2 in ). One approach consists of pretending the environment is passive. Littman proposes the minimax Q learning algorithm. Limitations. The standard Q-learning algorithm (using a formula_15 table) applies only to discrete action and state spaces. Discretization of these values leads to inefficient learning, largely due to the curse of dimensionality. However, there are adaptations of Q-learning that attempt to solve this problem, such as Wire-fitted Neural Network Q-Learning. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{S}" }, { "math_id": 1, "text": "\\mathcal{A}" }, { "math_id": 2, "text": "a \\in \\mathcal{A}" }, { "math_id": 3, "text": "\\Delta t" }, { "math_id": 4, "text": "\\gamma^{\\Delta t}" }, { "math_id": 5, "text": "\\gamma" }, { "math_id": 6, "text": "0 \\le \\gamma \\le 1" }, { "math_id": 7, "text": "\\gamma < 1" }, { "math_id": 8, "text": " \\gamma " }, { "math_id": 9, "text": "Q: \\mathcal{S} \\times \\mathcal{A} \\to \\mathbb{R}" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "A_t" }, { "math_id": 12, "text": "R_{t+1}" }, { "math_id": 13, "text": "S_{t+1}" }, { "math_id": 14, "text": "S_t" }, { "math_id": 15, "text": "Q" }, { "math_id": 16, "text": "Q^{new}(S_{t},A_{t}) \\leftarrow (1 - \\underbrace{\\alpha}_{\\text{learning rate}}) \\cdot \\underbrace{Q(S_{t},A_{t})}_{\\text{current value}} + \\underbrace{\\alpha}_{\\text{learning rate}} \\cdot \\bigg( \\underbrace{\\underbrace{R_{t+1}}_{\\text{reward}} + \\underbrace{\\gamma}_{\\text{discount factor}} \\cdot \\underbrace{\\max_{a}Q(S_{t+1}, a)}_{\\text{estimate of optimal future value}}}_{\\text{new value (temporal difference target)}} \\bigg) " }, { "math_id": 17, "text": "S_{t}" }, { "math_id": 18, "text": "\\alpha" }, { "math_id": 19, "text": "(0 < \\alpha \\le 1)" }, { "math_id": 20, "text": "Q^{new}(S_t,A_t)" }, { "math_id": 21, "text": "(1 - \\alpha)Q(S_t,A_t)" }, { "math_id": 22, "text": "\\alpha \\, R_{t+1}" }, { "math_id": 23, "text": "\\alpha \\gamma \\max_{a}Q(S_{t+1},a)" }, { "math_id": 24, "text": "s_f" }, { "math_id": 25, "text": "Q(s_f, a)" }, { "math_id": 26, "text": "r" }, { "math_id": 27, "text": "Q(s_f,a)" }, { "math_id": 28, "text": "\\alpha_t = 1" }, { "math_id": 29, "text": "\\alpha_t = 0.1" }, { "math_id": 30, "text": "r_t" }, { "math_id": 31, "text": "W = \\|w(a,s)\\|" }, { "math_id": 32, "text": "w'(a,s) = w(a,s) + v(s')" }, { "math_id": 33, "text": "Q^A" }, { "math_id": 34, "text": "Q^B" }, { "math_id": 35, "text": "Q^A_{t+1}(s_{t}, a_{t}) = Q^A_{t}(s_{t}, a_{t}) + \\alpha_{t}(s_{t}, a_{t}) \\left(r_{t} + \\gamma Q^B_{t}\\left(s_{t+1}, \\mathop\\operatorname{arg~max}_{a} Q^A_t(s_{t+1}, a)\\right) - Q^A_{t}(s_{t}, a_{t})\\right)" }, { "math_id": 36, "text": "Q^B_{t+1}(s_{t}, a_{t}) = Q^B_{t}(s_{t}, a_{t}) + \\alpha_{t}(s_{t}, a_{t}) \\left(r_{t} + \\gamma Q^A_{t}\\left(s_{t+1}, \\mathop\\operatorname{arg~max}_{a} Q^B_t(s_{t+1}, a)\\right) - Q^B_{t}(s_{t}, a_{t})\\right)." } ]
https://en.wikipedia.org/wiki?curid=1281850
12819244
Spermine synthase
Spermine synthase (EC 2.5.1.22, "spermidine aminopropyltransferase", "spermine synthetase") is an enzyme that converts spermidine into spermine. This enzyme catalyses the following chemical reaction: S-adenosylmethioninamine + spermidine formula_0 S-methyl-5'-thioadenosine + spermine. Spermine synthase is an enzyme involved in polyamine biosynthesis. It is present in all eukaryotes and plays a role in a variety of biological functions in plants. Its structure consists of two identical 41 kDa monomers, each with three domains, which associate to form a homodimer. Interactions between the N-terminal domains of the two monomers are responsible for dimerization, as that is where the active site is located; the central domain consists of four β-strands and structurally forms a lid for the third domain, the C-terminal domain. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=12819244
12820089
Thymaridas
4th-century BCE Greek mathematician Thymaridas of Paros (c. 400 – c. 350 BCE) was an ancient Greek mathematician and Pythagorean noted for his work on prime numbers and simultaneous linear equations. Life and work. Although little is known about the life of Thymaridas, it is believed that he was a rich man who fell into poverty. It is said that Thestor of Poseidonia traveled to Paros in order to help Thymaridas with the money that was collected for him. Iamblichus states that Thymaridas called prime numbers "rectilinear", since they can only be represented on a one-dimensional line. Non-prime numbers, on the other hand, can be represented on a two-dimensional plane as a rectangle with sides that, when multiplied, produce the non-prime number in question. He further called the number one a "limiting quantity". Iamblichus, in his comments on "Introductio arithmetica", states that Thymaridas also worked with simultaneous linear equations. In particular, he created the then famous rule that was known as the "bloom of Thymaridas" or as the "flower of Thymaridas", which states that: If the sum of "n" quantities be given, and also the sum of every pair containing a particular quantity, then this particular quantity is equal to 1/("n" + 2) [this is a typo in Flegg's book – the denominator should be "n" − 2 to match the math below] of the difference between the sums of these pairs and the first given sum. Or, using modern notation, the solution of the following system of "n" linear equations in "n" unknowns: formula_0 is given by formula_1 Iamblichus goes on to describe how some systems of linear equations that are not in this form can be placed into this form.
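As a quick check of the rule, the sketch below applies it directly: given the total sum s and the pair sums m_1, ..., m_{n-1}, the particular quantity x follows from the formula above, and the remaining unknowns are recovered as x_i = m_i - x. The numbers in the example are hypothetical.
 # "Bloom of Thymaridas": solve for x given the total sum s and pair sums m_i.
 def thymaridas(s, m):
     n = len(m) + 1                      # unknowns are x, x_1, ..., x_{n-1}
     x = (sum(m) - s) / (n - 2)
     return x, [mi - x for mi in m]      # remaining unknowns x_i = m_i - x
 # hypothetical example with n = 4: x + x1 + x2 + x3 = 20 and pair sums 9, 10, 11
 print(thymaridas(20, [9, 10, 11]))      # -> (5.0, [4.0, 5.0, 6.0])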
[ { "math_id": 0, "text": "\n\\begin{align}\n x + x_1 + x_2 + \\cdots + x_{n-1} &= s, \\\\\n x + x_1 &= m_1, \\\\\n x + x_2 &= m_2, \\\\\n &~~\\vdots \\\\\n x + x_{n-1} &= m_{n-1}\n\\end{align}\n" }, { "math_id": 1, "text": "x = \\frac{(m_1 + m_2 + \\cdots + m_{n-1}) - s}{n - 2}." } ]
https://en.wikipedia.org/wiki?curid=12820089
12821736
Presentation complex
In geometric group theory, a presentation complex is a 2-dimensional cell complex associated to any presentation of a group "G". The complex has a single vertex, and one loop at the vertex for each generator of "G". There is one 2-cell for each relation in the presentation, with the boundary of the 2-cell attached along the appropriate word. Examples. Let formula_1 be the two-dimensional integer lattice, with presentation formula_2 Then the presentation complex for "G" is a torus, obtained by gluing the opposite sides of a square, the 2-cell, which are labelled "x" and "y". All four corners of the square are glued into a single vertex, the 0-cell of the presentation complex, while a pair consisting of a longitudinal circle and a meridian circle on the torus, intersecting at the vertex, constitutes its 1-skeleton. The associated Cayley complex is a regular tiling of the plane by unit squares. The 1-skeleton of this complex is a Cayley graph for formula_3. Let formula_4 be the infinite dihedral group, with presentation formula_5. The presentation complex for formula_6 is formula_7, the wedge sum of two projective planes. There is one 2-cell glued to each loop (attached along the loop traversed twice, according to the relators), which provides the standard cell structure for each projective plane. The Cayley complex is an infinite string of spheres. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K(G,1)" }, { "math_id": 1, "text": "G= \\Z^2" }, { "math_id": 2, "text": " G=\\langle x,y|xyx^{-1}y^{-1}\\rangle." }, { "math_id": 3, "text": "\\Z^2" }, { "math_id": 4, "text": "G = \\Z_2 *\\Z_2" }, { "math_id": 5, "text": "\\langle a,b \\mid a^2,b^2 \\rangle" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "\\mathbb{RP}^2 \\vee \\mathbb{RP}^2" } ]
https://en.wikipedia.org/wiki?curid=12821736
1282548
Jarzynski equality
Equation in statistical mechanics The Jarzynski equality (JE) is an equation in statistical mechanics that relates free energy differences between two states and the irreversible work along an ensemble of trajectories joining the same states. It is named after the physicist Christopher Jarzynski (then at the University of Washington and Los Alamos National Laboratory, currently at the University of Maryland) who derived it in 1996. Fundamentally, the Jarzynski equality points to the fact that the fluctuations in the work satisfy certain constraints separately from the average value of the work that occurs in some process. Overview. In thermodynamics, the free energy difference formula_0 between two states "A" and "B" is connected to the work "W" done on the system through the "inequality": formula_1, with equality holding only in the case of a quasistatic process, i.e. when one takes the system from "A" to "B" infinitely slowly (such that all intermediate states are in thermodynamic equilibrium). In contrast to the thermodynamic statement above, the JE remains valid no matter how fast the process happens. The JE states: formula_2 Here "k" is the Boltzmann constant and "T" is the temperature of the system in the equilibrium state "A" or, equivalently, the temperature of the heat reservoir with which the system was thermalized before the process took place. The over-line indicates an average over all possible realizations of an external process that takes the system from the equilibrium state "A" to a new, generally nonequilibrium state under the same external conditions as that of the equilibrium state "B". This average over possible realizations is an average over different possible fluctuations that could occur during the process (due to Brownian motion, for example), each of which will cause a slightly different value for the work done on the system. In the limit of an infinitely slow process, the work "W" performed on the system in each realization is numerically the same, so the average becomes irrelevant and the Jarzynski equality reduces to the thermodynamic equality formula_3 (see above). Away from the infinitely slow limit, the average value of the work obeys formula_4 while the distribution of the fluctuations in the work is further constrained such that formula_2 In this general case, "W" depends upon the specific initial microstate of the system, though its average can still be related to formula_5 through an application of Jensen's inequality in the JE, viz. formula_4 in accordance with the second law of thermodynamics. The Jarzynski equality holds when the initial state is a Boltzmann distribution (e.g. the system is in equilibrium) and the system and environment can be described by a large number of degrees of freedom evolving under arbitrary Hamiltonian dynamics. The final state does not need to be in equilibrium. (For example, in the textbook case of a gas compressed by a piston, the gas is equilibrated at piston position "A" and compressed to piston position "B"; in the Jarzynski equality, the final state of the gas does not need to be equilibrated at this new piston position). Since its original derivation, the Jarzynski equality has been verified in a variety of contexts, ranging from experiments with biomolecules to numerical simulations. The Crooks fluctuation theorem, proved two years later, leads immediately to the Jarzynski equality. Many other theoretical derivations have also appeared, lending further confidence to its generality. Examples. 
Fluctuation-dissipation theorem. Taking the log of formula_6 and using the cumulant expansion up to the second cumulant, we obtain formula_7. The left side is the work dissipated into the heat bath, and the right side could be interpreted as the fluctuation in the work due to thermal noise. Consider dragging an overdamped particle in a viscous fluid with temperature formula_8 at constant force formula_9 for a time formula_10. Because there is no potential energy for the particle, the change in free energy is zero, so we obtain formula_11. The work expended is formula_12, where formula_13 is the total displacement during the time. The particle's displacement has a mean part due to the external dragging, and a varying part due to its own diffusion, so formula_14, where formula_15 is the diffusion coefficient. Together, we obtain formula_16 or formula_17, where formula_18 is the viscosity. This is the fluctuation-dissipation theorem. In fact, for most trajectories, the work is positive, but for some rare trajectories, the work is negative, and those contribute enormously to the expectation, giving us an expectation that is exactly one. History. A question has been raised about who gave the earliest statement of the Jarzynski equality. For example, in 1977 the Russian physicists G.N. Bochkov and Yu. E. Kuzovlev (see Bibliography) proposed a generalized version of the fluctuation-dissipation theorem which holds in the presence of arbitrary external time-dependent forces. Despite its close similarity to the JE, the Bochkov-Kuzovlev result does not relate free energy differences to work measurements, as discussed by Jarzynski himself in 2007. Another similar statement to the Jarzynski equality is the nonequilibrium partition identity, which can be traced back to Yamada and Kawasaki. (The Nonequilibrium Partition Identity is the Jarzynski equality applied to two systems whose free energy difference is zero, like straining a fluid.) However, these early statements are very limited in their application. Both Bochkov and Kuzovlev as well as Yamada and Kawasaki consider a deterministic time reversible Hamiltonian system. As Kawasaki himself noted, this precludes any treatment of nonequilibrium steady states. The fact that these nonequilibrium systems heat up forever because of the lack of any thermostatting mechanism leads to divergent integrals etc. No purely Hamiltonian description is capable of treating the experiments carried out to verify the Crooks fluctuation theorem, Jarzynski equality and the fluctuation theorem. These experiments involve thermostatted systems in contact with heat baths. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. For earlier results dealing with the statistics of work in adiabatic (i.e. Hamiltonian) nonequilibrium processes, see: For a comparison of such results, see: For an extension to relativistic Brownian motion, see:
[ { "math_id": 0, "text": "\\Delta F = F_B - F_A" }, { "math_id": 1, "text": " \\Delta F \\leq W " }, { "math_id": 2, "text": " e^ { -\\Delta F / k T} = \\overline{ e^{ -W/kT } }. " }, { "math_id": 3, "text": "\\Delta F = W" }, { "math_id": 4, "text": "\\Delta F \\leq \\overline{W}, " }, { "math_id": 5, "text": "\\Delta F" }, { "math_id": 6, "text": "E[e^{-\\beta W}] = e^{-\\beta \\Delta F} " }, { "math_id": 7, "text": "E[W] - \\Delta F \\approx \\frac 12 \\beta \\sigma_W^2" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "E[W] = \\frac 12 \\beta \\sigma_W^2 = \\frac 12 \\beta f^2 \\sigma_x^2" }, { "math_id": 12, "text": "fx" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "\\sigma_{x}^2 = 2D t" }, { "math_id": 15, "text": "D" }, { "math_id": 16, "text": "f = \\frac{k_B T}{D} E[x]/t" }, { "math_id": 17, "text": "\\gamma D = k_BT " }, { "math_id": 18, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=1282548
12825821
Power system simulation
Electrical power system simulation involves power system modeling and network simulation in order to analyze electrical power systems using design/offline or real-time data. Power system simulation software packages are a class of computer simulation programs that focus on the operation of electrical power systems. These types of computer programs are used in a wide range of planning and operational situations for electric power systems. Applications of power system simulation include: long-term generation and transmission expansion planning, short-term operational simulations, and market analysis (e.g. price forecasting). These programs typically make use of mathematical optimization techniques such as linear programming, quadratic programming, and mixed integer programming. Multiple elements of a power system can be modelled. A power-flow study calculates the loading on transmission lines and the power necessary to be generated at generating stations, given the required loads to be served. A short circuit study or fault analysis calculates the short-circuit current that would flow at various points of interest in the system under study, for short-circuits between phases or from energized wires to ground. A coordination study allows selection and setting of protective relays and fuses to rapidly clear a short-circuit fault while minimizing effects on the rest of the power system. Transient or dynamic stability studies show the effect of events such as sudden load changes, short-circuits, or accidental disconnection of load on the synchronization of the generators in the system. Harmonic or power quality studies show the effect of non-linear loads such as lighting on the waveform of the power system, and allow recommendations to be made to mitigate severe distortion. An optimal power-flow study establishes the best combination of generating plant output to meet a given load requirement, so as to minimize production cost while maintaining desired stability and reliability; such models may be updated in near-real-time to allow guidance to system operators on the lowest-cost way to achieve economic dispatch. There are many power simulation software packages in commercial and non-commercial forms that range from utility-scale software to study tools. Load flow calculation. The load-flow calculation is the most common network analysis tool for examining the undisturbed and disturbed network within the scope of operational and strategic planning. Using network topology, transmission line parameters, transformer parameters, generator location and limits, and load location and compensation, the load-flow calculation can provide voltage magnitudes and angles for all nodes and loading of network components, such as cables and transformers. With this information, compliance with operating limitations, such as those stipulated by voltage ranges and maximum loads, can be examined. This is, for example, important for determining the transmission capacity of underground cables, where the influence of cable bundling on the load capability of each cable also has to be taken into account. Due to the ability to determine losses and reactive-power allocation, load-flow calculation also supports the planning engineer in the investigation of the most economical operation mode of the network. When changing over from single and/or multi-phase infeed low-voltage meshed networks to isolated networks, load-flow calculation is essential for operational and economic reasons. 
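To make the idea concrete, the sketch below solves a toy DC (linearized) load flow for a hypothetical three-bus network. A real load-flow study would solve the full AC equations (for example with a Newton-Raphson scheme) and report voltage magnitudes as well as angles; here reactive power and losses are ignored, and all line and injection data are invented for illustration.
 # Toy DC (linearized) load flow for a hypothetical 3-bus network (Python).
 import numpy as np
 
 lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.2)]   # (from bus, to bus, reactance in p.u.)
 P = np.array([0.0, -0.6, -0.4])                   # net injections; bus 0 is the slack bus
 
 n = 3
 B = np.zeros((n, n))                              # susceptance matrix
 for i, j, x in lines:
     B[i, i] += 1 / x; B[j, j] += 1 / x
     B[i, j] -= 1 / x; B[j, i] -= 1 / x
 
 theta = np.zeros(n)
 theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])     # slack bus angle fixed at zero
 
 for i, j, x in lines:
     print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")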
Load-flow calculation is also the basis of all further network studies, such as motor start-up or investigation of scheduled or unscheduled outages of equipment within the outage simulation. Especially when investigating motor start-up, the load-flow calculation results give helpful hints, for example, of whether the motor can be started in spite of the voltage drop caused by the start-up current. Short circuit analysis. Short circuit analysis analyzes the power flow after a fault occurs in a power network. The faults may be three-phase short circuit, one-phase grounded, two-phase short circuit, two-phase grounded, one-phase break, two-phase break or complex faults. Results of such an analysis may help determine the following: Transient stability simulation. The goal of transient stability simulation of power systems is to analyze the stability of a power system from sub-second to several tens of seconds. Stability in this sense is the ability of the system to quickly return to a stable operating condition after being exposed to a disturbance, such as a tree falling over an overhead line and resulting in the automatic disconnection of that line by its protection systems. In engineering terms, a power system is deemed stable if the substation voltage levels and the rotational speeds of motors and generators return to their normal values in a quick and continuous manner. Models typically use the following inputs: The acceptable amount of time it takes grid voltages to return to their intended levels is dependent on the magnitude of the voltage disturbance, and the most common standard is specified by the CBEMA curve (Figure 1). This curve informs both electronic equipment design and grid stability data reporting. Unit commitment. The problem of unit commitment involves finding the least-cost dispatch of available generation resources to meet the electrical load. Generating resources can include a wide range of types: The key decision variables that are decided by the computer program are: The latter decisions are binary {0,1}, which means that the mathematical problem is not continuous. In addition, generating plants are subject to a number of complex technical constraints, including: These constraints have many different variants; all this gives rise to a large class of mathematical optimization problems. Optimal power flow. Electricity flows through an AC network according to Kirchhoff's Laws. Transmission lines are subject to thermal limits (simple megawatt limits on flow), as well as voltage and electrical stability constraints. The simulator must calculate the flows in the AC network that result from any given combination of unit commitment and generator megawatt dispatch, and ensure that AC line flows are within both the thermal limits and the voltage and stability constraints. This may include contingencies such as the loss of any one transmission or generation element, a so-called security-constrained optimal power flow (SCOPF), and if the unit commitment is optimized inside this framework we have a security-constrained unit commitment (SCUC). In optimal power flow (OPF) the generalised scalar objective to be minimised is given by: formula_0 where "u" is a set of the control variables, "x" is a set of independent variables, and the subscript 0 indicates that the variable refers to the pre-contingency power system. The SCOPF is bound by equality and inequality constraint limits. 
The equality constraint limits are given by the pre- and post-contingency power-flow equations, where "k" refers to the "k"th contingency case: formula_1 The equipment and operating limits are given by the following inequalities: formula_2 represents hard constraints on controls, formula_3 represents hard/soft constraints on variables, and formula_4 represents other constraints such as reactive reserve limits. The objective function in OPF can take on different forms relating to active or reactive power quantities that we wish to either minimise or maximise. For example, we may wish to minimise transmission losses or minimise real power generation costs on a power network. Other power flow solution methods like stochastic optimization incorporate the uncertainty found in modeling power systems by using the probability distributions of certain variables whose exact values are not known. When uncertainties in the constraints are present, such as for dynamic line ratings, chance constrained optimization can be used where the probability of violating a constraint is limited to a certain value. Another technique to model variability is the Monte Carlo method, in which different combinations of inputs and resulting outputs are considered based on the probability of their occurrence in the real world. This method can be applied to simulations for system security and unit commitment risk, and it is increasingly being used to model probabilistic load flow with renewable and/or distributed generation. Models of competitive behavior. The cost of producing a megawatt-hour of electrical energy is a function of: In addition to this, generating plants incur fixed costs, including: Assuming perfect competition, the market-based price of electricity would be based purely on the cost of producing the "next" megawatt of power, the so-called "short-run marginal cost" (SRMC). This price, however, might not be sufficient to cover the fixed costs of generation, and thus power market prices rarely show purely SRMC pricing. In most established power markets, generators are "free" to offer their generation capacity at prices of their choosing. Competition and the use of financial contracts keep these prices close to SRMC, but inevitably offers above SRMC do occur (for example during the California energy crisis of 2001). In the context of power system simulation, a number of techniques have been applied to simulate imperfect competition in electrical power markets: Various heuristics have also been applied to this problem. The aim is to provide "realistic" forecasts of power market prices, given the forecast supply-demand situation. Long-term optimization. Power system long-term optimization focuses on optimizing the multi-year expansion and retirement plan for generation, transmission, and distribution facilities. The optimization problem will typically consider the long term investment cash flow and a simplified version of OPF / UC (unit commitment), to make sure the power system operates in a secure and economic way. This area can be categorized as: Study specifications. A well-defined power systems study requirement is critical to the success of any project as it will reduce the challenge of selecting a qualified service provider and the right analysis software. The system study specification describes the project scope, analysis types, and the required deliverables. The study specification must be written to match the specific project and industry requirements and will vary based on the type of analysis. 
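As a toy illustration of the SRMC-based dispatch idea discussed above, the sketch below stacks a set of hypothetical plants in merit order (cheapest short-run marginal cost first) until demand is met; the marginal plant sets the clearing price. Real unit commitment and OPF add the start-up, ramping and network constraints described earlier, so this is only a caricature of the full problem.
 # Toy merit-order dispatch: hypothetical plants stacked by SRMC until demand is met.
 plants = [                       # (name, capacity in MW, SRMC in $/MWh)
     ("hydro",  300,  5.0),
     ("coal",   500, 30.0),
     ("ccgt",   400, 45.0),
     ("peaker", 200, 90.0),
 ]
 demand = 1000.0                  # MW
 dispatch, price, remaining = {}, 0.0, demand
 for name, cap, srmc in sorted(plants, key=lambda p: p[2]):
     take = min(cap, remaining)
     if take > 0:
         dispatch[name] = take
         price = srmc             # the marginal plant sets the price
         remaining -= take
 print(dispatch)                  # hydro 300, coal 500, ccgt 200 (MW)
 print(price)                     # 45.0 $/MWh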
Power system simulation software. Over the years, there have been several power system simulation software packages used for various analyses. The first software with a graphical user interface was built by the University of Manchester in 1974 and was called IPSA (Interactive Power Systems Analysis); it is now owned by TNEI Services Ltd. The recently reformatted cinefilm 'A Blueprint for Power', shot in 1979, shows how this revolutionary software bridged the gap between user-friendly interfaces and the precision required for intricate network analyses. General Electric's MAPS (Multi-Area Production Simulation) is a production simulation model used by various Regional Transmission Organizations and Independent System Operators in the United States to plan for the economic impact of proposed electric transmission and generation facilities in FERC-regulated electric wholesale markets. Portions of the model may also be used for the commitment and dispatch phase (updated at 5-minute intervals) in operation of wholesale electric markets for RTO and ISO regions. Hitachi Energy's PROMOD is a similar software package. These ISO and RTO regions also utilize a GE software package called MARS (Multi-Area Reliability Simulation) to ensure the power system meets reliability criteria (a loss of load expectation (LOLE) of no greater than 0.1 days per year). Further, a GE software package called PSLF (Positive Sequence Load Flow), Siemens software packages called PSSE (Power System Simulation for Engineering) as well as PSS SINCAL (Siemens Network Calculator), and Electrical Transient Analyzer Program (ETAP) by Operation Technology Inc. analyze load flow on the power system for short-circuits and stability during preliminary planning studies by RTOs and ISOs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " f(u_0, x_0) " }, { "math_id": 1, "text": " g^k(u^k, x^k)=0 \\qquad\\text{for }k=1,2,\\ldots,n \\, " }, { "math_id": 2, "text": " U^k_{\\min} \\le U^k \\le U^k_{\\max} \\, " }, { "math_id": 3, "text": " h^k_{\\min} \\le X^k \\le X^k_{\\max} \\, " }, { "math_id": 4, "text": " h^k(u^k, x^k) \\le 0 \\text{ for } k=0,1,\\ldots,n \\, " } ]
https://en.wikipedia.org/wiki?curid=12825821
12829834
Neil J. Gunther
American computer scientist Neil Gunther (born 15 August 1950) is a computer information systems researcher best known internationally for developing the open-source performance modeling software "Pretty Damn Quick" and developing the Guerrilla approach to computer capacity planning and performance analysis. He has also been cited for his contributions to the theory of large transients in computer systems and packet networks, and his universal law of computational scalability. Gunther is a Senior Member of both the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), as well as a member of the American Mathematical Society (AMS), American Physical Society (APS), Computer Measurement Group (CMG) and ACM SIGMETRICS. He is currently focused on developing quantum information system technologies. Biography. Gunther is an Australian of German and Scots ancestry, born in Melbourne on 15 August 1950. He attended Preston East Primary School from 1955 to 1956, and Balwyn North Primary School from 1956 until 1962. For his tenth birthday, Gunther received a copy of the now famous book entitled "The Golden Book of Chemistry Experiments" from an older cousin. Inspired by the book, he started working on various experiments, making use of various chemicals that could be found around his house. After he spilled some potassium permanganate solution on his bedroom carpet, his mother confined him to an alcove in the garage, which he turned into a small laboratory, replete with industrial chemicals and second-hand laboratory glassware. Gunther was interested in finding out how things like detergents and oils were composed by "cracking" them in his fractionating column. He took particular interest in mixing paints for his art classes, as well as his chemistry classes at Balwyn High School. His father, the Superintendent of Melbourne's electrical power station, borrowed an organic chemistry text from the chemists in the quality control laboratory. This ultimately led to an intense interest in synthesizing azo dyes. At around age 14, Gunther attempted to predict the color of azo dyes based on the chromophore-auxochrome combination. Apart from drawing up empirical tables, this effort was largely unsuccessful due to his lack of knowledge of quantum theory. Post-Doc years. Gunther taught physics at San Jose State University from 1980 to 1981. He then joined Syncal Corporation, a small company contracted by NASA and JPL to develop thermoelectric materials for their deep-space missions. Gunther was asked to analyze the thermal stability test data from the Voyager RTGs. He discovered that the stability of the silicon-germanium (Si-Ge) thermoelectric alloy was controlled by a soliton-based precipitation mechanism. JPL used his work to select the next generation of RTG materials for the Galileo mission launched in 1989. Xerox years. In 1982, Gunther joined Xerox PARC to develop parametric and functional test software for PARC's small-scale VLSI design fabrication line. Ultimately, he was recruited onto the "Dragon" multiprocessor workstation project where he also developed the "PARCbench" multiprocessor benchmark. This was his first foray into computer performance analysis. In 1989, he developed a Wick-rotated version of Richard Feynman's quantum path integral formalism for analyzing performance degradation in large-scale computer systems and packet networks. Pyramid years. 
In 1990 Gunther joined Pyramid Technology (now part of Fujitsu Siemens Computers) where he held positions as senior scientist and manager of the Performance Analysis Group that was responsible for attaining industry-high TPC benchmarks on their Unix multiprocessors. He also performed simulations for the design of the "Reliant RM1000" parallel database server. Consulting practice. Gunther founded Performance Dynamics Company as a sole proprietorship, registered in California in 1994, to provide consulting and educational services for the management of high performance computer systems with an emphasis on performance analysis and enterprise-wide capacity planning. He went on to release and develop his own open-source performance modeling software called "PDQ (Pretty Damn Quick)" around 1998. That software also accompanied his first textbook on performance analysis entitled "The Practical Performance Analyst". Several other books have followed since then. Current research interests. Quantum information systems. In 2004, Gunther embarked on joint research into quantum information systems based on photonics. During the course of his research in this area, he has developed a theory of "photon bifurcation" that is currently being tested experimentally at École Polytechnique Fédérale de Lausanne. This represents yet another application of the path integral formulation to circumvent the wave-particle duality of light. In its simplest rendition, this theory can be considered as providing the quantum corrections to the Abbe-Rayleigh diffraction theory of imaging and the Fourier theory of optical information processing. Performance visualization. Inspired by the work of Tukey, Gunther explored ways to help systems analysts visualize performance in a manner similar to that already available in scientific visualization and information visualization. In 1991, he developed a tool called "Barry", which employs barycentric coordinates to visualize sampled CPU usage data on large-scale multiprocessor systems. More recently, he has applied the same 2-simplex barycentric coordinates to visualizing the Apdex application performance metric, which is based on categorical response time data. A barycentric 3-simplex (a tetrahedron), which can be swivelled on the computer screen using a mouse, has been found useful for visualizing packet network performance data. In 2008, he co-founded the PerfViz Google group. Universal Law of Computational Scalability. The throughput capacity X(N) of a computational platform is given by: formula_0 where N represents either the number of physical processors in the hardware configuration or the number of users driving the software application. The parameters formula_1, formula_2 and formula_3 respectively represent the levels of "contention" (e.g., queueing for shared resources), "coherency" delay (i.e., latency for data to become consistent) and "concurrency" (or effective parallelism) in the system. The formula_2 parameter also quantifies the retrograde throughput seen in many stress tests but not accounted for in either Amdahl's law or event-based simulations. This scalability law was originally developed by Gunther in 1993 while he was employed at Pyramid Technology. Since there are no topological dependencies, X(N) can model symmetric multiprocessors, multicores, clusters, and GRID architectures. 
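A minimal sketch of evaluating this law for hypothetical parameter values is shown below; in practice the contention, coherency and concurrency parameters would be fitted to measured throughput with nonlinear regression rather than chosen by hand.
 # Universal scalability law X(N) = gamma*N / (1 + alpha*(N-1) + beta*N*(N-1)),
 # evaluated for hypothetical alpha (contention), beta (coherency), gamma values.
 def usl_throughput(N, alpha=0.02, beta=0.0001, gamma=100.0):
     return gamma * N / (1 + alpha * (N - 1) + beta * N * (N - 1))
 
 for N in (1, 8, 32, 64, 128, 256):
     print(N, round(usl_throughput(N), 1))
 # Throughput rises, flattens, and eventually retrogrades once the beta term dominates.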
Also, because each of the three terms has a definite physical meaning, they can be employed as a heuristic to determine where to make performance improvements in hardware platforms or software applications. At a more fundamental level, the above equation can be derived from the Machine Repairman queueing model: Theorem (Gunther 2008): The universal scalability law is equivalent to the synchronous queueing bound on throughput in a modified Machine Repairman with state-dependent service times. The following corollary (Gunther 2008 with formula_4) corresponds to Amdahl's law: Theorem (Gunther 2002): Amdahl's law for parallel speedup is equivalent to the synchronous queueing bound on throughput in a Machine Repairman model of a multiprocessor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X(N) = \\frac{\\gamma N}{1 + \\alpha (N-1) + \\beta N (N-1)} " }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\beta" }, { "math_id": 3, "text": "\\gamma" }, { "math_id": 4, "text": "\\beta = 0" } ]
https://en.wikipedia.org/wiki?curid=12829834