12059679
Umbrella sampling
Sampling technique used in physics Umbrella sampling is a technique in computational physics and chemistry, used to improve sampling of a system (or different systems) where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Torrie and Valleau in 1977. It is a particular physical application of the more general importance sampling in statistics. Systems in which an energy barrier separates two regions of configuration space may suffer from poor sampling. In Metropolis Monte Carlo runs, the low probability of overcoming the potential barrier can leave inaccessible configurations poorly sampled—or even entirely unsampled—by the simulation. An easily visualised example occurs with a solid at its melting point: considering the state of the system with an order parameter "Q", both liquid (low "Q") and solid (high "Q") phases are low in energy, but are separated by a free-energy barrier at intermediate values of "Q". This prevents the simulation from adequately sampling both phases. Umbrella sampling is a means of "bridging the gap" in this situation. The standard Boltzmann weighting for Monte Carlo sampling is replaced by a potential chosen to cancel the influence of the energy barrier present. The Markov chain generated has a distribution given by formula_0 with "U" the potential energy, "w"(r"N") a function chosen to promote configurations that would otherwise be inaccessible to a Boltzmann-weighted Monte Carlo run. In the example above, "w" may be chosen such that "w" = "w"("Q"), taking high values at intermediate "Q" and low values at low/high "Q", facilitating barrier crossing. Values for a thermodynamic property "A" deduced from a sampling run performed in this manner can be transformed into canonical-ensemble values by applying the formula formula_1 with the formula_2 subscript indicating values from the umbrella-sampled simulation. The effect of introducing the weighting function "w"(r"N") is equivalent to adding a biasing potential formula_3 to the potential energy of the system. If the biasing potential is strictly a function of a reaction coordinate or order parameter formula_4, then the (unbiased) free-energy profile on the reaction coordinate can be calculated by subtracting the biasing potential from the biased free-energy profile: formula_5 where formula_6 is the free-energy profile of the unbiased system, and formula_7 is the free-energy profile calculated for the biased, umbrella-sampled system. A series of umbrella sampling simulations can be analyzed using the weighted histogram analysis method (WHAM) or its generalization. WHAM can be derived using the maximum likelihood method. Subtleties exist in deciding the most computationally efficient way to apply the umbrella sampling method, as described in Frenkel and Smit's book "Understanding Molecular Simulation". Alternatives to umbrella sampling for computing potentials of mean force or reaction rates are free-energy perturbation and transition interface sampling. A further alternative, which functions in full non-equilibrium, is S-PRES.
[ { "math_id": 0, "text": "\n\\pi(\\mathbf{r}^N) =\n \\frac{w(\\textbf{r}^N) \\exp{\\left(-\\frac{U(\\mathbf{r}^N)}{k_B T}\\right)}}\n {\\int{w(\\mathbf{r}'^N) \\exp{\\left(-\\frac{U(\\mathbf{r}'^N)}{k_B T}\\right)} \\,d\\mathbf{r}'^N}}," }, { "math_id": 1, "text": "\\langle A \\rangle = \\frac{\\langle A / w \\rangle_\\pi}{\\langle 1 / w \\rangle_\\pi}," }, { "math_id": 2, "text": "\\pi" }, { "math_id": 3, "text": "V(\\mathbf{r}^N) = -k_B T \\ln w(\\mathbf{r}^N)" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "F_0(Q) = F_\\pi(Q) - V(Q)," }, { "math_id": 6, "text": "F_0(Q)" }, { "math_id": 7, "text": "F_\\pi(Q)" } ]
https://en.wikipedia.org/wiki?curid=12059679
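The reweighting relation formula_1 lends itself to a short numerical illustration. The following is a minimal sketch, not part of the article: the one-dimensional double-well potential, the Gaussian-shaped bias, and all numerical parameters are illustrative assumptions. It samples the biased distribution formula_0 with a random-walk Metropolis chain and recovers a canonical average by dividing the biased averages of "A"/"w" and 1/"w".

```python
# Minimal sketch (not part of the article): umbrella-style reweighting on an
# arbitrary 1-D double-well potential. The potential, bias, and step sizes below
# are illustrative assumptions, not values from the text.
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
U = lambda q: 5.0 * (q**2 - 1.0)**2              # double well; barrier of ~5 kT at q = 0
w = lambda q: np.exp(4.0 * np.exp(-q**2))        # bias boosting the barrier region

def metropolis(log_target, q0, steps, step=0.4):
    """Random-walk Metropolis chain targeting exp(log_target)."""
    q, lp = q0, log_target(q0)
    samples = np.empty(steps)
    for i in range(steps):
        qn = q + step * rng.normal()
        lpn = log_target(qn)
        if np.log(rng.random()) < lpn - lp:      # accept/reject
            q, lp = qn, lpn
        samples[i] = q
    return samples

# Biased (umbrella) chain: pi(q) proportional to w(q) * exp(-U(q)/kT)
qs = metropolis(lambda q: np.log(w(q)) - U(q) / kT, q0=1.0, steps=200_000)

# Canonical average of an observable A(q) = q**2, recovered as <A/w>_pi / <1/w>_pi
A = qs**2
print("reweighted <q^2> =", np.mean(A / w(qs)) / np.mean(1.0 / w(qs)))
```

In practice one would run a series of windows with different biases and combine them with WHAM, as the article notes; this single-window sketch only demonstrates the reweighting identity.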
1205989
Spherical 3-manifold
Subclass of manifold In mathematics, a spherical 3-manifold "M" is a 3-manifold of the form formula_0 where formula_1 is a finite subgroup of O(4) acting freely by rotations on the 3-sphere formula_2. All such manifolds are prime, orientable, and closed. Spherical 3-manifolds are sometimes called elliptic 3-manifolds. Properties. A special case of the Bonnet–Myers theorem says that every smooth manifold which has a smooth Riemannian metric which is both geodesically complete and of constant positive curvature must be closed and have finite fundamental group. William Thurston's elliptization conjecture, proven by Grigori Perelman using Richard Hamilton's Ricci flow, states a converse: every closed three-dimensional manifold with finite fundamental group has a smooth Riemannian metric of constant positive curvature. (This converse is special to three dimensions.) As such, the spherical three-manifolds are precisely the closed 3-manifolds with finite fundamental group. According to Synge's theorem, every spherical 3-manifold is orientable, and in particular formula_1 must be included in SO(4). The fundamental group is either cyclic, or is a central extension of a dihedral, tetrahedral, octahedral, or icosahedral group by a cyclic group of even order. This divides the set of such manifolds into five classes, described in the following sections. The spherical manifolds are exactly the manifolds with spherical geometry, one of the eight geometries of Thurston's geometrization conjecture. Cyclic case (lens spaces). The manifolds formula_3 with Γ cyclic are precisely the 3-dimensional lens spaces. A lens space is not determined by its fundamental group (there are non-homeomorphic lens spaces with isomorphic fundamental groups); but any other spherical manifold is. Three-dimensional lens spaces arise as quotients of formula_4 by the action of the group that is generated by elements of the form formula_5 where formula_6. Such a lens space formula_7 has fundamental group formula_8 for all formula_9, so spaces with different formula_10 are not homotopy equivalent. Moreover, classifications up to homeomorphism and homotopy equivalence are known, as follows. The three-dimensional spaces formula_11 and formula_12 are: homotopy equivalent if and only if formula_13 for some formula_14 and homeomorphic if and only if formula_15 In particular, the lens spaces "L"(7,1) and "L"(7,2) give examples of two 3-manifolds that are homotopy equivalent but not homeomorphic. The lens space "L"(1,0) is the 3-sphere, and the lens space "L"(2,1) is 3-dimensional real projective space. Lens spaces can be represented as Seifert fiber spaces in many ways, usually as fiber spaces over the 2-sphere with at most two exceptional fibers, though the lens space with fundamental group of order 4 also has a representation as a Seifert fiber space over the projective plane with no exceptional fibers. Dihedral case (prism manifolds). A prism manifold is a closed 3-dimensional manifold "M" whose fundamental group is a central extension of a dihedral group. The fundamental group π1("M") of "M" is a product of a cyclic group of order "m" with a group having presentation formula_16 for integers "k", "m", "n" with "k" ≥ 1, "m" ≥ 1, "n" ≥ 2 and "m" coprime to 2"n". Alternatively, the fundamental group has presentation formula_17 for coprime integers "m", "n" with "m" ≥ 1, "n" ≥ 2. (The "n" here equals the previous "n", and the "m" here is 2^("k"-1) times the previous "m".) We continue with the latter presentation. This group is a metacyclic group of order 4"mn" with abelianization of order 4"m" (so "m" and "n" are both determined by this group). 
The element "y" generates a cyclic normal subgroup of order 2"n", and the element "x" has order 4"m". The center is cyclic of order 2"m" and is generated by "x"2, and the quotient by the center is the dihedral group of order 2"n". When "m" = 1 this group is a binary dihedral or dicyclic group. The simplest example is "m" = 1, "n" = 2, when π1("M") is the quaternion group of order 8. Prism manifolds are uniquely determined by their fundamental groups: if a closed 3-manifold has the same fundamental group as a prism manifold "M", it is homeomorphic to "M". Prism manifolds can be represented as Seifert fiber spaces in two ways. Tetrahedral case. The fundamental group is a product of a cyclic group of order "m" with a group having presentation formula_18 for integers "k", "m" with "k" ≥ 1, "m" ≥ 1 and "m" coprime to 6. Alternatively, the fundamental group has presentation formula_19 for an odd integer "m" ≥ 1. (The "m" here is 3^("k"-1) times the previous "m".) We continue with the latter presentation. This group has order 24"m". The elements "x" and "y" generate a normal subgroup isomorphic to the quaternion group of order 8. The center is cyclic of order 2"m". It is generated by the elements "z"3 and "x"2 = "y"2, and the quotient by the center is the tetrahedral group, equivalently, the alternating group "A"4. When "m" = 1 this group is the binary tetrahedral group. These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 3. Octahedral case. The fundamental group is a product of a cyclic group of order "m" coprime to 6 with the binary octahedral group (of order 48) which has the presentation formula_20 These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 4. Icosahedral case. The fundamental group is a product of a cyclic group of order "m" coprime to 30 with the binary icosahedral group (order 120) which has the presentation formula_21 When "m" is 1, the manifold is the Poincaré homology sphere. These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 5.
[ { "math_id": 0, "text": "M=S^3/\\Gamma" }, { "math_id": 1, "text": "\\Gamma" }, { "math_id": 2, "text": "S^3" }, { "math_id": 3, "text": "S^3/\\Gamma" }, { "math_id": 4, "text": "S^3 \\subset \\mathbb{C}^2" }, { "math_id": 5, "text": "\\begin{pmatrix}\\omega &0\\\\0&\\omega^q\\end{pmatrix}." }, { "math_id": 6, "text": "\\omega=e^{2\\pi i/p}" }, { "math_id": 7, "text": "L(p;q)" }, { "math_id": 8, "text": "\\mathbb{Z}/p\\mathbb{Z}" }, { "math_id": 9, "text": "q" }, { "math_id": 10, "text": "p" }, { "math_id": 11, "text": "L(p;q_1)" }, { "math_id": 12, "text": "L(p;q_2)" }, { "math_id": 13, "text": "q_1 q_2 \\equiv \\pm n^2 \\pmod{p}" }, { "math_id": 14, "text": "n \\in \\mathbb{N};" }, { "math_id": 15, "text": "q_1 \\equiv \\pm q_2^{\\pm1} \\pmod{p}." }, { "math_id": 16, "text": "\\langle x,y\\mid xyx^{-1}=y^{-1}, x^{2^k}=y^n\\rangle" }, { "math_id": 17, "text": "\\langle x,y \\mid xyx^{-1}=y^{-1}, x^{2m}=y^n\\rangle" }, { "math_id": 18, "text": "\\langle x,y,z \\mid (xy)^2=x^2=y^2, zxz^{-1}=y,zyz^{-1}=xy, z^{3^k}=1\\rangle" }, { "math_id": 19, "text": "\\langle x,y,z \\mid (xy)^2=x^2=y^2, zxz^{-1}=y,zyz^{-1}=xy, z^{3m}=1\\rangle" }, { "math_id": 20, "text": "\\langle x,y \\mid (xy)^2=x^3=y^4\\rangle." }, { "math_id": 21, "text": "\\langle x,y \\mid (xy)^2=x^3=y^5\\rangle." } ]
https://en.wikipedia.org/wiki?curid=1205989
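As a concrete check of the lens-space classification criteria quoted above (formula_13, formula_14, formula_15), here is a small sketch, not part of the article, that tests the homotopy-equivalence and homeomorphism conditions for "L"(7,1) and "L"(7,2); the function names are invented for illustration.

```python
# Minimal sketch (not part of the article): checking the lens-space classification
# criteria quoted above for L(p;q1) and L(p;q2), here with p = 7, q1 = 1, q2 = 2.

def homotopy_equivalent(p, q1, q2):
    """L(p;q1) and L(p;q2) are homotopy equivalent iff q1*q2 = +/- n^2 (mod p) for some n."""
    squares = {(n * n) % p for n in range(p)}
    return (q1 * q2) % p in squares or (-q1 * q2) % p in squares

def homeomorphic(p, q1, q2):
    """L(p;q1) and L(p;q2) are homeomorphic iff q1 = +/- q2^(+/-1) (mod p)."""
    q2_inv = pow(q2, -1, p)                      # modular inverse, Python 3.8+
    return q1 % p in {q2 % p, (-q2) % p, q2_inv, (-q2_inv) % p}

print(homotopy_equivalent(7, 1, 2))   # True:  1*2 = 2 = 3^2 (mod 7)
print(homeomorphic(7, 1, 2))          # False: 2, -2, 2^-1 = 4, and -4 all differ from 1 mod 7
```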
1206
Atomic orbital
Function describing an electron in an atom In quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function describes an electron's charge distribution around the atom's nucleus, and can be used to calculate the probability of finding an electron in a specific region around the nucleus. Each orbital in an atom is characterized by a set of values of three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, its orbital angular momentum, and its orbital angular momentum projected along a chosen axis (magnetic quantum number). The orbitals with a well-defined magnetic quantum number are generally complex-valued. Real-valued orbitals can be formed as linear combinations of mℓ and −mℓ orbitals, and are often labeled using associated harmonic polynomials (e.g., "xy", "x"2 − "y"2) which describe their angular structure. An orbital can be occupied by a maximum of two electrons, each with its own projection of spin formula_0. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number "ℓ" = 0, 1, 2, and 3 respectively. These names, together with their n values, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for "ℓ" > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j". Atomic orbitals are the basic building blocks of the atomic orbital model (or the electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating "periodicity" of blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f orbitals, respectively, though for higher values of the quantum number n, particularly when the atom bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons (e.g., Cr = [Ar]4s13d5 and Cr2+ = [Ar]3d4) can be rationalized only somewhat arbitrarily. Electron properties. With the development of quantum mechanics and experimental findings (such as the two-slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave–particle duality. In this sense, electrons have a mixture of wave-like and particle-like properties. Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when one electron is present. When more electrons are added, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection ("electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle. 
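The subshell capacities behind the blocks of 2, 6, 10, and 14 elements mentioned above follow from simple counting: a subshell with quantum number ℓ contains 2ℓ + 1 orbitals and holds at most two electrons per orbital. Below is a minimal sketch, not part of the article, using the standard restriction 0 ≤ ℓ ≤ n − 1 (stated later in the article); the helper names are invented for illustration.

```python
# Minimal sketch (not part of the article): each subshell with quantum number l holds
# 2*l + 1 orbitals and therefore 2*(2*l + 1) electrons, giving the 2, 6, 10, 14 pattern.
SUBSHELL_LETTERS = "spdfghik"   # after f the labels continue alphabetically, skipping j

def subshells(n):
    """(label, orbital count, electron capacity) for every subshell of shell n."""
    return [(f"{n}{SUBSHELL_LETTERS[l]}", 2 * l + 1, 2 * (2 * l + 1)) for l in range(n)]

for n in range(1, 5):
    print(n, subshells(n))
# 1 [('1s', 1, 2)]
# 2 [('2s', 1, 2), ('2p', 3, 6)]
# 3 [('3s', 1, 2), ('3p', 3, 6), ('3d', 5, 10)]
# 4 [('4s', 1, 2), ('4p', 3, 6), ('4d', 5, 10), ('4f', 7, 14)]
```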
One should remember that these orbital 'states', as described here, are merely eigenstates of an electron in its orbit. An actual electron exists in a superposition of states, which is like a weighted average, but with complex number weights. So, for instance, an electron could be in a state (2, 1, 0), a pure eigenstate, or a mixed state (2, 1, 0) + (2, 1, 1), or even the mixed state (2, 1, 0) + formula_1 (2, 1, 1). For each eigenstate, a property has an eigenvalue. So, for the three states just mentioned, the value of formula_2 is 2, and the value of formula_3 is 1. But the value for formula_4 is a superposition of 0 and 1. It is not a fraction like 1/2; it is ambiguous, either 0.0 or 1.0. A superposition of eigenstates (2, 1, 1) and (3, 2, 1) would have an ambiguous formula_2 and formula_3, but formula_4 would definitely be 1. Eigenstates make it easier to deal with the math. You can choose a different basis of eigenstates by superimposing eigenstates from any other basis (see Real orbitals below). Formal quantum mechanical definition. Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrödinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon; term symbol 1S0). This notation means that the corresponding Slater determinants have a clearly higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about a simple one-determinant wave function at all. This is the case when electron correlation is large. Fundamentally, an atomic orbital is a one-electron wave function, even though many electrons are not in one-electron atoms, and so the one-electron view is an approximation. 
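To make the superposition example above concrete, here is a minimal sketch, not part of the article, that writes the mixed state (2, 1, 0) + formula_1 (2, 1, 1) as complex amplitudes over (n, ℓ, mℓ) labels: n and ℓ are sharp, while a measurement of mℓ gives 0 or 1 with equal probability.

```python
# Minimal sketch (not part of the article): the mixed state (2,1,0) + i*(2,1,1) from the
# text, written as complex amplitudes over (n, l, m_l) labels and then normalized.
import numpy as np

state = {(2, 1, 0): 1.0 + 0j, (2, 1, 1): 1j}     # unnormalized amplitudes
norm = np.sqrt(sum(abs(a) ** 2 for a in state.values()))
state = {labels: a / norm for labels, a in state.items()}

for (n, l, m), amp in state.items():
    print(f"n={n}, l={l}, m_l={m}: probability {abs(amp) ** 2:.2f}")
# n and l are sharp (2 and 1); m_l comes out 0 or 1, each with probability 0.50
```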
When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory. Types of orbital. Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for orbitals are usually spherical coordinates ("r", "θ", "φ") in atoms and Cartesian ("x", "y", "z") in polyatomic molecules. The advantage of spherical coordinates here is that an orbital wave function is a product of three factors each dependent on a single coordinate: "ψ"("r", "θ", "φ") = "R"("r") Θ("θ") Φ("φ"). The angular factors of atomic orbitals Θ("θ") Φ("φ") generate s, p, d, etc. functions as real combinations of spherical harmonics "Y""ℓm"("θ", "φ") (where ℓ and m are quantum numbers). There are typically three mathematical forms for the radial functions "R"("r") which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons: the hydrogen-like orbitals obtained from the exact solution of the Schrödinger equation; Slater-type orbitals (STOs), with a radial dependence of the form formula_5; and Gaussian-type orbitals (GTOs), with a radial dependence of the form formula_6. Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals. History. The term "orbital" was coined by Robert S. Mulliken in 1932 as short for "one-electron orbital wave function". Niels Bohr explained around 1913 that electrons might revolve around a compact nucleus with definite angular momentum. Bohr's model was an improvement on the 1911 explanations of Ernest Rutherford, that of the electron moving around a nucleus. Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electron behavior as early as 1904. These theories were each built upon new observations starting with simple understanding and becoming more correct and complex. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics. Early models. With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolve in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. 
Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries. Bohr atom. In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were permitted to have only discrete values of angular momentum, quantized in units ħ. This constraint automatically allowed only certain electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines. After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, "not" achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step toward the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum; so a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the "n" = 1, 2, 3, etc. states in the Bohr model match those of current physics. 
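The statement that the Bohr energies match those of current physics can be made concrete with the standard hydrogen formula E_n = −13.6 eV / n². The following is a minimal sketch, not part of the article, with constants quoted to a few digits; it also shows how a spectral line arises from an energy difference between two orbits, as described above.

```python
# Minimal sketch (not part of the article): hydrogen energy levels E_n = -13.6 eV / n^2
# and the photon emitted when an electron falls from n = 2 to n = 1 (the Lyman-alpha line).
RYDBERG_EV = 13.6057       # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84         # h*c, eV*nm

def energy(n):
    return -RYDBERG_EV / n ** 2

delta = energy(2) - energy(1)                    # energy released in the 2 -> 1 transition
print(f"E = {delta:.2f} eV, wavelength = {HC_EV_NM / delta:.1f} nm")   # ~10.20 eV, ~121.6 nm
```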
However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the "n" = 1 state can hold one or two electrons, while the "n" = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all "n" = 1 states are fully occupied; the same is true for "n" = 1 and "n" = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation for hydrogen) and remains empty. Modern conceptions and connections to the Heisenberg uncertainty principle. Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require infinite particle momentum. In chemistry, Erwin Schrödinger, Linus Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom. In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number "n" for each orbital became known as an "n-sphere" in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom. Orbital names. Orbital notation and subshells. Orbitals have been given names, which are usually given in the form: formula_7 where "X" is the energy level corresponding to the principal quantum number n; type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number ℓ. 
For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level ("n" = 1) and has an angular quantum number of "ℓ" = 0, denoted as s. Orbitals with "ℓ" = 1, 2 and 3 are denoted as p, d and f respectively. The set of orbitals for a given n and ℓ is called a "subshell", denoted formula_8. The superscript y shows the number of electrons in the subshell. For example, the notation 2p4 indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and ℓ = 1. X-ray notation. There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For "n" = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively. Hydrogen-like orbitals. The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions (see hydrogen atom). For atoms with two or more electrons, the governing equations can be solved only with the use of methods of iterative approximation. Orbitals of multi-electron atoms are "qualitatively" similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: n, ℓ, and mℓ. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. The stationary states (quantum states) of the hydrogen-like atoms are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method. The quantum number n first appeared in the Bohr model where it determines the radius of each circular electron orbit. In modern quantum mechanics, however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of "n" lie at the same average distance. For this reason, orbitals with the same value of "n" are said to comprise a "shell". Orbitals with the same value of "n" and also the same value of ℓ are even more closely related, and are said to comprise a "subshell". Quantum numbers. Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed. Complex orbitals. 
In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows: The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of "n"; these orbitals together are sometimes called "electron shells". The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer "n"0, ℓ ranges across all (integer) values satisfying the relation formula_9. For instance, the "n" = 1 shell has only orbitals with formula_10, and the "n" = 2 shell has only orbitals with formula_10 and formula_11. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a "subshell". The magnetic quantum number, formula_12, describes the projection of the orbital angular momentum along a chosen axis. It determines the magnitude of the current circulating around that axis and the orbital contribution to the magnetic moment of an electron via the Ampèrian loop model. Within a subshell formula_13, formula_12 takes the integer values in the range formula_14. The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of formula_12 available in that subshell. Empty cells represent subshells that do not exist. Subshells are usually identified by their formula_2- and formula_13-values. formula_2 is represented by its numerical value, but formula_13 is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with formula_15 and formula_10 as a '2s subshell'. Each electron also has angular momentum in the form of quantum mechanical spin given by spin "s" = 1/2. Its projection along a specified axis is given by the spin magnetic quantum number, "ms", which can be +1/2 or −1/2. These values are also called "spin up" or "spin down" respectively. The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. If there are two electrons in an orbital with given values for three quantum numbers, (n, ℓ, m), these two electrons must differ in their spin projection "ms". The above conventions imply a preferred axis (for example, the "z" direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing "m" = +1 from "m" = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment—where an atom is exposed to a magnetic field—provides one such example. Real orbitals. Instead of the complex orbitals described above, it is common, especially in the chemistry literature, to use "real" atomic orbitals. These real orbitals arise from simple linear combinations of complex orbitals. Using the Condon–Shortley phase convention, real orbitals are related to complex orbitals in the same way that the real spherical harmonics are related to complex spherical harmonics. 
Letting formula_17 denote a complex orbital with quantum numbers formula_2, formula_3, and formula_18, the real orbitals formula_19 may be defined by formula_20 If formula_21, with formula_22 the radial part of the orbital, this definition is equivalent to formula_23 where formula_24 is the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic formula_25. Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In real hydrogen-like orbitals, quantum numbers formula_2 and formula_13 have the same interpretation and significance as their complex counterparts, but formula_18 is no longer a good quantum number (but its absolute value is). Some real orbitals are given specific names beyond the simple formula_26 designation. Orbitals with quantum number formula_13 equal to formula_27 are called formula_28 orbitals. With this one can already assign names to complex orbitals such as formula_29; the first symbol is the formula_2 quantum number, the second character is the symbol for that particular formula_13 quantum number and the subscript is the formula_18 quantum number. As an example of how the full orbital names are generated for real orbitals, one may calculate formula_30. From the table of spherical harmonics, formula_31 with formula_32. Then formula_33 Likewise formula_34. As a more complicated example: formula_35 In all these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in formula_36, formula_37, and formula_38 appearing in the numerator. We ignore any terms in the formula_39 polynomial except for the term with the highest exponent in formula_38. We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the formula_2 and formula_13 quantum numbers. formula_40 The expressions above all use the Condon–Shortley phase convention which is favored by quantum physicists. Other conventions exist for the phase of the spherical harmonics. Under these different conventions the formula_16 and formula_41 orbitals may appear, for example, as the sum and difference of formula_42 and formula_43, contrary to what is shown above. Below is a list of these Cartesian polynomial names for the atomic orbitals. There does not seem to be a reference in the literature as to how to abbreviate the long Cartesian spherical harmonic polynomials for formula_44, so there does not seem to be a consensus on the naming of formula_45 orbitals or higher according to this nomenclature. Shapes of orbitals. Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density | ψ("r", "θ", "φ") |2 has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. 
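The real-orbital construction defined in the Real orbitals section above (formula_20) can be reproduced numerically from the complex spherical harmonics. The following is a minimal sketch, not part of the article, using SciPy's sph_harm (whose argument order puts the azimuthal angle before the polar angle); it checks the p_x-like angular factor against its closed-form value, the square root of 3/(4π).

```python
# Minimal sketch (not part of the article): real spherical harmonics built from the
# complex ones via the Condon-Shortley combinations quoted above (formula_20).
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, theta, phi): theta = azimuth, phi = polar

def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic Y_{l,m}; the angular factor of a real orbital."""
    if m == 0:
        return sph_harm(0, l, theta, phi).real
    Y = sph_harm(abs(m), l, theta, phi)
    part = Y.real if m > 0 else Y.imag
    return np.sqrt(2) * (-1) ** m * part

# p_x-like combination (l = 1, m = +1) evaluated along the +x direction
value = real_sph_harm(1, 1, theta=0.0, phi=np.pi / 2)
print(value, np.sqrt(3 / (4 * np.pi)))   # both ~0.4886
```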
Although | "ψ" |2 as the square of an absolute value is everywhere non-negative, the sign of the wave function ψ("r", "θ", "φ") is often indicated in each subregion of the orbital picture. Sometimes the ψ function is graphed to show its phases, rather than | ψ("r", "θ", "φ") |2 which shows probability density but has no phase (which is lost when taking absolute value, since ψ("r", "θ", "φ") is a complex number). | ψ("r", "θ", "φ") |2 graphs tend to have less spherical, thinner lobes than ψ("r", "θ", "φ") graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, to show wave function phase, shows mostly ψ("r", "θ", "φ") graphs. The lobes can be seen as standing wave interference patterns between the two counter-rotating, ring-resonant traveling wave m and −"m" modes; the projection of the orbital onto the xy plane has a resonant m wavelength around the circumference. Although rarely shown, the traveling wave solutions can be seen as rotating banded tori; the bands represent phase information. For each m there are two standing wave solutions ⟨"m"⟩ + ⟨−"m"⟩ and ⟨"m"⟩ − ⟨−"m"⟩. If "m" = 0, the orbital is vertical, counter rotating information is unknown, and the orbital is "z"-axis symmetric. If "ℓ" = 0 there are no counter rotating modes. There are only radial modes and the shape is spherically symmetric. "Nodal planes" and "nodal spheres" are surfaces on which the probability density vanishes. The number of nodal surfaces is controlled by the quantum numbers n and ℓ. An orbital with azimuthal quantum number ℓ has ℓ radial nodal planes passing through the origin. For example, the s orbitals ("ℓ" = 0) are spherically symmetric and have no nodal planes, whereas the p orbitals ("ℓ" = 1) have a single nodal plane between the lobes. The number of nodal spheres equals n-ℓ-1, consistent with the restriction ℓ ≤ n-1 on the quantum numbers. The principal quantum number controls the total number of nodal surfaces which is n-1. Loosely speaking, n is energy, ℓ is analogous to eccentricity, and m is orientation. In general, n determines size and energy of the orbital for a given nucleus; as n increases, the size of the orbital increases. The higher nuclear charge Z of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the size of the atom remains very roughly constant, even as the number of electrons increases. Also in general terms, ℓ determines an orbital's shape, and mℓ its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on mℓ also. Together, the whole set of orbitals for a given ℓ and n fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes. The single s orbitals (formula_10) are shaped like spheres. For "n" = 1 it is roughly a solid ball (densest at center and fades outward exponentially), but for "n" ≥ 2, each single s orbital is made of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). See illustration of a cross-section of these nested shells, at right. The s orbitals for all n numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node "at" the nucleus). 
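The node-counting rules just described (ℓ nodal surfaces through the origin, n − ℓ − 1 nodal spheres, n − 1 in total) are easy to tabulate. A minimal sketch, not part of the article:

```python
# Minimal sketch (not part of the article): angular and radial node counts for a
# hydrogen-like orbital (n, l); their total is always n - 1.
def node_counts(n, l):
    assert 0 <= l <= n - 1, "l must satisfy 0 <= l <= n - 1"
    angular = l          # nodal planes/cones passing through the origin
    radial = n - l - 1   # nodal spheres
    return angular, radial

for label, (n, l) in {"1s": (1, 0), "2p": (2, 1), "3s": (3, 0), "3d": (3, 2)}.items():
    a, r = node_counts(n, l)
    print(f"{label}: {a} angular + {r} radial = {a + r} total")
```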
Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction that is often termed the impact parameter effect is included in the outcome (see the figure at right). The shapes of p, d and f orbitals are described verbally here and shown graphically in the "Orbitals table" below. The three p orbitals for "n" = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). The three p orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of mℓ. The overall result is a lobe pointing along each direction of the primary axes. Four of the five d orbitals for "n" = 3 look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has its lobes centered along the x and y axes themselves. The fifth and final d orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair. There are seven f orbitals, each with shapes more complex than those of the d orbitals. Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p) with an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital. The shapes of atomic orbitals in a one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics; in fact, it is possible to generate sets where all the d's are the same shape, just like the p"x", p"y", and p"z" are the same shape. Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number ℓ of the same shell n (e.g., all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ℓ) is spherical. This is known as Unsöld's theorem. Orbitals table. This table shows the real hydrogen-like wave functions for all atomic orbitals up to 7s, and therefore covers the occupied orbitals in the ground state of all elements in the periodic table up to radium and some beyond. 
"ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The p"z" orbital is the same as the p0 orbital, but the p"x" and p"y" are formed by taking linear combinations of the p+1 and p−1 orbitals (which is why they are listed under the "m" = ±1 label). Also, the p+1 and p−1 are not the same shape as the p0, since they are pure spherical harmonics. † "Elements with 7p electrons have been discovered, but their electronic configurations are only predicted – save the exceptional Lr, which fills 7p1 instead of 6d1." ‡ "For the elements whose highest occupied orbital is a 6d orbital, only some electronic configurations have been confirmed." (Mt, Ds, Rg and Cn are still missing). These are the real-valued orbitals commonly used in chemistry. Only the formula_46 orbitals are eigenstates of the orbital angular momentum operator, formula_47. The columns with formula_48 are combinations of two eigenstates. Qualitative understanding of shapes. The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism). This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum. A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with an orbital eccentricity of 1 but a finite major axis, not physically possible (because the particles would collide), but it can be imagined as a limit of orbits with equal major axes but increasing eccentricity. Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. 
A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ("r", "θ") and the wave functions for a vibrating sphere are three-coordinate ψ("r", "θ", "φ"). None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it. In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in the radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons. Orbital energy. In atoms with one electron (hydrogen-like atom), the energy of an orbital (and, consequently, any electron in the orbital) is determined mainly by formula_2. The formula_49 orbital has the lowest possible energy in the atom. Each successively higher value of formula_2 has a higher energy, but the difference decreases as formula_2 increases. For high formula_2, the energy becomes so high that the electron can easily escape the atom. In single electron atoms, all levels with different formula_13 within a given formula_2 are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken slightly in the solution to the Dirac equation (where energy depends on n and another quantum number j), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. In atoms with multiple electrons, the energy of an electron depends not only on its orbital, but also on its interactions with other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on formula_2 but also on formula_13. Higher values of formula_13 are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When formula_50, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s orbital in the next higher shell; when formula_51 the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled. 
The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms with higher atomic number, the formula_13 of electrons becomes more and more of a determining factor in their energy, and the principal quantum number formula_2 becomes less and less important in their energy placement. The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with formula_2 and formula_13 given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below. "Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known." Electron placement and the periodic table. Several rules govern the placement of electrons in orbitals ("electron configuration"). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as the spin magnetic quantum number ms. Thus, two electrons may occupy a single orbital, so long as they have different values of ms. Because ms takes one of only two values (+1/2 or −1/2), at most two electrons can occupy each orbital. Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number "i", it consists of elements whose outermost electrons fall in the "i"th shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom. The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. 
The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p The "periodic" nature of the filling of orbitals, as well as the emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements: Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms. The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or "valence electrons", tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties. Relativistic effects. For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms. This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium. In the Bohr model, an "n" = 1 electron has a velocity given by formula_52, where Z is the atomic number, formula_53 is the fine-structure constant, and "c" is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with formula_54 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical Z value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. 
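The subshell filling order listed at the start of this section can be generated from the Madelung rule named above: subshells fill in order of increasing n + ℓ, with ties broken in favor of smaller n. A minimal sketch, not part of the article:

```python
# Minimal sketch (not part of the article): generating the subshell filling order from
# the Madelung (n + l) rule; ties in n + l are broken in favor of smaller n.
LETTERS = "spdfghik"

def madelung_order(max_n):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{LETTERS[l]}" for n, l in subshells]

print(", ".join(madelung_order(8)[:19]))
# 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
```

As the article notes, this is only the general pattern; the chromium configuration quoted in the introduction is one of the exceptions.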
There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes. pp hybridization (conjectured). In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon. Transitions between orbitals. Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding with the exact energy difference between said states. Consider two states of the hydrogen atom: By quantum theory, state 1 has a fixed energy of "E"1, and state 2 has a fixed energy of "E"2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly "E"2 − "E"1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad-spectrum of light. Photons that reach the atom that have an energy of exactly "E"2 − "E"1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are greater or lower in energy cannot be absorbed by the electron, because the electron can jump only to one of the orbitals, it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2. The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model. The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
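As a small numerical illustration of the absorption condition described above (a photon is absorbed only if its energy equals E2 − E1), the sketch below uses the textbook hydrogen level energies En ≈ −13.6 eV / n², which are standard physics but are not derived in this article; the constant values and function names are for illustration only.

# Photon wavelength for a transition between two bound states: E_photon = E2 - E1.
# Illustrated with hydrogen, assuming the standard Bohr level energies E_n = -13.6 eV / n**2.
H_EV_S = 4.135667696e-15    # Planck constant, eV*s
C_NM_S = 2.99792458e17      # speed of light, nm/s

def level_energy_eV(n):
    return -13.6 / n**2

def absorption_wavelength_nm(n1, n2):
    delta_e = level_energy_eV(n2) - level_energy_eV(n1)   # energy the photon must carry
    return H_EV_S * C_NM_S / delta_e

print(absorption_wavelength_nm(1, 2))   # about 121.6 nm, the first absorption line of hydrogen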
[ { "math_id": 0, "text": "m_s" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "l" }, { "math_id": 4, "text": "m_l" }, { "math_id": 5, "text": " e^{-\\alpha r} " }, { "math_id": 6, "text": " e^{-\\alpha r^2} " }, { "math_id": 7, "text": "X \\, \\mathrm{type} \\ " }, { "math_id": 8, "text": "X \\, \\mathrm{type}^y \\ " }, { "math_id": 9, "text": "0 \\le \\ell \\le n_0-1" }, { "math_id": 10, "text": "\\ell=0" }, { "math_id": 11, "text": "\\ell=1" }, { "math_id": 12, "text": "m_\\ell" }, { "math_id": 13, "text": "\\ell" }, { "math_id": 14, "text": "-\\ell \\le m_\\ell \\le \\ell " }, { "math_id": 15, "text": "n=2" }, { "math_id": 16, "text": "\\text{p}_x" }, { "math_id": 17, "text": "\\psi_{n,\\ell, m}" }, { "math_id": 18, "text": "m" }, { "math_id": 19, "text": "\\psi_{n, \\ell, m}^{\\text{real}}" }, { "math_id": 20, "text": "\n\\psi_{n,\\ell, m}^{\\text{real}} = \\begin{cases}\n\\sqrt{2} (-1)^m \\text{Im}\\left\\{\\psi_{n,\\ell,|m|}\\right\\} &\\text{ for } m<0\\\\\n\\psi_{n,\\ell,|m|} &\\text{ for } m=0\\\\\n\\sqrt{2} (-1)^m \\text{Re}\\left\\{\\psi_{n,\\ell,|m|}\\right\\} &\\text{ for } m>0\n\\end{cases}\n=\n\\begin{cases}\n\\frac{i}{\\sqrt{2}}\\left(\\psi_{n,\\ell, -|m|} - (-1)^m \\psi_{n,\\ell, |m|}\\right) & \\text{ for } m<0\\\\\n\\psi_{n, \\ell, |m|}& \\text{ for } m=0\\\\\n\\frac{1}{\\sqrt{2}}\\left(\\psi_{n,\\ell, -|m|} + (-1)^m \\psi_{n,\\ell, |m|}\\right) & \\text{ for } m>0\\\\\n\\end{cases}\n" }, { "math_id": 21, "text": "\\psi_{n,\\ell, m}(r, \\theta, \\phi) = R_{nl}(r) Y_{\\ell}^m(\\theta, \\phi)" }, { "math_id": 22, "text": "R_{nl}(r)" }, { "math_id": 23, "text": "\\psi_{n,\\ell, m}^{\\text{real}}(r, \\theta, \\phi) = R_{nl}(r) Y_{\\ell m}(\\theta, \\phi)" }, { "math_id": 24, "text": "Y_{\\ell m}" }, { "math_id": 25, "text": "Y_{\\ell}^m" }, { "math_id": 26, "text": "\\psi_{n, \\ell, m}" }, { "math_id": 27, "text": "0, 1, 2, 3, 4, 5, 6\\ldots" }, { "math_id": 28, "text": "\\text{s, p, d, f, g, h, i} \\ldots" }, { "math_id": 29, "text": "2\\text{p}_{\\pm 1} = \\psi_{2, 1, \\pm 1}" }, { "math_id": 30, "text": "\\psi_{n, 1, \\pm 1}^{\\text{real}}" }, { "math_id": 31, "text": "\\psi_{n, 1, \\pm1} = R_{n, 1}Y_1^{\\pm 1} = \\mp R_{n, 1} \\sqrt{3/8\\pi} \\cdot (x\\pm i y)/r" }, { "math_id": 32, "text": "r = \\sqrt{x^2+y^2+z^2}" }, { "math_id": 33, "text": "\n\\begin{align}\n\\psi_{n, 1, +1}^{\\text{real}} =& R_{n, 1} \\sqrt{\\frac{3}{4\\pi}} \\cdot \\frac{x}{r}\\\\\n\\psi_{n, 1, -1}^{\\text{real}} =& R_{n, 1} \\sqrt{\\frac{3}{4\\pi}} \\cdot \\frac{y}{r}\n\\end{align}\n" }, { "math_id": 34, "text": "\\psi_{n, 1, 0} = R_{n, 1} \\sqrt{3/4\\pi} \\cdot z/r" }, { "math_id": 35, "text": "\n\\psi_{n, 3, +1}^{\\text{real}} = R_{n, 3} \\frac{1}{4} \\sqrt{\\frac{21}{2\\pi}} \\cdot \\frac{x\\cdot (5z^2 - r^2)}{r^3}\n" }, { "math_id": 36, "text": "x" }, { "math_id": 37, "text": "y" }, { "math_id": 38, "text": "z" }, { "math_id": 39, "text": "z, r" }, { "math_id": 40, "text": "\n\\begin{align}\n\\psi_{n, 1, -1}^{\\text{real}} =& n\\text{p}_y = \\frac{i}{\\sqrt{2}} \\left(n\\text{p}_{-1} + n\\text{p}_{+1}\\right)\\\\\n\\psi_{n, 1, 0}^{\\text{real}} =& n\\text{p}_z = 2\\text{p}_0\\\\\n\\psi_{n, 1, +1}^{\\text{real}} =& n\\text{p}_x = \\frac{1}{\\sqrt{2}} \\left(n\\text{p}_{-1} - n\\text{p}_{+1}\\right)\\\\\n\\psi_{n, 3, +1}^{\\text{real}} =& nf_{xz^2} = \\frac{1}{\\sqrt{2}} \\left(nf_{-1} - nf_{+1}\\right)\n\\end{align}\n" }, { "math_id": 41, "text": "\\text{p}_y" }, { "math_id": 42, "text": "\\text{p}_{+1}" }, { "math_id": 43, "text": "\\text{p}_{-1}" }, 
{ "math_id": 44, "text": "\\ell>3" }, { "math_id": 45, "text": "g" }, { "math_id": 46, "text": "m = 0" }, { "math_id": 47, "text": "\\hat L_z" }, { "math_id": 48, "text": "m = \\pm 1, \\pm 2,\\cdots" }, { "math_id": 49, "text": "n=1" }, { "math_id": 50, "text": "\\ell = 2" }, { "math_id": 51, "text": "\\ell = 3" }, { "math_id": 52, "text": "v = Z \\alpha c" }, { "math_id": 53, "text": "\\alpha" }, { "math_id": 54, "text": "Z > 137" } ]
https://en.wikipedia.org/wiki?curid=1206
12065590
Doxastic logic
Type of logic regarding reasoning about beliefs Doxastic logic is a type of logic concerned with reasoning about beliefs. The term "" derives from the Ancient Greek ("doxa", "opinion, belief"), from which the English term "doxa" ("popular opinion or belief") is also borrowed. Typically, a doxastic logic uses the notation formula_0 to mean "It is believed that formula_1 is the case", and the set formula_2 denotes a set of beliefs. In doxastic logic, belief is treated as a modal operator. There is complete parallelism between a person who believes propositions and a formal system that derives propositions. Using doxastic logic, one can express the epistemic counterpart of Gödel's incompleteness theorem of metalogic, as well as Löb's theorem, and other metalogical results in terms of belief. Types of reasoners. To demonstrate the properties of sets of beliefs, Raymond Smullyan defines the following types of reasoners: formula_3 formula_4 formula_5 formula_7 A variation on this would be someone who, while not believing formula_6 also "believes" they don't believe p (modal axiom 5). formula_8 formula_10 formula_13 formula_17 If a reflexive reasoner of type 4 [see below] believes formula_18, they will believe p. This is a parallelism of Löb's theorem for reasoners. formula_19 Rewritten in "de re" form, this is logically equivalent to: formula_20 This implies that: formula_21 This shows that a conceited reasoner is always a stable reasoner (see below). formula_22 formula_25 formula_27 formula_28 formula_29 The symbol formula_30 means formula_31 is a tautology/theorem provable in Propositional Calculus. Also, their set of beliefs (past, present and future) is logically closed under modus ponens. If they ever believe formula_14 and formula_32 then they will (sooner or later) believe formula_15: formula_33 This rule can also be thought of as stating that belief distributes over implication, as it's logically equivalent to formula_34. Note that, in reality, even the assumption of type 1 reasoner may be too strong for some cases (see Lottery paradox). formula_37 formula_39 formula_40 formula_41 formula_42 Self-fulfilling beliefs. For systems, we define reflexivity to mean that for any formula_14 (in the language of the system) there is some formula_15 such that formula_43 is provable in the system. Löb's theorem (in a general form) is that for any reflexive system of type 4, if formula_26 is provable in the system, so is formula_9 Inconsistency of the belief in one's stability. If a consistent reflexive reasoner of type 4 believes that they are stable, then they will become unstable. Stated otherwise, if a stable reflexive reasoner of type 4 believes that they are stable, then they will become inconsistent. Why is this? Suppose that a stable reflexive reasoner of type 4 believes that they are stable. We will show that they will (sooner or later) believe every proposition formula_14 (and hence be inconsistent). Take any proposition formula_9 The reasoner believes formula_44 hence by Löb's theorem they will believe formula_23 (because they believe formula_45 where formula_46 is the proposition formula_47 and so they will believe formula_48 which is the proposition formula_23). Being stable, they will then believe formula_9 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
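For finite sets of beliefs, the closure and consistency conditions above can be checked mechanically. The following Python sketch is only an illustration (the tuple encoding of formulas and the function names are ad hoc, not a standard doxastic-logic library): it closes a belief set under modus ponens, as a type 1 reasoner's beliefs must eventually be, and then tests whether the result is consistent in the sense defined above.

# Beliefs are encoded as atoms ('p', 'q') or tuples ('not', f), ('imp', f, g).
def is_consistent(beliefs):
    # A consistent reasoner never believes both a proposition and its negation.
    return not any(('not', f) in beliefs for f in beliefs)

def modus_ponens_closure(beliefs):
    # A type 1 reasoner who believes f and (f -> g) eventually believes g.
    closed = set(beliefs)
    changed = True
    while changed:
        changed = False
        for f in list(closed):
            if isinstance(f, tuple) and f[0] == 'imp' and f[1] in closed and f[2] not in closed:
                closed.add(f[2])
                changed = True
    return closed

beliefs = {'p', ('imp', 'p', 'q'), ('imp', 'q', ('not', 'p'))}
closure = modus_ponens_closure(beliefs)
print(('not', 'p') in closure)   # True: q is derived from p, then not-p from q
print(is_consistent(closure))    # False: the closure contains both p and not-p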
[ { "math_id": 0, "text": "\\mathcal{B}x" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "\\mathbb{B} : \\left \\{ b_1, \\ldots ,b_n \\right \\}" }, { "math_id": 3, "text": "\\forall p: \\mathcal{B}p \\to p" }, { "math_id": 4, "text": "\\exists p: \\neg p \\wedge \\mathcal{B}p" }, { "math_id": 5, "text": "\\neg\\exists p: \\mathcal{B}p \\wedge \\mathcal{B}\\neg p \\quad \\text{or} \\quad \\forall p: \\mathcal{B}p \\to \\neg\\mathcal{B}\\neg p" }, { "math_id": 6, "text": "p," }, { "math_id": 7, "text": "\\forall p: \\mathcal{B}p \\to \\mathcal{BB}p" }, { "math_id": 8, "text": "\\forall p: \\neg\\mathcal{B}p \\to \\mathcal{B}(\\neg \\mathcal{B}p)" }, { "math_id": 9, "text": "p." }, { "math_id": 10, "text": "\\exists p: \\mathcal{B}p \\wedge \\mathcal{B\\neg B}p" }, { "math_id": 11, "text": " p \\to q " }, { "math_id": 12, "text": " \\mathcal{B}p \\to \\mathcal{B}q " }, { "math_id": 13, "text": "\\forall p \\forall q : \\mathcal{B}(p \\to q) \\to \\mathcal{B} (\\mathcal{B}p \\to \\mathcal{B}q)" }, { "math_id": 14, "text": "p" }, { "math_id": 15, "text": "q" }, { "math_id": 16, "text": " q \\equiv ( \\mathcal{B}q \\to p) " }, { "math_id": 17, "text": "\\forall p \\exists q: \\mathcal{B}(q \\equiv ( \\mathcal{B}q \\to p)) " }, { "math_id": 18, "text": " \\mathcal{B}p \\to p " }, { "math_id": 19, "text": "\\mathcal{B}[\\neg\\exists p ( \\neg p \\wedge \\mathcal{B}p )] \\quad \\text{or} \\quad \\mathcal{B}[\\forall p( \\mathcal{B}p \\to p) ]" }, { "math_id": 20, "text": "\\forall p[\\mathcal{B} ( \\mathcal{B}p \\to p) ]" }, { "math_id": 21, "text": "\\forall p(\\mathcal{B} \\mathcal{B}p \\to \\mathcal{B}p )" }, { "math_id": 22, "text": "\\exists p: \\mathcal{B}\\mathcal{B}p \\wedge \\neg\\mathcal{B}p " }, { "math_id": 23, "text": "\\mathcal{B}p" }, { "math_id": 24, "text": "\\mathcal{B}\\mathcal{B}p \\to \\mathcal{B}p" }, { "math_id": 25, "text": "\\forall p: \\mathcal{BB}p\\to\\mathcal{B}p" }, { "math_id": 26, "text": "\\mathcal{B}p \\to p" }, { "math_id": 27, "text": "\\forall p: \\mathcal{B}(\\mathcal{B}p \\to p) \\to \\mathcal{B}p" }, { "math_id": 28, "text": "\\forall p: \\mathcal{B}(\\mathcal{B}p \\to \\mathcal{B}\\bot) \\to \\neg\\mathcal{B}p " }, { "math_id": 29, "text": " \\vdash_{PC} p \\Rightarrow\\ \\vdash \\mathcal{B}p" }, { "math_id": 30, "text": " \\vdash_{PC}p" }, { "math_id": 31, "text": " p" }, { "math_id": 32, "text": "p \\to q" }, { "math_id": 33, "text": "\\forall p \\forall q : ( \\mathcal{B}p \\wedge \\mathcal{B}( p \\to q)) \\to \\mathcal{B} q )" }, { "math_id": 34, "text": "\\forall p \\forall q : \\mathcal{B}(p \\to q) \\to (\\mathcal{B}p \\to \\mathcal{B}q )" }, { "math_id": 35, "text": "q," }, { "math_id": 36, "text": "p \\to q," }, { "math_id": 37, "text": "\\forall p \\forall q : \\mathcal{B}(p \\to q) \\to \\mathcal{B} (\\mathcal{B}p \\to \\mathcal{B}q )" }, { "math_id": 38, "text": "\\mathcal{B}(p \\to q) \\to (\\mathcal{B}p \\to \\mathcal{B}q)." 
}, { "math_id": 39, "text": "\\forall p \\forall q : \\mathcal{B}(( \\mathcal{B}p \\wedge \\mathcal{B}( p \\to q)) \\to \\mathcal{B} q )" }, { "math_id": 40, "text": "\\forall p: \\mathcal{B} p \\to \\mathcal{B} \\mathcal{B}p " }, { "math_id": 41, "text": "\\mathcal{B}[ \\forall p ( \\mathcal{B} p \\to \\mathcal{B} \\mathcal{B}p )]" }, { "math_id": 42, "text": "\\mathcal{B}[ \\forall p ( \\mathcal{B}(\\mathcal{B}p \\to p) \\to \\mathcal{B}p ) ]" }, { "math_id": 43, "text": "q \\equiv \\mathcal{B}q \\to p" }, { "math_id": 44, "text": "\\mathcal{B}\\mathcal{B}p \\to \\mathcal{B}p," }, { "math_id": 45, "text": "\\mathcal{B}r \\to r," }, { "math_id": 46, "text": "r" }, { "math_id": 47, "text": "\\mathcal{B}p," }, { "math_id": 48, "text": "r," } ]
https://en.wikipedia.org/wiki?curid=12065590
12066797
Gromov's systolic inequality for essential manifolds
In the mathematical field of Riemannian geometry, M. Gromov's systolic inequality bounds the length of the shortest non-contractible loop on a Riemannian manifold in terms of the volume of the manifold. Gromov's systolic inequality was proved in 1983; it can be viewed as a generalisation, albeit non-optimal, of Loewner's torus inequality and Pu's inequality for the real projective plane. Technically, let "M" be an essential Riemannian manifold of dimension "n"; denote by sys"π"1("M") the homotopy 1-systole of "M", that is, the least length of a non-contractible loop on "M". Then Gromov's inequality takes the form formula_0 where "C""n" is a universal constant only depending on the dimension of "M". Essential manifolds. A closed manifold is called "essential" if its fundamental class defines a nonzero element in the homology of its fundamental group, or more precisely in the homology of the corresponding Eilenberg–MacLane space. Here the fundamental class is taken in homology with integer coefficients if the manifold is orientable, and in coefficients modulo 2, otherwise. Examples of essential manifolds include aspherical manifolds, real projective spaces, and lens spaces. Proofs of Gromov's inequality. Gromov's original 1983 proof is about 35 pages long. It relies on a number of techniques and inequalities of global Riemannian geometry. The starting point of the proof is the imbedding of X into the Banach space of Borel functions on X, equipped with the sup norm. The imbedding is defined by mapping a point "p" of "X", to the real function on "X" given by the distance from the point "p". The proof utilizes the coarea inequality, the isoperimetric inequality, the cone inequality, and the deformation theorem of Herbert Federer. Filling invariants and recent work. One of the key ideas of the proof is the introduction of filling invariants, namely the filling radius and the filling volume of "X". Namely, Gromov proved a sharp inequality relating the systole and the filling radius, formula_1 valid for all essential manifolds "X"; as well as an inequality formula_2 valid for all closed manifolds "X". It was shown by that the filling invariants, unlike the systolic invariants, are independent of the topology of the manifold in a suitable sense. and developed approaches to the proof of Gromov's systolic inequality for essential manifolds. Inequalities for surfaces and polyhedra. Stronger results are available for surfaces, where the asymptotics when the genus tends to infinity are by now well understood, see systoles of surfaces. A uniform inequality for arbitrary 2-complexes with non-free fundamental groups is available, whose proof relies on the Grushko decomposition theorem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
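The general constant Cn is not explicit, but the two-dimensional special case mentioned above, Loewner's torus inequality sysπ1(T²)² ≤ (2/√3)·area(T²), can be checked numerically for flat tori R²/Λ, for which the systole is the length of a shortest nonzero lattice vector and the area is the determinant of a lattice basis (standard facts about flat tori, not proved in this article). A rough Python sketch with an ad hoc brute-force search over lattice vectors:

import itertools
import math

def systole_and_area(b1, b2, search=6):
    # Shortest nonzero lattice vector (the systole of the flat torus R^2 / L)
    # and the area of a fundamental domain.
    shortest = min(
        math.hypot(m * b1[0] + n * b2[0], m * b1[1] + n * b2[1])
        for m, n in itertools.product(range(-search, search + 1), repeat=2)
        if (m, n) != (0, 0)
    )
    area = abs(b1[0] * b2[1] - b1[1] * b2[0])
    return shortest, area

for name, basis in [("square", ((1, 0), (0, 1))),
                    ("hexagonal", ((1, 0), (0.5, math.sqrt(3) / 2)))]:
    sys1, area = systole_and_area(*basis)
    print(name, sys1**2 / area, "<=", 2 / math.sqrt(3))
# The hexagonal torus attains equality, which is why Loewner's bound is sharp;
# Gromov's theorem extends this kind of bound, with a non-optimal constant,
# to all essential manifolds.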
[ { "math_id": 0, "text": " \\left(\\operatorname{sys\\pi}_1(M)\\right)^n \\leq C_n \\operatorname{vol}(M)," }, { "math_id": 1, "text": "\\mathrm{sys\\pi}_1 \\leq 6\\; \\mathrm{FillRad}(X)," }, { "math_id": 2, "text": "\\mathrm{FillRad}(X) \\leq C_n \\mathrm{vol}_n{}^{\\tfrac{1}{n}}(X)," } ]
https://en.wikipedia.org/wiki?curid=12066797
12069013
Recurrence period density entropy
Recurrence period density entropy (RPDE) is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness, of a signal. Overview. Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time delayed mutual information, except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity, Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signals. The RPDE value formula_0 is a scalar in the range zero to one. For purely periodic signals, formula_1, whereas for purely i.i.d., uniform white noise, formula_2. Method description. The RPDE method first requires the embedding of a time series in phase space, which, according to stochastic extensions to Takens' embedding theorems, can be carried out by forming time-delayed vectors: formula_3 for each value "x""n" in the time series, where "M" is the embedding dimension, and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set (due to lack of practical embedding parameter techniques for stochastic systems) (Stark et al. 2003). Next, around each point formula_4 in the phase space, an formula_5-neighbourhood (an "M"-dimensional ball with this radius) is formed, and every time the time series returns to this ball, after having left it, the time difference "T" between successive returns is recorded in a histogram. This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function "P"("T"). The normalised entropy of this density: formula_6 is the RPDE value, where formula_7 is the largest recurrence value (typically on the order of 1000 samples). Note that RPDE is intended to be applied to both deterministic and stochastic signals; therefore, strictly speaking, Takens' original embedding theorem does not apply, and needs some modification. RPDE in practice. RPDE has the ability to detect subtle changes in natural biological time series such as the breakdown of regular periodic oscillation in abnormal cardiac function, which are hard to detect using classical signal processing tools such as the Fourier transform or linear prediction. The recurrence period density is a sparse representation for nonlinear, non-Gaussian and nondeterministic signals, whereas the Fourier transform is only sparse for purely periodic signals. References. <templatestyles src="Reflist/styles.css" />
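The procedure described above translates almost directly into code. The following Python sketch is a simplified illustration rather than a reference implementation: the embedding parameters, the ε value, and the way returns are detected are all ad hoc choices, and only the first return after leaving each ball is recorded.

import numpy as np

def rpde(x, dim=4, tau=10, eps=0.1, t_max=1000):
    # 1. Time-delay embedding X_n = [x_n, x_{n+tau}, ..., x_{n+(dim-1)tau}].
    n_vectors = len(x) - (dim - 1) * tau
    X = np.column_stack([x[i * tau: i * tau + n_vectors] for i in range(dim)])

    # 2. For each point, the time until the trajectory first re-enters its
    #    eps-ball after having left it.
    periods = []
    for i in range(n_vectors):
        d = np.linalg.norm(X[i + 1:] - X[i], axis=1)
        left = np.nonzero(d > eps)[0]
        if left.size == 0:
            continue
        back = np.nonzero(d[left[0]:] <= eps)[0]
        if back.size:
            periods.append(int(left[0] + back[0] + 1))

    # 3. Histogram of return periods, normalised to a density P(T).
    counts = np.bincount(np.array(periods, dtype=int), minlength=t_max + 1)[1:t_max + 1]
    P = counts / counts.sum()

    # 4. Normalised entropy of P(T).
    nz = P[P > 0]
    return float(-(nz * np.log(nz)).sum() / np.log(t_max))

t = np.arange(5000)
print(rpde(np.sin(2 * np.pi * t / 40)))        # ~0 for a purely periodic signal
print(rpde(np.random.uniform(-1, 1, 5000)))    # much larger; the ideal value for i.i.d. uniform noise is close to 1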
[ { "math_id": 0, "text": "\\scriptstyle H_\\mathrm{norm}" }, { "math_id": 1, "text": "\\scriptstyle H_\\mathrm{norm}=0" }, { "math_id": 2, "text": "\\scriptstyle H_\\mathrm{norm} \\approx 1" }, { "math_id": 3, "text": "\\mathbf{X}_n=[x_n, x_{n+\\tau}, x_{n+2\\tau}, \\ldots, x_{n+(M-1)\\tau}]" }, { "math_id": 4, "text": "\\scriptstyle \\mathbf{X}_n" }, { "math_id": 5, "text": "\\varepsilon" }, { "math_id": 6, "text": "H_\\mathrm{norm} = -(\\ln{T_\\max)}^{-1} \\sum_{t=1}^{T_\\max} P(t) \\ln{P(t)}" }, { "math_id": 7, "text": "\\scriptstyle T_\\max" } ]
https://en.wikipedia.org/wiki?curid=12069013
12070637
Imaginary curve
Algebraic curve In algebraic geometry an imaginary curve is an algebraic curve which does not contain any real points. For example, the set of pairs of complex numbers formula_0 satisfying the equation formula_1 forms an imaginary circle, containing points such as formula_2 and formula_3 but not containing any points both of whose coordinates are real. In some cases, more generally, an algebraic curve with only finitely many real points is considered to be an imaginary curve. For instance, an imaginary line is a line (in a complex projective space) that contains only one real point. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
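As a quick sanity check of the example, both quoted points can be substituted into the defining equation using Python's built-in complex arithmetic (a throwaway verification, nothing more):

# Both points satisfy x^2 + y^2 = -1, even though neither has all-real coordinates.
for x, y in [(1j, 0), (5j / 3, 4 / 3)]:
    print(x**2 + y**2)   # each evaluates to -1 (up to floating-point rounding)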
[ { "math_id": 0, "text": "(x,y)" }, { "math_id": 1, "text": "x^2+y^2=-1" }, { "math_id": 2, "text": "(i,0)" }, { "math_id": 3, "text": "(\\frac{5i}{3},\\frac{4}{3})" } ]
https://en.wikipedia.org/wiki?curid=12070637
1207070
Biquaternion
Quaternions with complex number coefficients In abstract algebra, the biquaternions are the numbers "w" + "x" i + "y" j + "z" k, where "w", "x", "y", and z are complex numbers, or variants thereof, and the elements of {1, i, j, k} multiply as in the quaternion group and commute with their coefficients. There are three types of biquaternions corresponding to complex numbers and the variations thereof: This article is about the "ordinary biquaternions" named by William Rowan Hamilton in 1844. Some of the more prominent proponents of these biquaternions include Alexander Macfarlane, Arthur W. Conway, Ludwik Silberstein, and Cornelius Lanczos. As developed below, the unit quasi-sphere of the biquaternions provides a representation of the Lorentz group, which is the foundation of special relativity. The algebra of biquaternions can be considered as a tensor product C ⊗R H, where C is the field of complex numbers and H is the division algebra of (real) quaternions. In other words, the biquaternions are just the complexification of the quaternions. Viewed as a complex algebra, the biquaternions are isomorphic to the algebra of 2 × 2 complex matrices M2(C). They are also isomorphic to several Clifford algebras including C ⊗R H = Cl(C) = Cl2(C) = Cl1,2(R), the Pauli algebra Cl3,0(R), and the even part Cl(R) = Cl(R) of the spacetime algebra. Definition. Let {1, i, j, k} be the basis for the (real) quaternions H, and let "u", "v", "w", "x" be complex numbers, then formula_0 is a "biquaternion". To distinguish square roots of minus one in the biquaternions, Hamilton and Arthur W. Conway used the convention of representing the square root of minus one in the scalar field C by "h" to avoid confusion with the i in the quaternion group. Commutativity of the scalar field with the quaternion group is assumed: formula_1 Hamilton introduced the terms "bivector", "biconjugate", "bitensor", and "biversor" to extend notions used with real quaternions H. Hamilton's primary exposition on biquaternions came in 1853 in his "Lectures on Quaternions". The editions of "Elements of Quaternions", in 1866 by William Edwin Hamilton (son of Rowan), and in 1899, 1901 by Charles Jasper Joly, reduced the biquaternion coverage in favour of the real quaternions. Considered with the operations of component-wise addition, and multiplication according to the quaternion group, this collection forms a 4-dimensional algebra over the complex numbers C. The algebra of biquaternions is associative, but not commutative. A biquaternion is either a unit or a zero divisor. The algebra of biquaternions forms a composition algebra and can be constructed from bicomplex numbers. See "" below. Place in ring theory. Linear representation. Note that the matrix product formula_2. Because "h" is the imaginary unit, each of these three arrays has a square equal to the negative of the identity matrix. When this matrix product is interpreted as i j = k, then one obtains a subgroup of matrices that is isomorphic to the quaternion group. Consequently, formula_3 represents biquaternion "q" = "u" 1 + "v" i + "w" j + "x" k. Given any 2 × 2 complex matrix, there are complex values "u", "v", "w", and "x" to put it in this form so that the matrix ring M(2, C) is isomorphic to the biquaternion ring. Subalgebras. Considering the biquaternion algebra over the scalar field of real numbers R, the set formula_4 forms a basis so the algebra has eight real dimensions. 
The squares of the elements "h"i, "h"j, and "h"k are all positive one, for example, ("h"i)2 = "h"2i2 = (−1)(−1) = +1. The subalgebra given by formula_5 is ring isomorphic to the plane of split-complex numbers, which has an algebraic structure built upon the unit hyperbola. The elements "h"j and "h"k also determine such subalgebras. Furthermore, formula_6 is a subalgebra isomorphic to the bicomplex numbers. A third subalgebra called coquaternions is generated by "h"j and "h"k. It is seen that ("h"j)("h"k) = (−1)i, and that the square of this element is −1. These elements generate the dihedral group of the square. The linear subspace with basis {1, i, "h"j, "h"k} thus is closed under multiplication, and forms the coquaternion algebra. In the context of quantum mechanics and spinor algebra, the biquaternions "h"i, "h"j, and "h"k (or their negatives), viewed in the M2(C) representation, are called Pauli matrices. Algebraic properties. The biquaternions have two "conjugations": the biconjugate formula_7 and the complex conjugation of the coefficients formula_8, where formula_9 when formula_10 Note that formula_11 Clearly, if formula_12 then "q" is a zero divisor. Otherwise formula_13 is a complex number. Further, formula_14 is easily verified. This allows the inverse to be defined by formula_15, provided formula_16 Relation to Lorentz transformations. Consider now the linear subspace formula_17 "M" is not a subalgebra since it is not closed under products; for example formula_18 Indeed, "M" cannot form an algebra if it is not even a magma. Proposition: If q is in M, then formula_19 Proof: From the definitions, formula_20 Definition: Let biquaternion g satisfy formula_21 Then the Lorentz transformation associated with g is given by formula_22 Proposition: If q is in M, then "T"("q") is also in "M". Proof: formula_23 Proposition: formula_24 Proof: Note first that "gg"* = 1 implies that the sum of the squares of its four complex components is one. Then the sum of the squares of the "complex conjugates" of these components is also one. Therefore, formula_25 Now formula_26 Associated terminology. As the biquaternions have been a fixture of linear algebra since the beginnings of mathematical physics, there is an array of concepts that are illustrated or represented by biquaternion algebra. The transformation group formula_27 has two parts, formula_28 and formula_29 The first part is characterized by formula_30; then the Lorentz transformation corresponding to g is given by formula_31 since formula_32 Such a transformation is a rotation by quaternion multiplication, and the collection of them is SO(3) formula_33 But this subgroup of G is not a normal subgroup, so no quotient group can be formed. To view formula_34 it is necessary to show some subalgebra structure in the biquaternions. Let r represent an element of the sphere of square roots of minus one in the real quaternion subalgebra H. Then ("hr")2 = +1 and the plane of biquaternions given by formula_35 is a commutative subalgebra isomorphic to the plane of split-complex numbers. Just as the ordinary complex plane has a unit circle, formula_36 has a unit hyperbola given by formula_37 Just as the unit circle turns by multiplication through one of its elements, so the hyperbola turns because formula_38 Hence these algebraic operators on the hyperbola are called hyperbolic versors. The unit circle in C and unit hyperbola in "D""r" are examples of one-parameter groups. For every square root "r" of minus one in H, there is a one-parameter group in the biquaternions given by formula_39 The space of biquaternions has a natural topology through the Euclidean metric on 8-space. 
With respect to this topology, G is a topological group. Moreover, it has analytic structure making it a six-parameter Lie group. Consider the subspace of bivectors formula_40. Then the exponential map formula_41 takes the real vectors to formula_28 and the h-vectors to formula_29 When equipped with the commutator, A forms the Lie algebra of G. Thus this study of a six-dimensional space serves to introduce the general concepts of Lie theory. When viewed in the matrix representation, G is called the special linear group SL(2,C) in M(2, C). Many of the concepts of special relativity are illustrated through the biquaternion structures laid out. The subspace M corresponds to Minkowski space, with the four coordinates giving the time and space locations of events in a resting frame of reference. Any hyperbolic versor exp("ahr") corresponds to a velocity in direction r of speed "c" tanh "a" where c is the velocity of light. The inertial frame of reference of this velocity can be made the resting frame by applying the Lorentz boost T given by "g" = exp(0.5"ahr") since then formula_42 so that formula_43 Naturally the hyperboloid formula_44 which represents the range of velocities for sub-luminal motion, is of physical interest. There has been considerable work associating this "velocity space" with the hyperboloid model of hyperbolic geometry. In special relativity, the hyperbolic angle parameter of a hyperbolic versor is called rapidity. Thus we see the biquaternion group G provides a group representation for the Lorentz group. After the introduction of spinor theory, particularly in the hands of Wolfgang Pauli and Élie Cartan, the biquaternion representation of the Lorentz group was superseded. The new methods were founded on basis vectors in the set formula_45 which is called the "complex light cone". The above representation of the Lorentz group coincides with what physicists refer to as four-vectors. Beyond four-vectors, the standard model of particle physics also includes other Lorentz representations, known as scalars, and the (1, 0) ⊕ (0, 1)-representation associated with e.g. the electromagnetic field tensor. Furthermore, particle physics makes use of the SL(2, C) representations (or projective representations of the Lorentz group) known as left- and right-handed Weyl spinors, Majorana spinors, and Dirac spinors. It is known that each of these seven representations can be constructed as invariant subspaces within the biquaternions. As a composition algebra. Although W. R. Hamilton introduced biquaternions in the 19th century, its delineation of its mathematical structure as a special type of algebra over a field was accomplished in the 20th century: the biquaternions may be generated out of the bicomplex numbers in the same way that Adrian Albert generated the real quaternions out of complex numbers in the so-called Cayley–Dickson construction. In this construction, a bicomplex number ("w", "z") has conjugate ("w", "z")* = ("w", – "z"). The biquaternion is then a pair of bicomplex numbers ("a", "b"), where the product with a second biquaternion ("c", "d") is formula_46 If formula_47 then the "biconjugate" formula_48 When ("a", "b")* is written as a 4-vector of ordinary complex numbers, formula_49 The biquaternions form an example of a quaternion algebra, and it has norm formula_50 Two biquaternions "p" and "q" satisfy "N"("pq") = "N"("p") "N"("q"), indicating that "N" is a quadratic form admitting composition, so that the biquaternions form a composition algebra. Citations. 
&lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
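The 2 × 2 complex-matrix picture described in the "Linear representation" section above is easy to verify numerically. The sketch below is an illustration using NumPy (the helper name biquaternion is not standard): it encodes 1, i, j, k as matrices with h the complex unit of the scalar field, checks the quaternion relations, and confirms that a general biquaternion u 1 + v i + w j + x k matches the matrix form quoted in the article.

import numpy as np

h = 1j
I2 = np.eye(2, dtype=complex)
i = np.array([[h, 0], [0, -h]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = np.array([[0, h], [h, 0]])

assert np.allclose(i @ j, k)                            # i j = k, as in the text
assert all(np.allclose(m @ m, -I2) for m in (i, j, k))  # each squares to -1

def biquaternion(u, v, w, x):
    return u * I2 + v * i + w * j + x * k

u, v, w, x = 2 + 1j, -1, 0.5j, 3.0        # arbitrary complex coefficients
Q = biquaternion(u, v, w, x)
assert np.allclose(Q, np.array([[u + h * v, w + h * x],
                                [-w + h * x, u - h * v]]))
print(Q)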
[ { "math_id": 0, "text": "q = u \\mathbf 1 + v \\mathbf i + w \\mathbf j + x \\mathbf k" }, { "math_id": 1, "text": " h \\mathbf i = \\mathbf i h,\\ \\ h \\mathbf j = \\mathbf j h,\\ \\ h \\mathbf k = \\mathbf k h ." }, { "math_id": 2, "text": "\\begin{pmatrix}h & 0\\\\0 & -h\\end{pmatrix}\\begin{pmatrix}0 & 1\\\\-1 & 0\\end{pmatrix} = \\begin{pmatrix}0 & h\\\\h & 0\\end{pmatrix}" }, { "math_id": 3, "text": "\\begin{pmatrix}u+hv & w+hx\\\\-w+hx & u-hv\\end{pmatrix}" }, { "math_id": 4, "text": "\\{\\mathbf 1, h, \\mathbf i, h\\mathbf i, \\mathbf j, h\\mathbf j, \\mathbf k, h\\mathbf k \\}" }, { "math_id": 5, "text": "\\{ x + y(h\\mathbf i) : x, y \\in \\R \\} " }, { "math_id": 6, "text": "\\{ x + y \\mathbf j : x,y \\in \\Complex \\} " }, { "math_id": 7, "text": "q^* = w - x\\mathbf i - y\\mathbf j - z\\mathbf k \\!\\ ," }, { "math_id": 8, "text": "\\bar{q} = \\bar{w} + \\bar{x}\\mathbf i + \\bar{y} \\mathbf j + \\bar{z}\\mathbf k " }, { "math_id": 9, "text": "\\bar{z} = a - bh" }, { "math_id": 10, "text": "z = a + bh,\\quad a,b \\in \\reals,\\quad h^2 = -\\mathbf 1." }, { "math_id": 11, "text": "(pq)^* = q^* p^*, \\quad \\overline{pq} = \\bar{p} \\bar{q}, \\quad \\overline{q^*} = \\bar{q}^*." }, { "math_id": 12, "text": "q q^* = 0 " }, { "math_id": 13, "text": "\\lbrace q q^* \\rbrace^{-\\mathbf 1} " }, { "math_id": 14, "text": "q q^* = q^* q " }, { "math_id": 15, "text": "q^{-1} = q^* \\lbrace q q^* \\rbrace^{-1}" }, { "math_id": 16, "text": "qq^* \\neq 0." }, { "math_id": 17, "text": "M = \\lbrace q\\colon q^* = \\bar{q} \\rbrace = \\lbrace t + x(h\\mathbf i) + y(h \\mathbf j) + z(h \\mathbf k)\\colon t, x, y, z \\in \\reals \\rbrace ." }, { "math_id": 18, "text": "(h\\mathbf i)(h\\mathbf j) = h^2 \\mathbf{ij} = -\\mathbf k \\notin M." }, { "math_id": 19, "text": "q q^* = t^2 - x^2 - y^2 - z^2." }, { "math_id": 20, "text": "\\begin{align}\nq q^* &= (t+xh\\mathbf i+yh\\mathbf j+zh\\mathbf k)(t-xh\\mathbf i-yh\\mathbf j-zh\\mathbf k)\\\\\n&= t^2 - x^2(h\\mathbf i)^2 - y^2(h\\mathbf j)^2 - z^2(h\\mathbf k)^2 \\\\\n&= t^2 - x^2 - y^2 - z^2.\n\\end{align} " }, { "math_id": 21, "text": "g g^* = 1." }, { "math_id": 22, "text": "T(q) = g^* q \\bar{g}." }, { "math_id": 23, "text": "(g^* q \\bar{g})^* = \\bar{g}^* q^* g = \\overline{g^*} \\bar{q} g = \\overline{g^* q \\bar{g})}." }, { "math_id": 24, "text": "\\quad T(q) (T(q))^* = q q^* " }, { "math_id": 25, "text": "\\bar{g} (\\bar{g})^* = 1." }, { "math_id": 26, "text": "(g^* q \\bar{g})(g^* q \\bar{g})^* = g^* q (\\bar{g} \\bar{g}^*) q^* g = g^* q q^* g = q q^*." }, { "math_id": 27, "text": "G = \\lbrace g : g g^* = 1 \\rbrace " }, { "math_id": 28, "text": "G \\cap H" }, { "math_id": 29, "text": "G \\cap M." }, { "math_id": 30, "text": "g = \\bar{g}" }, { "math_id": 31, "text": "T(q) = g^{-1} q g " }, { "math_id": 32, "text": "g^* = g^{-1}. " }, { "math_id": 33, "text": "\\cong G \\cap H ." }, { "math_id": 34, "text": "G \\cap M" }, { "math_id": 35, "text": "D_r = \\lbrace z = x + yhr : x, y \\in \\mathbb R \\rbrace" }, { "math_id": 36, "text": "D_r " }, { "math_id": 37, "text": "\\exp(ahr) = \\cosh(a) + hr\\ \\sinh(a),\\quad a \\in R. " }, { "math_id": 38, "text": "\\exp(ahr) \\exp(bhr) = \\exp((a+b)hr). " }, { "math_id": 39, "text": "G \\cap D_r." }, { "math_id": 40, "text": "A = \\lbrace q : q^* = -q \\rbrace " }, { "math_id": 41, "text": "\\exp:A \\to G" }, { "math_id": 42, "text": "g^{\\star} = \\exp(-0.5ahr) = g^*" }, { "math_id": 43, "text": "T(\\exp(ahr)) = 1 ." 
}, { "math_id": 44, "text": "G \\cap M," }, { "math_id": 45, "text": "\\{ q \\ :\\ q q^* = 0 \\} = \\left\\{ w + x\\mathbf i + y\\mathbf j + z\\mathbf k \\ :\\ w^2 + x^2 + y^2 + z^2 = 0 \\right\\} " }, { "math_id": 46, "text": "(a,b)(c,d) = (a c - d^* b, d a + b c^* )." }, { "math_id": 47, "text": "a = (u, v), b = (w,z), " }, { "math_id": 48, "text": "(a, b)^* = (a^*, -b)." }, { "math_id": 49, "text": "(u, v, w, z)^* = (u, -v, -w, -z). " }, { "math_id": 50, "text": "N(u,v,w,z) = u^2 + v^2 + w^2 + z^2 ." } ]
https://en.wikipedia.org/wiki?curid=1207070
1207119
Zeller's congruence
Algorithm to calculate the day of the week Zeller's congruence is an algorithm devised by Christian Zeller in the 19th century to calculate the day of the week for any Julian or Gregorian calendar date. It can be considered to be based on the conversion between Julian day and the calendar date. Formula. For the Gregorian calendar, Zeller's congruence is formula_0 for the Julian calendar it is formula_1 where "h" is the day of the week (0 = Saturday, 1 = Sunday, 2 = Monday, ..., 6 = Friday), "q" is the day of the month, "m" is the month (3 = March, 4 = April, ..., 14 = February), "K" is the year of the century (formula_2), "J" is the zero-based century (formula_3), and formula_4 denotes the floor function. Note: In this algorithm January and February are counted as months 13 and 14 of the previous year. E.g. if it is 2 February 2010 (02/02/2010 in DD/MM/YYYY), the algorithm counts the date as the second day of the fourteenth month of 2009 (02/14/2009 in DD/MM/YYYY format). For an ISO week date Day-of-Week "d" (1 = Monday to 7 = Sunday), use formula_5 Analysis. These formulas are based on the observation that the day of the week progresses in a predictable manner based upon each subpart of that date. Each term within the formula is used to calculate the offset needed to obtain the correct day of the week. For the Gregorian calendar, the various parts of this formula can therefore be understood as follows: The reason that the formula differs between calendars is that the Julian calendar does not have a separate rule for leap centuries and is offset from the Gregorian calendar by a fixed number of days each century. Since the Gregorian calendar was adopted at different times in different regions of the world, the location of an event is significant in determining the correct day of the week for a date that occurred during this transition period. This is only required through 1929, as this was the last year that the Julian calendar was still in use by any country on earth, and thus is not required for 1930 or later. The formulae can be used proleptically, but "Year 0" is in fact year 1 BC (see astronomical year numbering). The Julian calendar is in fact proleptic right up to 1 March AD 4 owing to mismanagement in Rome (but not Egypt) in the period since the calendar was put into effect on 1 January 45 BC (which was not a leap year). In addition, the modulo operator might truncate integers in the wrong direction (ceiling instead of floor). To accommodate this, one can add a sufficient multiple of 400 Gregorian or 700 Julian years. Examples. For 1 January 2000, the date would be treated as the 13th month of 1999, so the values would be: formula_15 formula_16 formula_17 formula_18 So the formula evaluates as formula_19. However, for 1 March 2000, the date is treated as the 3rd month of 2000, so the values become formula_15 formula_21 formula_22 formula_23 so the formula evaluates as formula_24. Implementations in software. Basic modification. The formulas rely on the mathematician's definition of modulo division, which means that −2 mod 7 is equal to positive 5. Unfortunately, in the truncating way most computer languages implement the remainder function, −2 mod 7 returns a result of −2. So, to implement Zeller's congruence on a computer, the formulas should be altered slightly to ensure a positive numerator. The simplest way to do this is to replace − 2"J" with + 5"J" and − "J" with + 6"J". For the Gregorian calendar, Zeller's congruence becomes formula_25 For the Julian calendar, Zeller's congruence becomes formula_26 One can readily see that, in a given year, the last day of February and March 1 are good test dates. As an aside note, if we have a three-digit number abc, where a, b, and c are the digits, each nonpositive if abc is nonpositive, then (abc) and 9*a + 3*b + c are congruent mod 7. 
Repeat the formula down to a single digit. If the result is 7, 8, or 9, then subtract 7. If, instead, the result is negative, then add 7. If the result is still negative, then add 7 one more time. Utilizing this approach, we can avoid the worries of language specific differences in mod 7 evaluations. This also may enhance a mental math technique. Common simplification. Zeller used decimal arithmetic, and found it convenient to use "J" and "K" in representing the year. But when using a computer, it is simpler to handle the modified year "Y" and month "m", which are "Y" - 1 and "m" + 12 during January and February: For the Gregorian calendar, Zeller's congruence becomes formula_27 In this case there is no possibility of underflow due to the single negative term because formula_28. For the Julian calendar, Zeller's congruence becomes formula_29 The algorithm above is mentioned for the Gregorian case in , Appendix B, albeit in an abridged form that returns 0 for Sunday. Other variations. At least three other algorithms share the overall structure of Zeller's congruence in its "common simplification" type, also using an "m" ∈ [3, 14] ∩ Z and the "modified year" construct. Both expressions can be shown to progress in a way that is off by one compared to the original month-length component over the required range of m, resulting in a starting value of 0 for Sunday. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. Each of these four similar imaged papers deals firstly with the day of the week and secondly with the date of Easter Sunday, for the Julian and Gregorian calendars. The pages link to translations into English.
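The computer-friendly Gregorian form above (with +5J replacing −2J) can be written directly in Python; this sketch follows the article's convention that h = 0 means Saturday and reproduces the two dates worked out in the Examples section.

DAYS = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def zeller_gregorian(q, month, year):
    # January and February are treated as months 13 and 14 of the previous year.
    if month < 3:
        month += 12
        year -= 1
    K, J = year % 100, year // 100
    return (q + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

print(DAYS[zeller_gregorian(1, 1, 2000)])   # Saturday, as in the first worked example
print(DAYS[zeller_gregorian(1, 3, 2000)])   # Wednesday, as in the second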
[ { "math_id": 0, "text": "h = \\left(q + \\left\\lfloor\\frac{13(m+1)}{5}\\right\\rfloor + K + \\left\\lfloor\\frac{K}{4}\\right\\rfloor + \\left\\lfloor\\frac{J}{4}\\right\\rfloor - 2J\\right) \\bmod 7," }, { "math_id": 1, "text": "h = \\left(q + \\left\\lfloor\\frac{13(m+1)}{5}\\right\\rfloor + K + \\left\\lfloor\\frac{K}{4}\\right\\rfloor + 5 - J\\right) \\bmod 7," }, { "math_id": 2, "text": "year \\bmod 100" }, { "math_id": 3, "text": "\\lfloor year/100 \\rfloor" }, { "math_id": 4, "text": "\\lfloor...\\rfloor" }, { "math_id": 5, "text": "d = ((h + 5) \\bmod 7) + 1" }, { "math_id": 6, "text": "q" }, { "math_id": 7, "text": "K" }, { "math_id": 8, "text": "365\\bmod 7 = 1" }, { "math_id": 9, "text": "\\left\\lfloor\\frac{K}{4}\\right\\rfloor" }, { "math_id": 10, "text": "36525\\bmod 7 = 6" }, { "math_id": 11, "text": "36524\\bmod 7 = 5" }, { "math_id": 12, "text": "\\left\\lfloor\\frac{J}{4}\\right\\rfloor - 2J" }, { "math_id": 13, "text": "\\left\\lfloor\\frac{13(m+1)}{5}\\right\\rfloor" }, { "math_id": 14, "text": "\\operatorname{mod}\\,7" }, { "math_id": 15, "text": "q = 1" }, { "math_id": 16, "text": "m = 13" }, { "math_id": 17, "text": "K = 99" }, { "math_id": 18, "text": "J = 19" }, { "math_id": 19, "text": "(1 + 36 + 99 + 24 + 4 - 38) \\bmod 7 = 126 \\bmod 7 = 0 = \\text{Saturday}" }, { "math_id": 20, "text": "(13 + 1) \\times 13/5 = 182/5" }, { "math_id": 21, "text": "m = 3" }, { "math_id": 22, "text": "K = 0" }, { "math_id": 23, "text": "J = 20" }, { "math_id": 24, "text": "(1 + 10 + 0 + 0 + 5 - 40) \\bmod 7 = -24 \\bmod 7 = 4 = \\text{Wednesday}" }, { "math_id": 25, "text": "h = \\left(q + \\left\\lfloor\\frac{13(m+1)}{5}\\right\\rfloor + K + \\left\\lfloor\\frac{K}{4}\\right\\rfloor + \\left\\lfloor\\frac{J}{4}\\right\\rfloor + 5J\\right) \\bmod 7," }, { "math_id": 26, "text": "h = \\left(q + \\left\\lfloor\\frac{13(m+1)}{5}\\right\\rfloor + K + \\left\\lfloor\\frac{K}{4}\\right\\rfloor + 5 + 6J\\right) \\bmod 7," }, { "math_id": 27, "text": "h = \\left(q + \\left\\lfloor\\frac{13(m+1)}{5}\\right\\rfloor + Y + \\left\\lfloor\\frac{Y}{4}\\right\\rfloor - \\left\\lfloor\\frac{Y}{100}\\right\\rfloor + \\left\\lfloor\\frac{Y}{400}\\right\\rfloor\\right) \\bmod 7," }, { "math_id": 28, "text": "\\left\\lfloor Y/4\\right\\rfloor \\ge \\left\\lfloor Y/100\\right\\rfloor" }, { "math_id": 29, "text": "h = \\left(q + \\left\\lfloor\\frac{13(m+1)}{5}\\right\\rfloor + Y + \\left\\lfloor\\frac{Y}{4}\\right\\rfloor + 5\\right) \\bmod 7," }, { "math_id": 30, "text": "\\left\\lfloor\\frac{23m}{9}\\right\\rfloor + 4" }, { "math_id": 31, "text": "\\left\\lfloor\\frac{13(m-2)}{5}\\right\\rfloor + 2" }, { "math_id": 32, "text": "\\left\\lfloor\\frac{31(m-2)}{12}\\right\\rfloor" } ]
https://en.wikipedia.org/wiki?curid=1207119
1207161
Ultrashort pulse
Laser pulse with duration a picosecond (10^-12 s) or less In optics, an ultrashort pulse, also known as an ultrafast event, is an electromagnetic pulse whose time duration is of the order of a picosecond (10−12 second) or less. Such pulses have a broadband optical spectrum, and can be created by mode-locked oscillators. Amplification of ultrashort pulses almost always requires the technique of chirped pulse amplification, in order to avoid damage to the gain medium of the amplifier. They are characterized by a high peak intensity (or more correctly, irradiance) that usually leads to nonlinear interactions in various materials, including air. These processes are studied in the field of nonlinear optics. In the specialized literature, "ultrashort" refers to the femtosecond (fs) and picosecond (ps) range, although such pulses no longer hold the record for the shortest pulses artificially generated. Indeed, x-ray pulses with durations on the attosecond time scale have been reported. The 1999 Nobel Prize in Chemistry was awarded to Ahmed H. Zewail, for the use of ultrashort pulses to observe chemical reactions at the timescales on which they occur, opening up the field of femtochemistry. A further Nobel prize, the 2023 Nobel Prize in Physics, was also awarded for ultrashort pulses. This prize was awarded to Pierre Agostini, Ferenc Krausz, and Anne L'Huillier for the development of attosecond pulses and their ability to probe electron dynamics. Definition. There is no standard definition of ultrashort pulse. Usually the attribute 'ultrashort' applies to pulses with a duration of a few tens of femtoseconds, but in a larger sense any pulse which lasts less than a few picoseconds can be considered ultrashort. The distinction between "Ultrashort" and "Ultrafast" is necessary as the speed at which the pulse propagates is a function of the index of refraction of the medium through which it travels, whereas "Ultrashort" refers to the temporal width of the pulse wavepacket. A common example is a chirped Gaussian pulse, a wave whose field amplitude follows a Gaussian envelope and whose instantaneous phase has a frequency sweep. Background. The real electric field corresponding to an ultrashort pulse is oscillating at an angular frequency "ω"0 corresponding to the central wavelength of the pulse. To facilitate calculations, a complex field "E"("t") is defined. Formally, it is defined as the analytic signal corresponding to the real field. The central angular frequency "ω"0 is usually explicitly written in the complex field, which may be separated as a temporal intensity function "I"("t") and a temporal phase function "ψ"("t"): formula_0 The expression of the complex electric field in the frequency domain is obtained from the Fourier transform of "E"("t"): formula_1 Because of the presence of the formula_2 term, "E"("ω") is centered around "ω"0, and it is a common practice to refer to "E"("ω"-"ω"0) by writing just "E"("ω"), which we will do in the rest of this article. Just as in the time domain, an intensity and a phase function can be defined in the frequency domain: formula_3 The quantity formula_4 is the "power spectral density" (or simply, the "spectrum") of the pulse, and formula_5 is the "phase spectral density" (or simply "spectral phase"). 
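The chirped Gaussian pulse mentioned above can be written down numerically from these definitions. The following sketch uses arbitrary normalised units and purely illustrative parameter values: it builds E(t) as a Gaussian envelope times a carrier with a quadratic temporal phase, recovers the linear sweep of the instantaneous frequency, and compares the spectrum with that of the unchirped pulse of the same duration.

import numpy as np

t = np.linspace(-20, 20, 4096)             # time axis, arbitrary units
tau, w0, a = 2.0, 10.0, 0.5                # duration, carrier frequency, chirp rate

envelope = np.exp(-t**2 / (2 * tau**2))    # sqrt(I(t)) for a Gaussian pulse
E = envelope * np.exp(1j * (w0 * t + a * t**2))     # quadratic temporal phase -> chirp

inst_freq = np.gradient(np.unwrap(np.angle(E)), t)  # equals w0 + 2*a*t
print(inst_freq[2048])                     # ~w0 at the pulse centre

w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
S_chirped = np.abs(np.fft.fftshift(np.fft.fft(E)))**2
S_flat = np.abs(np.fft.fftshift(np.fft.fft(envelope * np.exp(1j * w0 * t))))**2

def rms_width(x, weight):
    mean = np.sum(x * weight) / np.sum(weight)
    return np.sqrt(np.sum((x - mean)**2 * weight) / np.sum(weight))

# Same envelope and duration, but the chirped pulse occupies a broader spectrum:
print(rms_width(w, S_flat), "<", rms_width(w, S_chirped))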
Example of spectral phase functions include the case where formula_5 is a constant, in which case the pulse is called a bandwidth-limited pulse, or where formula_5 is a quadratic function, in which case the pulse is called a chirped pulse because of the presence of an instantaneous frequency sweep. Such a chirp may be acquired as a pulse propagates through materials (like glass) and is due to their dispersion. It results in a temporal broadening of the pulse. The intensity functions—temporal formula_6 and spectral formula_4 —determine the time duration and spectrum bandwidth of the pulse. As stated by the uncertainty principle, their product (sometimes called the time-bandwidth product) has a lower bound. This minimum value depends on the definition used for the duration and on the shape of the pulse. For a given spectrum, the minimum time-bandwidth product, and therefore the shortest pulse, is obtained by a transform-limited pulse, i.e., for a constant spectral phase formula_5. High values of the time-bandwidth product, on the other hand, indicate a more complex pulse. Pulse shape control. Although optical devices also used for continuous light, like beam expanders and spatial filters, may be used for ultrashort pulses, several optical devices have been specifically designed for ultrashort pulses. One of them is the pulse compressor, a device that can be used to control the spectral phase of ultrashort pulses. It is composed of a sequence of prisms, or gratings. When properly adjusted it can alter the spectral phase "φ"("ω") of the input pulse so that the output pulse is a bandwidth-limited pulse with the shortest possible duration. A pulse shaper can be used to make more complicated alterations on both the phase and the amplitude of ultrashort pulses. To accurately control the pulse, a full characterization of the pulse spectral phase is a must in order to get certain pulse spectral phase (such as transform-limited). Then, a spatial light modulator can be used in the 4f plane to control the pulse. Multiphoton intrapulse interference phase scan (MIIPS) is a technique based on this concept. Through the phase scan of the spatial light modulator, MIIPS can not only characterize but also manipulate the ultrashort pulse to get the needed pulse shape at target spot (such as transform-limited pulse for optimized peak power, and other specific pulse shapes). If the pulse shaper is fully calibrated, this technique allows controlling the spectral phase of ultrashort pulses using a simple optical setup with no moving parts. However the accuracy of MIIPS is somewhat limited with respect to other techniques, such as frequency-resolved optical gating (FROG). Measurement techniques. Several techniques are available to measure ultrashort optical pulses. Intensity autocorrelation gives the pulse width when a particular pulse shape is assumed. Spectral interferometry (SI) is a linear technique that can be used when a pre-characterized reference pulse is available. It gives the intensity and phase. The algorithm that extracts the intensity and phase from the SI signal is direct. Spectral phase interferometry for direct electric-field reconstruction (SPIDER) is a nonlinear self-referencing technique based on spectral shearing interferometry. 
The method is similar to SI, except that the reference pulse is a spectrally shifted replica of itself, allowing one to obtain the spectral intensity and phase of the probe pulse via a direct FFT filtering routine similar to SI, but which requires integration of the phase extracted from the interferogram to obtain the probe pulse phase. Frequency-resolved optical gating (FROG) is a nonlinear technique that yields the intensity and phase of a pulse. It is a spectrally resolved autocorrelation. The algorithm that extracts the intensity and phase from a FROG trace is iterative. Grating-eliminated no-nonsense observation of ultrafast incident laser light e-fields (GRENOUILLE) is a simplified version of FROG. ("Grenouille" is French for "frog".) Chirp scan is a technique similar to MIIPS which measures the spectral phase of a pulse by applying a ramp of quadratic spectral phases and measuring second harmonic spectra. With respect to MIIPS, which requires many iterations to measure the spectral phase, only two chirp scans are needed to retrieve both the amplitude and the phase of the pulse. Multiphoton intrapulse interference phase scan (MIIPS) is a method to characterize and manipulate the ultrashort pulse. Wave packet propagation in nonisotropic media. To partially reiterate the discussion above, the slowly varying envelope approximation (SVEA) of the electric field of a wave with central wave vector formula_7 and central frequency formula_8 of the pulse, is given by: formula_9 We consider the propagation for the SVEA of the electric field in a homogeneous dispersive nonisotropic medium. Assuming the pulse is propagating in the direction of the z-axis, it can be shown that the envelope formula_10 for one of the most general of cases, namely a biaxial crystal, is governed by the PDE: formula_11 formula_12 where the coefficients contains diffraction and dispersion effects which have been determined analytically with computer algebra and verified numerically to within third order for both isotropic and non-isotropic media, valid in the near-field and far-field. formula_13 is the inverse of the group velocity projection. The term in formula_14 is the group velocity dispersion (GVD) or second-order dispersion; it increases the pulse duration and chirps the pulse as it propagates through the medium. The term in formula_15 is a third-order dispersion term that can further increase the pulse duration, even if formula_14 vanishes. The terms in formula_16 and formula_17 describe the walk-off of the pulse; the coefficient formula_18 is the ratio of the component of the group velocity formula_19 and the unit vector in the direction of propagation of the pulse (z-axis). The terms in formula_20 and formula_21 describe diffraction of the optical wave packet in the directions perpendicular to the axis of propagation. The terms in formula_22 and formula_23 containing mixed derivatives in time and space rotate the wave packet about the formula_24 and formula_25 axes, respectively, increase the temporal width of the wave packet (in addition to the increase due to the GVD), increase the dispersion in the formula_25 and formula_24 directions, respectively, and increase the chirp (in addition to that due to formula_14) when the latter and/or formula_26 and formula_21 are nonvanishing. The term formula_27 rotates the wave packet in the formula_28 plane. 
Oddly enough, because of previously incomplete expansions, this rotation of the pulse was not realized until the late 1990s but it has been "experimentally" confirmed. To third order, the RHS of the above equation is found to have these additional terms for the uniaxial crystal case: formula_29 The first and second terms are responsible for the curvature of the propagating front of the pulse. These terms, including the term in formula_30 are present in an isotropic medium and account for the spherical surface of a propagating front originating from a point source. The term formula_31 can be expressed in terms of the index of refraction, the frequency formula_32 and derivatives thereof and the term formula_33 also distorts the pulse but in a fashion that reverses the roles of formula_34 and formula_35 (see reference of Trippenbach, Scott and Band for details). So far, the treatment herein is linear, but nonlinear dispersive terms are ubiquitous to nature. Studies involving an additional nonlinear term formula_36 have shown that such terms have a profound effect on wave packet, including amongst other things, a "self-steepening" of the wave packet. The non-linear aspects eventually lead to optical solitons. Despite being rather common, the SVEA is not required to formulate a simple wave equation describing the propagation of optical pulses. In fact, as shown in, even a very general form of the electromagnetic second order wave equation can be factorized into directional components, providing access to a single first order wave equation for the field itself, rather than an envelope. This requires only an assumption that the field evolution is slow on the scale of a wavelength, and does not restrict the bandwidth of the pulse at all—as demonstrated vividly by. High harmonics. High energy ultrashort pulses can be generated through high harmonic generation in a nonlinear medium. A high intensity ultrashort pulse will generate an array of harmonics in the medium; a particular harmonic of interest is then selected with a monochromator. This technique has been used to produce ultrashort pulses in the extreme ultraviolet and soft-X-ray regimes from near infrared Ti-sapphire laser pulses. Applications. Advanced material 3D micro-/nano-processing. The ability of femtosecond lasers to efficiently fabricate complex structures and devices for a wide variety of applications has been extensively studied during the last decade. State-of-the-art laser processing techniques with ultrashort light pulses can be used to structure materials with a sub-micrometer resolution. Direct laser writing (DLW) of suitable photoresists and other transparent media can create intricate three-dimensional photonic crystals (PhC), micro-optical components, gratings, tissue engineering (TE) scaffolds and optical waveguides. Such structures are potentially useful for empowering next-generation applications in telecommunications and bioengineering that rely on the creation of increasingly sophisticated miniature parts. The precision, fabrication speed and versatility of ultrafast laser processing make it well placed to become a vital industrial tool for manufacturing. Micro-machining. Among the applications of femtosecond laser, the microtexturization of implant surfaces have been experimented for the enhancement of the bone formation around zirconia dental implants. The technique demonstrated to be precise with a very low thermal damage and with the reduction of the surface contaminants. 
Subsequent animal studies demonstrated that the increased oxygen layer and the micro- and nanofeatures created by femtosecond-laser microtexturing resulted in higher rates of bone formation, higher bone density and improved mechanical stability.
Multiphoton Polymerization. Multiphoton Polymerization (MPP) stands out for its ability to fabricate micro- and nano-scale structures with exceptional precision. This process leverages the concentrated power of femtosecond lasers to initiate highly controlled photopolymerization reactions, crafting detailed three-dimensional constructs. These capabilities make MPP essential in creating complex geometries for biomedical applications, including tissue engineering and micro-device fabrication, highlighting the versatility and precision of ultrashort pulse lasers in advanced manufacturing processes.
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E(t) = \\sqrt{I(t)}e^{i\\omega_0t}e^{i\\psi(t)}" }, { "math_id": 1, "text": "E(\\omega) = \\mathcal{F}(E(t))" }, { "math_id": 2, "text": "e^{i\\omega_0t}" }, { "math_id": 3, "text": "E(\\omega) = \\sqrt{S(\\omega)}e^{i\\phi(\\omega)}" }, { "math_id": 4, "text": "S(\\omega)" }, { "math_id": 5, "text": "\\phi(\\omega) " }, { "math_id": 6, "text": " I(t) " }, { "math_id": 7, "text": " \\textbf{K}_0 " }, { "math_id": 8, "text": " \\omega_0 " }, { "math_id": 9, "text": "\n\\textbf{E} ( \\textbf{x} , t) = \\textbf{ A } ( \\textbf{x} , t) \\exp ( i \\textbf{K}_0 \\textbf{x} - i \\omega_0 t )\n" }, { "math_id": 10, "text": " \\textbf{A} " }, { "math_id": 11, "text": "\n\\frac{\\partial \\textbf{A} }{\\partial z } =\n~-~ \\beta_1 \\frac{\\partial \\textbf{A} }{\\partial t}\n~-~ \\frac{i}{2} \\beta_2 \\frac{\\partial^2 \\textbf{A} }{\\partial t^2}\n~+~ \\frac{1}{6} \\beta_3 \\frac{\\partial^3 \\textbf{A} }{\\partial t^3}\n~+~ \\gamma_x \\frac{\\partial \\textbf{A} }{\\partial x}\n~+~ \\gamma_y \\frac{\\partial \\textbf{A} }{\\partial y}\n" }, { "math_id": 12, "text": "\n~~~~~~~~~~~\n~+~ i \\gamma_{tx} \\frac{\\partial^2 \\textbf{A} }{\\partial t \\partial x}\n~+~ i \\gamma_{ty} \\frac{\\partial^2 \\textbf{A} }{\\partial t \\partial y}\n~-~ \\frac{i}{2} \\gamma_{xx} \\frac{\\partial^2 \\textbf{A} }{ \\partial x^2}\n~-~ \\frac{i}{2} \\gamma_{yy} \\frac{\\partial^2 \\textbf{A} }{ \\partial y^2}\n~+~ i \\gamma_{xy} \\frac{\\partial^2 \\textbf{A} }{ \\partial x \\partial y} + \\cdots\n" }, { "math_id": 13, "text": " \\beta_1 " }, { "math_id": 14, "text": " \\beta_2 " }, { "math_id": 15, "text": " \\beta_3 " }, { "math_id": 16, "text": " \\gamma_x " }, { "math_id": 17, "text": " \\gamma_y " }, { "math_id": 18, "text": " \\gamma_x ~ (\\gamma_y ) " }, { "math_id": 19, "text": " x ~ (y) " }, { "math_id": 20, "text": "\\gamma_{xx}" }, { "math_id": 21, "text": " \\gamma_{yy} " }, { "math_id": 22, "text": " \\gamma_{tx} " }, { "math_id": 23, "text": " \\gamma_{ty} " }, { "math_id": 24, "text": "y" }, { "math_id": 25, "text": "x" }, { "math_id": 26, "text": " \\gamma_{xx} " }, { "math_id": 27, "text": " \\gamma_{xy} " }, { "math_id": 28, "text": " x-y " }, { "math_id": 29, "text": "\n\\cdots\n~+~ \\frac{1}{3} \\gamma_{t x x } \\frac{\\partial^3 \\textbf{A} }{ \\partial x^2 \\partial t}\n~+~ \\frac{1}{3} \\gamma_{t y y } \\frac{\\partial^3 \\textbf{A} }{ \\partial y^2 \\partial t}\n~+~ \\frac{1}{3} \\gamma_{t t x } \\frac{\\partial^3 \\textbf{A} }{ \\partial t^2 \\partial x} + \\cdots\n" }, { "math_id": 30, "text": "\\beta_3" }, { "math_id": 31, "text": " \\gamma_{txx} " }, { "math_id": 32, "text": " \\omega " }, { "math_id": 33, "text": " \\gamma_{ttx} " }, { "math_id": 34, "text": " t " }, { "math_id": 35, "text": " x " }, { "math_id": 36, "text": " \\gamma_{nl} |A|^2 A " } ]
https://en.wikipedia.org/wiki?curid=1207161
1207207
Little–Parks effect
The Little–Parks effect was discovered in 1962 by William A. Little and Ronald D. Parks in experiments with empty and thin-walled superconducting cylinders subjected to a parallel magnetic field. It was one of the first experiments to indicate the importance of the Cooper-pairing principle in BCS theory. The essence of the Little–Parks (LP) effect is a slight suppression of the cylinder's superconductivity by a persistent current.
Explanation. The electrical resistance of such cylinders shows a periodic oscillation with the magnetic flux piercing the cylinder, the period being "h"/2"e" ≈ 2.07×10⁻¹⁵ Wb, where "h" is the Planck constant and "e" is the electron charge. The explanation provided by Little and Parks is that the resistance oscillation reflects a more fundamental phenomenon, namely a periodic oscillation of the superconducting critical temperature "T"c. The Little–Parks effect consists of a periodic variation of "T"c with the magnetic flux, which is the product of the (coaxial) magnetic field and the cross-sectional area of the cylinder.
"T"c depends on the kinetic energy of the superconducting electrons. More precisely, "T"c is the temperature at which the free energies of the normal and superconducting electrons are equal, for a given magnetic field. To understand the periodic oscillation of "T"c, which constitutes the Little–Parks effect, one needs to understand the periodic variation of the kinetic energy. The kinetic energy oscillates because the applied magnetic flux increases the kinetic energy while superconducting vortices, periodically entering the cylinder, compensate for the flux effect and reduce the kinetic energy. Thus, the periodic oscillation of the kinetic energy and the related periodic oscillation of the critical temperature occur together.
The Little–Parks effect is a result of the collective quantum behavior of superconducting electrons. It reflects the general fact that it is the fluxoid rather than the flux which is quantized in superconductors. The Little–Parks effect can be seen as a result of the requirement that quantum physics be invariant with respect to the gauge choice for the electromagnetic potential, of which the magnetic vector potential A forms part.
Electromagnetic theory implies that a particle with electric charge "q" travelling along some path "P" in a region with zero magnetic field B, but non-zero A (by formula_0), acquires a phase shift formula_1, given in SI units by formula_2 In a superconductor, the electrons form a quantum superconducting condensate, called a Bardeen–Cooper–Schrieffer (BCS) condensate. In the BCS condensate all electrons behave coherently, i.e. as one particle. Thus the phase of the collective BCS wavefunction behaves under the influence of the vector potential A in the same way as the phase of a single electron. Therefore, the BCS condensate flowing around a closed path in a multiply connected superconducting sample acquires a phase difference Δ"φ" determined by the magnetic flux "ΦB" through the area enclosed by the path (via Stokes' theorem and formula_3), and given by: formula_4
This phase effect is responsible for the quantized-flux requirement and the Little–Parks effect in superconducting loops and empty cylinders. The quantization occurs because the superconducting wave function must be single-valued in a loop or an empty superconducting cylinder: its phase difference Δ"φ" around a closed loop must be an integer multiple of 2π, with the charge "q" = 2"e" for the BCS electronic superconducting pairs. If the period of the Little–Parks oscillations is 2π with respect to the superconducting phase variable, it follows from the formula above that the period with respect to the magnetic flux is the magnetic flux quantum, namely formula_5
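As a rough numerical illustration of these statements, the following Python snippet (a minimal sketch; the cylinder radius is an assumed, representative value rather than a number quoted here) evaluates the flux quantum "h"/2"e", the corresponding magnetic-field period of the oscillations for a thin-walled cylinder, and the phase picked up by a Cooper pair encircling one flux quantum.

```python
import numpy as np

# Physical constants (SI)
h = 6.62607015e-34       # Planck constant, J*s
e = 1.602176634e-19      # elementary charge, C
hbar = h / (2 * np.pi)

Phi0 = h / (2 * e)       # superconducting flux quantum, Wb
print(f"flux quantum h/2e = {Phi0:.3e} Wb")

# One oscillation of Tc (and of the resistance) per flux quantum through the
# cylinder cross section, so the field period is Phi0 / (pi * r^2).
r = 0.5e-6               # assumed cylinder radius (m), i.e. ~1 micron diameter
delta_B = Phi0 / (np.pi * r ** 2)
print(f"field period for r = {r*1e6:.1f} um: {delta_B*1e4:.1f} gauss")

# Phase acquired by the condensate (charge q = 2e) around a loop threaded by
# one flux quantum: delta_phi = q * Phi_B / hbar = 2*pi, as required for a
# single-valued wavefunction.
delta_phi = 2 * e * Phi0 / hbar
print(f"phase per flux quantum: {delta_phi / np.pi:.3f} pi")
```

For a micron-scale cylinder this gives a field period of a few tens of gauss, which sets the scale of the magnetoresistance oscillations discussed below.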
Applications. Little–Parks oscillations are widely used as evidence of Cooper pairing. A good example is the study of the superconductor–insulator transition. The challenge here is to separate Little–Parks oscillations from weak (anti-)localization, as in the results of Altshuler et al., where the authors observed the Aharonov–Bohm effect in a dirty metallic film.
History. Fritz London predicted that the fluxoid is quantized in a multiply connected superconductor. It was shown experimentally that the trapped magnetic flux exists only in discrete quantum units of "h"/2"e". Deaver and Fairbank were able to achieve an accuracy of 20–30% because of the wall thickness of the cylinder. Little and Parks examined a "thin-walled" cylinder (made of Al, In, Pb, Sn or Sn–In alloys, with a diameter of about 1 micron) at "T" very close to the transition temperature in an applied magnetic field in the axial direction. They found magnetoresistance oscillations with a period consistent with "h"/2"e". What they actually measured were infinitesimally small changes of resistance versus temperature at (different) constant magnetic fields. The figure to the right instead shows measurements of the resistance for a varying applied magnetic field, which corresponds to a varying magnetic flux, with the different colors (probably) representing different temperatures.
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{B} = 0 = \\nabla \\times \\mathbf{A}" }, { "math_id": 1, "text": "\\varphi" }, { "math_id": 2, "text": "\\varphi = \\frac{q}{\\hbar} \\int_P \\mathbf{A} \\cdot d\\mathbf{x}," }, { "math_id": 3, "text": "\\nabla \\times \\mathbf{A} = \\mathbf{B}" }, { "math_id": 4, "text": "\\Delta\\varphi = \\frac{q\\Phi_B}{\\hbar}." }, { "math_id": 5, "text": "\\Delta \\Phi_B = 2\\pi\\hbar/2e=h/2e." } ]
https://en.wikipedia.org/wiki?curid=1207207
12079734
All-pay auction
In economics and game theory, an all-pay auction is an auction in which every bidder must pay regardless of whether they win the prize, which is awarded to the highest bidder as in a conventional auction. As shown by Riley and Samuelson (1981), equilibrium bidding in an all-pay auction with private information is revenue equivalent to bidding in a sealed high-bid or open ascending-price auction.
In the simplest version, there is complete information. The Nash equilibrium is such that each bidder plays a mixed strategy and expected pay-offs are zero. The seller's expected revenue is equal to the value of the prize. However, some economic experiments and studies have shown that over-bidding is common. That is, the seller's revenue frequently exceeds the value of the prize, because bidders overpay in the hope of securing the winning bid. In repeated games, even bidders that win the prize frequently will most likely take a loss in the long run. The all-pay auction with complete information does not have a Nash equilibrium in pure strategies, but does have a Nash equilibrium in mixed strategies.
Forms of all-pay auctions. The most straightforward form of an all-pay auction is a Tullock auction, sometimes called a Tullock lottery after Gordon Tullock, in which everyone submits a bid but both the losers and the winners pay their submitted bids. This is instrumental in describing certain ideas in public choice economics. The dollar auction is a two-player Tullock auction, or a multiplayer game in which only the two highest bidders pay their bids. Other practical examples are the bidding fee auction and the penny raffle (pejoratively known as a "Chinese auction"). Other forms of all-pay auctions exist, such as the war of attrition (also known as a biological auction), in which the highest bidder wins, but all (or, more typically, both) bidders pay only the lower bid. The war of attrition is used by biologists to model conventional contests, or agonistic interactions resolved without recourse to physical aggression.
Rules. The following analysis follows a few basic rules.
Symmetry Assumption. In the independent private values (IPV) setting, bidders are symmetric because valuations are drawn from the same distribution. This lets the analysis focus on symmetric and monotonic bidding strategies, and implies that two bidders with the same valuation will submit the same bid. As a result, under symmetry, the bidder with the highest value will always win.
Using revenue equivalence to predict bidding function. Consider the two-player version of the all-pay auction, and let formula_1 be the private valuations, independent and identically distributed on a uniform distribution over [0,1]. We wish to find a monotone increasing bidding function, formula_2, that forms a symmetric Nash Equilibrium. If player formula_3 bids formula_4, he wins the auction only if his bid is larger than player formula_5's bid formula_6. The probability of this happening is formula_7, since formula_8 is monotone and formula_9 Thus, the probability that the good is allocated to formula_3 is formula_10. Thus, formula_3's expected utility when he bids as if his private value were formula_10 is given by formula_11. For formula_8 to be a Bayesian-Nash Equilibrium, formula_12 should have its maximum at formula_13 so that formula_3 has no incentive to deviate given that formula_5 sticks with his bid of formula_6. formula_14 Upon integrating, we get formula_15. We know that if player formula_3 has private valuation formula_16, then they will bid 0; formula_17. We can use this to show that the constant of integration is also 0. Thus, we get formula_18. Since this function is indeed monotone increasing, this bidding strategy formula_8 constitutes a Bayesian-Nash Equilibrium.
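To make this concrete, here is a small Python sketch (purely illustrative, and not part of the original analysis) that uses the equilibrium bid formula_18 for the two-bidder uniform case: it checks numerically that a bidder with value 0.6 has no profitable deviation from bidding as if his value were 0.6, and it estimates the seller's expected revenue by Monte Carlo simulation (compare with the value derived in the next paragraph).

```python
import numpy as np

rng = np.random.default_rng(0)

def bid(v):
    """Symmetric equilibrium bid b(v) = v^2 / 2 for two Unif[0,1] bidders."""
    return v ** 2 / 2

# --- Best-response check for a bidder with value v = 0.6 ---
# If the opponent follows b(.), a bidder with value v who bids as if his value
# were x gets expected utility u(x | v) = v*x - x^2/2, which peaks at x = v.
n = 1_000_000
v_opp = rng.uniform(size=n)              # opponent's value
v = 0.6
for x in np.linspace(0.0, 1.0, 6):
    wins = bid(x) > bid(v_opp)           # ties occur with probability zero
    u = (v * wins - bid(x)).mean()       # the bid is paid whether or not he wins
    print(f"bid as if value were {x:.1f}: expected utility {u:+.4f}")
# The printed utilities are largest at x = 0.6, so following b(.) is optimal.

# --- Seller's expected revenue: both bids are collected ---
v1, v2 = rng.uniform(size=(2, n))
print("estimated expected revenue:", (bid(v1) + bid(v2)).mean())   # ~1/3
```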
The revenue from the all-pay auction in this example is formula_19 Since formula_20 are drawn "iid" from Unif[0,1], the expected revenue is formula_21. Due to the revenue equivalence theorem, all auctions with 2 players will have an expected revenue of formula_22 when the private valuations are "iid" from Unif[0,1].
Bidding Function in the Generic Symmetric Case. Suppose the auction has formula_23 risk-neutral bidders. Each bidder has a private value formula_24 drawn i.i.d. from a common smooth distribution formula_25. Given free disposal, each bidder's value is bounded below by zero. Without loss of generality, then, normalize the lowest possible value to zero. Because the game is symmetric, the optimal bidding function must be the same for all players. Call this optimal bidding function formula_26. Because each player's payoff is defined as their expected gain minus their bid, we can recursively define the optimal bid function as follows: formula_27 Note that, because F is smooth, the probability of a tie is zero. This means the probability of winning the auction is equal to the CDF raised to the number of players minus 1, i.e., formula_28. The objective now satisfies the requirements for the envelope theorem. Thus, we can write: formula_29 This yields the unique symmetric Nash Equilibrium bidding function formula_30.
Examples. Consider a corrupt official who is dealing with campaign donors: each wants him to do a favor that is worth somewhere between $0 and $1000 to them (uniformly distributed). Their actual valuations are $250, $500 and $750. They can only observe their own valuations. They each treat the official to an expensive present: if they spend X dollars on the present, then this is worth X dollars to the official. The official can only do one favor, and will do the favor for the donor who is giving him the most expensive present. This is a typical model for an all-pay auction.
To calculate the optimal bid for each donor, we need to normalize the valuations {250, 500, 750} to {0.25, 0.5, 0.75} so that the IPV analysis above may be applied. According to the formula for the optimal bid: formula_31 The optimal bids for the three donors under IPV are: formula_32 formula_33 formula_34 To get the actual amount that each of the three donors should give, simply multiply the normalized values by 1000: formula_35 formula_36 formula_37
This example implies that the official will receive $375 in total, but only the third donor, who donated $281.30, will win the official's favor. Note that the other two donors know that their valuations are not high enough (giving them a low chance of winning), so they do not donate much, balancing the large potential profit from winning against the low probability of winning.
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "0.6^2=0.36" }, { "math_id": 1, "text": "v_i, v_j" }, { "math_id": 2, "text": "b(v)" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "b(x)" }, { "math_id": 5, "text": "j" }, { "math_id": 6, "text": "b(v_j)" }, { "math_id": 7, "text": " \\mathbb{P}[b(x) > b(v_j)] = \\mathbb{P}[x > v_j] = x " }, { "math_id": 8, "text": "b" }, { "math_id": 9, "text": "v_j \\sim \\mathrm{Unif}[0,1]" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "u_i(x|v_i)=v_ix-b(x)" }, { "math_id": 12, "text": "u_i(x_i|v_i)" }, { "math_id": 13, "text": "x_i = v_i" }, { "math_id": 14, "text": " \\implies u_i'(v_i) = 0 \\implies 2v_i = b'(v_i) " }, { "math_id": 15, "text": "b(v_i) = v_i^2 + c" }, { "math_id": 16, "text": "v_i = 0" }, { "math_id": 17, "text": "b(0) = 0" }, { "math_id": 18, "text": "b(v_i) = \\frac{v_i^2}{2}" }, { "math_id": 19, "text": "R=b(v_1)+b(v_2)=\\frac{v_1^2}{2}+\\frac{v_2^2}{2}" }, { "math_id": 20, "text": "v_1, v_2" }, { "math_id": 21, "text": "\\mathbb{E}[R]=\\mathbb{E}[\\frac{v_1^2}{2}+\\frac{v_2^2}{2}]=\\mathbb{E}[v^2]=\\int\\limits_{0}^{1} v^2dv =\\frac{1}{3}" }, { "math_id": 22, "text": "\\frac{1}{3}" }, { "math_id": 23, "text": "n" }, { "math_id": 24, "text": "v_i" }, { "math_id": 25, "text": "F" }, { "math_id": 26, "text": "\\beta" }, { "math_id": 27, "text": "\n\\beta(v_i) \\in arg\\max_{b \\in \\mathbb{R}} \\left\\{\\mathbb{P}(\\forall j \\neq i: \\beta(v_j) \\leq b)v_i - b\\right\\}\n" }, { "math_id": 28, "text": " \\mathbb{P}(\\forall j \\neq i: \\beta(v_j) \\leq \\beta(v_i)) = F(v_i)^{n - 1} " }, { "math_id": 29, "text": "\n\\begin{align}\n\\int_0^{v_i} F(\\tau)^{n - 1}d\\tau &= (F(v_i)^{n - 1} \\cdot v_i - \\beta(v_i)) - (F^{n - 1}(0) \\cdot 0 - \\beta(0)) \\\\\n\\beta(v_i) &= F^{n - 1}(v_i)v_i - \\int_0^{v_i} F(\\tau)^{n - 1}d\\tau \\\\\n\\beta(v_i) &= \\int_0^{v_i} \\tau dF^{n - 1}(\\tau)\n\\end{align}\n" }, { "math_id": 30, "text": " \\beta(v_i) " }, { "math_id": 31, "text": "b_i(v_i)=\\left(\\frac{n-1}{n}\\right){v_i}^{n}" }, { "math_id": 32, "text": "b_1(v_1)=\\left(\\frac{n-1}{n}\\right){v_1}^{n}=\\left(\\frac{2}{3}\\right){0.25}^{3} = 0.0104" }, { "math_id": 33, "text": "b_2(v_2)=\\left(\\frac{n-1}{n}\\right){v_2}^{n}=\\left(\\frac{2}{3}\\right){0.50}^{3} = 0.0833" }, { "math_id": 34, "text": "b_3(v_3)=\\left(\\frac{n-1}{n}\\right){v_3}^{n}=\\left(\\frac{2}{3}\\right){0.75}^{3} = 0.2813" }, { "math_id": 35, "text": "b_1real(v_1=0.25)= $10.4" }, { "math_id": 36, "text": "b_2real(v_2=0.50)= $83.3" }, { "math_id": 37, "text": "b_3real(v_3=0.75)= $281.3" } ]
https://en.wikipedia.org/wiki?curid=12079734
1208345
Scale-invariant feature transform
Feature detection algorithm in computer vision The scale-invariant feature transform (SIFT) is a computer vision algorithm to detect, describe, and match local "features" in images, invented by David Lowe in 1999. Applications include object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving. SIFT keypoints of objects are first extracted from a set of reference images and stored in a database. An object is recognized in a new image by individually comparing each feature from the new image to this database and finding candidate matching features based on Euclidean distance of their feature vectors. From the full set of matches, subsets of keypoints that agree on the object and its location, scale, and orientation in the new image are identified to filter out good matches. The determination of consistent clusters is performed rapidly by using an efficient hash table implementation of the generalised Hough transform. Each cluster of 3 or more features that agree on an object and its pose is then subject to further detailed model verification and subsequently outliers are discarded. Finally the probability that a particular set of features indicates the presence of an object is computed, given the accuracy of fit and number of probable false matches. Object matches that pass all these tests can be identified as correct with high confidence. Although the SIFT algorithm was previously protected by a patent, its patent expired in 2020. Overview. For any object in an image, we can extract important points in the image to provide a "feature description" of the object. This description, extracted from a training image, can then be used to locate the object in a new (previously unseen) image containing other objects. In order to do this reliably, the features should be detectable even if the image is scaled, or if it has noise and different illumination. Such points usually lie on high-contrast regions of the image, such as object edges. Another important characteristic of these features is that the relative positions between them in the original scene should not change between images. For example, if only the four corners of a door were used as features, they would work regardless of the door's position; but if points in the frame were also used, the recognition would fail if the door is opened or closed. Similarly, features located in articulated or flexible objects would typically not work if any change in their internal geometry happens between two images in the set being processed. In practice, SIFT detects and uses a much larger number of features from the images, which reduces the contribution of the errors caused by these local variations in the average error of all feature matching errors. SIFT can robustly identify objects even among clutter and under partial occlusion, because the SIFT feature descriptor is invariant to uniform scaling, orientation, illumination changes, and partially invariant to affine distortion. This section summarizes the original SIFT algorithm and mentions a few competing techniques available for object recognition under clutter and partial occlusion. The SIFT descriptor is based on image measurements in terms of "receptive fields" over which "local scale invariant reference frames" are established by "local scale selection". A general theoretical explanation about this is given in the Scholarpedia article on SIFT. Types of features. 
The detection and description of local image features can help in object recognition. The SIFT features are local and based on the appearance of the object at particular interest points, and are invariant to image scale and rotation. They are also robust to changes in illumination, noise, and minor changes in viewpoint. In addition to these properties, they are highly distinctive, relatively easy to extract and allow for correct object identification with low probability of mismatch. They are relatively easy to match against a (large) database of local features but, however, the high dimensionality can be an issue, and generally probabilistic algorithms such as k-d trees with best bin first search are used. Object description by set of SIFT features is also robust to partial occlusion; as few as 3 SIFT features from an object are enough to compute its location and pose. Recognition can be performed in close-to-real time, at least for small databases and on modern computer hardware. Stages. Scale-invariant feature detection. Lowe's method for image feature generation transforms an image into a large collection of feature vectors, each of which is invariant to image translation, scaling, and rotation, partially invariant to illumination changes, and robust to local geometric distortion. These features share similar properties with neurons in the primary visual cortex that encode basic forms, color, and movement for object detection in primate vision. Key locations are defined as maxima and minima of the result of difference of Gaussians function applied in scale space to a series of smoothed and resampled images. Low-contrast candidate points and edge response points along an edge are discarded. Dominant orientations are assigned to localized key points. These steps ensure that the key points are more stable for matching and recognition. SIFT descriptors robust to local affine distortion are then obtained by considering pixels around a radius of the key location, blurring, and resampling local image orientation planes. Feature matching and indexing. Indexing consists of storing SIFT keys and identifying matching keys from the new image. Lowe used a modification of the k-d tree algorithm called the best-bin-first search method that can identify the nearest neighbors with high probability using only a limited amount of computation. The BBF algorithm uses a modified search ordering for the k-d tree algorithm so that bins in feature space are searched in the order of their closest distance from the query location. This search order requires the use of a heap-based priority queue for efficient determination of the search order. We obtain a candidate for each keypoint by identifying its nearest neighbor in the database of keypoints from training images. The nearest neighbors are defined as the keypoints with minimum Euclidean distance from the given descriptor vector. 
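As an illustration of this nearest-neighbour matching step, the following Python sketch uses simple brute-force search (Lowe's implementation instead uses the approximate best-bin-first k-d tree search described above) and a common simplification of the acceptance criterion discussed in the next paragraph, namely comparing the closest and second-closest database descriptors rather than the closest descriptor from a different object class. The function name and the toy data are illustrative.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Brute-force matching of SIFT-like descriptors by Euclidean distance.

    query:    (m, 128) array of descriptors from the new image
    database: (n, 128) array of descriptors from the training images
    Returns (query_index, database_index) pairs whose distance ratio
    (closest / second-closest) is below the given threshold.
    """
    matches = []
    for i, d in enumerate(query):
        dists = np.sqrt(np.sum((database - d) ** 2, axis=1))  # Euclidean distances
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy usage: 1000 random unit-length 128-D "descriptors", queried with
# slightly perturbed copies of the first five.
rng = np.random.default_rng(1)
db = rng.normal(size=(1000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)
q = db[:5] + 0.02 * rng.normal(size=(5, 128))
print(match_descriptors(q, db))   # expected: [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```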
The way Lowe determined whether a given candidate should be kept or 'thrown out' is by checking the ratio between the distance from this given candidate and the distance from the closest keypoint which is not of the same object class as the candidate at hand (candidate feature vector / closest different class feature vector), the idea is that we can only be sure of candidates in which features/keypoints from distinct object classes don't "clutter" it (not geometrically clutter in the feature space necessarily but more so clutter along the right half (&gt;0) of the real line), this is an obvious consequence of using Euclidean distance as our nearest neighbor measure. The ratio threshold for rejection is whenever it is above 0.8. This method eliminated 90% of false matches while discarding less than 5% of correct matches. To further improve the efficiency of the best-bin-first algorithm search was cut off after checking the first 200 nearest neighbor candidates. For a database of 100,000 keypoints, this provides a speedup over exact nearest neighbor search by about 2 orders of magnitude, yet results in less than a 5% loss in the number of correct matches. Cluster identification by Hough transform voting. Hough transform is used to cluster reliable model hypotheses to search for keys that agree upon a particular model pose. Hough transform identifies clusters of features with a consistent interpretation by using each feature to vote for all object poses that are consistent with the feature. When clusters of features are found to vote for the same pose of an object, the probability of the interpretation being correct is much higher than for any single feature. An entry in a hash table is created predicting the model location, orientation, and scale from the match hypothesis. The hash table is searched to identify all clusters of at least 3 entries in a bin, and the bins are sorted into decreasing order of size. Each of the SIFT keypoints specifies 2D location, scale, and orientation, and each matched keypoint in the database has a record of its parameters relative to the training image in which it was found. The similarity transform implied by these 4 parameters is only an approximation to the full 6 degree-of-freedom pose space for a 3D object and also does not account for any non-rigid deformations. Therefore, Lowe used broad bin sizes of 30 degrees for orientation, a factor of 2 for scale, and 0.25 times the maximum projected training image dimension (using the predicted scale) for location. The SIFT key samples generated at the larger scale are given twice the weight of those at the smaller scale. This means that the larger scale is in effect able to filter the most likely neighbors for checking at the smaller scale. This also improves recognition performance by giving more weight to the least-noisy scale. To avoid the problem of boundary effects in bin assignment, each keypoint match votes for the 2 closest bins in each dimension, giving a total of 16 entries for each hypothesis and further broadening the pose range. Model verification by linear least squares. Each identified cluster is then subject to a verification procedure in which a linear least squares solution is performed for the parameters of the affine transformation relating the model to the image. 
The affine transformation of a model point [x y]T to an image point [u v]T can be written as below formula_0 where the model translation is [tx ty]T and the affine rotation, scale, and stretch are represented by the parameters m1, m2, m3 and m4. To solve for the transformation parameters the equation above can be rewritten to gather the unknowns into a column vector. formula_1 This equation shows a single match, but any number of further matches can be added, with each match contributing two more rows to the first and last matrix. At least 3 matches are needed to provide a solution. We can write this linear system as formula_2 where "A" is a known "m"-by-"n" matrix (usually with "m" &gt; "n"), x is an unknown "n"-dimensional parameter vector, and b is a known "m"-dimensional measurement vector. Therefore, the minimizing vector formula_3 is a solution of the normal equation formula_4 The solution of the system of linear equations is given in terms of the matrix formula_5, called the pseudoinverse of "A", by formula_6 which minimizes the sum of the squares of the distances from the projected model locations to the corresponding image locations. Outlier detection. Outliers can now be removed by checking for agreement between each image feature and the model, given the parameter solution. Given the linear least squares solution, each match is required to agree within half the error range that was used for the parameters in the Hough transform bins. As outliers are discarded, the linear least squares solution is re-solved with the remaining points, and the process iterated. If fewer than 3 points remain after discarding outliers, then the match is rejected. In addition, a top-down matching phase is used to add any further matches that agree with the projected model position, which may have been missed from the Hough transform bin due to the similarity transform approximation or other errors. The final decision to accept or reject a model hypothesis is based on a detailed probabilistic model. This method first computes the expected number of false matches to the model pose, given the projected size of the model, the number of features within the region, and the accuracy of the fit. A Bayesian probability analysis then gives the probability that the object is present based on the actual number of matching features found. A model is accepted if the final probability for a correct interpretation is greater than 0.98. Lowe's SIFT based object recognition gives excellent results except under wide illumination variations and under non-rigid transformations. Algorithm. Scale-space extrema detection. We begin by detecting points of interest, which are termed "keypoints" in the SIFT framework. The image is convolved with Gaussian filters at different scales, and then the difference of successive Gaussian-blurred images are taken. Keypoints are then taken as maxima/minima of the Difference of Gaussians (DoG) that occur at multiple scales. Specifically, a DoG image formula_7 is given by formula_8, where formula_9 is the convolution of the original image formula_10 with the Gaussian blur formula_11 at scale formula_12, i.e., formula_13 Hence a DoG image between scales formula_14 and formula_15 is just the difference of the Gaussian-blurred images at scales formula_14 and formula_15. For scale space extrema detection in the SIFT algorithm, the image is first convolved with Gaussian-blurs at different scales. 
The convolved images are grouped by octave (an octave corresponds to doubling the value of formula_16), and the value of formula_17 is selected so that we obtain a fixed number of convolved images per octave. Then the Difference-of-Gaussian images are taken from adjacent Gaussian-blurred images per octave. Once DoG images have been obtained, keypoints are identified as local minima/maxima of the DoG images across scales. This is done by comparing each pixel in the DoG images to its eight neighbors at the same scale and nine corresponding neighboring pixels in each of the neighboring scales. If the pixel value is the maximum or minimum among all compared pixels, it is selected as a candidate keypoint. This keypoint detection step is a variation of one of the blob detection methods developed by Lindeberg by detecting scale-space extrema of the scale normalized Laplacian; that is, detecting points that are local extrema with respect to both space and scale, in the discrete case by comparisons with the nearest 26 neighbors in a discretized scale-space volume. The difference of Gaussians operator can be seen as an approximation to the Laplacian, with the implicit normalization in the pyramid also constituting a discrete approximation of the scale-normalized Laplacian. Another real-time implementation of scale-space extrema of the Laplacian operator has been presented by Lindeberg and Bretzner based on a hybrid pyramid representation, which was used for human-computer interaction by real-time gesture recognition in Bretzner et al. (2002). Keypoint localization. Scale-space extrema detection produces too many keypoint candidates, some of which are unstable. The next step in the algorithm is to perform a detailed fit to the nearby data for accurate location, scale, and ratio of principal curvatures. This information allows the rejection of points which are low contrast (and are therefore sensitive to noise) or poorly localized along an edge. Interpolation of nearby data for accurate position. First, for each candidate keypoint, interpolation of nearby data is used to accurately determine its position. The initial approach was to just locate each keypoint at the location and scale of the candidate keypoint. The new approach calculates the interpolated location of the extremum, which substantially improves matching and stability. The interpolation is done using the quadratic Taylor expansion of the Difference-of-Gaussian scale-space function, formula_7 with the candidate keypoint as the origin. This Taylor expansion is given by: formula_18 where D and its derivatives are evaluated at the candidate keypoint and formula_19 is the offset from this point. The location of the extremum, formula_20, is determined by taking the derivative of this function with respect to formula_21 and setting it to zero. If the offset formula_20 is larger than formula_22 in any dimension, then that's an indication that the extremum lies closer to another candidate keypoint. In this case, the candidate keypoint is changed and the interpolation performed instead about that point. Otherwise the offset is added to its candidate keypoint to get the interpolated estimate for the location of the extremum. A similar subpixel determination of the locations of scale-space extrema is performed in the real-time implementation based on hybrid pyramids developed by Lindeberg and his co-workers. Discarding low-contrast keypoints. 
To discard the keypoints with low contrast, the value of the second-order Taylor expansion formula_23 is computed at the offset formula_20. If this value is less than formula_24, the candidate keypoint is discarded. Otherwise it is kept, with final scale-space location formula_25, where formula_26 is the original location of the keypoint. Eliminating edge responses. The DoG function will have strong responses along edges, even if the candidate keypoint is not robust to small amounts of noise. Therefore, in order to increase stability, we need to eliminate the keypoints that have poorly determined locations but have high edge responses. For poorly defined peaks in the DoG function, the principal curvature across the edge would be much larger than the principal curvature along it. Finding these principal curvatures amounts to solving for the eigenvalues of the second-order Hessian matrix, H: formula_27 The eigenvalues of H are proportional to the principal curvatures of D. It turns out that the ratio of the two eigenvalues, say formula_28 is the larger one, and formula_29 the smaller one, with ratio formula_30, is sufficient for SIFT's purposes. The trace of H, i.e., formula_31, gives us the sum of the two eigenvalues, while its determinant, i.e., formula_32, yields the product. The ratio formula_33 can be shown to be equal to formula_34, which depends only on the ratio of the eigenvalues rather than their individual values. R is minimum when the eigenvalues are equal to each other. Therefore, the higher the absolute difference between the two eigenvalues, which is equivalent to a higher absolute difference between the two principal curvatures of D, the higher the value of R. It follows that, for some threshold eigenvalue ratio formula_35, if R for a candidate keypoint is larger than formula_36, that keypoint is poorly localized and hence rejected. The new approach uses formula_37. This processing step for suppressing responses at edges is a transfer of a corresponding approach in the Harris operator for corner detection. The difference is that the measure for thresholding is computed from the Hessian matrix instead of a second-moment matrix. Orientation assignment. In this step, each keypoint is assigned one or more orientations based on local image gradient directions. This is the key step in achieving invariance to rotation as the keypoint descriptor can be represented relative to this orientation and therefore achieve invariance to image rotation. First, the Gaussian-smoothed image formula_38 at the keypoint's scale formula_16 is taken so that all computations are performed in a scale-invariant manner. For an image sample formula_39 at scale formula_16, the gradient magnitude, formula_40, and orientation, formula_41, are precomputed using pixel differences: formula_42 formula_43 The magnitude and direction calculations for the gradient are done for every pixel in a neighboring region around the keypoint in the Gaussian-blurred image L. An orientation histogram with 36 bins is formed, with each bin covering 10 degrees. Each sample in the neighboring window added to a histogram bin is weighted by its gradient magnitude and by a Gaussian-weighted circular window with a formula_16 that is 1.5 times that of the scale of the keypoint. The peaks in this histogram correspond to dominant orientations. Once the histogram is filled, the orientations corresponding to the highest peak and local peaks that are within 80% of the highest peaks are assigned to the keypoint. 
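The orientation-assignment step can be sketched in Python as follows (a simplified version: the neighbourhood radius is an assumed parameter, the Gaussian-blurred image L and the keypoint scale are taken as given, and the interpolation of peak positions used in practice is omitted). It computes pixel-difference gradients as in the formulas above, accumulates a 36-bin Gaussian-weighted histogram, and returns every local peak within 80% of the highest one.

```python
import numpy as np

def assign_orientations(L, x, y, sigma, radius=8):
    """Dominant gradient orientations (in radians) at keypoint (x, y).

    L is the Gaussian-blurred image at the keypoint's scale sigma.  Samples in
    a square neighbourhood are weighted by their gradient magnitude and by a
    Gaussian window of width 1.5 * sigma, as described above.
    """
    hist = np.zeros(36)
    win_sigma = 1.5 * sigma
    for j in range(y - radius, y + radius + 1):
        for i in range(x - radius, x + radius + 1):
            if not (0 < i < L.shape[1] - 1 and 0 < j < L.shape[0] - 1):
                continue                                  # skip the image border
            dx = L[j, i + 1] - L[j, i - 1]                # pixel differences
            dy = L[j + 1, i] - L[j - 1, i]
            mag = np.hypot(dx, dy)                        # gradient magnitude
            theta = np.arctan2(dy, dx) % (2 * np.pi)      # gradient orientation
            w = np.exp(-((i - x) ** 2 + (j - y) ** 2) / (2 * win_sigma ** 2))
            hist[int(theta * 36 / (2 * np.pi)) % 36] += w * mag
    # keep the highest peak and any local peak within 80% of it
    peaks = []
    for b in range(36):
        left, right = hist[b - 1], hist[(b + 1) % 36]     # circular neighbours
        if hist[b] >= 0.8 * hist.max() and hist[b] >= left and hist[b] >= right:
            peaks.append((b + 0.5) * 2 * np.pi / 36)      # bin centre
    return peaks
```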
In the case of multiple orientations being assigned, an additional keypoint is created having the same location and scale as the original keypoint for each additional orientation. Keypoint descriptor. Previous steps found keypoint locations at particular scales and assigned orientations to them. This ensured invariance to image location, scale and rotation. Now we want to compute a descriptor vector for each keypoint such that the descriptor is highly distinctive and partially invariant to the remaining variations such as illumination, 3D viewpoint, etc. This step is performed on the image closest in scale to the keypoint's scale. First a set of orientation histograms is created on 4×4 pixel neighborhoods with 8 bins each. These histograms are computed from magnitude and orientation values of samples in a 16×16 region around the keypoint such that each histogram contains samples from a 4×4 subregion of the original neighborhood region. The image gradient magnitudes and orientations are sampled around the keypoint location, using the scale of the keypoint to select the level of Gaussian blur for the image. In order to achieve orientation invariance, the coordinates of the descriptor and the gradient orientations are rotated relative to the keypoint orientation. The magnitudes are further weighted by a Gaussian function with formula_16 equal to one half the width of the descriptor window. The descriptor then becomes a vector of all the values of these histograms. Since there are 4 × 4 = 16 histograms each with 8 bins the vector has 128 elements. This vector is then normalized to unit length in order to enhance invariance to affine changes in illumination. To reduce the effects of non-linear illumination a threshold of 0.2 is applied and the vector is again normalized. The thresholding process, also referred to as clamping, can improve matching results even when non-linear illumination effects are not present. The threshold of 0.2 was empirically chosen, and by replacing the fixed threshold with one systematically calculated, matching results can be improved. Although the dimension of the descriptor, i.e. 128, seems high, descriptors with lower dimension than this don't perform as well across the range of matching tasks and the computational cost remains low due to the approximate BBF (see below) method used for finding the nearest neighbor. Longer descriptors continue to do better but not by much and there is an additional danger of increased sensitivity to distortion and occlusion. It is also shown that feature matching accuracy is above 50% for viewpoint changes of up to 50 degrees. Therefore, SIFT descriptors are invariant to minor affine changes. To test the distinctiveness of the SIFT descriptors, matching accuracy is also measured against varying number of keypoints in the testing database, and it is shown that matching accuracy decreases only very slightly for very large database sizes, thus indicating that SIFT features are highly distinctive. Comparison of SIFT features with other local features. There has been an extensive study done on the performance evaluation of different local descriptors, including SIFT, using a range of detectors. The main results are summarized below: The evaluations carried out suggests strongly that SIFT-based descriptors, which are region-based, are the most robust and distinctive, and are therefore best suited for feature matching. However, most recent feature descriptors such as SURF have not been evaluated in this study. 
SURF has later been shown to have similar performance to SIFT, while at the same time being much faster. Other studies conclude that when speed is not critical, SIFT outperforms SURF. Specifically, disregarding discretization effects the pure image descriptor in SIFT is significantly better than the pure image descriptor in SURF, whereas the scale-space extrema of the determinant of the Hessian underlying the pure interest point detector in SURF constitute significantly better interest points compared to the scale-space extrema of the Laplacian to which the interest point detector in SIFT constitutes a numerical approximation. The performance of image matching by SIFT descriptors can be improved in the sense of achieving higher efficiency scores and lower 1-precision scores by replacing the scale-space extrema of the difference-of-Gaussians operator in original SIFT by scale-space extrema of the determinant of the Hessian, or more generally considering a more general family of generalized scale-space interest points. Recently, a slight variation of the descriptor employing an irregular histogram grid has been proposed that significantly improves its performance. Instead of using a 4×4 grid of histogram bins, all bins extend to the center of the feature. This improves the descriptor's robustness to scale changes. The SIFT-Rank descriptor was shown to improve the performance of the standard SIFT descriptor for affine feature matching. A SIFT-Rank descriptor is generated from a standard SIFT descriptor, by setting each histogram bin to its rank in a sorted array of bins. The Euclidean distance between SIFT-Rank descriptors is invariant to arbitrary monotonic changes in histogram bin values, and is related to Spearman's rank correlation coefficient. Applications. Object recognition using SIFT features. Given SIFT's ability to find distinctive keypoints that are invariant to location, scale and rotation, and robust to affine transformations (changes in scale, rotation, shear, and position) and changes in illumination, they are usable for object recognition. The steps are given below. SIFT features can essentially be applied to any task that requires identification of matching locations between images. Work has been done on applications such as recognition of particular object categories in 2D images, 3D reconstruction, motion tracking and segmentation, robot localization, image panorama stitching and epipolar calibration. Some of these are discussed in more detail below. Robot localization and mapping. In this application, a trinocular stereo system is used to determine 3D estimates for keypoint locations. Keypoints are used only when they appear in all 3 images with consistent disparities, resulting in very few outliers. As the robot moves, it localizes itself using feature matches to the existing 3D map, and then incrementally adds features to the map while updating their 3D positions using a Kalman filter. This provides a robust and accurate solution to the problem of robot localization in unknown environments. Recent 3D solvers leverage the use of keypoint directions to solve trinocular geometry from three keypoints and absolute pose from only two keypoints, an often disregarded but useful measurement available in SIFT. These orientation measurements reduce the number of required correspondences, further increasing robustness exponentially. Panorama stitching. SIFT feature matching can be used in image stitching for fully automated panorama reconstruction from non-panoramic images. 
The SIFT features extracted from the input images are matched against each other to find "k" nearest-neighbors for each feature. These correspondences are then used to find "m" candidate matching images for each image. Homographies between pairs of images are then computed using RANSAC and a probabilistic model is used for verification. Because there is no restriction on the input images, graph search is applied to find connected components of image matches such that each connected component will correspond to a panorama. Finally for each connected component bundle adjustment is performed to solve for joint camera parameters, and the panorama is rendered using multi-band blending. Because of the SIFT-inspired object recognition approach to panorama stitching, the resulting system is insensitive to the ordering, orientation, scale and illumination of the images. The input images can contain multiple panoramas and noise images (some of which may not even be part of the composite image), and panoramic sequences are recognized and rendered as output. 3D scene modeling, recognition and tracking. This application uses SIFT features for 3D object recognition and 3D modeling in context of augmented reality, in which synthetic objects with accurate pose are superimposed on real images. SIFT matching is done for a number of 2D images of a scene or object taken from different angles. This is used with bundle adjustment initialized from an essential matrix or trifocal tensor to build a sparse 3D model of the viewed scene and to simultaneously recover camera poses and calibration parameters. Then the position, orientation and size of the virtual object are defined relative to the coordinate frame of the recovered model. For online match moving, SIFT features again are extracted from the current video frame and matched to the features already computed for the world model, resulting in a set of 2D-to-3D correspondences. These correspondences are then used to compute the current camera pose for the virtual projection and final rendering. A regularization technique is used to reduce the jitter in the virtual projection. The use of SIFT directions have also been used to increase robustness of this process. 3D extensions of SIFT have also been evaluated for true 3D object recognition and retrieval. 3D SIFT-like descriptors for human action recognition. Extensions of the SIFT descriptor to 2+1-dimensional spatio-temporal data in context of human action recognition in video sequences have been studied. The computation of local position-dependent histograms in the 2D SIFT algorithm are extended from two to three dimensions to describe SIFT features in a spatio-temporal domain. For application to human action recognition in a video sequence, sampling of the training videos is carried out either at spatio-temporal interest points or at randomly determined locations, times and scales. The spatio-temporal regions around these interest points are then described using the 3D SIFT descriptor. These descriptors are then clustered to form a spatio-temporal Bag of words model. 3D SIFT descriptors extracted from the test videos are then matched against these "words" for human action classification. The authors report much better results with their 3D SIFT descriptor approach than with other approaches like simple 2D SIFT descriptors and Gradient Magnitude. Analyzing the Human Brain in 3D Magnetic Resonance Images. 
The Feature-based Morphometry (FBM) technique uses extrema in a difference of Gaussian scale-space to analyze and classify 3D magnetic resonance images (MRIs) of the human brain. FBM models the image probabilistically as a collage of independent features, conditional on image geometry and group labels, e.g. healthy subjects and subjects with Alzheimer's disease (AD). Features are first extracted in individual images from a 4D difference of Gaussian scale-space, then modeled in terms of their appearance, geometry and group co-occurrence statistics across a set of images. FBM was validated in the analysis of AD using a set of ~200 volumetric MRIs of the human brain, automatically identifying established indicators of AD in the brain and classifying mild AD in new images with a rate of 80%. Competing methods. Alternative methods for scale-invariant object recognition under clutter / partial occlusion include the following. RIFT is a rotation-invariant generalization of SIFT. The RIFT descriptor is constructed using circular normalized patches divided into concentric rings of equal width and within each ring a gradient orientation histogram is computed. To maintain rotation invariance, the orientation is measured at each point relative to the direction pointing outward from the center. RootSIFT is a variant of SIFT that modifies descriptor normalization. Because SIFT descriptors are histograms (and so are probability distributions), Euclidean distance is not an accurate way to measure their similarity. Better similarity metrics turn out to be ones tailored to probability distributions, such as Bhattacharyya coefficient (also called Hellinger kernel). For this purpose, the originally formula_44-normalized descriptor is first formula_45-normalized and the square root of each element is computed, followed by formula_44-renormalization. After these algebraic manipulations, RootSIFT descriptors can be normally compared using Euclidean distance, which is equivalent to using the Hellinger kernel on the original SIFT descriptors. This normalization scheme termed “L1-sqrt” was previously introduced for the block normalization of HOG features whose rectangular block arrangement descriptor variant (R-HOG) is conceptually similar to the SIFT descriptor. G-RIF: Generalized Robust Invariant Feature is a general context descriptor which encodes edge orientation, edge density and hue information in a unified form combining perceptual information with spatial encoding. The object recognition scheme uses neighboring context based voting to estimate object models. "SURF: Speeded Up Robust Features" is a high-performance scale- and rotation-invariant interest point detector / descriptor claimed to approximate or even outperform previously proposed schemes with respect to repeatability, distinctiveness, and robustness. SURF relies on integral images for image convolutions to reduce computation time, builds on the strengths of the leading existing detectors and descriptors (using a fast Hessian matrix-based measure for the detector and a distribution-based descriptor). It describes a distribution of Haar wavelet responses within the interest point neighborhood. Integral images are used for speed and only 64 dimensions are used reducing the time for feature computation and matching. The indexing step is based on the sign of the Laplacian, which increases the matching speed and the robustness of the descriptor. PCA-SIFT and GLOH are variants of SIFT. 
PCA-SIFT descriptor is a vector of image gradients in x and y direction computed within the support region. The gradient region is sampled at 39×39 locations, therefore the vector is of dimension 3042. The dimension is reduced to 36 with PCA. Gradient location-orientation histogram (GLOH) is an extension of the SIFT descriptor designed to increase its robustness and distinctiveness. The SIFT descriptor is computed for a log-polar location grid with three bins in radial direction (the radius set to 6, 11, and 15) and 8 in angular direction, which results in 17 location bins. The central bin is not divided in angular directions. The gradient orientations are quantized in 16 bins resulting in 272-bin histogram. The size of this descriptor is reduced with PCA. The covariance matrix for PCA is estimated on image patches collected from various images. The 128 largest eigenvectors are used for description. Gauss-SIFT is a pure image descriptor defined by performing all image measurements underlying the pure image descriptor in SIFT by Gaussian derivative responses as opposed to derivative approximations in an image pyramid as done in regular SIFT. In this way, discretization effects over space and scale can be reduced to a minimum allowing for potentially more accurate image descriptors. In Lindeberg (2015) such pure Gauss-SIFT image descriptors were combined with a set of generalized scale-space interest points comprising the Laplacian of the Gaussian, the determinant of the Hessian, four new unsigned or signed Hessian feature strength measures as well as Harris-Laplace and Shi-and-Tomasi interests points. In an extensive experimental evaluation on a poster dataset comprising multiple views of 12 posters over scaling transformations up to a factor of 6 and viewing direction variations up to a slant angle of 45 degrees, it was shown that substantial increase in performance of image matching (higher efficiency scores and lower 1-precision scores) could be obtained by replacing Laplacian of Gaussian interest points by determinant of the Hessian interest points. Since difference-of-Gaussians interest points constitute a numerical approximation of Laplacian of the Gaussian interest points, this shows that a substantial increase in matching performance is possible by replacing the difference-of-Gaussians interest points in SIFT by determinant of the Hessian interest points. Additional increase in performance can furthermore be obtained by considering the unsigned Hessian feature strength measure formula_46. A quantitative comparison between the Gauss-SIFT descriptor and a corresponding Gauss-SURF descriptor did also show that Gauss-SIFT does generally perform significantly better than Gauss-SURF for a large number of different scale-space interest point detectors. This study therefore shows that discregarding discretization effects the pure image descriptor in SIFT is significantly better than the pure image descriptor in SURF, whereas the underlying interest point detector in SURF, which can be seen as numerical approximation to scale-space extrema of the determinant of the Hessian, is significantly better than the underlying interest point detector in SIFT. Wagner et al. developed two object recognition algorithms especially designed with the limitations of current mobile phones in mind. In contrast to the classic SIFT approach, Wagner et al. use the FAST corner detector for feature detection. 
The algorithm also distinguishes between the off-line preparation phase, where features are created at different scale levels, and the on-line phase, where features are only created at the current fixed scale level of the phone's camera image. In addition, features are created from a fixed patch size of 15×15 pixels and form a SIFT descriptor with only 36 dimensions. The approach has been further extended by integrating a scalable vocabulary tree in the recognition pipeline. This allows the efficient recognition of a larger number of objects on mobile phones. The approach is mainly restricted by the amount of available RAM.
KAZE and A-KAZE (KAZE Features and Accelerated-KAZE Features) are 2D feature detection and description methods that perform better than SIFT and SURF. They have gained popularity largely due to their open-source code. KAZE was originally developed by Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison.
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\begin{bmatrix} u \\\\ v \\end{bmatrix} = \\begin{bmatrix} m_1 & m_2 \\\\ m_3 & m_4 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} + \\begin{bmatrix} t_x \\\\ t_y \\end{bmatrix}\n" }, { "math_id": 1, "text": "\n\\begin{bmatrix} x & y & 0 & 0 & 1 & 0 \\\\ 0 & 0 & x & y & 0 & 1 \\\\ ....\\\\ ....\\end{bmatrix} \\begin{bmatrix}m1 \\\\ m2 \\\\ m3 \\\\ m4 \\\\ t_x \\\\ t_y \\end{bmatrix} = \\begin{bmatrix} u \\\\ v \\\\ . \\\\ . \\end{bmatrix}\n" }, { "math_id": 2, "text": "A\\hat{\\mathbf{x}} \\approx \\mathbf{b}," }, { "math_id": 3, "text": "\\hat{\\mathbf{x}}" }, { "math_id": 4, "text": " A^T \\! A \\hat{\\mathbf{x}} = A^T \\mathbf{b}. " }, { "math_id": 5, "text": "(A^TA)^{-1}A^T" }, { "math_id": 6, "text": " \\hat{\\mathbf{x}} = (A^T\\!A)^{-1} A^T \\mathbf{b}. " }, { "math_id": 7, "text": "D \\left( x, y, \\sigma \\right)" }, { "math_id": 8, "text": "D \\left( x, y, \\sigma \\right) = L \\left( x, y, k_i\\sigma \\right) - L \\left( x, y, k_j\\sigma \\right)" }, { "math_id": 9, "text": "L \\left( x, y, k\\sigma \\right)" }, { "math_id": 10, "text": "I \\left( x, y \\right)" }, { "math_id": 11, "text": "G \\left( x, y, k\\sigma \\right)" }, { "math_id": 12, "text": "k\\sigma" }, { "math_id": 13, "text": "L \\left( x, y, k\\sigma \\right) = G \\left( x, y, k\\sigma \\right) * I \\left( x, y \\right)" }, { "math_id": 14, "text": "k_i\\sigma" }, { "math_id": 15, "text": "k_j\\sigma" }, { "math_id": 16, "text": "\\sigma" }, { "math_id": 17, "text": "k_i" }, { "math_id": 18, "text": "D(\\textbf{x}) = D + \\frac{\\partial D}{\\partial \\textbf{x}}^T\\textbf{x} + \\frac{1}{2}\\textbf{x}^T \\frac{\\partial^2 D}{\\partial \\textbf{x}^2} \\textbf{x}" }, { "math_id": 19, "text": "\\textbf{x} = \\left( x, y, \\sigma \\right)^T" }, { "math_id": 20, "text": "\\hat{\\textbf{x}}" }, { "math_id": 21, "text": "\\textbf{x}" }, { "math_id": 22, "text": "0.5" }, { "math_id": 23, "text": "D(\\textbf{x})" }, { "math_id": 24, "text": "0.03" }, { "math_id": 25, "text": "\\textbf{y} + \\hat{\\textbf{x}}" }, { "math_id": 26, "text": "\\textbf{y}" }, { "math_id": 27, "text": " \\textbf{H} = \\begin{bmatrix}\n D_{xx} & D_{xy} \\\\\n D_{xy} & D_{yy}\n\\end{bmatrix} " }, { "math_id": 28, "text": "\\alpha" }, { "math_id": 29, "text": "\\beta" }, { "math_id": 30, "text": "r = \\alpha/\\beta" }, { "math_id": 31, "text": "D_{xx} + D_{yy}" }, { "math_id": 32, "text": "D_{xx} D_{yy} - D_{xy}^2" }, { "math_id": 33, "text": " \\text{R} = \\operatorname{Tr}(\\textbf{H})^2 / \\operatorname{Det}(\\textbf{H})" }, { "math_id": 34, "text": "(r+1)^2/r" }, { "math_id": 35, "text": "r_{\\text{th}}" }, { "math_id": 36, "text": "(r_{\\text{th}} + 1)^2/r_{\\text{th}}" }, { "math_id": 37, "text": "r_{\\text{th}} = 10" }, { "math_id": 38, "text": "L \\left( x, y, \\sigma \\right)" }, { "math_id": 39, "text": "L \\left( x, y \\right)" }, { "math_id": 40, "text": "m \\left( x, y \\right)" }, { "math_id": 41, "text": "\\theta \\left( x, y \\right)" }, { "math_id": 42, "text": "m \\left( x, y \\right) = \\sqrt{\\left( L \\left( x+1, y \\right) - L \\left( x-1, y \\right) \\right)^2 + \\left( L \\left( x, y+1 \\right) - L \\left( x, y-1 \\right) \\right)^2}" }, { "math_id": 43, "text": "\\theta \\left( x, y \\right) = \\mathrm{atan2}\\left(L \\left( x, y+1 \\right) - L \\left( x, y-1 \\right), L \\left( x+1, y \\right) - L \\left( x-1, y \\right) \\right)" }, { "math_id": 44, "text": "\\ell^2" }, { "math_id": 45, "text": "\\ell^1" }, { "math_id": 46, "text": "D_1 L = \\operatorname{det} H L - k \\, 
\\operatorname{trace}^2 H L \\, \\mbox{if} \\operatorname{det} H L - k \\, \\operatorname{trace}^2 H L >0 \\, \\mbox{or 0 otherwise}" } ]
https://en.wikipedia.org/wiki?curid=1208345
12083818
Filling area conjecture
In differential geometry, Mikhail Gromov's filling area conjecture asserts that the hemisphere has minimum area among the orientable surfaces that fill a closed curve of given length without introducing shortcuts between its points. Definitions and statement of the conjecture. Every smooth surface "M" or curve in Euclidean space is a metric space, in which the (intrinsic) distance "d""M"("x","y") between two points "x", "y" of "M" is defined as the infimum of the lengths of the curves that go from "x" to "y" "along" "M". For example, on a closed curve formula_0 of length 2"L", for each point "x" of the curve there is a unique other point of the curve (called the antipodal of "x") at distance "L" from "x". A compact surface "M" fills a closed curve "C" if its border (also called boundary, denoted ∂"M") is the curve "C". The filling "M" is said to be isometric if for any two points "x", "y" of the boundary curve "C", the distance "d""M"("x","y") between them along "M" is the same as (and not less than) the distance "d""C"("x","y") along the boundary. In other words, to fill a curve isometrically is to fill it without introducing shortcuts. Question: "How small can the area of a surface be, if it isometrically fills its boundary curve of given length?" For example, in three-dimensional Euclidean space, the circle formula_1 (of length 2π) is filled by the flat disk formula_2, which is not an isometric filling, because any straight chord along it is a shortcut. In contrast, the hemisphere formula_3 is an isometric filling of the same circle "C", and it has twice the area of the flat disk. Is this the minimum possible area? The surface can be imagined as made of a flexible but non-stretchable material that allows it to be moved around and bent in Euclidean space. None of these transformations modifies the area of the surface or the length of the curves drawn on it, which are the magnitudes relevant to the problem. The surface can be removed from Euclidean space altogether, obtaining a Riemannian surface, which is an abstract smooth surface with a Riemannian metric that encodes the lengths and area. Conversely, according to the Nash–Kuiper theorem, any Riemannian surface with boundary can be embedded in Euclidean space preserving the lengths and area specified by the Riemannian metric. Thus the filling problem can be stated equivalently as a question about Riemannian surfaces that are not placed in Euclidean space in any particular way. Conjecture (Gromov's filling area conjecture, 1983): "The hemisphere has minimum area among the orientable compact Riemannian surfaces that isometrically fill their boundary curve of given length." Gromov's proof for the case of Riemannian disks. In the same paper where Gromov stated the conjecture, he proved that "the hemisphere has least area among the Riemannian surfaces that isometrically fill a circle of given length, and are homeomorphic to a disk." Proof: Let formula_4 be a Riemannian disk that isometrically fills its boundary of length formula_5. Glue each point formula_6 to its antipodal point formula_7, defined as the unique point of formula_8 that is at the maximum possible distance formula_9 from formula_10. Gluing in this way, we obtain a closed Riemannian surface formula_11 that is homeomorphic to the real projective plane and whose systole (the length of the shortest non-contractible curve) is equal to formula_9.
(And reciprocally, if we cut open a projective plane along a shortest noncontractible loop of length formula_9, we obtain a disk that fills isometrically its boundary of length formula_5.) Thus the minimum area that the isometric filling formula_4 can have is equal to the minimum area that a Riemannian projective plane of systole formula_9 can have. But then Pu's systolic inequality asserts precisely that a Riemannian projective plane of given systole has minimum area if and only if it is round (that is, obtained from a Euclidean sphere by identifying each point with its opposite). The area of this round projective plane equals the area of the hemisphere (because each of them has half the area of the sphere). The proof of Pu's inequality relies, in turn, on the uniformization theorem. Fillings with Finsler metrics. In 2001, Sergei Ivanov presented another way to prove that the hemisphere has smallest area among isometric fillings homeomorphic to a disk. His argument does not employ the uniformization theorem and is based instead on the topological fact that two curves on a disk must cross if their four endpoints are on the boundary and interlaced. Moreover, Ivanov's proof applies more generally to disks with Finsler metrics, which differ from Riemannian metrics in that they need not satisfy the Pythagorean equation at the infinitesimal level. The area of a Finsler surface can be defined in various inequivalent ways, and the one employed here is the Holmes–Thompson area, which coincides with the usual area when the metric is Riemannian. What Ivanov proved is that "The hemisphere has minimum Holmes–Thompson area among Finsler disks that isometrically fill a closed curve of given length." Unlike the Riemannian case, there is a great variety of Finsler disks that isometrically fill a closed curve and have the same Holmes–Thompson area as the hemisphere. If the Hausdorff area is used instead, then the minimality of the hemisphere still holds, but the hemisphere becomes the unique minimizer. This follows from Ivanov's theorem since the Hausdorff area of a Finsler manifold is never less than the Holmes–Thompson area, and the two areas are equal if and only if the metric is Riemannian. Non-minimality of the hemisphere among rational fillings with Finsler metrics. A Euclidean disk that fills a circle can be replaced, without decreasing the distances between boundary points, by a Finsler disk that fills the same circle "N"=10 times (in the sense that its boundary wraps around the circle "N" times), but whose Holmes–Thompson area is less than "N" times the area of the disk. For the hemisphere, a similar replacement can be found. In other words, the filling area conjecture is false if Finsler 2-chains with "rational coefficients" are allowed as fillings, instead of orientable surfaces (which can be considered as 2-chains with "integer coefficients"). Riemannian fillings of genus one and hyperellipticity. An orientable Riemannian surface of genus one that isometrically fills the circle cannot have less area than the hemisphere. The proof in this case again starts by gluing antipodal points of the boundary. The non-orientable closed surface obtained in this way has an orientable double cover of genus two, and is therefore hyperelliptic. The proof then exploits a formula by J. Hersch from integral geometry. Namely, consider the family of figure-8 loops on a football, with the self-intersection point at the equator. 
Hersch's formula expresses the area of a metric in the conformal class of the football as an average of the energies of the figure-8 loops from the family. An application of Hersch's formula to the hyperelliptic quotient of the Riemann surface proves the filling area conjecture in this case. Almost flat manifolds are minimal fillings of their boundary distances. If a Riemannian manifold "M" (of any dimension) is almost flat (more precisely, "M" is a region of formula_12 with a Riemannian metric that is formula_13-near the standard Euclidean metric), then "M" is a volume minimizer: it cannot be replaced by an orientable Riemannian manifold that fills the same boundary and has less volume without reducing the distance between some boundary points. This implies that if a piece of a sphere is sufficiently small (and therefore nearly flat), then it is a volume minimizer. If this theorem can be extended to large regions (namely, to the whole hemisphere), then the filling area conjecture is true. It has been conjectured that all simple Riemannian manifolds (those that are convex at their boundary, and where every two points are joined by a unique geodesic) are volume minimizers. The proof that each almost flat manifold "M" is a volume minimizer involves embedding "M" in formula_14, and then showing that any isometric replacement of "M" can also be mapped into the same space formula_15, and projected onto "M", without increasing its volume. This implies that the replacement has at least as much volume as the original manifold "M". References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " C " }, { "math_id": 1, "text": " C = \\{(x,y,0):\\ x^2+y^2=1\\} " }, { "math_id": 2, "text": " D = \\{(x,y,0):\\ x^2+y^2\\leq 1\\} " }, { "math_id": 3, "text": " H = \\{(x,y,z):\\ x^2+y^2+z^2=1\\text{ and }z\\geq 0\\}" }, { "math_id": 4, "text": " M " }, { "math_id": 5, "text": " 2L " }, { "math_id": 6, "text": " x\\in \\partial M" }, { "math_id": 7, "text": " -x " }, { "math_id": 8, "text": " \\partial M " }, { "math_id": 9, "text": " L " }, { "math_id": 10, "text": " x " }, { "math_id": 11, "text": " M' " }, { "math_id": 12, "text": " \\mathbb R^n " }, { "math_id": 13, "text": " C^2 " }, { "math_id": 14, "text": " L^\\infty(\\partial M)" }, { "math_id": 15, "text": " L^\\infty (\\partial M)" } ]
https://en.wikipedia.org/wiki?curid=12083818
1208391
Primary decomposition
In algebra, expression of an ideal as the intersection of ideals of a specific type In mathematics, the Lasker–Noether theorem states that every Noetherian ring is a Lasker ring, which means that every ideal can be decomposed as an intersection, called primary decomposition, of finitely many "primary ideals" (which are related to, but not quite the same as, powers of prime ideals). The theorem was first proven by Emanuel Lasker (1905) for the special case of polynomial rings and convergent power series rings, and was proven in its full generality by Emmy Noether (1921). The Lasker–Noether theorem is an extension of the fundamental theorem of arithmetic, and more generally of the fundamental theorem of finitely generated abelian groups, to all Noetherian rings. The theorem plays an important role in algebraic geometry, by asserting that every algebraic set may be uniquely decomposed into a finite union of irreducible components. It has a straightforward extension to modules stating that every submodule of a finitely generated module over a Noetherian ring is a finite intersection of primary submodules. This contains the case for rings as a special case, considering the ring as a module over itself, so that ideals are submodules. This also generalizes the primary decomposition form of the structure theorem for finitely generated modules over a principal ideal domain, and for the special case of polynomial rings over a field, it generalizes the decomposition of an algebraic set into a finite union of (irreducible) varieties. The first algorithm for computing primary decompositions for polynomial rings over a field of characteristic 0 was published by Noether's student Grete Hermann (1926). The decomposition does not hold in general for non-commutative Noetherian rings. Noether gave an example of a non-commutative Noetherian ring with a right ideal that is not an intersection of primary ideals. Primary decomposition of an ideal. Let formula_0 be a Noetherian commutative ring. An ideal formula_1 of formula_0 is called primary if it is a proper ideal and for each pair of elements formula_2 and formula_3 in formula_0 such that formula_4 is in formula_1, either formula_2 or some power of formula_3 is in formula_1; equivalently, every zero-divisor in the quotient formula_5 is nilpotent. The radical of a primary ideal formula_6 is a prime ideal and formula_6 is said to be formula_7-primary for formula_8. Let formula_1 be an ideal in formula_0. Then formula_1 has an irredundant primary decomposition into primary ideals: formula_9. Irredundancy means that removing any one of the formula_10 changes the intersection, that is, for each formula_11 we have formula_12, and that the prime ideals formula_13 are all distinct. Moreover, this decomposition is unique in the following two ways: the set formula_14 is uniquely determined by formula_1, and if formula_15 is a minimal element of this set, then formula_10 is uniquely determined by formula_1; in fact, formula_10 is the pre-image of formula_16 under the localization map formula_17. Primary ideals which correspond to non-minimal prime ideals over formula_1 are in general not unique (see an example below). For the existence of the decomposition, see Primary decomposition from associated primes below. The elements of formula_14 are called the prime divisors of formula_1 or the primes belonging to formula_1. In the language of module theory, as discussed below, the set formula_14 is also the set of associated primes of the formula_0-module formula_5. Explicitly, this means that there exist elements formula_18 in formula_0 such that formula_19 By way of shortcut, some authors call an associated prime of formula_5 simply an associated prime of formula_1 (note that this practice conflicts with the usage in module theory). In the case of the ring of integers formula_20, the Lasker–Noether theorem is equivalent to the fundamental theorem of arithmetic.
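For instance, the ideal generated by 12 in the ring of integers decomposes as

\[ \langle 12 \rangle \;=\; \langle 4 \rangle \cap \langle 3 \rangle \;=\; \langle 2^{2} \rangle \cap \langle 3 \rangle , \]

an irredundant intersection of primary ideals whose radicals are the prime ideals generated by 2 and by 3; dropping either component would enlarge the intersection. The general statement is as follows.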
If an integer formula_21 has prime factorization formula_22, then the primary decomposition of the ideal formula_23 generated by formula_21 in formula_20 is formula_24 Similarly, in a unique factorization domain, if an element has a prime factorization formula_25 where formula_26 is a unit, then the primary decomposition of the principal ideal generated by formula_27 is formula_28 Examples. The examples of this section are designed to illustrate some properties of primary decompositions which may appear surprising or counter-intuitive. All examples are ideals in a polynomial ring over a field "k". Intersection vs. product. The primary decomposition in formula_29 of the ideal formula_30 is formula_31 Because of the generator of degree one, "I" is not the product of two larger ideals. A similar example is given, in two indeterminates, by formula_32 Primary vs. prime power. In formula_33, the ideal formula_34 is a primary ideal that has formula_35 as associated prime. It is not a power of its associated prime. Non-uniqueness and embedded prime. For every positive integer "n", a primary decomposition in formula_33 of the ideal formula_36 is formula_37 The associated primes are formula_38 Example: Let "N" = "R" = "k"["x", "y"] for some field "k", and let "M" be the ideal ("xy", "y"^2). Then "M" has two different minimal primary decompositions "M" = ("y") ∩ ("x", "y"^2) = ("y") ∩ ("x" + "y", "y"^2). The minimal prime is ("y") and the embedded prime is ("x", "y"). Non-associated prime between two associated primes. In formula_39 the ideal formula_40 has the (non-unique) primary decomposition formula_41 The associated prime ideals are formula_42 and formula_43 is a non-associated prime ideal such that formula_44 A complicated example. Except for very simple examples, a primary decomposition may be hard to compute and may have a very complicated output. The following example has been designed to provide such a complicated output while nevertheless being accessible to hand-written computation. Let formula_45 be two homogeneous polynomials in "x", "y", whose coefficients formula_46 are polynomials in other indeterminates formula_47 over a field "k". That is, "P" and "Q" belong to formula_48 and it is in this ring that a primary decomposition of the ideal formula_49 is sought. For computing the primary decomposition, we suppose first that 1 is a greatest common divisor of "P" and "Q". This condition implies that "I" has no primary component of height one. As "I" is generated by two elements, this implies that it is a complete intersection (more precisely, it defines an algebraic set, which is a complete intersection), and thus all primary components have height two. Therefore, the associated primes of "I" are exactly the prime ideals of height two that contain "I". It follows that formula_50 is an associated prime of "I". Let formula_51 be the homogeneous resultant in "x", "y" of "P" and "Q". As the greatest common divisor of "P" and "Q" is a constant, the resultant "D" is not zero, and resultant theory implies that "I" contains all products of "D" by a monomial in "x", "y" of degree "m" + "n" – 1. As formula_52, all these monomials belong to the primary component contained in formula_53 This primary component contains "P" and "Q", and the behavior of primary decompositions under localization shows that this primary component is formula_54 In short, we have a primary component, with the very simple associated prime formula_55, such that all its generating sets involve all indeterminates.
The other primary component contains "D". One may prove that if "P" and "Q" are sufficiently generic (for example if the coefficients of "P" and "Q" are distinct indeterminates), then there is only one other primary component, which is a prime ideal, and it is generated by "P", "Q" and "D". Geometric interpretation. In algebraic geometry, an affine algebraic set "V"("I") is defined as the set of the common zeros of an ideal "I" of a polynomial ring formula_56 An irredundant primary decomposition formula_57 of "I" defines a decomposition of "V"("I") into a union of algebraic sets "V"("Q""i"), which are irreducible, as not being the union of two smaller algebraic sets. If formula_58 is the associated prime of formula_10, then formula_59 and the Lasker–Noether theorem shows that "V"("I") has a unique irredundant decomposition into irreducible algebraic varieties formula_60 where the union is restricted to minimal associated primes. These minimal associated primes are the primary components of the radical of "I". For this reason, the primary decomposition of the radical of "I" is sometimes called the "prime decomposition" of "I". The components of a primary decomposition (as well as of the algebraic set decomposition) corresponding to minimal primes are said to be "isolated", and the others are said to be "embedded". For the decomposition of algebraic varieties, only the minimal primes are interesting, but in intersection theory, and more generally in scheme theory, the complete primary decomposition has a geometric meaning. Primary decomposition from associated primes. Nowadays, it is common to do primary decomposition of ideals and modules within the theory of associated primes. Bourbaki's influential textbook "Algèbre commutative", in particular, takes this approach. Let "R" be a ring and "M" a module over it. By definition, an associated prime is a prime ideal which is the annihilator of a nonzero element of "M"; that is, formula_61 for some formula_62 (this implies formula_63). Equivalently, a prime ideal formula_7 is an associated prime of "M" if there is an injection of "R"-modules formula_64. A maximal element of the set of annihilators of nonzero elements of "M" can be shown to be a prime ideal and thus, when "R" is a Noetherian ring, there exists an associated prime of "M" if and only if "M" is nonzero. The set of associated primes of "M" is denoted by formula_65 or formula_66. Directly from the definition, if formula_67, then formula_68; for an exact sequence formula_69, one has formula_70; and formula_71, where formula_72 denotes the support of a module. Moreover, when "R" is a Noetherian ring, the set of minimal elements of formula_73 is the same as the set of minimal elements of formula_66. If "M" is a finitely generated module over "R", then there is a finite ascending sequence of submodules formula_74 such that each quotient "M""i" /"M""i−1" is isomorphic to formula_75 for some prime ideals formula_76, each of which is necessarily in the support of "M". Moreover every associated prime of "M" occurs among the set of primes formula_76; i.e., formula_77. (In general, these inclusions are not equalities.) In particular, formula_66 is a finite set when "M" is finitely generated. Let formula_78 be a finitely generated module over a Noetherian ring "R" and "N" a submodule of "M". Given formula_79, the set of associated primes of formula_80, there exist submodules formula_81 such that formula_82 and formula_83 A submodule "N" of "M" is called "formula_7-primary" if formula_85. A submodule of the "R"-module "R" is formula_7-primary as a submodule if and only if it is a formula_7-primary ideal; thus, when formula_86, the above decomposition is precisely a primary decomposition of an ideal.
Taking formula_84, the above decomposition says that the set of associated primes of a finitely generated module "M" is the same as formula_87 when formula_88 (without finite generation, there can be infinitely many associated primes). Properties of associated primes. Let formula_0 be a Noetherian ring. Then the following properties hold. The set of zero-divisors on an formula_0-module "M", that is, the elements "r" for which the map formula_89 fails to be injective, is the union of the associated primes of "M". Given an formula_0-module "M" and a subset formula_90, there exists a submodule formula_91 such that formula_92 and formula_93. If formula_94 is a multiplicatively closed subset and formula_95 denotes the set of prime ideals of formula_0 not meeting formula_96, then, for any formula_0-module "M", formula_97 is a bijection and formula_98. Any prime ideal that is minimal with respect to containing an ideal "J" belongs to formula_99 These primes are precisely the isolated primes of "J". A module "M" over formula_0 has finite length if and only if "M" is finitely generated and formula_100 consists of maximal ideals. Finally, if formula_101 is a ring homomorphism between Noetherian rings and "F" is a "B"-module that is flat over "A", then, for each "A"-module "E", formula_102. Non-Noetherian case. The next theorem gives necessary and sufficient conditions for a ring to have primary decompositions for its ideals. The proof is given in Chapter 4 of Atiyah–Macdonald as a series of exercises. There is the following uniqueness theorem for an ideal having a primary decomposition. Now, for any commutative ring "R", an ideal "I" and a minimal prime "P" over "I", the pre-image of "I" "R""P" under the localization map is the smallest "P"-primary ideal containing "I". Thus, in the setting of the preceding theorem, the primary ideal "Q" corresponding to a minimal prime "P" is also the smallest "P"-primary ideal containing "I" and is called the "P"-primary component of "I". For example, if the power "P""n" of a prime "P" has a primary decomposition, then its "P"-primary component is the "n"-th symbolic power of "P". Additive theory of ideals. This result is the first in an area now known as the additive theory of ideals, which studies the ways of representing an ideal as the intersection of a special class of ideals. The decision on the "special class", e.g., primary ideals, is a problem in itself. In the case of non-commutative rings, the class of tertiary ideals is a useful substitute for the class of primary ideals. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "I" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "xy" }, { "math_id": 5, "text": "R/I" }, { "math_id": 6, "text": "Q" }, { "math_id": 7, "text": "\\mathfrak{p}" }, { "math_id": 8, "text": "\\mathfrak{p} = \\sqrt{Q}" }, { "math_id": 9, "text": "I = Q_1 \\cap \\cdots \\cap Q_n\\ " }, { "math_id": 10, "text": "Q_i" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "\\cap_{j \\ne i} Q_j \\not\\subset Q_i" }, { "math_id": 13, "text": "\\sqrt{Q_i}" }, { "math_id": 14, "text": "\\{ \\sqrt{Q_i} \\mid i \\}" }, { "math_id": 15, "text": "\\mathfrak{p} = \\sqrt{Q_i}" }, { "math_id": 16, "text": "I R_{\\mathfrak{p}}" }, { "math_id": 17, "text": "R \\to R_{\\mathfrak{p}}" }, { "math_id": 18, "text": "g_1, \\dots, g_n" }, { "math_id": 19, "text": "\\sqrt{Q_i} = \\{ f \\in R \\mid fg_i \\in I \\}." }, { "math_id": 20, "text": "\\mathbb Z" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "n = \\pm p_1^{d_1} \\cdots p_r^{d_r}" }, { "math_id": 23, "text": "\\langle n \\rangle" }, { "math_id": 24, "text": "\\langle n\\rangle = \\langle p_1^{d_1} \\rangle \\cap \\cdots \\cap \\langle p_r^{d_r}\\rangle." }, { "math_id": 25, "text": "f = u p_1^{d_1} \\cdots p_r^{d_r}," }, { "math_id": 26, "text": "u" }, { "math_id": 27, "text": "f" }, { "math_id": 28, "text": "\\langle f\\rangle = \\langle p_1^{d_1} \\rangle \\cap \\cdots \\cap \\langle p_r^{d_r}\\rangle." }, { "math_id": 29, "text": "k[x,y,z]" }, { "math_id": 30, "text": "I=\\langle x,yz \\rangle" }, { "math_id": 31, "text": "I = \\langle x,yz \\rangle = \\langle x,y \\rangle \\cap \\langle x,z \\rangle." }, { "math_id": 32, "text": "I = \\langle x,y(y+1) \\rangle = \\langle x,y \\rangle \\cap \\langle x,y+1 \\rangle." }, { "math_id": 33, "text": "k[x,y]" }, { "math_id": 34, "text": "\\langle x,y^2 \\rangle" }, { "math_id": 35, "text": "\\langle x,y \\rangle" }, { "math_id": 36, "text": "I=\\langle x^2, xy \\rangle" }, { "math_id": 37, "text": "I = \\langle x^2,xy \\rangle = \\langle x \\rangle \\cap \\langle x^2, xy, y^n \\rangle." }, { "math_id": 38, "text": "\\langle x \\rangle \\subset \\langle x,y \\rangle." }, { "math_id": 39, "text": "k[x,y,z]," }, { "math_id": 40, "text": "I=\\langle x^2, xy, xz \\rangle" }, { "math_id": 41, "text": "I = \\langle x^2,xy, xz \\rangle = \\langle x \\rangle \\cap \\langle x^2, y^2, z^2, xy, xz, yz \\rangle." }, { "math_id": 42, "text": "\\langle x \\rangle \\subset \\langle x,y,z \\rangle," }, { "math_id": 43, "text": "\\langle x, y \\rangle" }, { "math_id": 44, "text": "\\langle x \\rangle \\subset \\langle x,y \\rangle \\subset \\langle x,y,z \\rangle." }, { "math_id": 45, "text": "\n\\begin {align}\nP&=a_0x^m + a_1x^{m-1}y +\\cdots +a_my^m \\\\\nQ&=b_0x^n + b_1x^{n-1}y +\\cdots +b_ny^n\n\\end {align}" }, { "math_id": 46, "text": "a_1, \\ldots, a_m, b_0, \\ldots, b_n" }, { "math_id": 47, "text": "z_1, \\ldots, z_h" }, { "math_id": 48, "text": "R=k[x,y,z_1, \\ldots, z_h]," }, { "math_id": 49, "text": "I=\\langle P,Q\\rangle" }, { "math_id": 50, "text": "\\langle x,y\\rangle" }, { "math_id": 51, "text": "D\\in k[z_1, \\ldots, z_h]" }, { "math_id": 52, "text": "D\\not\\in \\langle x,y\\rangle," }, { "math_id": 53, "text": "\\langle x,y\\rangle." }, { "math_id": 54, "text": "\\{t|\\exists e, D^et \\in I\\}." }, { "math_id": 55, "text": "\\langle x,y\\rangle," }, { "math_id": 56, "text": "R=k[x_1,\\ldots, x_n]." 
}, { "math_id": 57, "text": "I=Q_1\\cap\\cdots\\cap Q_r" }, { "math_id": 58, "text": "P_i" }, { "math_id": 59, "text": "V(P_i)=V(Q_i)," }, { "math_id": 60, "text": "V(I)=\\bigcup V(P_i)," }, { "math_id": 61, "text": "\\mathfrak{p} = \\operatorname{Ann}(m)" }, { "math_id": 62, "text": "m\\in M" }, { "math_id": 63, "text": "m \\ne 0" }, { "math_id": 64, "text": "R/\\mathfrak{p} \\hookrightarrow M" }, { "math_id": 65, "text": "\\operatorname{Ass}_R(M)" }, { "math_id": 66, "text": "\\operatorname{Ass}(M)" }, { "math_id": 67, "text": "M = \\bigoplus_i M_i" }, { "math_id": 68, "text": "\\operatorname{Ass}(M) = \\bigcup_i \\operatorname{Ass}(M_i)" }, { "math_id": 69, "text": "0 \\to N \\to M \\to L \\to 0" }, { "math_id": 70, "text": "\\operatorname{Ass}(N) \\subset \\operatorname{Ass}(M) \\subset \\operatorname{Ass}(N) \\cup \\operatorname{Ass}(L)" }, { "math_id": 71, "text": "\\operatorname{Ass}(M) \\subset \\operatorname{Supp}(M)" }, { "math_id": 72, "text": "\\operatorname{Supp}" }, { "math_id": 73, "text": "\\operatorname{Supp}(M)" }, { "math_id": 74, "text": "0=M_0\\subsetneq M_1\\subsetneq\\cdots\\subsetneq M_{n-1}\\subsetneq M_n=M\\," }, { "math_id": 75, "text": "R/\\mathfrak{p}_i" }, { "math_id": 76, "text": "\\mathfrak{p}_i" }, { "math_id": 77, "text": "\\operatorname{Ass}(M) \\subset \\{ \\mathfrak{p}_1, \\dots, \\mathfrak{p}_n \\} \\subset \\operatorname{Supp}(M)" }, { "math_id": 78, "text": "M" }, { "math_id": 79, "text": "\\operatorname{Ass}(M/N) = \\{ \\mathfrak{p}_1, \\dots, \\mathfrak{p}_n \\}" }, { "math_id": 80, "text": "M/N" }, { "math_id": 81, "text": "Q_i \\subset M" }, { "math_id": 82, "text": "\\operatorname{Ass}(M/Q_i) = \\{ \\mathfrak{p}_i \\}" }, { "math_id": 83, "text": "N = \\bigcap_{i=1}^n Q_i." }, { "math_id": 84, "text": "N = 0" }, { "math_id": 85, "text": "\\operatorname{Ass}(M/N) = \\{ \\mathfrak{p} \\}" }, { "math_id": 86, "text": "M = R" }, { "math_id": 87, "text": "\\{ \\operatorname{Ass}(M/Q_i) | i \\}" }, { "math_id": 88, "text": "0 = \\cap_1^n Q_i" }, { "math_id": 89, "text": "m \\mapsto rm, M \\to M" }, { "math_id": 90, "text": "\\Phi \\subset \\operatorname{Ass}(M)" }, { "math_id": 91, "text": "N \\subset M" }, { "math_id": 92, "text": "\\operatorname{Ass}(N) = \\operatorname{Ass}(M) - \\Phi" }, { "math_id": 93, "text": "\\operatorname{Ass}(M/N) = \\Phi" }, { "math_id": 94, "text": "S \\subset R" }, { "math_id": 95, "text": "\\Phi" }, { "math_id": 96, "text": "S" }, { "math_id": 97, "text": "\\mathfrak{p} \\mapsto S^{-1}\\mathfrak{p}, \\, \\operatorname{Ass}_R(M)\\cap \\Phi \\to \\operatorname{Ass}_{S^{-1}R}(S^{-1} M)" }, { "math_id": 98, "text": "\\operatorname{Ass}_R(M)\\cap \\Phi = \\operatorname{Ass}_R(S^{-1}M)" }, { "math_id": 99, "text": "\\mathrm{Ass}_R(R/J)." }, { "math_id": 100, "text": "\\mathrm{Ass}(M)" }, { "math_id": 101, "text": "A \\to B" }, { "math_id": 102, "text": "\\operatorname{Ass}_B(E \\otimes_A F) = \\bigcup_{\\mathfrak{p} \\in \\operatorname{Ass}(E)} \\operatorname{Ass}_B(F/\\mathfrak{p}F)" } ]
https://en.wikipedia.org/wiki?curid=1208391
12083997
Delimited continuation
In programming languages, a delimited continuation, composable continuation or partial continuation, is a "slice" of a continuation frame that has been reified into a function. Unlike regular continuations, delimited continuations return a value, and thus may be reused and composed. Control delimiters, the basis of delimited continuations, were introduced by Matthias Felleisen in 1988, though early allusions to composable and delimited continuations can be found in Carolyn Talcott's Stanford 1984 dissertation, Felleisen "et al.", Felleisen's 1987 dissertation, and algorithms for functional backtracking, e.g., for pattern matching, for parsing, in the Algebraic Logic Functional programming language, and in the functional implementations of Prolog, where the failure continuation is often kept implicit and the success continuation exists precisely because it is composable. History. Delimited continuations were first introduced by Felleisen in 1988 with an operator called formula_0, first introduced in a tech report in 1987, along with a prompt construct formula_1. The operator was designed to be a generalization of control operators that had been described in the literature, such as codice_0 from Scheme, ISWIM's J operator, John C. Reynolds' codice_1 operator, and others. Subsequently, many competing delimited control operators were invented by the programming languages research community, such as codice_2 and codice_3, codice_4 and codice_5, codice_6, codice_7, and others. Examples. Various operators for delimited continuations have been proposed in the research literature. One independent proposal is based on continuation-passing style (CPS), i.e., not on continuation frames, and offers two control operators, codice_4 and codice_5, that give rise to static rather than dynamic delimited continuations. The codice_5 operator sets the limit for the continuation while the codice_4 operator captures or reifies the current continuation up to the innermost enclosing codice_5. For example, consider the following snippet in Scheme: (* 2 (reset (+ 1 (shift k (k 5))))) The codice_5 delimits the continuation that codice_4 captures (named by codice_15 in this example). When this snippet is executed, the use of codice_4 will bind codice_15 to the continuation codice_18 where codice_19 represents the part of the computation that is to be filled with a value. This continuation directly corresponds to the code that surrounds the codice_4 up to the codice_5. Because the body of shift (i.e., codice_22) immediately invokes the continuation, this code is equivalent to the following: (* 2 (+ 1 5)) In general, these operators can encode more interesting behavior by, for example, returning the captured continuation codice_15 as a value or invoking codice_15 multiple times. The codice_4 operator passes the captured continuation codice_15 to the code in its body, which can either invoke it, produce it as a result, or ignore it entirely. Whatever result codice_4 produces is provided to the innermost codice_5, discarding the continuation in between the codice_5 and codice_4. However, if the continuation is invoked, then it effectively re-installs the continuation after returning to the codice_5. When the entire computation within codice_5 is completed, the result is returned by the delimited continuation. For example, in this Scheme code: (reset (* 2 (shift k CODE))) whenever codice_33 invokes codice_34, codice_35 is evaluated and returned.
This is equivalent to the following: (let ((k (lambda (x) (* 2 x)))) CODE) Furthermore, once the entire computation within codice_4 is completed, the continuation is discarded, and execution restarts outside codice_5. Therefore, (reset (* 2 (shift k (k (k 4))))) invokes codice_38 first (which returns 8), and then codice_39 (which returns 16). At this point, the codice_4 expression has terminated, and the rest of the codice_5 expression is discarded. Therefore, the final result is 16. Everything that happens outside the codice_5 expression is hidden, i.e. not influenced by the control transfer. For example, this returns 17: (+ 1 (reset (* 2 (shift k (k (k 4)))))) Delimited continuations were first described independently by Felleisen "et al." and Johnson. They have since been used in a large number of domains, particularly in defining new control operators; see Queinnec for a survey. Let's take a look at a more complicated example. Let codice_43 be the empty list: (reset (begin (shift k (cons 1 (k (void)))) null)) The context captured by codice_4 is codice_45, where codice_46 is the hole where codice_15's parameter will be injected. The first call of codice_15 inside codice_4 evaluates to this context with codice_50 = codice_51 replacing the hole, so the value of codice_52 is codice_53 = codice_43. The body of codice_4, namely codice_56 = codice_57, becomes the overall value of the codice_5 expression as the final result. Making this example more complicated, add a line: (reset (begin (shift k (cons 1 (k (void)))) (shift k (cons 2 (k (void)))) null)) If we comment out the first codice_4, we already know the result: it is codice_60; so we may as well rewrite the expression like this: (reset (begin (shift k (cons 1 (k (void)))) (list 2))) This is pretty familiar, and can be rewritten as codice_61, that is, codice_62. We can define codice_63 using this trick: (define (yield x) (shift k (cons x (k (void))))) and use it in building lists: (reset (begin (yield 1) (yield 2) (yield 3) null)) ;; (list 1 2 3) If we replace codice_64 with codice_65, we can build lazy streams: (define (stream-yield x) (shift k (stream-cons x (k (void))))) (define lazy-example (reset (begin (stream-yield 1) (stream-yield 2) (stream-yield 3) stream-null))) We can generalize this and convert lists to streams, in one fell swoop: (define (list->stream xs) (reset (begin (for-each stream-yield xs) stream-null))) In the more complicated example below, the continuation can be safely wrapped into the body of a lambda, and used as such: (define (for-each->stream-maker for-each) (lambda (collection) (reset (begin (for-each (lambda (element) (shift k (stream-cons element (k 'ignored)))) collection) stream-null)))) The part between codice_5 and codice_4 includes control functions like codice_68 and codice_69; this is impossible to rephrase using lambdas. Delimited continuations are also useful in linguistics: see Continuations in linguistics for details. A worked-out illustration of the codice_70 idiom: the generalized curry function. The generalized curry function is given an uncurried function codice_71 and its arity (say, 3), and it returns the value of codice_72. This example is due to Olivier Danvy and was worked out in the mid-1980s.
Here is a unit-test function to illustrate what the generalized curry function is expected to do: (define test-curry (lambda (candidate) (and (= (candidate + 0) (+)) (= ((candidate + 1) 1) (+ 1)) (= (((candidate + 2) 1) 10) (+ 1 10)) (= ((((candidate + 3) 1) 10) 100) (+ 1 10 100)) (= (((((candidate + 4) 1) 10) 100) 1000) (+ 1 10 100 1000))))) These unit tests verify whether currying the variadic function codice_73 into an n-ary curried function and applying the result to n arguments yields the same result as applying codice_73 to these n arguments, for n = 0, 1, 2, 3, and 4. The following recursive function is accumulator-based and eventually reverses the accumulator before applying the given uncurried function. In each instance of the induction step, the function codice_75 is explicitly applied to an argument in the curried application: (define curry_a (lambda (f n) (if (< n 0) (error 'curry_a "negative input: ~s" n) (letrec ([visit (lambda (i a) (if (= i 0) (apply f (reverse a)) (lambda (v) (visit (- i 1) (cons v a)))))]) (visit n '()))))) For example, evaluating reduces to evaluating which reduces to evaluating which beta-reduces to evaluating which reduces to evaluating which beta-reduces to evaluating which reduces to evaluating which reduces to evaluating which is equivalent to which delta-reduces to the result, codice_76. The following recursive function is continuation-based and involves no list reversal. Likewise, in each instance of the induction step, the function codice_75 is explicitly applied to an argument in the curried application: (define curry_c (lambda (f n) (if (< n 0) (error 'curry_c "negative input: ~s" n) (letrec ([visit (lambda (i c) (if (= i 0) (c '()) (lambda (v) (visit (- i 1) (lambda (vs) (c (cons v vs)))))))]) (visit n (lambda (vs) (apply f vs))))))) So evaluating reduces to evaluating which reduces to evaluating which beta-reduces to evaluating which reduces to evaluating which beta-reduces to evaluating which reduces to evaluating which beta-reduces to evaluating which beta-reduces to evaluating which beta-reduces to evaluating which is equivalent to which delta-reduces to the result, codice_76. The following recursive function, codice_79, is the direct-style counterpart of codice_80 and features the codice_70 idiom, using Andrzej Filinski's implementation of shift and reset in terms of a global mutable cell and of codice_0. In each instance of the induction step, the continuation abstraction is implicitly applied to an argument in the curried application: (define curry_d (lambda (f n) (if (< n 0) (error 'curry_d "negative input: ~s" n) (letrec ([visit (lambda (i) (if (= i 0) '() (cons (shift k k) (visit (- i 1)))))]) (reset (apply f (visit n))))))) The heart of the matter is the observational equivalence between codice_83 and codice_84 where codice_85 is fresh and the ellipses represent a pure context, i.e., one without control effects. So evaluating reduces to evaluating which reduces to evaluating which is observationally equivalent to which beta-reduces to evaluating which reduces to evaluating which is observationally equivalent to which beta-reduces to evaluating which reduces to evaluating which is equivalent to which delta-reduces to evaluating which yields the result, codice_76. The definition of codice_79 also illustrates static delimited continuations.
This static extent needs to be explicitly encoded if one wants to use codice_3 and codice_2: (define curry_cp (lambda (f n) (if (< n 0) (error 'curry_cp "negative input: ~s" n) (letrec ([visit (lambda (i) (if (= i 0) '() (cons (control k (lambda (x) (prompt (k x)))) (visit (- i 1)))))]) (prompt (apply f (visit n))))))) References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathcal{F}" }, { "math_id": 1, "text": "\\#" } ]
https://en.wikipedia.org/wiki?curid=12083997
1208420
Three-body problem
Physics problem related to laws of motion and gravity In physics, specifically classical mechanics, the three-body problem involves taking the initial positions and velocities (or momenta) of three point masses that orbit each other in space and calculating their subsequent trajectories using Newton's laws of motion and Newton's law of universal gravitation. Unlike the two-body problem, the three-body problem has no general closed-form solution. When three bodies orbit each other, the resulting dynamical system is chaotic for most initial conditions, and the only way to predict the motions of the bodies is to calculate them using numerical methods. The three-body problem is a special case of the n-body problem. Historically, the first specific three-body problem to receive extended study was the one involving the Earth, the Moon, and the Sun. In an extended modern sense, a three-body problem is any problem in classical mechanics or quantum mechanics that models the motion of three particles. Mathematical description. The mathematical statement of the three-body problem can be given in terms of the Newtonian equations of motion for vector positions formula_0 of three gravitationally interacting bodies with masses formula_1: formula_2 where formula_3 is the gravitational constant. This is a set of nine second-order differential equations. The problem can also be stated equivalently in the Hamiltonian formalism, in which case it is described by a set of 18 first-order differential equations, one for each component of the positions formula_4 and momenta formula_5: formula_6 where formula_7 is the Hamiltonian: formula_8 In this case formula_7 is simply the total energy of the system, gravitational plus kinetic. Restricted three-body problem. In the "restricted three-body problem", a body of negligible mass (the "planetoid") moves under the influence of two massive bodies. Having negligible mass, the planetoid exerts force on the two massive bodies that may be neglected; therefore the resulting system can be analyzed and described as a two-body motion problem. With respect to a rotating reference frame, the two co-orbiting bodies are stationary, and the third can be stationary as well at the Lagrangian points, or move around them, for instance on a horseshoe orbit. It can be useful to consider the effective potential. Usually this two-body motion is taken to consist of circular orbits around the center of mass, and the planetoid is assumed to move in the plane defined by the circular orbits. The restricted three-body problem is easier to analyze theoretically than the full problem. It is of practical interest as well since it accurately describes many real-world problems, the most important example being the Earth–Moon–Sun system. For these reasons, it has occupied an important role in the historical development of the three-body problem. Mathematically, the problem is stated as follows. Let formula_9 be the masses of the two massive bodies, with (planar) coordinates formula_10 and formula_11, and let formula_12 be the coordinates of the planetoid. For simplicity, choose units such that the distance between the two massive bodies, as well as the gravitational constant, are both equal to formula_13. Then, the motion of the planetoid is given by formula_14 where formula_15. In this form the equations of motion carry an explicit time dependence through the coordinates formula_16. 
However, this time dependence can be removed through a transformation to a rotating reference frame, which simplifies any subsequent analysis. Solutions. General solution. There is no general closed-form solution to the three-body problem. In other words, it does not have a general solution that can be expressed in terms of a finite number of standard mathematical operations. Moreover, the motion of three bodies is generally non-repeating, except in special cases. However, in 1912 the Finnish mathematician Karl Fritiof Sundman proved that there exists an analytic solution to the three-body problem in the form of a Puiseux series, specifically a power series in terms of powers of "t"^(1/3). This series converges for all real "t", except for initial conditions corresponding to zero angular momentum. In practice, the latter restriction is insignificant since initial conditions with zero angular momentum are rare, having Lebesgue measure zero. An important issue in proving this result is the fact that the radius of convergence for this series is determined by the distance to the nearest singularity. Therefore, it is necessary to study the possible singularities of the three-body problem. As is briefly discussed below, the only singularities in the three-body problem are binary collisions (collisions between two particles at an instant) and triple collisions (collisions between three particles at an instant). Collisions, whether binary or triple, are somewhat improbable, since it has been shown that they correspond to a set of initial conditions of measure zero. However, no criterion is known that can be imposed on the initial state in order to avoid collisions for the corresponding solution. Sundman's strategy therefore consisted of the following steps: first, using an appropriate change of variables to continue analyzing the solution beyond a binary collision, in a process known as regularization; second, proving that triple collisions occur only when the angular momentum vanishes, so that by restricting the initial data to nonzero angular momentum all real singularities are removed from the transformed equations; third, showing that for nonzero angular momentum the system is strictly bounded away from a triple collision, which implies, by Cauchy's existence theorem for differential equations, that there are no complex singularities in a strip (whose width depends on the angular momentum) of the complex plane centered on the real axis; and finally, finding a conformal transformation that maps this strip into the unit disc, which, in terms of the regularized variable "s" and the half-width β of the strip, is given by formula_17. This finishes the proof of Sundman's theorem. The corresponding series converges extremely slowly. That is, obtaining a value of meaningful precision requires so many terms that this solution is of little practical use. Indeed, in 1930, David Beloriszky calculated that if Sundman's series were to be used for astronomical observations, then the computations would involve at least 10^8,000,000 terms. Special-case solutions. In 1767, Leonhard Euler found three families of periodic solutions in which the three masses are collinear at each instant. In 1772, Lagrange found a family of solutions in which the three masses form an equilateral triangle at each instant. Together with Euler's collinear solutions, these solutions form the central configurations for the three-body problem. These solutions are valid for any mass ratios, and the masses move on Keplerian ellipses. These four families are the only known solutions for which there are explicit analytic formulae. In the special case of the circular restricted three-body problem, these solutions, viewed in a frame rotating with the primaries, become points called Lagrangian points and labeled L1, L2, L3, L4, and L5, with L4 and L5 being symmetric instances of Lagrange's solution. In work summarized in 1892–1899, Henri Poincaré established the existence of an infinite number of periodic solutions to the restricted three-body problem, together with techniques for continuing these solutions into the general three-body problem. In 1893, Meissel stated what is now called the Pythagorean three-body problem: three masses in the ratio 3:4:5 are placed at rest at the vertices of a 3:4:5 right triangle, with the heaviest body at the right angle and the lightest at the smaller acute angle. Burrau further investigated this problem in 1913.
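Anticipating the numerical approaches discussed below, this Pythagorean configuration is straightforward to hand to a general-purpose integrator. The following Python sketch assumes NumPy and SciPy, takes G = 1 with the conventional vertex coordinates for Burrau's initial conditions, and integrates the Newtonian equations of motion formula_2; the end time and tolerances are illustrative choices only.

import numpy as np
from scipy.integrate import solve_ivp

# Pythagorean (Burrau) problem: masses 3, 4 and 5 start at rest at the vertices
# of a 3:4:5 right triangle, with the heaviest mass at the right angle.
# G = 1 and the vertex coordinates below are the conventional choice of units.
G = 1.0
m = np.array([3.0, 4.0, 5.0])
r0 = np.array([[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]])  # initial positions
v0 = np.zeros_like(r0)                                   # released from rest

def rhs(t, y):
    # Acceleration of body i is the sum over j != i of G*m_j*(r_j - r_i)/|r_j - r_i|^3.
    r = y[:6].reshape(3, 2)
    v = y[6:].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

y0 = np.concatenate([r0.ravel(), v0.ravel()])
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="DOP853", rtol=1e-12, atol=1e-12)
print(sol.y[:6, -1].reshape(3, 2))  # positions of the three bodies at the end time

Following the repeated close encounters of this system accurately requires very tight tolerances (or a regularized formulation), which is one practical reason why high-precision integration of such problems is computationally expensive.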
In 1967 Victor Szebehely and C. Frederick Peters established eventual escape of the lightest body for this problem using numerical integration, while at the same time finding a nearby periodic solution. In the 1970s, Michel Hénon and Roger A. Broucke each found a set of solutions that form part of the same family of solutions: the Broucke–Hénon–Hadjidemetriou family. In this family, the three objects all have the same mass and can exhibit both retrograde and direct forms. In some of Broucke's solutions, two of the bodies follow the same path. In 1993, physicist Cris Moore at the Santa Fe Institute found a zero angular momentum solution with three equal masses moving around a figure-eight shape. In 2000, mathematicians Alain Chenciner and Richard Montgomery proved its formal existence. The solution has been shown numerically to be stable for small perturbations of the mass and orbital parameters, which makes it possible for such orbits to be observed in the physical universe. But it has been argued that this is unlikely since the domain of stability is small. For instance, the probability of a binary–binary scattering event resulting in a figure-8 orbit has been estimated to be a small fraction of a percent. In 2013, physicists Milovan Šuvakov and Veljko Dmitrašinović at the Institute of Physics in Belgrade discovered 13 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2015, physicist Ana Hudomal discovered 14 new families of solutions for the equal-mass zero-angular-momentum three-body problem. In 2017, researchers Xiaoming Li and Shijun Liao found 669 new periodic orbits of the equal-mass zero-angular-momentum three-body problem. This was followed in 2018 by an additional 1,223 new solutions for a zero-angular-momentum system of unequal masses. In 2018, Li and Liao reported 234 solutions to the unequal-mass "free-fall" three-body problem. The free-fall formulation starts with all three bodies at rest. Because of this, the masses in a free-fall configuration do not orbit in a closed "loop", but travel forward and backward along an open "track". In 2023, Ivan Hristov, Radoslava Hristova, Dmitrašinović and Kiyotaka Tanikawa published a search for periodic free-fall orbits of the three-body problem, limited to the equal-mass case, and found 12,409 distinct solutions. Numerical approaches. Using a computer, the problem may be solved to arbitrarily high precision using numerical integration, although high precision requires a large amount of CPU time. There have been attempts to create computer programs that numerically solve the three-body problem (and by extension, the n-body problem) involving both electromagnetic and gravitational interactions, and incorporating modern theories of physics such as special relativity. In addition, using the theory of random walks, an approximate probability of different outcomes may be computed. History. The gravitational problem of three bodies in its traditional sense dates in substance from 1687, when Isaac Newton published his "Philosophiæ Naturalis Principia Mathematica", in which Newton attempted to figure out whether any long-term stability is possible, especially for a system like that of our Earth, the Moon, and the Sun. Guided by the major Renaissance astronomers Nicolaus Copernicus, Tycho Brahe and Johannes Kepler, he introduced later generations to the beginning of the gravitational three-body problem.
In Proposition 66 of Book 1 of the "Principia", and its 22 Corollaries, Newton took the first steps in the definition and study of the problem of the movements of three massive bodies subject to their mutually perturbing gravitational attractions. In Propositions 25 to 35 of Book 3, Newton also took the first steps in applying his results of Proposition 66 to the lunar theory, the motion of the Moon under the gravitational influence of Earth and the Sun. Later, this problem was also applied to other planets' interactions with the Earth and the Sun. The physical problem was first addressed by Amerigo Vespucci and subsequently by Galileo Galilei, as well as Simon Stevin, but they did not realize what they contributed. Though Galileo determined that the speed of fall of all bodies changes uniformly and in the same way, he did not apply it to planetary motions. In 1499, Vespucci had used knowledge of the position of the Moon to determine his position in Brazil. The problem became of technical importance in the 1720s, as an accurate solution would be applicable to navigation, specifically for the determination of longitude at sea, solved in practice by John Harrison's invention of the marine chronometer. However, the accuracy of the lunar theory was low, due to the perturbing effect of the Sun and planets on the motion of the Moon around Earth. Jean le Rond d'Alembert and Alexis Clairaut, who developed a longstanding rivalry, both attempted to analyze the problem in some degree of generality; they submitted their competing first analyses to the Académie Royale des Sciences in 1747. It was in connection with their research, in Paris during the 1740s, that the name "three-body problem" began to be commonly used. An account published in 1761 by Jean le Rond d'Alembert indicates that the name was first used in 1747. From the end of the 19th century to the early 20th century, the approach of treating the three-body problem with short-range attractive two-body forces was developed, which later offered P.F. Bedaque, H.-W. Hammer and U. van Kolck an idea for renormalizing the short-range three-body problem, providing a rare example of a renormalization group limit cycle at the beginning of the 21st century. George William Hill worked on the restricted problem in the late 19th century with an application to the motion of Venus and Mercury. At the beginning of the 20th century, Karl Sundman approached the problem mathematically and systematically by providing a function-theoretical proof, valid for all values of time. It was the first time scientists theoretically solved the three-body problem. However, because the resulting solution was not qualitative enough and was far too slow to apply in practice, it still left some issues unresolved. In the 1970s, V. Efimov discovered an implication of two-body forces for the three-body problem, which was named the Efimov effect. In 2017, Shijun Liao and Xiaoming Li applied a new strategy of numerical simulation for chaotic systems called the clean numerical simulation (CNS), with the use of a national supercomputer, to successfully obtain 695 families of periodic solutions of the three-body system with equal mass. In 2019, Breen et al. announced a fast neural network solver for the three-body problem, trained using a numerical integrator. In September 2023, several possible solutions to the problem were reported to have been found. Other problems involving three bodies.
The term "three-body problem" is sometimes used in the more general sense to refer to any physical problem involving the interaction of three bodies. A quantum-mechanical analogue of the gravitational three-body problem in classical mechanics is the helium atom, in which a helium nucleus and two electrons interact according to the inverse-square Coulomb interaction. Like the gravitational three-body problem, the helium atom cannot be solved exactly. In both classical and quantum mechanics, however, there exist nontrivial interaction laws besides the inverse-square force that do lead to exact analytic three-body solutions. One such model consists of a combination of harmonic attraction and a repulsive inverse-cube force. This model is considered nontrivial since it is associated with a set of nonlinear differential equations containing singularities (compared with, e.g., harmonic interactions alone, which lead to an easily solved system of linear differential equations). In these two respects it is analogous to (insoluble) models having Coulomb interactions, and as a result has been suggested as a tool for intuitively understanding physical systems like the helium atom. Within the point vortex model, the motion of vortices in a two-dimensional ideal fluid is described by equations of motion that contain only first-order time derivatives. I.e. in contrast to Newtonian mechanics, it is the "velocity" and not the acceleration that is determined by their relative positions. As a consequence, the three-vortex problem is still integrable, while at least four vortices are required to obtain chaotic behavior. One can draw parallels between the motion of a passive tracer particle in the velocity field of three vortices and the restricted three-body problem of Newtonian mechanics. The gravitational three-body problem has also been studied using general relativity. Physically, a relativistic treatment becomes necessary in systems with very strong gravitational fields, such as near the event horizon of a black hole. However, the relativistic problem is considerably more difficult than in Newtonian mechanics, and sophisticated numerical techniques are required. Even the full two-body problem (i.e. for arbitrary ratio of masses) does not have a rigorous analytic solution in general relativity. n-body problem. The three-body problem is a special case of the n-body problem, which describes how n objects move under one of the physical forces, such as gravity. These problems have a global analytical solution in the form of a convergent power series, as was proven by Karl F. Sundman for "n" 3 and by Qiudong Wang for "n" &gt; 3 (see n-body problem for details). However, the Sundman and Wang series converge so slowly that they are useless for practical purposes; therefore, it is currently necessary to approximate solutions by numerical analysis in the form of numerical integration or, for some cases, classical trigonometric series approximations (see n-body simulation). Atomic systems, e.g. atoms, ions, and molecules, can be treated in terms of the quantum n-body problem. Among classical physical systems, the n-body problem usually refers to a galaxy or to a cluster of galaxies; planetary systems, such as stars, planets, and their satellites, can also be treated as n-body systems. Some applications are conveniently treated by perturbation theory, in which the system is considered as a two-body problem plus additional forces causing deviations from a hypothetical unperturbed two-body trajectory. See also. 
<templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
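To illustrate the numerical-integration approach mentioned in the n-body section above, the following is a minimal sketch (not taken from the article or its references) of a velocity-Verlet integrator for three gravitating point masses in the plane, in units with G = 1; the masses, initial conditions and step size are arbitrary choices for demonstration.

```python
import numpy as np

G = 1.0
m = np.array([1.0, 1.0, 1.0])
r = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])   # positions
v = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])   # velocities

def accelerations(r):
    """Pairwise inverse-square gravitational accelerations on each body."""
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[i] - r[j]
                a[i] -= G * m[j] * d / np.linalg.norm(d) ** 3
    return a

dt, steps = 1e-3, 10_000
a = accelerations(r)
for _ in range(steps):          # kick-drift-kick form of velocity Verlet
    v += 0.5 * dt * a
    r += dt * v
    a = accelerations(r)
    v += 0.5 * dt * a

print(r)                        # approximate positions after steps * dt time units
```

A symplectic scheme such as this one is a common choice for such sketches because it conserves energy well over long runs, though production n-body codes use more sophisticated adaptive and regularized integrators.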
[ { "math_id": 0, "text": "\\mathbf{r_i} = (x_i, y_i, z_i)" }, { "math_id": 1, "text": "m_i" }, { "math_id": 2, "text": "\\begin{align}\n \\ddot\\mathbf{r}_{\\mathbf{1}} &= -G m_2 \\frac{\\mathbf{r_1} - \\mathbf{r_2}}{|\\mathbf{r_1} - \\mathbf{r_2}|^3} - G m_3 \\frac{\\mathbf{r_1} - \\mathbf{r_3}}{|\\mathbf{r_1} - \\mathbf{r_3}|^3}, \\\\\n \\ddot\\mathbf{r}_{\\mathbf{2}} &= -G m_3 \\frac{\\mathbf{r_2} - \\mathbf{r_3}}{|\\mathbf{r_2} - \\mathbf{r_3}|^3} - G m_1 \\frac{\\mathbf{r_2} - \\mathbf{r_1}}{|\\mathbf{r_2} - \\mathbf{r_1}|^3}, \\\\\n \\ddot\\mathbf{r}_{\\mathbf{3}} &= -G m_1 \\frac{\\mathbf{r_3} - \\mathbf{r_1}}{|\\mathbf{r_3} - \\mathbf{r_1}|^3} - G m_2 \\frac{\\mathbf{r_3} - \\mathbf{r_2}}{|\\mathbf{r_3} - \\mathbf{r_2}|^3}.\n\\end{align}" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "\\mathbf{r_i}" }, { "math_id": 5, "text": "\\mathbf{p_i}" }, { "math_id": 6, "text": "\n\\frac{d \\mathbf{r_i}}{dt} = \\frac{\\partial \\mathcal{H}}{\\partial \\mathbf{p_i}}, \\qquad \\frac{d\\mathbf{p_i}}{dt} = -\\frac{\\partial \\mathcal{H}}{\\partial \\mathbf{r_i}},\n" }, { "math_id": 7, "text": "\\mathcal{H}" }, { "math_id": 8, "text": "\n\\mathcal{H} = -\\frac{G m_1 m_2}{|\\mathbf{r_1} - \\mathbf{r_2}|}-\\frac{G m_2 m_3}{|\\mathbf{r_3} - \\mathbf{r_2}|} -\\frac{G m_3 m_1}{|\\mathbf{r_3} - \\mathbf{r_1}|} + \\frac{\\mathbf{p_1}^2}{2m_1} + \\frac{\\mathbf{p_2}^2}{2m_2} + \\frac{\\mathbf{p_3}^2}{2m_3}.\n" }, { "math_id": 9, "text": "m_{1,2}" }, { "math_id": 10, "text": "(x_1, y_1)" }, { "math_id": 11, "text": "(x_2, y_2)" }, { "math_id": 12, "text": "(x, y)" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "\n\\begin{align}\n\\frac{d^2 x}{dt^2} = -m_1 \\frac{x - x_1}{r_1^3} - m_2 \\frac{x - x_2}{r_2^3}, \\\\\n\\frac{d^2 y}{dt^2} = -m_1 \\frac{y - y_1}{r_1^3} - m_2 \\frac{y - y_2}{r_2^3},\n\\end{align}\n" }, { "math_id": 15, "text": "r_i = \\sqrt{(x - x_i)^2 + (y - y_i)^2}" }, { "math_id": 16, "text": "x_i(t), y_i(t)" }, { "math_id": 17, "text": "\\sigma = \\frac{e^\\frac{\\pi s}{2\\beta} - 1}{e^\\frac{\\pi s}{2\\beta} + 1}." } ]
https://en.wikipedia.org/wiki?curid=1208420
1208480
Variational Bayesian methods
Mathematical methods used in Bayesian inference and machine learning Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As is typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes: to provide an analytical approximation to the posterior probability of the unobserved variables, in order to do statistical inference over these variables, and to derive a lower bound for the marginal likelihood (sometimes called the "evidence") of the observed data, which is typically used for performing model selection. In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative to Monte Carlo sampling methods—particularly, Markov chain Monte Carlo methods such as Gibbs sampling—for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or sample from. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally-optimal, exact analytical solution to an approximation of the posterior. Variational Bayes can be seen as an extension of the expectation–maximization (EM) algorithm from maximum likelihood (ML) or maximum a posteriori (MAP) estimation of the single most probable value of each parameter to fully Bayesian estimation, which computes (an approximation to) the entire posterior distribution of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as does EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically. For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. However, deriving the set of equations used to update the parameters iteratively often requires a large amount of work compared with deriving the comparable Gibbs sampling equations. This is the case even for many models that are conceptually quite simple, as is demonstrated below in the case of a basic non-hierarchical model with only two parameters and no latent variables. Mathematical derivation. Problem. In variational inference, the posterior distribution over a set of unobserved variables formula_0 given some data formula_1 is approximated by a so-called variational distribution, formula_2 formula_3 The distribution formula_4 is restricted to belong to a family of distributions of simpler form than formula_5 (e.g. a family of Gaussian distributions), selected with the intention of making formula_4 similar to the true posterior, formula_5. The similarity (or dissimilarity) is measured in terms of a dissimilarity function formula_6 and hence inference is performed by selecting the distribution formula_4 that minimizes formula_6. KL divergence. The most common type of variational Bayes uses the Kullback–Leibler divergence (KL-divergence) of "Q" from "P" as the choice of dissimilarity function. This choice makes this minimization tractable. The KL-divergence is defined as formula_7 Note that "Q" and "P" are reversed from what one might expect. This use of reversed KL-divergence is conceptually similar to the expectation–maximization algorithm. (Using the KL-divergence in the other way produces the expectation propagation algorithm.) Intractability. 
Variational techniques are typically used to form an approximation for: formula_8 The marginalization over formula_9 to calculate formula_10 in the denominator is typically intractable, because, for example, the search space of formula_9 is combinatorially large. Therefore, we seek an approximation, using formula_11. Evidence lower bound. Given that formula_12, the KL-divergence above can also be written as formula_13 Because formula_14 is a constant with respect to formula_9, and formula_15 because formula_4 is a distribution, we have formula_16 which, according to the definition of expected value (for a discrete random variable), can be written as follows formula_17 which can be rearranged to become formula_18 As the "log-evidence" formula_19 is fixed with respect to formula_20, maximizing the final term formula_21 minimizes the KL divergence of formula_20 from formula_22. By appropriate choice of formula_20, formula_21 becomes tractable to compute and to maximize. Hence we have both an analytical approximation formula_20 for the posterior formula_5, and a lower bound formula_21 for the log-evidence formula_19 (since the KL-divergence is non-negative). The lower bound formula_21 is known as the (negative) variational free energy in analogy with thermodynamic free energy because it can also be expressed as a negative energy formula_23 plus the entropy of formula_20. The term formula_21 is also known as the Evidence Lower Bound, abbreviated as ELBO, to emphasize that it is a lower (worst-case) bound on the log-evidence of the data. Proofs. By the generalized Pythagorean theorem of Bregman divergence, of which KL-divergence is a special case, it can be shown that: formula_24 where formula_25 is a convex set and the equality holds if: formula_26 In this case, the global minimizer formula_27 with formula_28 can be found as follows: formula_29 in which the normalizing constant is: formula_30 The term formula_31 is often called the evidence lower bound (ELBO) in practice, since formula_32, as shown above. By interchanging the roles of formula_33 and formula_34 we can iteratively compute the approximated formula_35 and formula_36 of the true model's marginals formula_37 and formula_38 respectively. Although this iterative scheme is guaranteed to converge monotonically, the converged formula_39 is only a local minimizer of formula_40. If the constrained space formula_25 is confined within independent space, i.e. formula_41 the above iterative scheme will become the so-called mean field approximation formula_42 as shown below. Mean field approximation. The variational distribution formula_4 is usually assumed to factorize over some partition of the latent variables, i.e. for some partition of the latent variables formula_43 into formula_44, formula_45 It can be shown using the calculus of variations (hence the name "variational Bayes") that the "best" distribution formula_46 for each of the factors formula_47 (in terms of the distribution minimizing the KL divergence, as described above) satisfies: formula_48 where formula_49 is the expectation of the logarithm of the joint probability of the data and latent variables, taken with respect to formula_50 over all variables not in the partition; refer to Lemma 4.1 of for a derivation of the distribution formula_51. 
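The decomposition of the log-evidence into the ELBO plus the KL divergence, derived above, can be verified exactly for a small discrete latent variable. The following is a minimal sketch (not part of the original article); the joint probability table and the variational distribution are arbitrary made-up values for illustration.

```python
import numpy as np

# Exact check of  log P(X) = D_KL(Q || P(Z|X)) + L(Q)  for a discrete latent variable Z
# with four states and one fixed observation X.
p_joint = np.array([0.10, 0.25, 0.05, 0.20])     # P(Z = k, X = x_obs), k = 0..3
p_x = p_joint.sum()                              # evidence P(X = x_obs)
posterior = p_joint / p_x                        # P(Z | X = x_obs)

q = np.array([0.4, 0.3, 0.2, 0.1])               # an arbitrary variational distribution Q(Z)

elbo = np.sum(q * (np.log(p_joint) - np.log(q)))    # L(Q) = E_Q[log P(Z,X) - log Q(Z)]
kl = np.sum(q * (np.log(q) - np.log(posterior)))    # D_KL(Q || P(Z|X))

print(np.log(p_x), elbo + kl)                    # equal up to floating-point error
print(bool(elbo <= np.log(p_x)))                 # the ELBO lower-bounds the log-evidence
```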
In practice, we usually work in terms of logarithms, i.e.: formula_52 The constant in the above expression is related to the normalizing constant (the denominator in the expression above for formula_46) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g. Gaussian, gamma, etc.). Using the properties of expectations, the expression formula_49 can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments such as the variance) of latent variables not in the current partition (i.e. latent variables not included in formula_53). This creates circular dependencies between the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests an iterative algorithm, much like EM (the expectation–maximization algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed to converge. In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. the means); sometimes expectations of squared variables (which can be related to the variance of the variables), or expectations of higher powers (i.e. higher moments) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual, nonlinear dependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer. A duality formula for variational inference. The following theorem is referred to as a duality formula for variational inference. It explains some important properties of the variational distributions used in variational Bayes methods. Theorem Consider two probability spaces formula_54 and formula_55 with formula_56. Assume that there is a common dominating probability measure formula_57 such that formula_58 and formula_59. Let formula_60 denote any real-valued random variable on formula_54 that satisfies formula_61. 
Then the following equality holds formula_62 Further, the supremum on the right-hand side is attained if and only if it holds formula_63 almost surely with respect to probability measure formula_20, where formula_64 and formula_65 denote the Radon–Nikodym derivatives of the probability measures formula_22 and formula_20 with respect to formula_57, respectively. A basic example. Consider a simple non-hierarchical Bayesian model consisting of a set of i.i.d. observations from a Gaussian distribution, with unknown mean and variance. In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method. For mathematical convenience, in the following example we work in terms of the precision — i.e. the reciprocal of the variance (or in a multivariate Gaussian, the inverse of the covariance matrix) — rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent since there is a one-to-one correspondence between the two.) The mathematical model. We place conjugate prior distributions on the unknown mean formula_66 and precision formula_67, i.e. the mean also follows a Gaussian distribution while the precision follows a gamma distribution. In other words: formula_68 The hyperparameters formula_69 and formula_70 in the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions indicating ignorance about the prior distributions of formula_66 and formula_67. We are given formula_71 data points formula_72 and our goal is to infer the posterior distribution formula_73 of the parameters formula_66 and formula_74 The joint probability. The joint probability of all variables can be rewritten as formula_75 where the individual factors are formula_76 where formula_77 Factorized approximation. Assume that formula_78, i.e. that the posterior distribution factorizes into independent factors for formula_66 and formula_67. This type of assumption underlies the variational Bayesian method. The true posterior distribution does not in fact factor this way (in fact, in this simple case, it is known to be a Gaussian-gamma distribution), and hence the result we obtain will be an approximation. Derivation of "q"("μ"). Then formula_79 In the above derivation, formula_80, formula_81 and formula_82 refer to values that are constant with respect to formula_66. Note that the term formula_83 is not a function of formula_66 and will have the same value regardless of the value of formula_66. Hence in line 3 we can absorb it into the constant term at the end. We do the same thing in line 7. The last line is simply a quadratic polynomial in formula_66. Since this is the logarithm of formula_84, we can see that formula_84 itself is a Gaussian distribution. With a certain amount of tedious math (expanding the squares inside of the braces, separating out and grouping the terms involving formula_66 and formula_85 and completing the square over formula_66), we can derive the parameters of the Gaussian distribution: formula_86 Note that all of the above steps can be shortened by using the formula for the sum of two quadratics. In other words: formula_87 Derivation of q(τ). The derivation of formula_88 is similar to above, although we omit some of the details for the sake of brevity. formula_89 Exponentiating both sides, we can see that formula_88 is a gamma distribution. Specifically: formula_90 Algorithm for computing the parameters. 
Let us recap the conclusions from the previous sections: formula_91 and formula_90 In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions: formula_92 Applying these formulas to the above equations is trivial in most cases, but the equation for formula_93 takes more work: formula_94 We can then write the parameter equations as follows, without any expectations: formula_95 Note that there are circular dependencies among the formulas for formula_96and formula_93. This naturally suggests an EM-like algorithm: We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior — e.g. its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc. It can be shown that this algorithm is guaranteed to converge to a local maximum. Note also that the posterior distributions have the same form as the corresponding prior distributions. We did "not" assume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out (see below) that the fact that the posterior distributions have the same form as the prior distributions is not a coincidence, but a general result whenever the prior distributions are members of the exponential family, which is the case for most of the standard distributions. Further discussion. Step-by-step recipe. The above example shows the method by which the variational-Bayesian approximation to a posterior probability density in a given Bayesian network is derived: Most important points. Due to all of the mathematical manipulations involved, it is easy to lose track of the big picture. The important things are: Compared with expectation–maximization (EM). Variational Bayes (VB) is often compared with expectation–maximization (EM). The actual numerical procedure is quite similar, in that both are alternating iterative procedures that successively converge on optimum parameter values. The initial steps to derive the respective procedures are also vaguely similar, both starting out with formulas for probability densities and both involving significant amounts of mathematical manipulations. However, there are a number of differences. Most important is "what" is being computed. A more complex example. Imagine a Bayesian Gaussian mixture model described as follows: formula_107 Note: The interpretation of the above variables is as follows: The joint probability of all variables can be rewritten as formula_120 where the individual factors are formula_121 where formula_122 Assume that formula_123. Then formula_124 where we have defined formula_125 Exponentiating both sides of the formula for formula_126 yields formula_127 Requiring that this be normalized ends up requiring that the formula_128 sum to 1 over all values of formula_129, yielding formula_130 where formula_131 In other words, formula_132 is a product of single-observation multinomial distributions, and factors over each individual formula_133, which is distributed as a single-observation multinomial distribution with parameters formula_134 for formula_116. Furthermore, we note that formula_135 which is a standard result for categorical distributions. 
Now, considering the factor formula_136, note that it automatically factors into formula_137 due to the structure of the graphical model defining our Gaussian mixture model, which is specified above. Then, formula_138 Taking the exponential of both sides, we recognize formula_139 as a Dirichlet distribution formula_140 where formula_141 where formula_142 Finally formula_143 Grouping and reading off terms involving formula_144 and formula_145, the result is a Gaussian-Wishart distribution given by formula_146 given the definitions formula_147 Finally, notice that these functions require the values of formula_134, which make use of formula_128, which is defined in turn based on formula_148, formula_149, and formula_150. Now that we have determined the distributions over which these expectations are taken, we can derive formulas for them: formula_151 These results lead to formula_152 These can be converted from proportional to absolute values by normalizing over formula_129 so that the corresponding values sum to 1. Note that: This suggests an iterative procedure that alternates between two steps: Note that these steps correspond closely with the standard EM algorithm to derive a maximum likelihood or maximum a posteriori (MAP) solution for the parameters of a Gaussian mixture model. The responsibilities formula_134 in the E step correspond closely to the posterior probabilities of the latent variables given the data, i.e. formula_163; the computation of the statistics formula_157, formula_158, and formula_159 corresponds closely to the computation of corresponding "soft-count" statistics over the data; and the use of those statistics to compute new values of the parameters corresponds closely to the use of soft counts to compute new parameter values in normal EM over a Gaussian mixture model. Exponential-family distributions. Note that in the previous example, once the distribution over unobserved variables was assumed to factorize into distributions over the "parameters" and distributions over the "latent data", the derived "best" distribution for each variable was in the same family as the corresponding prior distribution over the variable. This is a general result that holds true for all prior distributions derived from the exponential family. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
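Returning to the basic univariate Gaussian example worked through above, the iterative algorithm for computing the posterior hyperparameters can be written down directly from the update equations given there. The following is a minimal sketch (not part of the original article); the synthetic data and prior hyperparameters are arbitrary choices.

```python
import numpy as np

# Coordinate-ascent updates for the unknown-mean, unknown-precision Gaussian model with
# Gaussian and gamma priors, following the update equations derived in the article.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=200)     # observed data (true precision = 1/4)
N, xbar = len(x), x.mean()
sum_x, sum_x2 = x.sum(), np.sum(x**2)

mu0, lambda0, a0, b0 = 0.0, 1.0, 1.0, 1.0        # fixed prior hyperparameters

# These two posterior hyperparameters do not change during the iteration:
mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
a_N = a0 + (N + 1) / 2

# lambda_N and b_N depend on each other, so iterate their updates until convergence.
b_N = b0                                         # arbitrary initialization
for _ in range(100):
    lambda_N = (lambda0 + N) * a_N / b_N         # uses E[tau] = a_N / b_N
    b_new = b0 + 0.5 * ((lambda0 + N) * (1.0 / lambda_N + mu_N**2)
                        - 2.0 * (lambda0 * mu0 + sum_x) * mu_N
                        + sum_x2 + lambda0 * mu0**2)
    if abs(b_new - b_N) < 1e-10:
        b_N = b_new
        break
    b_N = b_new

# q(mu) = N(mu | mu_N, 1/lambda_N) and q(tau) = Gamma(tau | a_N, b_N)
print("E[mu] =", mu_N, " E[tau] =", a_N / b_N)
```

The fixed point is reached after a handful of iterations for data sets of this size, and the reported posterior mean of the precision should be close to the generating value of 0.25.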
[ { "math_id": 0, "text": "\\mathbf{Z} = \\{Z_1 \\dots Z_n\\}" }, { "math_id": 1, "text": "\\mathbf{X}" }, { "math_id": 2, "text": "Q(\\mathbf{Z}):" }, { "math_id": 3, "text": "P(\\mathbf{Z}\\mid \\mathbf{X}) \\approx Q(\\mathbf{Z})." }, { "math_id": 4, "text": "Q(\\mathbf{Z})" }, { "math_id": 5, "text": "P(\\mathbf{Z}\\mid \\mathbf{X})" }, { "math_id": 6, "text": "d(Q; P)" }, { "math_id": 7, "text": "D_{\\mathrm{KL}}(Q \\parallel P) \\triangleq \\sum_\\mathbf{Z} Q(\\mathbf{Z}) \\log \\frac{Q(\\mathbf{Z})}{P(\\mathbf{Z}\\mid \\mathbf{X})}." }, { "math_id": 8, "text": "P(\\mathbf Z \\mid \\mathbf X) = \\frac{P(\\mathbf X \\mid \\mathbf Z)P(\\mathbf Z)}{P(\\mathbf X)} = \\frac{P(\\mathbf X \\mid \\mathbf Z)P(\\mathbf Z)}{\\int_{\\mathbf Z} P(\\mathbf X,\\mathbf Z') \\,d\\mathbf Z'}" }, { "math_id": 9, "text": "\\mathbf Z" }, { "math_id": 10, "text": "P(\\mathbf X)" }, { "math_id": 11, "text": "Q(\\mathbf Z) \\approx P(\\mathbf Z \\mid \\mathbf X)" }, { "math_id": 12, "text": "P(\\mathbf Z \\mid \\mathbf X) = \\frac{P(\\mathbf X, \\mathbf Z)}{P(\\mathbf X)}" }, { "math_id": 13, "text": "\nD_{\\mathrm{KL}}(Q \\parallel P) \n= \\sum_\\mathbf{Z} Q(\\mathbf{Z}) \\left[ \\log \\frac{Q(\\mathbf{Z})}{P(\\mathbf{Z},\\mathbf{X})} + \\log P(\\mathbf{X}) \\right]\n= \\sum_\\mathbf{Z} Q(\\mathbf{Z}) \\left[ \\log Q(\\mathbf{Z}) - \\log P(\\mathbf{Z},\\mathbf{X}) \\right] + \\sum_\\mathbf{Z} Q(\\mathbf{Z}) \\left[ \\log P(\\mathbf{X}) \\right] \n" }, { "math_id": 14, "text": "P(\\mathbf{X})" }, { "math_id": 15, "text": "\\sum_\\mathbf{Z} Q(\\mathbf{Z}) = 1" }, { "math_id": 16, "text": "\nD_{\\mathrm{KL}}(Q \\parallel P) = \\sum_\\mathbf{Z} Q(\\mathbf{Z}) \\left[ \\log Q(\\mathbf{Z}) - \\log P(\\mathbf{Z},\\mathbf{X}) \\right] + \\log P(\\mathbf{X}) \n" }, { "math_id": 17, "text": "\nD_{\\mathrm{KL}}(Q \\parallel P) \n= \\mathbb{E}_{\\mathbf Q } \\left[ \\log Q(\\mathbf{Z}) - \\log P(\\mathbf{Z},\\mathbf{X}) \\right] + \\log P(\\mathbf{X}) \n" }, { "math_id": 18, "text": "\n\\log P(\\mathbf{X}) =\nD_{\\mathrm{KL}}(Q \\parallel P) - \\mathbb{E}_{\\mathbf Q } \\left[ \\log Q(\\mathbf{Z}) - \\log P(\\mathbf{Z},\\mathbf{X}) \\right] = D_{\\mathrm{KL}}(Q\\parallel P) + \\mathcal{L}(Q)\n" }, { "math_id": 19, "text": "\\log P(\\mathbf{X})" }, { "math_id": 20, "text": "Q" }, { "math_id": 21, "text": "\\mathcal{L}(Q)" }, { "math_id": 22, "text": "P" }, { "math_id": 23, "text": "\\operatorname{E}_{Q}[\\log P(\\mathbf{Z},\\mathbf{X})]" }, { "math_id": 24, "text": " \nD_{\\mathrm{KL}}(Q\\parallel P) \\geq D_{\\mathrm{KL}}(Q\\parallel Q^{*}) + D_{\\mathrm{KL}}(Q^{*}\\parallel P), \\forall Q^{*} \\in\\mathcal{C}\n" }, { "math_id": 25, "text": "\\mathcal{C}" }, { "math_id": 26, "text": " Q = Q^{*} \\triangleq \\arg\\min_{Q\\in\\mathcal{C}}D_{\\mathrm{KL}}(Q\\parallel P). 
" }, { "math_id": 27, "text": "Q^{*}(\\mathbf{Z}) = q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2)q^{*}(\\mathbf{Z}_2) = q^{*}(\\mathbf{Z}_2\\mid\\mathbf{Z}_1)q^{*}(\\mathbf{Z}_1)," }, { "math_id": 28, "text": "\\mathbf{Z}=\\{\\mathbf{Z_1},\\mathbf{Z_2}\\}," }, { "math_id": 29, "text": " q^{*}(\\mathbf{Z}_2) \n= \\frac{P(\\mathbf{X})}{\\zeta(\\mathbf{X})}\\frac{P(\\mathbf{Z}_2\\mid\\mathbf{X})}{\\exp(D_{\\mathrm{KL}}(q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2)\\parallel P(\\mathbf{Z}_1\\mid\\mathbf{Z}_2,\\mathbf{X})))} \n= \\frac{1}{\\zeta(\\mathbf{X})}\\exp\\mathbb{E}_{q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2)}\\left(\\log\\frac{P(\\mathbf{Z},\\mathbf{X})}{q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2)}\\right)," }, { "math_id": 30, "text": "\\zeta(\\mathbf{X}) \n=P(\\mathbf{X})\\int_{\\mathbf{Z}_2}\\frac{P(\\mathbf{Z}_2\\mid\\mathbf{X})}{\\exp(D_{\\mathrm{KL}}(q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2)\\parallel P(\\mathbf{Z}_1\\mid\\mathbf{Z}_2,\\mathbf{X})))}\n= \\int_{\\mathbf{Z}_{2}}\\exp\\mathbb{E}_{q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2)}\\left(\\log\\frac{P(\\mathbf{Z},\\mathbf{X})}{q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2)}\\right)." }, { "math_id": 31, "text": "\\zeta(\\mathbf{X})" }, { "math_id": 32, "text": "P(\\mathbf{X})\\geq\\zeta(\\mathbf{X})=\\exp(\\mathcal{L}(Q^{*}))" }, { "math_id": 33, "text": "\\mathbf{Z}_1" }, { "math_id": 34, "text": "\\mathbf{Z}_2," }, { "math_id": 35, "text": "q^{*}(\\mathbf{Z}_1)" }, { "math_id": 36, "text": "q^{*}(\\mathbf{Z}_2)" }, { "math_id": 37, "text": "P(\\mathbf{Z}_1\\mid\\mathbf{X})" }, { "math_id": 38, "text": "P(\\mathbf{Z}_2\\mid\\mathbf{X})," }, { "math_id": 39, "text": "Q^{*}" }, { "math_id": 40, "text": "D_{\\mathrm{KL}}(Q\\parallel P)" }, { "math_id": 41, "text": "q^{*}(\\mathbf{Z}_1\\mid\\mathbf{Z}_2) = q^{*}(\\mathbf{Z_1})," }, { "math_id": 42, "text": "Q^{*}(\\mathbf{Z}) = q^{*}(\\mathbf{Z}_1)q^{*}(\\mathbf{Z}_2)," }, { "math_id": 43, "text": "\\mathbf{Z}" }, { "math_id": 44, "text": "\\mathbf{Z}_1 \\dots \\mathbf{Z}_M" }, { "math_id": 45, "text": "Q(\\mathbf{Z}) = \\prod_{i=1}^M q_i(\\mathbf{Z}_i\\mid \\mathbf{X})" }, { "math_id": 46, "text": "q_j^{*}" }, { "math_id": 47, "text": "q_j" }, { "math_id": 48, "text": "q_j^{*}(\\mathbf{Z}_j\\mid \\mathbf{X}) = \\frac{e^{\\operatorname{E}_{q^*_{-j}} [\\ln p(\\mathbf{Z}, \\mathbf{X})]}}{\\int e^{\\operatorname{E}_{q^*_{-j}} [\\ln p(\\mathbf{Z}, \\mathbf{X})]}\\, d\\mathbf{Z}_j}" }, { "math_id": 49, "text": "\\operatorname{E}_{q^*_{-j}} [\\ln p(\\mathbf{Z}, \\mathbf{X})]" }, { "math_id": 50, "text": "q^*" }, { "math_id": 51, "text": "q_j^{*}(\\mathbf{Z}_j\\mid \\mathbf{X})" }, { "math_id": 52, "text": "\\ln q_j^{*}(\\mathbf{Z}_j\\mid \\mathbf{X}) = \\operatorname{E}_{q^*_{-j}} [\\ln p(\\mathbf{Z}, \\mathbf{X})] + \\text{constant}" }, { "math_id": 53, "text": "\\mathbf{Z}_j" }, { "math_id": 54, "text": "(\\Theta,\\mathcal{F},P)" }, { "math_id": 55, "text": "(\\Theta,\\mathcal{F},Q)" }, { "math_id": 56, "text": "Q \\ll P" }, { "math_id": 57, "text": "\\lambda" }, { "math_id": 58, "text": "P \\ll \\lambda" }, { "math_id": 59, "text": "Q \\ll \\lambda" }, { "math_id": 60, "text": "h" }, { "math_id": 61, "text": "h \\in L_1(P)" }, { "math_id": 62, "text": " \\log E_P[\\exp h] = \\text{sup}_{Q \\ll P} \\{ E_Q[h] - D_\\text{KL}(Q \\parallel P)\\}." 
}, { "math_id": 63, "text": " \\frac{q(\\theta)}{p(\\theta)} = \\frac{\\exp h(\\theta)}{E_P[\\exp h]}," }, { "math_id": 64, "text": "p(\\theta) = dP/d\\lambda" }, { "math_id": 65, "text": "q(\\theta) = dQ/d\\lambda" }, { "math_id": 66, "text": "\\mu" }, { "math_id": 67, "text": "\\tau" }, { "math_id": 68, "text": "\n\\begin{align}\n\\tau & \\sim \\operatorname{Gamma}(a_0, b_0) \\\\\n\\mu|\\tau & \\sim \\mathcal{N}(\\mu_0, (\\lambda_0 \\tau)^{-1}) \\\\\n\\{x_1, \\dots, x_N\\} & \\sim \\mathcal{N}(\\mu, \\tau^{-1}) \\\\\nN &= \\text{number of data points}\n\\end{align}\n" }, { "math_id": 69, "text": "\\mu_0, \\lambda_0, a_0" }, { "math_id": 70, "text": "b_0" }, { "math_id": 71, "text": "N" }, { "math_id": 72, "text": "\\mathbf{X} = \\{x_1, \\ldots, x_N\\}" }, { "math_id": 73, "text": "q(\\mu, \\tau)=p(\\mu,\\tau\\mid x_1, \\ldots, x_N)" }, { "math_id": 74, "text": "\\tau." }, { "math_id": 75, "text": "p(\\mathbf{X},\\mu,\\tau) = p(\\mathbf{X}\\mid \\mu,\\tau) p(\\mu\\mid \\tau) p(\\tau)" }, { "math_id": 76, "text": "\n\\begin{align}\np(\\mathbf{X}\\mid \\mu,\\tau) & = \\prod_{n=1}^N \\mathcal{N}(x_n\\mid \\mu,\\tau^{-1}) \\\\\np(\\mu\\mid \\tau) & = \\mathcal{N} \\left (\\mu\\mid \\mu_0, (\\lambda_0 \\tau)^{-1} \\right ) \\\\\np(\\tau) & = \\operatorname{Gamma}(\\tau\\mid a_0, b_0)\n\\end{align}\n" }, { "math_id": 77, "text": "\n\\begin{align}\n\\mathcal{N}(x\\mid \\mu,\\sigma^2) & = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} e^{\\frac{-(x-\\mu)^2}{2\\sigma^2}} \\\\\n\\operatorname{Gamma}(\\tau\\mid a,b) & = \\frac{1}{\\Gamma(a)} b^a \\tau^{a-1} e^{-b \\tau}\n\\end{align}\n" }, { "math_id": 78, "text": "q(\\mu,\\tau) = q(\\mu)q(\\tau)" }, { "math_id": 79, "text": "\n\\begin{align}\n\\ln q_\\mu^*(\\mu) &= \\operatorname{E}_\\tau\\left[\\ln p(\\mathbf{X}\\mid \\mu,\\tau) + \\ln p(\\mu\\mid \\tau) + \\ln p(\\tau)\\right] + C \\\\\n &= \\operatorname{E}_\\tau\\left[\\ln p(\\mathbf{X}\\mid \\mu,\\tau)\\right] + \\operatorname{E}_\\tau\\left[\\ln p(\\mu\\mid \\tau)\\right] + \\operatorname{E}_{\\tau}\\left[\\ln p(\\tau)\\right] + C \\\\\n &= \\operatorname{E}_\\tau\\left[\\ln \\prod_{n=1}^N \\mathcal{N} \\left (x_n\\mid \\mu,\\tau^{-1} \\right )\\right] + \\operatorname{E}_\\tau\\left[\\ln \\mathcal{N} \\left (\\mu\\mid \\mu_0, (\\lambda_0 \\tau)^{-1} \\right )\\right] + C_2 \\\\\n &= \\operatorname{E}_\\tau\\left[\\ln \\prod_{n=1}^N \\sqrt{\\frac{\\tau}{2\\pi}} e^{-\\frac{(x_n-\\mu)^2\\tau}{2}}\\right] + \\operatorname{E}_{\\tau}\\left[\\ln \\sqrt{\\frac{\\lambda_0 \\tau}{2\\pi}} e^{-\\frac{(\\mu-\\mu_0)^2\\lambda_0 \\tau}{2}}\\right] + C_2 \\\\\n &= \\operatorname{E}_{\\tau}\\left[\\sum_{n=1}^N \\left(\\frac{1}{2}(\\ln\\tau - \\ln 2\\pi) - \\frac{(x_n-\\mu)^2\\tau}{2}\\right)\\right] + \\operatorname{E}_{\\tau}\\left[\\frac{1}{2}(\\ln \\lambda_0 + \\ln \\tau - \\ln 2\\pi) - \\frac{(\\mu-\\mu_0)^2\\lambda_0 \\tau}{2}\\right] + C_2 \\\\\n &= \\operatorname{E}_{\\tau}\\left[\\sum_{n=1}^N -\\frac{(x_n-\\mu)^2\\tau}{2}\\right] + \\operatorname{E}_{\\tau}\\left[-\\frac{(\\mu-\\mu_0)^2\\lambda_0 \\tau}{2}\\right] + \\operatorname{E}_{\\tau}\\left[\\sum_{n=1}^N \\frac{1}{2}(\\ln\\tau - \\ln 2\\pi)\\right] + \\operatorname{E}_{\\tau}\\left[\\frac{1}{2}(\\ln \\lambda_0 + \\ln \\tau - \\ln 2\\pi)\\right] + C_2 \\\\\n &= \\operatorname{E}_{\\tau}\\left[\\sum_{n=1}^N -\\frac{(x_n-\\mu)^2\\tau}{2}\\right] + \\operatorname{E}_{\\tau}\\left[-\\frac{(\\mu-\\mu_0)^2\\lambda_0 \\tau}{2}\\right] + C_3 \\\\\n &= - \\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ \\sum_{n=1}^N (x_n-\\mu)^2 + \\lambda_0(\\mu-\\mu_0)^2 
\\right\\} + C_3\n\\end{align}\n" }, { "math_id": 80, "text": "C" }, { "math_id": 81, "text": "C_2" }, { "math_id": 82, "text": "C_3" }, { "math_id": 83, "text": "\\operatorname{E}_{\\tau}[\\ln p(\\tau)]" }, { "math_id": 84, "text": "q_\\mu^*(\\mu)" }, { "math_id": 85, "text": "\\mu^2" }, { "math_id": 86, "text": "\\begin{align}\n\\ln q_\\mu^*(\\mu) &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ \\sum_{n=1}^N (x_n-\\mu)^2 + \\lambda_0(\\mu-\\mu_0)^2 \\right\\} + C_3 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ \\sum_{n=1}^N (x_n^2-2x_n\\mu + \\mu^2) + \\lambda_0(\\mu^2-2\\mu_0\\mu + \\mu_0^2) \\right \\} + C_3 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ \\left(\\sum_{n=1}^N x_n^2\\right)-2\\left(\\sum_{n=1}^N x_n\\right)\\mu + \\left ( \\sum_{n=1}^N \\mu^2 \\right) + \\lambda_0\\mu^2-2\\lambda_0\\mu_0\\mu + \\lambda_0\\mu_0^2 \\right\\} + C_3 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ (\\lambda_0+N)\\mu^2 -2\\left(\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n\\right)\\mu + \\left(\\sum_{n=1}^N x_n^2\\right) + \\lambda_0\\mu_0^2 \\right\\} + C_3 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ (\\lambda_0+N)\\mu^2 -2\\left(\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n\\right)\\mu \\right\\} + C_4 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ (\\lambda_0+N)\\mu^2 -2\\left(\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N} \\right)(\\lambda_0+N) \\mu \\right\\} + C_4 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ (\\lambda_0+N)\\left(\\mu^2 -2\\left(\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right) \\mu\\right) \\right\\} + C_4 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ (\\lambda_0+N)\\left(\\mu^2 -2\\left(\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right) \\mu + \\left(\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right)^2 - \\left(\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right)^2\\right) \\right\\} + C_4 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ (\\lambda_0+N)\\left(\\mu^2 -2\\left(\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right) \\mu + \\left(\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right)^2 \\right) \\right\\} + C_5 \\\\\n &= -\\frac{\\operatorname{E}_{\\tau}[\\tau]}{2} \\left\\{ (\\lambda_0+N)\\left(\\mu-\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right)^2 \\right\\} + C_5 \\\\\n &= -\\frac{1}{2} (\\lambda_0+N)\\operatorname{E}_{\\tau}[\\tau] \\left(\\mu-\\frac{\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n}{\\lambda_0+N}\\right)^2 + C_5\n\\end{align}" }, { "math_id": 87, "text": "\n\\begin{align}\nq_\\mu^*(\\mu) &\\sim \\mathcal{N}(\\mu\\mid \\mu_N,\\lambda_N^{-1}) \\\\\n\\mu_N &= \\frac{\\lambda_0 \\mu_0 + N \\bar{x}}{\\lambda_0 + N} \\\\\n\\lambda_N &= (\\lambda_0 + N) \\operatorname{E}_{\\tau}[\\tau] \\\\\n\\bar{x} &= \\frac{1}{N}\\sum_{n=1}^N x_n\n\\end{align}\n" }, { "math_id": 88, "text": "q_\\tau^*(\\tau)" }, { "math_id": 89, "text": "\n\\begin{align}\n\\ln q_\\tau^*(\\tau) &= \\operatorname{E}_{\\mu}[\\ln p(\\mathbf{X}\\mid \\mu,\\tau) + \\ln p(\\mu\\mid \\tau)] + \\ln p(\\tau) + \\text{constant} \\\\\n &= (a_0 - 1) \\ln \\tau - b_0 \\tau + \\frac{1}{2} \\ln \\tau + \\frac{N}{2} \\ln \\tau - \\frac{\\tau}{2} \\operatorname{E}_\\mu \\left [ \\sum_{n=1}^N (x_n-\\mu)^2 + \\lambda_0(\\mu - \\mu_0)^2 \\right ] + \\text{constant}\n\\end{align}\n" }, { "math_id": 90, "text": "\n\\begin{align}\nq_\\tau^*(\\tau) &\\sim 
\\operatorname{Gamma}(\\tau\\mid a_N, b_N) \\\\\na_N &= a_0 + \\frac{N+1}{2} \\\\\nb_N &= b_0 + \\frac{1}{2} \\operatorname{E}_\\mu \\left[\\sum_{n=1}^N (x_n-\\mu)^2 + \\lambda_0(\\mu - \\mu_0)^2\\right]\n\\end{align}\n" }, { "math_id": 91, "text": "\n\\begin{align}\nq_\\mu^*(\\mu) &\\sim \\mathcal{N}(\\mu\\mid\\mu_N,\\lambda_N^{-1}) \\\\\n\\mu_N &= \\frac{\\lambda_0 \\mu_0 + N \\bar{x}}{\\lambda_0 + N} \\\\\n\\lambda_N &= (\\lambda_0 + N) \\operatorname{E}_{\\tau}[\\tau] \\\\\n\\bar{x} &= \\frac{1}{N}\\sum_{n=1}^N x_n\n\\end{align}\n" }, { "math_id": 92, "text": "\n\\begin{align}\n\\operatorname{E}[\\tau\\mid a_N, b_N] &= \\frac{a_N}{b_N} \\\\\n\\operatorname{E} \\left [\\mu\\mid\\mu_N,\\lambda_N^{-1} \\right ] &= \\mu_N \\\\\n\\operatorname{E}\\left[X^2 \\right] &= \\operatorname{Var}(X) + (\\operatorname{E}[X])^2 \\\\\n\\operatorname{E} \\left [\\mu^2\\mid\\mu_N,\\lambda_N^{-1} \\right ] &= \\lambda_N^{-1} + \\mu_N^2\n\\end{align}\n" }, { "math_id": 93, "text": "b_N" }, { "math_id": 94, "text": "\n\\begin{align}\nb_N &= b_0 + \\frac{1}{2} \\operatorname{E}_\\mu \\left[\\sum_{n=1}^N (x_n-\\mu)^2 + \\lambda_0(\\mu - \\mu_0)^2\\right] \\\\\n &= b_0 + \\frac{1}{2} \\operatorname{E}_\\mu \\left[ (\\lambda_0+N)\\mu^2 -2 \\left (\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n \\right )\\mu + \\left(\\sum_{n=1}^N x_n^2 \\right ) + \\lambda_0\\mu_0^2 \\right] \\\\\n &= b_0 + \\frac{1}{2} \\left[ (\\lambda_0+N)\\operatorname{E}_\\mu[\\mu^2] -2 \\left (\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n \\right)\\operatorname{E}_\\mu [\\mu] + \\left (\\sum_{n=1}^N x_n^2 \\right ) + \\lambda_0\\mu_0^2 \\right] \\\\\n &= b_0 + \\frac{1}{2} \\left[ (\\lambda_0+N) \\left (\\lambda_N^{-1} + \\mu_N^2 \\right ) -2 \\left (\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n \\right)\\mu_N + \\left(\\sum_{n=1}^N x_n^2 \\right) + \\lambda_0\\mu_0^2 \\right] \\\\\n\\end{align}\n" }, { "math_id": 95, "text": "\\begin{align}\n\\mu_N &= \\frac{\\lambda_0 \\mu_0 + N \\bar{x}}{\\lambda_0 + N} \\\\\n\\lambda_N &= (\\lambda_0 + N) \\frac{a_N}{b_N} \\\\\n\\bar{x} &= \\frac{1}{N}\\sum_{n=1}^N x_n \\\\\na_N &= a_0 + \\frac{N+1}{2} \\\\\nb_N &= b_0 + \\frac{1}{2} \\left[ (\\lambda_0+N) \\left (\\lambda_N^{-1} + \\mu_N^2 \\right ) -2 \\left (\\lambda_0\\mu_0 + \\sum_{n=1}^N x_n \\right )\\mu_N + \\left (\\sum_{n=1}^N x_n^2 \\right ) + \\lambda_0\\mu_0^2 \\right]\n\\end{align}" }, { "math_id": 96, "text": "\\lambda_N" }, { "math_id": 97, "text": "\\sum_{n=1}^N x_n" }, { "math_id": 98, "text": "\\sum_{n=1}^N x_n^2." }, { "math_id": 99, "text": "\\mu_N" }, { "math_id": 100, "text": "a_N." 
}, { "math_id": 101, "text": "\\lambda_N," }, { "math_id": 102, "text": "b_N," }, { "math_id": 103, "text": "\\boldsymbol\\Theta" }, { "math_id": 104, "text": "p(\\mathbf{Z},\\boldsymbol\\Theta\\mid\\mathbf{X})" }, { "math_id": 105, "text": "\\mathbf{Z}_1,\\ldots,\\mathbf{Z}_M" }, { "math_id": 106, "text": "\\ln q_j^{*}(\\mathbf{Z}_j\\mid \\mathbf{X}) = \\operatorname{E}_{i \\neq j} [\\ln p(\\mathbf{Z}, \\mathbf{X})] + \\text{constant}" }, { "math_id": 107, "text": "\n\\begin{align}\n\\mathbf{\\pi} & \\sim \\operatorname{SymDir}(K, \\alpha_0) \\\\\n\\mathbf{\\Lambda}_{i=1 \\dots K} & \\sim \\mathcal{W}(\\mathbf{W}_0, \\nu_0) \\\\\n\\mathbf{\\mu}_{i=1 \\dots K} & \\sim \\mathcal{N}(\\mathbf{\\mu}_0, (\\beta_0 \\mathbf{\\Lambda}_i)^{-1}) \\\\\n\\mathbf{z}[i = 1 \\dots N] & \\sim \\operatorname{Mult}(1, \\mathbf{\\pi}) \\\\\n\\mathbf{x}_{i=1 \\dots N} & \\sim \\mathcal{N}(\\mathbf{\\mu}_{z_i}, {\\mathbf{\\Lambda}_{z_i}}^{-1}) \\\\\nK &= \\text{number of mixing components} \\\\\nN &= \\text{number of data points}\n\\end{align}\n" }, { "math_id": 108, "text": "K" }, { "math_id": 109, "text": "\\alpha_0" }, { "math_id": 110, "text": "\\mathcal{W}()" }, { "math_id": 111, "text": "\\mathcal{N}()" }, { "math_id": 112, "text": "\\mathbf{X} = \\{\\mathbf{x}_1, \\dots, \\mathbf{x}_N\\}" }, { "math_id": 113, "text": "D" }, { "math_id": 114, "text": "\\mathbf{Z} = \\{\\mathbf{z}_1, \\dots, \\mathbf{z}_N\\}" }, { "math_id": 115, "text": "z_{nk}" }, { "math_id": 116, "text": "k = 1 \\dots K" }, { "math_id": 117, "text": "\\mathbf{\\pi}" }, { "math_id": 118, "text": "\\mathbf{\\mu}_{i=1 \\dots K}" }, { "math_id": 119, "text": "\\mathbf{\\Lambda}_{i=1 \\dots K}" }, { "math_id": 120, "text": "p(\\mathbf{X},\\mathbf{Z},\\mathbf{\\pi},\\mathbf{\\mu},\\mathbf{\\Lambda}) = p(\\mathbf{X}\\mid \\mathbf{Z},\\mathbf{\\mu},\\mathbf{\\Lambda}) p(\\mathbf{Z}\\mid \\mathbf{\\pi}) p(\\mathbf{\\pi}) p(\\mathbf{\\mu}\\mid \\mathbf{\\Lambda}) p(\\mathbf{\\Lambda})" }, { "math_id": 121, "text": "\n\\begin{align}\np(\\mathbf{X}\\mid \\mathbf{Z},\\mathbf{\\mu},\\mathbf{\\Lambda}) & = \\prod_{n=1}^N \\prod_{k=1}^K \\mathcal{N}(\\mathbf{x}_n\\mid \\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k^{-1})^{z_{nk}} \\\\\np(\\mathbf{Z}\\mid \\mathbf{\\pi}) & = \\prod_{n=1}^N \\prod_{k=1}^K \\pi_k^{z_{nk}} \\\\\np(\\mathbf{\\pi}) & = \\frac{\\Gamma(K\\alpha_0)}{\\Gamma(\\alpha_0)^K} \\prod_{k=1}^K \\pi_k^{\\alpha_0-1} \\\\\np(\\mathbf{\\mu}\\mid \\mathbf{\\Lambda}) & = \\prod_{k=1}^K \\mathcal{N}(\\mathbf{\\mu}_k\\mid \\mathbf{\\mu}_0,(\\beta_0 \\mathbf{\\Lambda}_k)^{-1}) \\\\\np(\\mathbf{\\Lambda}) & = \\prod_{k=1}^K \\mathcal{W}(\\mathbf{\\Lambda}_k\\mid \\mathbf{W}_0, \\nu_0)\n\\end{align}\n" }, { "math_id": 122, "text": "\n\\begin{align}\n\\mathcal{N}(\\mathbf{x}\\mid \\mathbf{\\mu},\\mathbf{\\Sigma}) & = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\exp \\left\\{ -\\frac{1}{2}(\\mathbf{x}-\\mathbf{\\mu})^{\\rm T} \\mathbf{\\Sigma}^{-1}(\\mathbf{x}-\\mathbf{\\mu}) \\right\\} \\\\\n\\mathcal{W}(\\mathbf{\\Lambda}\\mid \\mathbf{W},\\nu) & = B(\\mathbf{W},\\nu) |\\mathbf{\\Lambda}|^{(\\nu-D-1)/2} \\exp \\left(-\\frac{1}{2} \\operatorname{Tr}(\\mathbf{W}^{-1}\\mathbf{\\Lambda}) \\right) \\\\\nB(\\mathbf{W},\\nu) & = |\\mathbf{W}|^{-\\nu/2} \\left\\{ 2^{\\nu D/2} \\pi^{D(D-1)/4} \\prod_{i=1}^{D} \\Gamma\\left(\\frac{\\nu + 1 - i}{2}\\right) \\right\\}^{-1} \\\\\nD & = \\text{dimensionality of each data point}\n\\end{align}\n" }, { "math_id": 123, "text": "q(\\mathbf{Z},\\mathbf{\\pi},\\mathbf{\\mu},\\mathbf{\\Lambda}) = 
q(\\mathbf{Z})q(\\mathbf{\\pi},\\mathbf{\\mu},\\mathbf{\\Lambda})" }, { "math_id": 124, "text": "\n\\begin{align}\n\\ln q^*(\\mathbf{Z}) &= \\operatorname{E}_{\\mathbf{\\pi},\\mathbf{\\mu},\\mathbf{\\Lambda}}[\\ln p(\\mathbf{X},\\mathbf{Z},\\mathbf{\\pi},\\mathbf{\\mu},\\mathbf{\\Lambda})] + \\text{constant} \\\\\n &= \\operatorname{E}_{\\mathbf{\\pi}}[\\ln p(\\mathbf{Z}\\mid \\mathbf{\\pi})] + \\operatorname{E}_{\\mathbf{\\mu},\\mathbf{\\Lambda}}[\\ln p(\\mathbf{X}\\mid \\mathbf{Z},\\mathbf{\\mu},\\mathbf{\\Lambda})] + \\text{constant} \\\\\n &= \\sum_{n=1}^N \\sum_{k=1}^K z_{nk} \\ln \\rho_{nk} + \\text{constant}\n\\end{align}\n" }, { "math_id": 125, "text": "\\ln \\rho_{nk} = \\operatorname{E}[\\ln \\pi_k] + \\frac{1}{2} \\operatorname{E}[\\ln |\\mathbf{\\Lambda}_k|] - \\frac{D}{2} \\ln(2\\pi) - \\frac{1}{2} \\operatorname{E}_{\\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k} [(\\mathbf{x}_n - \\mathbf{\\mu}_k)^{\\rm T} \\mathbf{\\Lambda}_k (\\mathbf{x}_n - \\mathbf{\\mu}_k)]" }, { "math_id": 126, "text": "\\ln q^*(\\mathbf{Z})" }, { "math_id": 127, "text": "q^*(\\mathbf{Z}) \\propto \\prod_{n=1}^N \\prod_{k=1}^K \\rho_{nk}^{z_{nk}}" }, { "math_id": 128, "text": "\\rho_{nk}" }, { "math_id": 129, "text": "k" }, { "math_id": 130, "text": "q^*(\\mathbf{Z}) = \\prod_{n=1}^N \\prod_{k=1}^K r_{nk}^{z_{nk}}" }, { "math_id": 131, "text": "r_{nk} = \\frac{\\rho_{nk}}{\\sum_{j=1}^K \\rho_{nj}}" }, { "math_id": 132, "text": "q^*(\\mathbf{Z})" }, { "math_id": 133, "text": "\\mathbf{z}_n" }, { "math_id": 134, "text": "r_{nk}" }, { "math_id": 135, "text": "\\operatorname{E}[z_{nk}] = r_{nk} \\, " }, { "math_id": 136, "text": "q(\\mathbf{\\pi},\\mathbf{\\mu},\\mathbf{\\Lambda})" }, { "math_id": 137, "text": "q(\\mathbf{\\pi}) \\prod_{k=1}^K q(\\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k)" }, { "math_id": 138, "text": "\n\\begin{align}\n\\ln q^*(\\mathbf{\\pi}) &= \\ln p(\\mathbf{\\pi}) + \\operatorname{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{Z}\\mid \\mathbf{\\pi})] + \\text{constant} \\\\\n &= (\\alpha_0 - 1) \\sum_{k=1}^K \\ln \\pi_k + \\sum_{n=1}^N \\sum_{k=1}^K r_{nk} \\ln \\pi_k + \\text{constant}\n\\end{align}\n" }, { "math_id": 139, "text": "q^*(\\mathbf{\\pi})" }, { "math_id": 140, "text": "q^*(\\mathbf{\\pi}) \\sim \\operatorname{Dir}(\\mathbf{\\alpha}) \\, " }, { "math_id": 141, "text": "\\alpha_k = \\alpha_0 + N_k \\, " }, { "math_id": 142, "text": "N_k = \\sum_{n=1}^N r_{nk} \\, " }, { "math_id": 143, "text": "\\ln q^*(\\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k) = \\ln p(\\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k) + \\sum_{n=1}^N \\operatorname{E}[z_{nk}] \\ln \\mathcal{N}(\\mathbf{x}_n\\mid \\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k^{-1}) + \\text{constant}" }, { "math_id": 144, "text": "\\mathbf{\\mu}_k" }, { "math_id": 145, "text": "\\mathbf{\\Lambda}_k" }, { "math_id": 146, "text": "q^*(\\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k) = \\mathcal{N}(\\mathbf{\\mu}_k\\mid \\mathbf{m}_k,(\\beta_k \\mathbf{\\Lambda}_k)^{-1}) \\mathcal{W}(\\mathbf{\\Lambda}_k\\mid \\mathbf{W}_k,\\nu_k)" }, { "math_id": 147, "text": "\n\\begin{align}\n\\beta_k &= \\beta_0 + N_k \\\\\n\\mathbf{m}_k &= \\frac{1}{\\beta_k} (\\beta_0 \\mathbf{\\mu}_0 + N_k {\\bar{\\mathbf{x}}}_k) \\\\\n\\mathbf{W}_k^{-1} &= \\mathbf{W}_0^{-1} + N_k \\mathbf{S}_k + \\frac{\\beta_0 N_k}{\\beta_0 + N_k} ({\\bar{\\mathbf{x}}}_k - \\mathbf{\\mu}_0)({\\bar{\\mathbf{x}}}_k - \\mathbf{\\mu}_0)^{\\rm T} \\\\\n\\nu_k &= \\nu_0 + N_k \\\\\nN_k &= \\sum_{n=1}^N r_{nk} \\\\\n{\\bar{\\mathbf{x}}}_k &= \\frac{1}{N_k} \\sum_{n=1}^N r_{nk} \\mathbf{x}_n \\\\\n\\mathbf{S}_k &= \\frac{1}{N_k} 
\\sum_{n=1}^N r_{nk} (\\mathbf{x}_n - {\\bar{\\mathbf{x}}}_k) (\\mathbf{x}_n - {\\bar{\\mathbf{x}}}_k)^{\\rm T}\n\\end{align}\n" }, { "math_id": 148, "text": "\\operatorname{E}[\\ln \\pi_k]" }, { "math_id": 149, "text": "\\operatorname{E}[\\ln |\\mathbf{\\Lambda}_k|]" }, { "math_id": 150, "text": "\\operatorname{E}_{\\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k} [(\\mathbf{x}_n - \\mathbf{\\mu}_k)^{\\rm T} \\mathbf{\\Lambda}_k (\\mathbf{x}_n - \\mathbf{\\mu}_k)]" }, { "math_id": 151, "text": "\n\\begin{align}\n\\operatorname{E}_{\\mathbf{\\mu}_k,\\mathbf{\\Lambda}_k} [(\\mathbf{x}_n - \\mathbf{\\mu}_k)^{\\rm T} \\mathbf{\\Lambda}_k (\\mathbf{x}_n - \\mathbf{\\mu}_k)] & = D\\beta_k^{-1} + \\nu_k (\\mathbf{x}_n - \\mathbf{m}_k)^{\\rm T} \\mathbf{W}_k (\\mathbf{x}_n - \\mathbf{m}_k) \\\\\n\\ln {\\widetilde{\\Lambda}}_k &\\equiv \\operatorname{E}[\\ln |\\mathbf{\\Lambda}_k|] = \\sum_{i=1}^D \\psi \\left(\\frac{\\nu_k + 1 - i}{2}\\right) + D \\ln 2 + \\ln |\\mathbf{W}_k| \\\\\n\\ln {\\widetilde{\\pi}}_k &\\equiv \\operatorname{E}\\left[\\ln |\\pi_k|\\right] = \\psi(\\alpha_k) - \\psi\\left(\\sum_{i=1}^K \\alpha_i\\right)\n\\end{align}\n" }, { "math_id": 152, "text": "r_{nk} \\propto {\\widetilde{\\pi}}_k {\\widetilde{\\Lambda}}_k^{1/2} \\exp \\left\\{ - \\frac{D}{2 \\beta_k} - \\frac{\\nu_k}{2} (\\mathbf{x}_n - \\mathbf{m}_k)^{\\rm T} \\mathbf{W}_k (\\mathbf{x}_n - \\mathbf{m}_k) \\right\\}" }, { "math_id": 153, "text": "\\beta_k" }, { "math_id": 154, "text": "\\mathbf{m}_k" }, { "math_id": 155, "text": "\\mathbf{W}_k" }, { "math_id": 156, "text": "\\nu_k" }, { "math_id": 157, "text": "N_k" }, { "math_id": 158, "text": "{\\bar{\\mathbf{x}}}_k" }, { "math_id": 159, "text": "\\mathbf{S}_k" }, { "math_id": 160, "text": "\\alpha_{1 \\dots K}" }, { "math_id": 161, "text": "{\\widetilde{\\pi}}_k" }, { "math_id": 162, "text": "{\\widetilde{\\Lambda}}_k" }, { "math_id": 163, "text": "p(\\mathbf{Z}\\mid \\mathbf{X})" } ]
https://en.wikipedia.org/wiki?curid=1208480
12085248
Pumping (oil well)
In the context of oil wells, pumping is a routine operation involving injecting fluids into the well. Pumping may either be done by rigging up to the kill wing valve on the Xmas tree or, if an intervention rig up is present, by pumping into the riser through a T-piece (a small section of riser with a connection on the side). Pumping is most routinely done to protect the well against scale and hydrates through the pumping of scale inhibitors and methanol. Pumping of kill weight brine may be done for the purposes of well kills, and more exotic chemicals may be pumped from surface for cleaning the lower completion or stimulating the reservoir (though these types of jobs are more frequently done with coiled tubing for extra precision). Importance of knowing quantity. Work involving wells is fraught with difficulties as there is often very little information about the real time condition of the completion. This lack of knowledge also covers potential damage and even loss of well integrity. Therefore, it is essential for the operator to pay attention to the pressures as recorded and to the quantity pumped. A premature increase in pressure is a sign of a potential blockage, and continuing to pump risks bursting pressure-retaining components. Pumping more than the anticipated amount of fluid is a sign of a loss of integrity and a potential leak path somewhere. In either of these two situations, pumping must be stopped and the potential causes analysed. Compressed volumes. It is vital to know the effective capacity of the completion being filled in order to understand what volumes are sensible. If pumping is to continue until reaching a desired pressurisation, then the compressibility of the fluid will become significant. It is therefore important to know how much the fluid will compress under pressure in order to know how much extra fluid is expected to be required. As a rule of thumb in the oilfield, compression is governed by the equation: formula_0 where ΔV is the change in volume, P is the pressure at surface and V is the volume of fluid unpressurised. k is a compression factor, approximately 3.5×10−6 psi−1. For example, a volume of 300 bbl is to be filled with brine and pressurised to 3000 psi at the surface. The compression is formula_1 formula_2 formula_3 Therefore, it is expected that 303.15 bbl are required to accomplish this task. If 3000 psi is achieved prior to this quantity being pumped, a blockage is to be suspected. If after pumping 303 bbl, pressurisation is not achieved, a leak is to be suspected.
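For convenience, the worked example above can be reproduced with a few lines of code (a trivial sketch, not part of the original article):

```python
# Rule-of-thumb compressed-volume calculation from the worked example above.
k = 3.5e-6      # compression factor, per psi
P = 3000.0      # surface pressure, psi
V = 300.0       # unpressurised volume, bbl

delta_V = P * V * k
print(delta_V)        # 3.15 bbl of additional fluid due to compression
print(V + delta_V)    # 303.15 bbl expected in total to reach 3000 psi
```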
[ { "math_id": 0, "text": "\\Delta V=P \\times V \\times k" }, { "math_id": 1, "text": "\\Delta V=PVk" }, { "math_id": 2, "text": "\\Delta V=3000 psi \\times 300 bbl \\times 3.5 \\times 10^{-6} psi^{-1}" }, { "math_id": 3, "text": "\\Delta V = 3.15 bbl" } ]
https://en.wikipedia.org/wiki?curid=12085248
12085484
Slepian's lemma
In probability theory, Slepian's lemma (1962), named after David Slepian, is a Gaussian comparison inequality. It states that for Gaussian random variables formula_0 and formula_1 in formula_2 satisfying formula_3, formula_4 the following inequality holds for all real numbers formula_5: formula_6 or equivalently, formula_7 While this intuitive-seeming result is true for Gaussian processes, it is not in general true for other random variables—not even those with expectation 0. As a corollary, if formula_8 is a centered stationary Gaussian process such that formula_9 for all formula_10, it holds for any real number formula_11 that formula_12 History. Slepian's lemma was first proven by Slepian in 1962, and has since been used in reliability theory, extreme value theory and areas of pure probability. It has also been re-proven in several different forms.
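The inequality can also be illustrated numerically (this sketch is not part of the original article). With X having independent standard normal coordinates and Y equicorrelated standard normal coordinates with correlation 0.5, the hypotheses of the lemma hold, so the probability that all coordinates stay below a common threshold should be at least as large for Y as for X. A Monte Carlo check, assuming NumPy is available:

```python
import numpy as np

# X: independent standard normals; Y: equicorrelated with correlation 0.5, same variances.
rng = np.random.default_rng(0)
n_samples, u = 200_000, 1.0                      # common threshold u_1 = u_2 = u_3 = 1

cov_x = np.eye(3)                                # E[X_i X_j] = 0 for i != j
cov_y = np.full((3, 3), 0.5)
np.fill_diagonal(cov_y, 1.0)                     # E[Y_i Y_j] = 0.5 >= 0 for i != j

x = rng.multivariate_normal(np.zeros(3), cov_x, size=n_samples)
y = rng.multivariate_normal(np.zeros(3), cov_y, size=n_samples)

p_x = np.mean(np.all(x <= u, axis=1))            # estimates P[X_1 <= u, X_2 <= u, X_3 <= u]
p_y = np.mean(np.all(y <= u, axis=1))
print(p_x, p_y)                                  # Slepian's lemma predicts p_x <= p_y
```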
[ { "math_id": 0, "text": "X = (X_1,\\dots,X_n)" }, { "math_id": 1, "text": "Y = (Y_1,\\dots,Y_n)" }, { "math_id": 2, "text": "\\mathbb{R}^n" }, { "math_id": 3, "text": "\\operatorname E[X] = \\operatorname E[Y] = 0" }, { "math_id": 4, "text": "\\operatorname E[X_i^2]= \\operatorname E[Y_i^2], \\quad i=1,\\dots,n, \\text{ and } \\operatorname E[X_iX_j] \\le \\operatorname E[Y_i Y_j] \\text{ for } i \\neq j." }, { "math_id": 5, "text": "u_1,\\ldots,u_n" }, { "math_id": 6, "text": "\\Pr\\left[\\bigcap_{i=1}^n \\{X_i \\le u_i\\}\\right] \\le \\Pr\\left[\\bigcap_{i=1}^n \\{Y_i \\le u_i\\}\\right], " }, { "math_id": 7, "text": "\\Pr\\left[\\bigcup_{i=1}^n \\{X_i > u_i\\}\\right] \\ge \\Pr\\left[\\bigcup_{i=1}^n \\{Y_i > u_i\\}\\right]. " }, { "math_id": 8, "text": "(X_t)_{t \\ge 0}" }, { "math_id": 9, "text": "\\operatorname E[X_0 X_t] \\geq 0" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "c" }, { "math_id": 12, "text": "\\Pr\\left[\\sup_{t \\in [0,T+S]} X_t \\leq c\\right] \\ge \\Pr\\left[\\sup_{t \\in [0,T]} X_t \\leq c\\right] \\Pr \\left[\\sup_{t \\in [0,S]} X_t \\leq c\\right], \\quad T,S > 0. " } ]
https://en.wikipedia.org/wiki?curid=12085484
12086637
Borel right process
In the mathematical theory of probability, a Borel right process, named after Émile Borel, is a particular kind of continuous-time random process. Let formula_0 be a locally compact, separable, metric space. We denote by formula_1 the Borel subsets of formula_0. Let formula_2 be the space of right continuous maps from formula_3 to formula_0 that have left limits in formula_0, and for each formula_4, denote by formula_5 the coordinate map at formula_6; for each formula_7, formula_8 is the value of formula_9 at formula_6. We denote the universal completion of formula_1 by formula_10. For each formula_11, let formula_12 formula_13 and then, let formula_14 formula_15 For each Borel measurable function formula_16 on formula_17, define, for each formula_18, formula_19 Since formula_20 and the mapping given by formula_21 is right continuous, we see that for any uniformly continuous function formula_22, the mapping given by formula_23 is right continuous. Therefore, together with the monotone class theorem, for any universally measurable function formula_22, the mapping given by formula_24 is jointly measurable, that is, formula_25 measurable, and subsequently, the mapping is also formula_26-measurable for all finite measures formula_27 on formula_28 and formula_29 on formula_10. Here, formula_26 is the completion of formula_30 with respect to the product measure formula_31. Thus, for any bounded universally measurable function formula_22 on formula_0, the mapping formula_32 is Lebesgue measurable, and hence, for each formula_33, one can define formula_34 There is enough joint measurability to check that formula_35 is a Markov resolvent on formula_36, which is uniquely associated with the Markovian semigroup formula_37. Consequently, one may apply Fubini's theorem to see that formula_38 The following are the defining properties of Borel right processes: For each probability measure formula_29 on formula_39, there exists a probability measure formula_40 on formula_41 such that formula_42 is a Markov process with initial measure formula_29 and transition semigroup formula_37. Let formula_22 be formula_43-excessive for the resolvent on formula_44. Then, for each probability measure formula_29 on formula_45, the mapping given by formula_46 is formula_47 almost surely right continuous on formula_3.
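As a numerical illustration of the resolvent formula above (not part of the original article), one can estimate U^α f(x) for standard Brownian motion by simulating paths and truncating the discounted time integral. The test function f(x) = cos(x), all numerical parameters, and the closed form used for comparison (obtained by solving α u − u″/2 = f for the Brownian generator) are choices made for this sketch.

```python
import numpy as np

# Monte Carlo estimate of U^alpha f(x0) = E^x0[ ∫_0^∞ e^(-alpha t) f(X_t) dt ] for
# standard Brownian motion, with the bounded test function f(x) = cos(x). The time
# integral is truncated at T, where e^(-alpha T) is negligible.
rng = np.random.default_rng(1)
alpha, x0 = 1.0, 0.0
f = np.cos

dt, T, n_paths = 0.01, 20.0, 4000
n_steps = int(T / dt)
t = np.arange(1, n_steps + 1) * dt

# Brownian paths as cumulative sums of independent N(0, dt) increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = x0 + np.cumsum(increments, axis=1)

# Riemann-sum approximation of the discounted time integral along each path.
integrals = np.sum(np.exp(-alpha * t) * f(paths) * dt, axis=1)
print(integrals.mean())

# Comparison value: the generator of standard Brownian motion is (1/2) d^2/dx^2, so
# U^alpha cos(x) = cos(x) / (alpha + 1/2); at x0 = 0 and alpha = 1 this is 2/3.
print(np.cos(x0) / (alpha + 0.5))
```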
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "\\mathcal E" }, { "math_id": 2, "text": "\\Omega" }, { "math_id": 3, "text": "[0,\\infty)" }, { "math_id": 4, "text": "t \\in [0,\\infty)" }, { "math_id": 5, "text": "X_t" }, { "math_id": 6, "text": "t" }, { "math_id": 7, "text": "\\omega \\in \\Omega " }, { "math_id": 8, "text": "X_t(\\omega) \\in E" }, { "math_id": 9, "text": "\\omega" }, { "math_id": 10, "text": "\\mathcal E^*" }, { "math_id": 11, "text": "t\\in[0,\\infty)" }, { "math_id": 12, "text": "\n\\mathcal F_t = \\sigma\\left\\{ X_s^{-1}(B) : s\\in[0,t], B \\in \\mathcal E\\right\\},\n" }, { "math_id": 13, "text": "\n\\mathcal F_t^* = \\sigma\\left\\{ X_s^{-1}(B) : s\\in[0,t], B \\in \\mathcal E^*\\right\\},\n" }, { "math_id": 14, "text": "\n\\mathcal F_\\infty = \\sigma\\left\\{ X_s^{-1}(B) : s\\in[0,\\infty), B \\in \\mathcal E\\right\\}, \n" }, { "math_id": 15, "text": "\n\\mathcal F_\\infty^* = \\sigma\\left\\{ X_s^{-1}(B) : s\\in[0,\\infty), B \\in \\mathcal E^*\\right\\}.\n" }, { "math_id": 16, "text": " f " }, { "math_id": 17, "text": " E" }, { "math_id": 18, "text": "x \\in E" }, { "math_id": 19, "text": "\nU^\\alpha f(x) = \\mathbf E^x\\left[ \\int_0^\\infty e^{-\\alpha t} f(X_t)\\, dt \\right].\n" }, { "math_id": 20, "text": "P_tf(x) = \\mathbf E^x\\left[f(X_t)\\right]" }, { "math_id": 21, "text": "t \\rightarrow X_t" }, { "math_id": 22, "text": "f" }, { "math_id": 23, "text": "t \\rightarrow P_tf(x)" }, { "math_id": 24, "text": "(t,x) \\rightarrow P_tf(x)" }, { "math_id": 25, "text": "\\mathcal B([0,\\infty))\\otimes \\mathcal E^* " }, { "math_id": 26, "text": "\\left(\\mathcal B([0,\\infty))\\otimes \\mathcal E^*\\right)^{\\lambda\\otimes \\mu}" }, { "math_id": 27, "text": "\\lambda" }, { "math_id": 28, "text": "\\mathcal B([0,\\infty))" }, { "math_id": 29, "text": "\\mu" }, { "math_id": 30, "text": "\\mathcal B([0,\\infty))\\otimes \\mathcal E^*" }, { "math_id": 31, "text": "\\lambda \\otimes \\mu" }, { "math_id": 32, "text": "t\\rightarrow P_tf(x)" }, { "math_id": 33, "text": "\\alpha \\in [0,\\infty) " }, { "math_id": 34, "text": " \nU^\\alpha f(x) = \\int_0^\\infty e^{-\\alpha t}P_tf(x) dt. \n" }, { "math_id": 35, "text": "\\{U^\\alpha : \\alpha \\in (0,\\infty)\n\\}" }, { "math_id": 36, "text": "(E,\\mathcal E^*)" }, { "math_id": 37, "text": "\\{ P_t : t \\in [0,\\infty) \\}" }, { "math_id": 38, "text": " \nU^\\alpha f(x) = \\mathbf E^x\\left[ \\int_0^\\infty e^{-\\alpha t} f(X_t) dt \\right]. \n" }, { "math_id": 39, "text": "(E, \\mathcal E)" }, { "math_id": 40, "text": "\\mathbf P^\\mu" }, { "math_id": 41, "text": "(\\Omega, \\mathcal F^*)" }, { "math_id": 42, "text": "(X_t, \\mathcal F_t^*, P^\\mu)" }, { "math_id": 43, "text": "\\alpha" }, { "math_id": 44, "text": "(E, \\mathcal E^*)" }, { "math_id": 45, "text": "(E,\\mathcal E)" }, { "math_id": 46, "text": "t \\rightarrow f(X_t)" }, { "math_id": 47, "text": "P^\\mu" } ]
https://en.wikipedia.org/wiki?curid=12086637
12087798
Johnson circles
Geometric theorem regarding 3 circles intersecting at a point In geometry, a set of Johnson circles comprises three circles of equal radius r sharing one common point of intersection H. In such a configuration the circles usually have a total of four intersections (points where at least two of them meet): the common point H that they all share, and for each of the three pairs of circles one more intersection point (referred to here as their 2-wise intersection). If any two of the circles happen to osculate, they have only H as a common point, and H is then regarded as their 2-wise intersection as well; if they coincide, their 2-wise intersection is declared to be the point diametrically opposite H. The three 2-wise intersection points define the reference triangle of the figure. The concept is named after Roger Arthur Johnson. Proofs. Property 1 is obvious from the definition. Property 2 is also clear: for any circle of radius r, and any point P on it, the circle of radius 2"r" centered at P is tangent to the circle in its point opposite to P; this applies in particular to "P" = "H", giving the anticomplementary circle C. Property 3, in its formulation as a homothety, follows immediately; the triangle of points of tangency is known as the anticomplementary triangle. For properties 4 and 5, first observe that any two of the three Johnson circles are interchanged by the reflection in the line connecting H and their 2-wise intersection (or in their common tangent at H if these points should coincide), and this reflection also interchanges the two vertices of the anticomplementary triangle lying on these circles. The 2-wise intersection point therefore is the midpoint of a side of the anticomplementary triangle, and H lies on the perpendicular bisector of this side. Now the midpoints of the sides of any triangle are the images of its vertices by a homothety with factor −½, centered at the barycenter of the triangle. Applied to the anticomplementary triangle, which is itself obtained from the Johnson triangle by a homothety with factor 2, it follows from composition of homotheties that the reference triangle is homothetic to the Johnson triangle by a factor −1. Since such a homothety is a congruence, this gives property 5, and also the Johnson circles theorem since congruent triangles have circumscribed circles of equal radius. For property 6, it was already established that the perpendicular bisectors of the sides of the anticomplementary triangle all pass through the point H; since each such side is parallel to a side of the reference triangle, these perpendicular bisectors are also the altitudes of the reference triangle. Property 7 follows immediately from property 6, since the homothetic center, whose factor is −1, must lie at the midpoint of the segment joining the circumcenter O of the reference triangle and the circumcenter H of the Johnson triangle; the latter is the orthocenter of the reference triangle, and its nine-point center is known to be that midpoint. Since the central symmetry also maps the orthocenter of the reference triangle to that of the Johnson triangle, the homothetic center is also the nine-point center of the Johnson triangle. There is also an algebraic proof of the Johnson circles theorem, using a simple vector computation. There are vectors formula_0, all of length r, such that the Johnson circles are centered respectively at formula_1 Then the 2-wise intersection points are respectively formula_2, and the point formula_3 clearly has distance r to any of those 2-wise intersection points.
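The vector argument at the end of the proof is easy to check numerically. The following sketch is an illustration and not part of the article: it picks a random common point H and three random vectors of equal length r, then verifies the distances asserted above. All names and the chosen values are arbitrary.

```python
import numpy as np

# Check of the vector computation: with u, v, w of equal length r, the circles centred at
# H+u, H+v, H+w all pass through H, their pairwise intersections are H+u+v, H+u+w, H+v+w,
# and H+u+v+w is at distance r from all three of those points.
rng = np.random.default_rng(1)
r = 2.0
H = rng.normal(size=2)
angles = rng.uniform(0, 2 * np.pi, size=3)
u, v, w = (r * np.array([np.cos(a), np.sin(a)]) for a in angles)

centers = [H + u, H + v, H + w]
pairwise = [H + u + v, H + u + w, H + v + w]
far_point = H + u + v + w

dist = lambda p, q: np.linalg.norm(p - q)

# Each circle passes through H (all three distances equal r).
print([round(dist(c, H), 12) for c in centers])
# H+u+v lies on the circles centred at H+u and at H+v.
print(round(dist(pairwise[0], centers[0]), 12), round(dist(pairwise[0], centers[1]), 12))
# H+u+v+w is at distance r from every 2-wise intersection point,
# so the reference triangle has circumradius r.
print([round(dist(far_point, p), 12) for p in pairwise])
```

Every printed distance equals r up to rounding, which is exactly the content of the vector proof.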
Further properties. The three Johnson circles can be considered the reflections of the circumcircle of the reference triangle about each of the three sides of the reference triangle. Furthermore, under the reflections about the three sides of the reference triangle, its orthocenter H maps to three points on the circumcircle of the reference triangle that form the vertices of the circum-orthic triangle; its circumcenter O maps onto the vertices of the Johnson triangle; and its Euler line (the line passing through O, the nine-point center N, and H) generates three lines that are concurrent at "X"(110). The Johnson triangle and its reference triangle share the same nine-point center, the same Euler line and the same nine-point circle. The six points formed from the vertices of the reference triangle and its Johnson triangle all lie on the Johnson circumconic that is centered at the nine-point center and that has the point "X"(216) of the reference triangle as its perspector. The circumconic and the circumcircle share a fourth point, "X"(110) of the reference triangle. Finally, there are two interesting and documented circumcubics that pass through the six vertices of the reference triangle and its Johnson triangle as well as the circumcenter, the orthocenter and the nine-point center. The first is known as the first Musselman cubic – "K"026. This cubic also passes through the six vertices of the medial triangle and of the medial triangle of the Johnson triangle. The second cubic is known as the Euler central cubic – "K"044. This cubic also passes through the six vertices of the orthic triangle and of the orthic triangle of the Johnson triangle. The "X"("i") point notation is the Clark Kimberling ETC classification of triangle centers. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\vec{u}, \\vec{v}, \\vec{w}," }, { "math_id": 1, "text": "H+\\vec{u}, H+\\vec{v}, H+\\vec{w}." }, { "math_id": 2, "text": "H+\\vec{u}+\\vec{v}, H+\\vec{u}+\\vec{w}, H+\\vec{v}+\\vec{w}" }, { "math_id": 3, "text": "H+\\vec{u}+\\vec{v}+\\vec{w}" } ]
https://en.wikipedia.org/wiki?curid=12087798
1208872
Shannon's source coding theorem
Establishes the limits to possible data compression In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the statistical limits to possible data compression for data whose source is an independent identically-distributed random variable, and the operational meaning of the Shannon entropy. Named after Claude Shannon, the source coding theorem shows that, in the limit, as the length of a stream of independent and identically-distributed random variable (i.i.d.) data tends to infinity, it is impossible to compress such data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source, without it being virtually certain that information will be lost. However, it is possible to get the code rate arbitrarily close to the Shannon entropy, with negligible probability of loss. The source coding theorem for symbol codes places an upper and a lower bound on the minimal possible expected length of codewords as a function of the entropy of the input word (which is viewed as a random variable) and of the size of the target alphabet. Note that, for data that exhibits more dependencies (whose source is not an i.i.d. random variable), the Kolmogorov complexity, which quantifies the minimal description length of an object, is more suitable to describe the limits of data compression. Shannon entropy takes into account only frequency regularities while Kolmogorov complexity takes into account all algorithmic regularities, so in general the latter is smaller. On the other hand, if an object is generated by a random process in such a way that it has only frequency regularities, entropy is close to complexity with high probability (Shen et al. 2017). Statements. "Source coding" is a mapping from (a sequence of) symbols from an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the binary bits (lossless source coding) or recovered within some distortion (lossy source coding). This is one approach to data compression. Source coding theorem. In information theory, the source coding theorem (Shannon 1948) informally states that (MacKay 2003, pg. 81, Cover 2006, Chapter 5): N i.i.d. random variables each with entropy "H"("X") can be compressed into more than "N H"("X") bits with negligible risk of information loss, as "N" → ∞; but conversely, if they are compressed into fewer than "N H"("X") bits it is virtually certain that information will be lost. The formula_0 coded sequence represents the compressed message in a biunivocal (one-to-one) way, under the assumption that the decoder knows the source. From a practical point of view, this hypothesis is not always true. Consequently, when entropy encoding is applied, the transmitted message is formula_1. Usually, the information that characterizes the source is inserted at the beginning of the transmitted message. Source coding theorem for symbol codes. Let Σ1, Σ2 denote two finite alphabets and let Σ1* and Σ2* denote the sets of all finite words from those alphabets (respectively). Suppose that X is a random variable taking values in Σ1 and let "f" be a uniquely decodable code from Σ1* to Σ2*, where |Σ2| = "a". Let S denote the random variable given by the length of the codeword "f"("X"). If "f" is optimal in the sense that it has the minimal expected word length for X, then (Shannon 1948): formula_2 where formula_3 denotes the expected value operator. Proof: source coding theorem. Given X is an i.i.d.
source, its time series "X"1, ..., "Xn" is i.i.d. with entropy "H"("X") in the discrete-valued case and differential entropy in the continuous-valued case. The source coding theorem states that for any "ε" > 0, i.e. for any rate "H"("X") + "ε" larger than the entropy of the source, there exist a large enough n and an encoder that takes n i.i.d. repetitions of the source, "X"1:"n", and maps it to "n"("H"("X") + "ε") binary bits such that the source symbols "X"1:"n" are recoverable from the binary bits with probability of at least 1 − "ε". Proof of Achievability. Fix some "ε" > 0, and let formula_4 The typical set, "A", is defined as follows: formula_5 The asymptotic equipartition property (AEP) shows that for large enough n, the probability that a sequence generated by the source lies in the typical set "A", as defined above, approaches one. In particular, for sufficiently large n, formula_6 can be made arbitrarily close to 1, and specifically, greater than formula_7 (See AEP for a proof). The definition of typical sets implies that those sequences that lie in the typical set satisfy: formula_8 Note that the probability of a sequence formula_9 being drawn from "A" is greater than 1 − "ε", that formula_10, which follows from the lower bound on formula_11, and that formula_12, which follows from the upper bound on formula_11 together with the lower bound on the total probability of the set "A". Since formula_13 bits are enough to point to any string in this set. The encoding algorithm: the encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary "n"("H"("X") + "ε")-digit number. As long as the input sequence lies within the typical set (with probability at least 1 − "ε"), the encoder does not make any error. So, the probability of error of the encoder is bounded above by ε. "Proof of converse": the converse is proved by showing that any set of size smaller than "A" (in the sense of exponent) would cover a set of probability bounded away from 1. Proof: Source coding theorem for symbol codes. For 1 ≤ "i" ≤ "n" let "si" denote the word length of each possible "xi". Define formula_14, where C is chosen so that "q"1 + ... + "qn" = 1. Then formula_15 where the second line follows from Gibbs' inequality and the fifth line follows from Kraft's inequality: formula_16 so log "C" ≤ 0. For the second inequality we may set formula_17 so that formula_18 and so formula_19 and formula_20 and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal S satisfies formula_21 Extension to non-stationary independent sources. Fixed rate lossless source coding for discrete time non-stationary independent sources. Define the typical set "A" as: formula_22 Then, for given "δ" > 0, for n large enough, Pr("A") > 1 − "δ". Now we just encode the sequences in the typical set, and usual methods in source coding show that the cardinality of this set is smaller than formula_23. Thus, on average, "n"("H""n"("X") + "ε") bits (the base-2 logarithm of this cardinality) suffice for encoding with probability greater than 1 − "δ", where ε and δ can be made arbitrarily small by making n larger. References. <templatestyles src="Reflist/styles.css" />
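The symbol-code bound formula_2 is easy to see in action for a binary target alphabet, where log2 "a" = 1. The sketch below is an illustration, not drawn from the article: it builds an optimal binary prefix code with Huffman's algorithm for a small distribution and compares the expected codeword length with the entropy. The function name and the example distribution are arbitrary choices.

```python
import heapq
import math

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary prefix code for the given probabilities."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]   # (weight, tiebreak, symbols in subtree)
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, t, s2 = heapq.heappop(heap)
        for s in s1 + s2:            # every merge adds one bit to all symbols in both subtrees
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, t, s1 + s2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.0625, 0.0625]
entropy = -sum(p * math.log2(p) for p in probs)
lengths = huffman_lengths(probs)
expected_length = sum(p * l for p, l in zip(probs, lengths))

print(f"H(X) = {entropy:.4f} bits")
print(f"E[S] = {expected_length:.4f} bits   (bound: H(X) <= E[S] < H(X) + 1)")
```

For the dyadic distribution chosen here the two quantities coincide exactly; for a non-dyadic distribution the optimal expected length falls strictly between H(X) and H(X) + 1.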
[ { "math_id": 0, "text": "NH(X)" }, { "math_id": 1, "text": "NH(X)+(inf. source)" }, { "math_id": 2, "text": " \\frac{H(X)}{\\log_2 a} \\leq \\mathbb{E}[S] < \\frac{H(X)}{\\log_2 a} +1 " }, { "math_id": 3, "text": "\\mathbb{E}" }, { "math_id": 4, "text": "p(x_1, \\ldots, x_n) = \\Pr \\left[X_1 = x_1, \\cdots, X_n = x_n \\right]." }, { "math_id": 5, "text": "A_n^\\varepsilon =\\left\\{(x_1, \\cdots, x_n) \\ : \\ \\left|-\\frac{1}{n} \\log p(x_1, \\cdots, x_n) - H_n(X)\\right| < \\varepsilon \\right\\}." }, { "math_id": 6, "text": "P((X_1,X_2,\\cdots,X_n) \\in A_n^\\varepsilon)" }, { "math_id": 7, "text": "1-\\varepsilon" }, { "math_id": 8, "text": "2^{-n(H(X)+\\varepsilon)} \\leq p \\left (x_1, \\cdots, x_n \\right ) \\leq 2^{-n(H(X)-\\varepsilon)}" }, { "math_id": 9, "text": "(X_1,X_2,\\cdots X_n)" }, { "math_id": 10, "text": "\\left| A_n^\\varepsilon \\right| \\leq 2^{n(H(X)+\\varepsilon)}" }, { "math_id": 11, "text": " p(x_1,x_2,\\cdots x_n)" }, { "math_id": 12, "text": "\\left| A_n^\\varepsilon \\right| \\geq (1-\\varepsilon) 2^{n(H(X)-\\varepsilon)}" }, { "math_id": 13, "text": "\\left| A_n^\\varepsilon \\right| \\leq 2^{n(H(X)+\\varepsilon)}, n(H(X)+\\varepsilon)" }, { "math_id": 14, "text": "q_i = a^{-s_i}/C" }, { "math_id": 15, "text": "\\begin{align}\nH(X) &= -\\sum_{i=1}^n p_i \\log_2 p_i \\\\\n &\\leq -\\sum_{i=1}^n p_i \\log_2 q_i \\\\\n &= -\\sum_{i=1}^n p_i \\log_2 a^{-s_i} + \\sum_{i=1}^n p_i \\log_2 C \\\\\n &= -\\sum_{i=1}^n p_i \\log_2 a^{-s_i} + \\log_2 C \\\\\n &\\leq -\\sum_{i=1}^n - s_i p_i \\log_2 a \\\\\n &= \\mathbb{E} S \\log_2 a \\\\\n\\end{align}" }, { "math_id": 16, "text": "C = \\sum_{i=1}^n a^{-s_i} \\leq 1" }, { "math_id": 17, "text": "s_i = \\lceil - \\log_a p_i \\rceil " }, { "math_id": 18, "text": " - \\log_a p_i \\leq s_i < -\\log_a p_i + 1 " }, { "math_id": 19, "text": " a^{-s_i} \\leq p_i" }, { "math_id": 20, "text": " \\sum a^{-s_i} \\leq \\sum p_i = 1" }, { "math_id": 21, "text": "\\begin{align}\n\\mathbb{E} S & = \\sum p_i s_i \\\\\n& < \\sum p_i \\left( -\\log_a p_i +1 \\right) \\\\\n& = \\sum - p_i \\frac{\\log_2 p_i}{\\log_2 a} +1 \\\\\n& = \\frac{H(X)}{\\log_2 a} +1 \\\\\n\\end{align}" }, { "math_id": 22, "text": "A_n^\\varepsilon = \\left \\{x_1^n \\ : \\ \\left|-\\frac{1}{n} \\log p \\left (X_1, \\cdots, X_n \\right ) - \\overline{H_n}(X)\\right| < \\varepsilon \\right \\}." }, { "math_id": 23, "text": "2^{n(\\overline{H_n}(X)+\\varepsilon)}" } ]
https://en.wikipedia.org/wiki?curid=1208872
12088839
Boroxine
6-sided cyclic compound of oxygen and boron <templatestyles src="Chembox/styles.css"/> Chemical compound Boroxine is a 6-membered heterocyclic compound composed of alternating oxygen and singly-hydrogenated boron atoms. Boroxine derivatives (boronic anhydrides) such as trimethylboroxine and triphenylboroxine also make up a broader class of compounds called boroxines. These compounds are solids that are usually in equilibrium with their respective boronic acids at room temperature. Besides being used in theoretical studies, boroxine is primarily used in the production of optics. Structure and bonding. Three-coordinate compounds of boron typically exhibit trigonal planar geometry; therefore, the boroxine ring is locked in a planar geometry as well. These compounds are isoelectronic with benzene. With the vacant p-orbital on the boron atoms, they may possess some aromatic character. The boron single bonds in boroxine compounds have mostly s-character. Ethyl-substituted boroxine has B-O bond lengths of 1.384 Å and B-C bond lengths of 1.565 Å. Phenyl-substituted boroxine has similar bond lengths of 1.386 Å and 1.546 Å respectively, showing that the substituent has little effect on the boroxine ring size. The substituents on a boroxine ring determine its crystal structure. Alkyl-substituted boroxines have the simplest crystal structure. These molecules stack on top of each other, aligning an oxygen atom from one molecule with a boron atom in another, leaving each boron atom between two other oxygen atoms. This forms a tube out of the individual boroxine rings. The intermolecular B-O distance of ethyl-substituted boroxine is 3.462 Å, which is much longer than the B-O bond distance of 1.384 Å. The crystal structure of phenyl-substituted boroxine is more complex. The interaction between the vacant p-orbitals in the boron atoms and the π-electrons in the aromatic phenyl substituents causes a different crystal structure. The boroxine ring of one molecule is stacked between two phenyl rings of other molecules. This arrangement allows the phenyl substituents to donate π-electron density to the vacant boron p-orbitals. Synthesis. The parent boroxine ("cyclo"-(HBO)3) is prepared in small quantities as a low pressure gas by high temperature reaction of water and elemental boron or reaction of various boranes (B2H6 or B5H9) with O2. It is thermodynamically unstable with respect to disproportionation to diborane and boron oxide. Some reactivity studies and an IR spectrum are reported, but it is otherwise not well characterized. As discovered in the 1930s, substituted boroxines ("cyclo"-(RBO)3, R = alkyl or aryl) are generally produced from their corresponding boronic acids by dehydration. This dehydration can be done either by a drying agent or by heating under a high vacuum. Trimethylboroxine can be synthesized by reacting carbon monoxide with diborane (B2H6) and lithium borohydride (LiBH4) as a catalyst (or reaction of borane–tetrahydrofuran or borane–(dimethyl sulfide) in the presence of sodium borohydride): formula_0 Reactions. Trimethylboroxine is used in the methylation of various aryl halides through palladium-catalyzed Suzuki-Miyaura coupling reactions: C6H5X + (CH3BO)3 → C6H5CH3 (X = Br, I; with K2CO3 and Pd(PPh3)4 in dioxane). Another form of the Suzuki-Miyaura coupling reaction exhibits selectivity to aryl chlorides. Boroxines have also been examined as precursors to monomeric oxoborane, HB≡O.
This compound quickly converts back to the cyclic boroxine, even at low temperatures. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "3 \\ \\ce{CO} + \\frac{3}{2} \\ \\ce{B2H6} \\ce{->[\\ce{LiBH4 (catalyst)}] (H3C BO)3}" } ]
https://en.wikipedia.org/wiki?curid=12088839
12089705
Continuous function (set theory)
In set theory, a continuous function is a sequence of ordinals such that the values assumed at limit stages are the limits (limit suprema and limit infima) of all values at previous stages. More formally, let "γ" be an ordinal, and formula_0 be a "γ"-sequence of ordinals. Then "s" is continuous if at every limit ordinal "β" < "γ", formula_1 and formula_2 Alternatively, if "s" is an increasing function then "s" is continuous if "s": "γ" → range("s") is a continuous function when the sets are each equipped with the order topology. These continuous functions are often used in cofinalities and cardinal numbers. A normal function is a function that is both continuous and strictly increasing. References. <templatestyles src="Refbegin/styles.css" />
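Two standard examples, not taken from the article, may make the definition concrete; they are written out in LaTeX below for an increasing γ-sequence.

```latex
% The successor map is strictly increasing but not continuous: at the limit ordinal \omega,
\[
  s_\alpha = \alpha + 1, \qquad
  \sup\{\, s_\alpha : \alpha < \omega \,\} = \omega \neq \omega + 1 = s_\omega .
\]
% Base-\omega ordinal exponentiation is strictly increasing and continuous at every limit
% ordinal, hence it is a normal function:
\[
  s_\alpha = \omega^{\alpha}, \qquad
  s_\beta = \sup\{\, \omega^{\alpha} : \alpha < \beta \,\} \quad \text{for every limit ordinal } \beta .
\]
```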
[ { "math_id": 0, "text": "s := \\langle s_{\\alpha}| \\alpha < \\gamma\\rangle" }, { "math_id": 1, "text": "s_{\\beta} = \\limsup\\{s_{\\alpha}: \\alpha < \\beta\\} = \\inf \\{ \\sup\\{s_{\\alpha}: \\delta \\leq \\alpha < \\beta\\} : \\delta < \\beta\\} " }, { "math_id": 2, "text": "s_{\\beta} = \\liminf\\{s_{\\alpha}: \\alpha < \\beta\\} = \\sup \\{ \\inf\\{s_{\\alpha}: \\delta \\leq \\alpha < \\beta\\} : \\delta < \\beta\\} \\,." } ]
https://en.wikipedia.org/wiki?curid=12089705
1209
Area
Size of a two-dimensional surface Area is the measure of a region's size on a surface. The area of a plane region or "plane area" refers to the area of a shape or planar lamina, while "surface area" refers to the area of an open surface or the boundary of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analogue of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept). Two different regions may have the same area (as in squaring the circle); by synecdoche, "area" sometimes is used to refer to the region, as in a "polygonal area". The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m2), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number. There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus. For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus. Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable if one supposes the axiom of choice. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions. Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists. Formal definition. An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function from a collection M of a special kinds of plane figures (termed measurable sets) to the set of real numbers, which satisfies the following properties: It can be proved that such an area function actually exists. Units. Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m2), square centimetres (cm2), square millimetres (mm2), square kilometres (km2), square feet (ft2), square yards (yd2), square miles (mi2), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units. The SI unit of area is the square metre, which is considered an SI derived unit. Conversions. 
Calculation of the area of a square whose length and width are 1 metre would be: 1 metre × 1 metre = 1 m2 and so, a rectangle with different sides (say length of 3 metres and width of 2 metres) would have an area in square units that can be calculated as: 3 metres × 2 metres = 6 m2. This is equivalent to 6 million square millimetres. Other useful conversions are: Non-metric units. In non-metric units, the conversion between two square units is the square of the conversion between the corresponding length units. 1 foot = 12 inches, the relationship between square feet and square inches is 1 square foot = 144 square inches, where 144 = 122 = 12 × 12. Similarly: In addition, conversion factors include: Other units including historical. There are several other common units for area. The are was the original unit of area in the metric system, with: Though the are has fallen out of use, the hectare is still commonly used to measure land: Other uncommon metric units of area include the tetrad, the hectad, and the myriad. The acre is also commonly used to measure land areas, where An acre is approximately 40% of a hectare. On the atomic scale, area is measured in units of barns, such that: The barn is commonly used in describing the cross-sectional area of interaction in nuclear physics. In South Asia (mainly Indians), although the countries use SI units as official, many South Asians still use traditional units. Each administrative division has its own area unit, some of them have same names, but with different values. There's no official consensus about the traditional units values. Thus, the conversions between the SI units and the traditional units may have different results, depending on what reference that has been used. Some traditional South Asian units that have fixed value: History. Circle area. In the 5th century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Eudoxus of Cnidus, also in the 5th century BCE, also found that the area of a disk is proportional to its radius squared. Subsequently, Book I of Euclid's "Elements" dealt with equality of areas between two-dimensional figures. The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book "Measurement of a Circle". (The circumference is 2π"r", and the area of a triangle is half the base times the height, yielding the area π"r"2 for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons). Quadrilateral area. In the 7th century CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. 
In 1842, the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral. General polygon area. The development of Cartesian coordinates by René Descartes in the 17th century allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 19th century. Areas determined using calculus. The development of integral calculus in the late 17th century provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects. Area formulas. Polygon formulas. For a non-self-intersecting (simple) polygon, the Cartesian coordinates formula_0 ("i"=0, 1, ..., "n"-1) of whose "n" vertices are known, the area is given by the surveyor's formula: formula_1 where when "i"="n"-1, then "i"+1 is expressed as modulus "n" and so refers to 0. Rectangles. The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length l and width w, the formula for the area is: "A" = "lw" (rectangle). That is, the area of the rectangle is the length multiplied by the width. As a special case, as "l" = "w" in the case of a square, the area of a square with side length s is given by the formula: "A" = "s"2 (square). The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers. Dissection, parallelograms, and triangles. Most other simple formulas for area follow from the method of dissection. This involves cutting a shape into pieces, whose areas must sum to the area of the original shape. For example, any parallelogram can be subdivided into a trapezoid and a right triangle, as shown in the figure to the left. If the triangle is moved to the other side of the trapezoid, then the resulting figure is a rectangle. It follows that the area of the parallelogram is the same as the area of the rectangle: "A" = "bh" (parallelogram). However, the same parallelogram can also be cut along a diagonal into two congruent triangles, as shown in the figure to the right. It follows that the area of each triangle is half the area of the parallelogram: formula_2 (triangle). Similar arguments can be used to find area formulas for the trapezoid as well as more complicated polygons. Area of curved shapes. Circles. The formula for the area of a circle (more properly called the area enclosed by a circle or the area of a disk) is based on a similar method. Given a circle of radius "r", it is possible to partition the circle into sectors, as shown in the figure to the right. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is "r", and the width is half the circumference of the circle, or π"r". Thus, the total area of the circle is π"r"2: "A" = π"r"2 (circle). Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly π"r"2, which is the area of the circle. This argument is actually a simple application of the ideas of calculus.
In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral: formula_3 Ellipses. The formula for the area enclosed by an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes "x" and "y" the formula is: formula_4 Non-planar surface area. Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out (see developable surfaces). For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed. The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work "On the Sphere and Cylinder". The formula is: "A" = 4"πr"2 (sphere), where "r" is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus. Several plane areas can likewise be expressed as integrals: the area between a positive-valued curve and the horizontal axis, measured between two values "a" and "b" on the horizontal axis, is formula_11; the area between the graphs of two functions is formula_12 where formula_13 is the curve with the greater y-value; the area of a region bounded by a curve expressed in polar coordinates as formula_14 is formula_15; and the area enclosed by a parametric curve formula_16 with endpoints formula_17 is given by the line integrals formula_18 or the "z"-component of formula_19 (For details, see .) This is the principle of the planimeter mechanical device. General formulas. Bounded area between two quadratic functions. To find the bounded area between two quadratic functions, we first subtract one from the other, writing the difference as formula_20 where "f"("x") is the quadratic upper bound and "g"("x") is the quadratic lower bound. By the area integral formulas above and Vieta's formula, we can obtain that formula_21 The above remains valid if one of the bounding functions is linear instead of quadratic. General formula for surface area. The general formula for the surface area of the graph of a continuously differentiable function formula_35, where formula_36 and formula_37 is a region in the xy-plane with a smooth boundary, is: formula_38 An even more general formula for the area of the graph of a parametric surface in the vector form formula_39, where formula_40 is a continuously differentiable vector function of formula_41, is: formula_42 List of formulas. The above calculations show how to find the areas of many common shapes. The areas of irregular (and thus arbitrary) polygons can be calculated using the "Surveyor's formula" (shoelace formula). Relation of area to perimeter. The isoperimetric inequality states that, for a closed curve of length "L" (so the region it encloses has perimeter "L") and for area "A" of the region that it encloses, formula_43 and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter. At the other extreme, a figure with given perimeter "L" could have an arbitrarily small area, as illustrated by a rhombus that is "tipped over" arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°. For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius "r". This can be seen from the area formula "πr"2 and the circumference formula 2"πr".
The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side). Fractals. Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal. Area bisectors. There is an infinitude of lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle. Any line through the midpoint of a parallelogram bisects the area. All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle. Optimization. Given a wire contour, the surface of least area spanning ("filling") it is a minimal surface. Familiar examples include soap bubbles. The question of the filling area of the Riemannian circle remains open. The circle has the largest area of any two-dimensional object having the same perimeter. A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths. A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral. The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral. The ratio of the area of the incircle to the area of an equilateral triangle, formula_44, is larger than that of any non-equilateral triangle. The ratio of the area to the square of the perimeter of an equilateral triangle, formula_45, is larger than that for any other triangle. See also Routh's theorem, a generalization of the one-seventh area triangle. References. <templatestyles src="Reflist/styles.css" />
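The surveyor's (shoelace) formula formula_1 quoted earlier in the article translates directly into a few lines of code. The sketch below is an illustration rather than part of the article; the function name and test polygons are arbitrary.

```python
# Shoelace / surveyor's formula for a simple polygon given as ordered (x, y) vertices.
def polygon_area(vertices):
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_next, y_next = vertices[(i + 1) % n]   # wrap around: i + 1 is taken modulo n
        acc += x_i * y_next - x_next * y_i
    return abs(acc) / 2.0

# A 3 x 2 rectangle and a right triangle with legs 4 and 3.
print(polygon_area([(0, 0), (3, 0), (3, 2), (0, 2)]))   # 6.0
print(polygon_area([(0, 0), (4, 0), (0, 3)]))           # 6.0
```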
[ { "math_id": 0, "text": "(x_i, y_i)" }, { "math_id": 1, "text": "A = \\frac{1}{2} \\Biggl\\vert \\sum_{i = 0}^{n - 1}( x_i y_{i + 1} - x_{i + 1} y_i) \\Biggr\\vert" }, { "math_id": 2, "text": "A = \\frac{1}{2}bh" }, { "math_id": 3, "text": "A \\;=\\;2\\int_{-r}^r \\sqrt{r^2 - x^2}\\,dx \\;=\\; \\pi r^2." }, { "math_id": 4, "text": "A = \\pi xy ." }, { "math_id": 5, "text": "\\tfrac12Bh" }, { "math_id": 6, "text": "\\sqrt{s(s-a)(s-b)(s-c)}" }, { "math_id": 7, "text": "s = \\tfrac12(a + b + c)" }, { "math_id": 8, "text": "\\tfrac12 a b \\sin(C)" }, { "math_id": 9, "text": "\\tfrac12(x_1 y_2 + x_2 y_3 + x_3 y_1 - x_2 y_1 - x_3 y_2 - x_1 y_3)" }, { "math_id": 10, "text": "i + \\frac{b}{2} - 1" }, { "math_id": 11, "text": " A = \\int_a^{b} f(x) \\, dx." }, { "math_id": 12, "text": " A = \\int_a^{b} ( f(x) - g(x) ) \\, dx, " }, { "math_id": 13, "text": " f(x) " }, { "math_id": 14, "text": "r = r(\\theta)" }, { "math_id": 15, "text": "A = {1 \\over 2} \\int r^2 \\, d\\theta. " }, { "math_id": 16, "text": "\\vec u(t) = (x(t), y(t)) " }, { "math_id": 17, "text": " \\vec u(t_0) = \\vec u(t_1) " }, { "math_id": 18, "text": " \\oint_{t_0}^{t_1} x \\dot y \\, dt = - \\oint_{t_0}^{t_1} y \\dot x \\, dt = {1 \\over 2} \\oint_{t_0}^{t_1} (x \\dot y - y \\dot x) \\, dt " }, { "math_id": 19, "text": "{1 \\over 2} \\oint_{t_0}^{t_1} \\vec u \\times \\dot{\\vec u} \\, dt." }, { "math_id": 20, "text": "f(x)-g(x)=ax^2+bx+c=a(x-\\alpha)(x-\\beta)" }, { "math_id": 21, "text": "A=\\frac{(b^2-4ac)^{3/2}}{6a^2}=\\frac{a}{6}(\\beta-\\alpha)^3,\\qquad a\\neq0." }, { "math_id": 22, "text": "\\pi r\\left(r + \\sqrt{r^2 + h^2}\\right)" }, { "math_id": 23, "text": "\\pi r^2 + \\pi r l " }, { "math_id": 24, "text": "\\pi r (r + l) \\,\\!" }, { "math_id": 25, "text": "\\pi r^2 " }, { "math_id": 26, "text": "\\pi r l " }, { "math_id": 27, "text": "6s^2" }, { "math_id": 28, "text": "2\\pi r(r + h)" }, { "math_id": 29, "text": "2\\pi r" }, { "math_id": 30, "text": "\\pi d" }, { "math_id": 31, "text": "2B + Ph" }, { "math_id": 32, "text": "B + \\frac{PL}{2}" }, { "math_id": 33, "text": "2 (\\ell w + \\ell h + w h)" }, { "math_id": 34, "text": "\\ell" }, { "math_id": 35, "text": "z=f(x,y)," }, { "math_id": 36, "text": "(x,y)\\in D\\subset\\mathbb{R}^2" }, { "math_id": 37, "text": "D" }, { "math_id": 38, "text": " A=\\iint_D\\sqrt{\\left(\\frac{\\partial f}{\\partial x}\\right)^2+\\left(\\frac{\\partial f}{\\partial y}\\right)^2+1}\\,dx\\,dy. " }, { "math_id": 39, "text": "\\mathbf{r}=\\mathbf{r}(u,v)," }, { "math_id": 40, "text": "\\mathbf{r}" }, { "math_id": 41, "text": "(u,v)\\in D\\subset\\mathbb{R}^2" }, { "math_id": 42, "text": " A=\\iint_D \\left|\\frac{\\partial\\mathbf{r}}{\\partial u}\\times\\frac{\\partial\\mathbf{r}}{\\partial v}\\right|\\,du\\,dv. " }, { "math_id": 43, "text": "4\\pi A \\le L^2," }, { "math_id": 44, "text": "\\frac{\\pi}{3\\sqrt{3}}" }, { "math_id": 45, "text": "\\frac{1}{12\\sqrt{3}}," } ]
https://en.wikipedia.org/wiki?curid=1209
1209000
Electric flux
Measure of electric field through surface In electromagnetism, electric flux is the measure of the electric field through a given surface, although an electric field in itself cannot flow. The electric field E can exert a force on an electric charge at any point in space. The electric field is the negative gradient of the electric potential. Overview. An electric charge, such as a single electron in space, has an electric field surrounding it. In pictorial form, this electric field is shown as "lines of flux" being radiated from a dot (the charge). These are called Gauss lines. Note that field lines are a graphic illustration of field strength and direction and have no physical meaning. The density of these lines corresponds to the electric field strength, which could also be called the electric flux density: the number of "lines" per unit area. Electric flux is directly proportional to the total number of electric field lines going through a surface. For simplicity in calculations it is often convenient to consider a surface perpendicular to the flux lines. If the electric field is uniform, the electric flux passing through a surface of vector area A is formula_0 where E is the electric field (having units of V/m), "E" is its magnitude, "A" is the area of the surface, and "θ" is the angle between the electric field lines and the normal (perpendicular) to A. For a non-uniform electric field, the electric flux dΦE through a small surface area dA is given by formula_1 (the electric field, E, multiplied by the component of area perpendicular to the field). The electric flux over a surface is therefore given by the surface integral: formula_2 where E is the electric field and dA is an infinitesimal area on the surface with an outward facing surface normal defining its direction. For a closed Gaussian surface, electric flux is given by: formula_3 formula_4 formula_5 where "S" is the closed surface, "Q" is the total electric charge enclosed by "S", and "ε"0 is the electric constant (the permittivity of free space). This relation is known as Gauss's law for electric fields in its integral form and it is one of Maxwell's equations. While the electric flux is not affected by charges that are not within the closed surface, the net electric field, E, can be affected by charges that lie outside the closed surface. While Gauss's law holds for all situations, it is most useful for "by hand" calculations when high degrees of symmetry exist in the electric field. Examples include spherical and cylindrical symmetry. The SI unit of electric flux is the volt-meter (V·m) or, equivalently, the newton-meter squared per coulomb (N·m2·C−1). Thus, the unit of electric flux expressed in terms of SI base units is kg·m3·s−3·A−1. Its dimensional formula is formula_6.
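Gauss's law can be checked numerically for a simple geometry. The sketch below is an illustration, not part of the article: it integrates E · dA over one face of a cube with a point charge at its centre, where by symmetry each face should carry one sixth of Q/ε0. The function name and the chosen charge and size are assumptions made for the example.

```python
import numpy as np

EPS0 = 8.8541878128e-12           # vacuum permittivity, F/m
K = 1.0 / (4.0 * np.pi * EPS0)    # Coulomb constant

def flux_through_top_face(q, a, n=1000):
    """Flux of a point charge at the origin through the face z = a of a cube of half-width a."""
    xs = (np.arange(n) + 0.5) * (2 * a / n) - a      # midpoint grid covering the face
    x, y = np.meshgrid(xs, xs)
    r2 = x ** 2 + y ** 2 + a ** 2
    e_z = K * q * a / r2 ** 1.5                      # component of E along the face normal
    return np.sum(e_z) * (2 * a / n) ** 2            # sum of E . dA over the face

q, a = 1.0e-9, 0.5
print(flux_through_top_face(q, a))    # numerical surface integral
print(q / (6 * EPS0))                 # Gauss's law prediction: one sixth of q / epsilon_0
```

The two printed values agree to several significant figures, illustrating that the flux through the closed surface depends only on the enclosed charge.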
[ { "math_id": 0, "text": "\\Phi_E = \\mathbf{E} \\cdot \\mathbf{A} = EA \\cos \\theta," }, { "math_id": 1, "text": "\\textrm d\\Phi_E = \\mathbf{E} \\cdot \\textrm d\\mathbf{A}" }, { "math_id": 2, "text": "\\Phi_E = \\iint_S \\mathbf{E} \\cdot \\textrm{d}\\mathbf{A}" }, { "math_id": 3, "text": "\\Phi_E =\\,\\!" }, { "math_id": 4, "text": "\\scriptstyle S" }, { "math_id": 5, "text": "\\mathbf{E}\\cdot \\textrm{d}\\mathbf{A} = \\frac{Q}{\\varepsilon_0}\\,\\!" }, { "math_id": 6, "text": "\\mathsf{L}^3\\mathsf{MT}^{-3}\\mathsf{I}^{-1}" } ]
https://en.wikipedia.org/wiki?curid=1209000
1209074
Permeance
Permeance, in general, is the degree to which a material admits a flow of matter or energy. Permeance is usually represented by a curly capital P: P. Electromagnetism. In electromagnetism, permeance is the inverse of reluctance. In a magnetic circuit, permeance is a measure of the quantity of magnetic flux for a number of current-turns. A magnetic circuit almost acts as though the flux is conducted; therefore, permeance is larger for a larger cross-section of a material and smaller for a longer path length. This concept is analogous to electrical conductance in the electric circuit. Magnetic permeance P is defined as the reciprocal of magnetic reluctance R (in analogy with the reciprocity between electric conductance and resistance): formula_0 which can also be re-written: formula_1 using Hopkinson's law (the magnetic circuit analogue of Ohm's law for electric circuits) and the definition of magnetomotive force (the magnetic analogue of electromotive force): formula_2 where Φ"B" is the magnetic flux, "N" is the number of turns in the winding, and "I" is the electric current through it. Alternatively, in terms of magnetic permeability (analogous to electric conductivity): formula_3 where "μ" is the permeability of the material, "A" is the cross-sectional area, and "ℓ" is the length of the magnetic path. The SI unit of magnetic permeance is the henry (H) or, equivalently, webers per ampere. Materials science. In materials science, permeance is the degree to which a material transmits another substance. Notes. <templatestyles src="Reflist/styles.css" />
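As a small worked example of the permeability form formula_3 (an illustration, not part of the article; the core dimensions and relative permeability are invented values), the permeance of a uniform core section and its reciprocal, the reluctance, can be computed directly:

```python
import math

MU0 = 4e-7 * math.pi      # permeability of free space, H/m

def permeance(mu_r, area_m2, length_m):
    """Permeance in henries of a uniform magnetic path with relative permeability mu_r."""
    return mu_r * MU0 * area_m2 / length_m

P = permeance(mu_r=2000.0, area_m2=1e-4, length_m=0.2)   # e.g. a ferrite-like core leg
print(f"permeance  P = {P:.3e} H")
print(f"reluctance R = {1.0 / P:.3e} 1/H")
# Rearranging the magnetomotive-force relation above, the flux for N*I ampere-turns is P * N * I.
print(f"flux for 100 ampere-turns: {P * 100:.3e} Wb")
```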
[ { "math_id": 0, "text": "\\mathcal{P} = \\frac{1}{\\mathcal{R}}" }, { "math_id": 1, "text": "\\mathcal{P} = \\frac{\\Phi_\\mathrm{B}}{NI}" }, { "math_id": 2, "text": "\\mathcal{F} = \\Phi_\\mathrm{B} \\mathcal{R} = NI" }, { "math_id": 3, "text": "\\mathcal{P} = \\frac{\\mu A}{\\ell}" } ]
https://en.wikipedia.org/wiki?curid=1209074
12092048
Capillary length
The capillary length or capillary constant is a length scaling factor that relates gravity and surface tension. It is a fundamental physical property that governs the behavior of menisci, and is found when body forces (gravity) and surface forces (Laplace pressure) are in equilibrium. The pressure of a static fluid does not depend on the shape, total mass or surface area of the fluid. It is directly proportional to the fluid's specific weight (the force exerted by gravity over a specific volume) and to its vertical height. However, a fluid also experiences pressure that is induced by surface tension, commonly referred to as the Young–Laplace pressure. Surface tension originates from cohesive forces between molecules, and in the bulk of the fluid, molecules experience attractive forces from all directions. The surface of a fluid is curved because exposed molecules on the surface have fewer neighboring interactions, resulting in a net force that contracts the surface. There exists a pressure difference on either side of this curvature, and when this balances out the pressure due to gravity, one can rearrange to find the capillary length. In the case of a fluid–fluid interface, for example a drop of water immersed in another liquid, the capillary length, denoted formula_1 or formula_2, is most commonly given by the formula formula_3, where formula_4 is the surface tension of the fluid interface, formula_5 is the gravitational acceleration and formula_6 is the mass density difference of the fluids. The capillary length is sometimes denoted formula_7 in relation to the mathematical notation for curvature. The term capillary constant is somewhat misleading, because it is important to recognize that formula_1 is a composition of variable quantities; for example, the value of surface tension varies with temperature and the density difference changes depending on the fluids involved at an interface interaction. However, if these conditions are known, the capillary length can be considered a constant for any given liquid, and be used in numerous fluid mechanical problems to scale the derived equations such that they are valid for any fluid. For molecular fluids, the interfacial tensions and density differences are typically of the order of mN m−1 and g mL−1 respectively, resulting in a capillary length of formula_8 mm for water and air at room temperature on Earth. On the other hand, the capillary length would be formula_9 mm for water-air on the Moon. For a soap bubble, the surface tension must be divided by the mean thickness, resulting in a capillary length of about formula_10 meters in air! The equation for formula_1 can also be found with an extra formula_11 term, most often used when normalising the capillary height. Origin. Theoretical. One way to theoretically derive the capillary length is to imagine a liquid droplet at the point where surface tension balances gravity. Let there be a spherical droplet with radius formula_12. The characteristic Laplace pressure formula_13, due to surface tension, is equal to formula_14, where formula_15 is the surface tension. The pressure due to gravity (hydrostatic pressure) formula_16 of a column of liquid is given by formula_17, where formula_18 is the droplet density, formula_5 the gravitational acceleration, and formula_19 is the height of the droplet. At the point where the Laplace pressure balances out the pressure due to gravity, formula_20, one finds formula_21. Relationship with the Eötvös number.
The above derivation can be used when dealing with the Eötvös number, a dimensionless quantity that represents the ratio between the gravitational forces and surface tension of the liquid. Although it was introduced by Loránd Eötvös in 1886, his name has since become fairly dissociated from it, having been replaced by that of Wilfrid Noel Bond, such that it is now referred to as the Bond number in recent literature. The Bond number can be written so that it includes a characteristic length (normally the radius of curvature of the liquid) and the capillary length: formula_22, with parameters defined above and formula_23 the radius of curvature. Therefore the Bond number can be written as formula_24, with formula_1 the capillary length. If the Bond number is set to 1, then the characteristic length is the capillary length. Experimental. The capillary length can also be found through the manipulation of many different physical phenomena. One method is to focus on capillary action, which is the attraction of a liquid's surface to a surrounding solid. Association with Jurin's law. Jurin's law is a quantitative law that shows that the maximum height that can be achieved by a liquid in a capillary tube is inversely proportional to the diameter of the tube. The law can be illustrated mathematically during capillary uplift, which is a traditional experiment measuring the height of a liquid in a capillary tube. When a capillary tube is inserted into a liquid, the liquid will rise or fall in the tube, due to an imbalance in pressure. The characteristic height is the distance from the bottom of the meniscus to the base, and exists when the Laplace pressure and the pressure due to gravity are balanced. One can rearrange to show the capillary length as a function of surface tension and gravity: formula_25, with formula_26 the height of the liquid, formula_27 the radius of the capillary tube, and formula_28 the contact angle. The contact angle is defined as the angle formed by the intersection of the liquid-solid interface and the liquid–vapour interface. The size of the angle quantifies the wettability of the liquid, i.e., the interaction between the liquid and the solid surface. A contact angle of formula_29 can be considered perfect wetting, in which case formula_30. Thus formula_31 forms a three-factor relation with formula_32: knowing any two of these quantities determines the third. This property is usually used by physicists to estimate the height a liquid will rise in a particular capillary tube of known radius, without the need for an experiment. When the characteristic height of the liquid is sufficiently less than the capillary length, then the effect of hydrostatic pressure due to gravity can be neglected. Using the same premises of capillary rise, one can find the capillary length as a function of the volume increase, and wetting perimeter of the capillary walls. Association with a sessile droplet. Another way to find the capillary length is to use different pressure points inside a sessile droplet, each point having a radius of curvature, and equate them to the Laplace pressure equation. This time the equation is solved for the height of the meniscus level, which again can be used to give the capillary length. The shape of a sessile droplet is determined by whether its radius is greater than or less than the capillary length. Microdrops are droplets with radius smaller than the capillary length, and their shape is governed solely by surface tension, forming a spherical cap shape.
Droplets with a radius larger than the capillary length are known as macrodrops, and for these the gravitational forces dominate. Macrodrops will be 'flattened' by gravity and the height of the droplet will be reduced. History. Investigations of capillarity date back as far as Leonardo da Vinci; however, the idea of the capillary length was not developed until much later. Fundamentally the capillary length is a product of the work of Thomas Young and Pierre Laplace. They both appreciated that surface tension arose from cohesive forces between particles and that the shape of a liquid's surface reflected the short range of these forces. At the turn of the 19th century they independently derived pressure equations, but due to notation and presentation, Laplace often gets the credit. The equation showed that the pressure within a curved surface between two static fluids is always greater than that outside of a curved surface, but that this pressure difference decreases to zero as the radius of curvature approaches infinity. Since the force is perpendicular to the surface and acts towards the centre of the curvature, a liquid will rise when the surface is concave and be depressed when it is convex. This was a mathematical explanation of the work published by James Jurin in 1719, where he quantified a relationship between the maximum height taken by a liquid in a capillary tube and its diameter – Jurin's law. The capillary length evolved from the use of the Laplace pressure equation at the point it balanced the pressure due to gravity, and is sometimes called the "Laplace capillary constant," after being introduced by Laplace in 1806. In nature. Bubbles. Like droplets, bubbles are round because cohesive forces pull their molecules into the tightest possible grouping, a sphere. Because of the air trapped inside the bubble, it is impossible for the surface area to shrink to zero; hence the pressure inside the bubble is greater than the pressure outside, for if the pressures were equal the bubble would simply collapse. This pressure difference can be calculated from Laplace's pressure equation, Δ"P" = 2"γ"/"R". For a soap bubble there exist two boundary surfaces, internal and external, and therefore two contributions to the excess pressure, and Laplace's formula doubles to Δ"P" = 4"γ"/"R". The capillary length can then be worked out the same way except that the thickness of the film, formula_33, must be taken into account, as the bubble has a hollow center, unlike the droplet, which is liquid throughout. Instead of thinking of a droplet where each side is formula_0 as in the above derivation, for a bubble formula_34 is now formula_35, with formula_36 and formula_33 the radius and thickness of the bubble respectively. As above, the Laplace and hydrostatic pressure are equated, resulting in formula_37. Thus the capillary length contributes to a physicochemical limit that dictates the maximum size a soap bubble can take. References. <templatestyles src="Reflist/styles.css" />
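The two relations quoted earlier, formula_3 for the capillary length and formula_25 for the capillary rise, are easy to evaluate numerically. The sketch below is an illustration rather than part of the article: the property values are rounded figures for water against air near room temperature, and the function names are arbitrary.

```python
import math

# Capillary length lambda_c = sqrt(gamma / (delta_rho * g)) and the Jurin rise
# h = 2 * lambda_c**2 * cos(theta) / r for a narrow tube, using approximate water/air values.
GAMMA = 0.0728        # surface tension, N/m
DELTA_RHO = 998.0     # density difference water - air, kg/m^3
G = 9.81              # gravitational acceleration, m/s^2

capillary_length = math.sqrt(GAMMA / (DELTA_RHO * G))
print(f"capillary length: {capillary_length * 1000:.2f} mm")   # roughly 2.7 mm with this convention

def jurin_height(tube_radius_m, contact_angle_deg=0.0):
    """Capillary rise in a tube of given radius, from lambda_c^2 = h r / (2 cos(theta))."""
    return 2.0 * capillary_length ** 2 * math.cos(math.radians(contact_angle_deg)) / tube_radius_m

for r_mm in (0.1, 0.5, 1.0):
    print(f"r = {r_mm} mm -> rise of about {jurin_height(r_mm / 1000) * 1000:.1f} mm")
```

The value printed for the capillary length is consistent with the few-millimetre figure quoted in the article for water and air, and the rise heights fall as the tube radius grows, as Jurin's law requires.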
[ { "math_id": 0, "text": "\\lambda_c" }, { "math_id": 1, "text": "\\lambda_{\\rm c}" }, { "math_id": 2, "text": "l_{\\rm c}" }, { "math_id": 3, "text": "\\lambda_{\\rm c} = \\sqrt{\\frac{\\gamma}{\\Delta\\rho g}}" }, { "math_id": 4, "text": "\\gamma" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "\\Delta\\rho" }, { "math_id": 7, "text": "\\kappa^{-1}" }, { "math_id": 8, "text": "\\sim3 " }, { "math_id": 9, "text": "{\\lambda \\scriptscriptstyle c} = 6.68" }, { "math_id": 10, "text": "3" }, { "math_id": 11, "text": "\\sqrt{2}" }, { "math_id": 12, "text": "\\lambda_c " }, { "math_id": 13, "text": "P_{\\gamma}" }, { "math_id": 14, "text": "P_{\\gamma}=2\\frac{\\gamma}{\\lambda_{\\rm c}}" }, { "math_id": 15, "text": "\\gamma " }, { "math_id": 16, "text": "P_{\\rm h}" }, { "math_id": 17, "text": "P_{\\rm h}=\\rho g h=2\\rho g\\lambda_{\\rm c} " }, { "math_id": 18, "text": "\\rho" }, { "math_id": 19, "text": "h=2\\lambda_{\\rm c}" }, { "math_id": 20, "text": "P_{\\rm h}=P_\\gamma" }, { "math_id": 21, "text": "\\lambda_{\\rm c} = \\sqrt{\\frac{\\gamma}{\\rho g}}" }, { "math_id": 22, "text": "\\mathrm{Bo}=\\frac{\\Delta\\rho \\,g \\,L^2}{\\gamma}" }, { "math_id": 23, "text": "L" }, { "math_id": 24, "text": "\\mathrm{Bo}=\\left(\\frac{L}{\\lambda_{\\rm c}}\\right)^2" }, { "math_id": 25, "text": "\\lambda_{\\rm c}^2=\\frac{hr}{2\\cos\\theta}" }, { "math_id": 26, "text": "h" }, { "math_id": 27, "text": "r" }, { "math_id": 28, "text": "\\theta" }, { "math_id": 29, "text": "\\theta=0" }, { "math_id": 30, "text": "\\lambda_{\\rm c}^2=\\frac{hr}{2}" }, { "math_id": 31, "text": "\\lambda_{\\rm c}^2" }, { "math_id": 32, "text": "r, h" }, { "math_id": 33, "text": "e_0" }, { "math_id": 34, "text": "m" }, { "math_id": 35, "text": "m=\\Delta \\rho R^2 e_0 " }, { "math_id": 36, "text": "R" }, { "math_id": 37, "text": "R= \\frac{\\gamma}{\\Delta \\rho g e_0}=\\frac{\\lambda_{\\rm c}^2}{e_0}" } ]
https://en.wikipedia.org/wiki?curid=12092048
1209346
Bicyclic semigroup
In mathematics, the bicyclic semigroup is an algebraic object important for the structure theory of semigroups. Although it is in fact a monoid, it is usually referred to as simply a semigroup. It is perhaps most easily understood as the syntactic monoid describing the Dyck language of balanced pairs of parentheses. Thus, it finds common applications in combinatorics, such as describing binary trees and associative algebras. History. The first published description of this object was given by Evgenii Lyapin in 1953. Alfred H. Clifford and Gordon Preston claim that one of them, working with David Rees, discovered it independently (without publication) at some point before 1943. Construction. There are at least three standard ways of constructing the bicyclic semigroup, and various notations for referring to it. Lyapin called it "P"; Clifford and Preston used formula_0; and most recent papers have tended to use "B". This article will use the modern style throughout. From a free semigroup. The bicyclic semigroup is the quotient of the free monoid on two generators "p" and "q" by the congruence generated by the relation "p" "q" = 1. Thus, each semigroup element is a string of those two letters, with the proviso that the subsequence ""p" "q"" does not appear. The semigroup operation is concatenation of strings, which is clearly associative. It can then be shown that all elements of "B" in fact have the form "q""a" "p""b", for some natural numbers "a" and "b". The composition operation simplifies to ("q""a" "p""b") ("q""c" "p""d") = "q""a"+"c"−min{"b","c"} "p""d"+"b"−min{"b","c"}. From ordered pairs. The way in which these exponents are constrained suggests that the ""p" and "q" structure" can be discarded, leaving only operations on the ""a" and "b"" part. So "B" is the semigroup of pairs of natural numbers (including zero), with operation ("a", "b") ("c", "d") = ("a" + "c" − min{"b", "c"}, "d" + "b" − min{"b", "c"}). This is sufficient to define "B" so that it is the same object as in the original construction. Just as "p" and "q" generated "B" originally, with the empty string as the monoid identity, this new construction of "B" has generators (1, 0) and (0, 1), with identity (0, 0). From functions. It can be shown that "any" semigroup "S" generated by elements "e", "a", and "b" that satisfies the statements below is isomorphic to the bicyclic semigroup. It is not entirely obvious that this should be the case – perhaps the hardest task is understanding that "S" must be infinite. To see this, suppose that "a" (say) does not have infinite order, so "a""k"+"h" = "a""h" for some "h" and "k". Then "a""k" = "e", and "b" = "e" "b" = "a""k" "b" = "a""k"−1 "e" = "a""k"−1, so "b" "a" = "a""k" = "e", which is not allowed – so there are infinitely many distinct powers of "a". The full proof is given in Clifford and Preston's book. Note that the two definitions given above both satisfy these properties. A third way of deriving "B" uses two appropriately-chosen functions to yield the bicyclic semigroup as a monoid of transformations of the natural numbers. Let "α", "β", and "ι" be elements of the transformation semigroup on the natural numbers, where These three functions have the required properties, so the semigroup they generate is "B". Properties. The bicyclic semigroup has the property that the image of any homomorphism "φ" from "B" to another semigroup "S" is either cyclic, or it is an isomorphic copy of "B". 
The elements "φ"("a"), "φ"("b") and "φ"("e") of "S" will always satisfy the conditions above (because "φ" is a homomorphism) with the possible exception that "φ"("b") "φ"("a") might turn out to be φ("e"). If this is not true, then "φ"("B") is isomorphic to "B"; otherwise, it is the cyclic semigroup generated by "φ"("a"). In practice, this means that the bicyclic semigroup can be found in many different contexts. The idempotents of "B" are all pairs ("x", "x"), where "x" is any natural number (using the ordered pair characterisation of "B"). Since these commute, and "B" is "regular" (for every "x" there is a "y" such that "x" "y" "x" = "x"), the bicyclic semigroup is an inverse semigroup. (This means that each element "x" of "B" has a unique inverse "y", in the "weak" semigroup sense that "x" "y" "x" = "x" and "y" "x" "y" = "y".) Every ideal of "B" is principal: the left and right principal ideals of ("m", "n") are Each of these contains infinitely many others, so "B" does not have minimal left or right ideals. In terms of Green's relations, "B" has only one "D"-class (it is "bisimple"), and hence has only one "J"-class (it is "simple"). The "L" and "R" relations are given by This implies that two elements are "H"-related if and only if they are identical. Consequently, the only subgroups of "B" are infinitely many copies of the trivial group, each corresponding to one of the idempotents. The egg-box diagram for "B" is infinitely large; the upper left corner begins: Each entry represents a singleton "H"-class; the rows are the "R"-classes and the columns are "L"-classes. The idempotents of "B" appear down the diagonal, in accordance with the fact that in a regular semigroup with commuting idempotents, each "L"-class and each "R"-class must contain exactly one idempotent. The bicyclic semigroup is the "simplest" example of a bisimple inverse semigroup with identity; there are many others. Where the definition of "B" from ordered pairs used the class of natural numbers (which is not only an additive semigroup, but also a commutative lattice under min and max operations), another set with appropriate properties could appear instead, and the "+", "−" and "max" operations modified accordingly. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{C}" } ]
https://en.wikipedia.org/wiki?curid=1209346
12096154
Free androgen index
Ratio used to determine abnormal androgen status Free Androgen Index (FAI) is a ratio used to determine abnormal androgen status in humans. The ratio is the total testosterone level divided by the sex hormone binding globulin (SHBG) level, multiplied by a constant, usually 100. The concentrations of testosterone and SHBG are normally measured in nanomoles per liter. The FAI has no units. formula_0 The majority of testosterone in the blood does not exist as the free molecule. Instead around half is tightly bound to sex hormone binding globulin, and the other half is weakly bound to albumin. Only a small percentage is unbound, under 3% in males, and less than 0.7% in females. Since only the free testosterone is able to bind to tissue receptors to exert its effects, it is believed that free testosterone is the best marker of a person's androgen status. However, free testosterone is difficult and expensive to measure (it requires a time-consuming dialysis step), and many laboratories do not offer this service. The free androgen index is intended to give a guide to the free testosterone level, but it is not very accurate (especially in males; see the Endocrine Society commentary below). Consequently, there are no universally agreed 'normal ranges', and levels slightly above or below quoted laboratory reference ranges may not be clinically significant. Reference ranges depend on the constant used in the calculation: 100 is used in the formula above, and the following suggested ranges are based on this. As with any laboratory measurement, however, it is vital that results are compared against the reference range quoted for that laboratory. Neither FAI nor free or total testosterone measurements should be interpreted in isolation; as a bare minimum, gonadotropin levels should also be measured. As a guide, in healthy adult men typical FAI values are 30-150. Values below 30 may indicate testosterone deficiency, which may contribute to fatigue, erectile dysfunction, osteoporosis and loss of secondary sex characteristics. In women, androgens are most often measured when there is concern that they may be raised (as in hirsutism or the polycystic ovary syndrome). Typical values for the FAI in women are 7-10. Testing. Various companies manufacture testing equipment and kits to measure this index. About 1 mL of blood is required for the test. Usefulness as a biochemical marker. Validity as a measure of free testosterone. Statistical analysis has shown FAI to be a poor predictor of bioavailable testosterone and of hypogonadism. The Endocrine Society has taken a position against using the FAI to measure free testosterone in men: The FAI is often used as a surrogate for FT, and the FAI correlates well with FT in women but not men. Because T production is regulated by gonadotropin feedback in men, changes in SHBG, which alter FT concentrations, will be compensated by autoregulation of T production but not so in women. In addition, much circulating T in women is derived from the peripheral conversion of adrenal dehydroepiandrosterone and dehydroepiandrosterone sulfate that also is not subject to feedback control. Because SHBG is present in such large excess in women (10–100:1), FT concentrations are driven primarily by SHBG abundance. In addition, T excess in women lowers SHBG concentrations, which raises the FT concentration and contributes to the strong correlation of 1/SHBG with FT. 
The FAI has not been scientifically demonstrated to be a valid measurement of free testosterone in men: The Free Androgen Index (FAI) was initially proposed as a measure for assessing the circulating testosterone availability in female hirsutism. The extension of its use, by a number of investigators, to males has not been formally justified. Role in identifying polycystic ovary syndrome. The best single biochemical marker for polycystic ovary syndrome is a raised testosterone level, but "combination of SHBG and testosterone to derive a free testosterone value did not further aid the biochemical diagnosis of PCOS". Instead SHBG is reduced in obesity and so the FAI seems more correlated with the degree of obesity than with PCOS itself.
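The calculation itself is a one-liner; the following Python sketch is an added illustration (the function name and the example values are hypothetical, not from the cited sources) showing the formula with both concentrations in nmol/L and the usual constant of 100.

```python
def free_androgen_index(total_testosterone_nmol_l, shbg_nmol_l, constant=100.0):
    """Free androgen index: total testosterone divided by SHBG, multiplied by a constant (usually 100)."""
    return constant * total_testosterone_nmol_l / shbg_nmol_l

# Hypothetical assay results, both in nmol/L.
print(free_androgen_index(20.0, 40.0))  # 50.0, inside the 30-150 guide range quoted above for adult men
```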
[ { "math_id": 0, "text": "\\text{FAI} = 100 \\times \\left(\\frac{\\text{total testosterone}}{\\text{SHBG}}\\right)" } ]
https://en.wikipedia.org/wiki?curid=12096154
12096417
Wirtinger inequality (2-forms)
"For other inequalities named after Wirtinger, see Wirtinger's inequality." In mathematics, the Wirtinger inequality, named after Wilhelm Wirtinger, is a fundamental result in complex linear algebra which relates the symplectic and volume forms of a hermitian inner product. It has important consequences in complex geometry, such as showing that the normalized exterior powers of the Kähler form of a Kähler manifold are calibrations. Statement. Consider a real vector space with positive-definite inner product "g", symplectic form "ω", and almost-complex structure "J", linked by "ω"("u", "v") "g"("J"("u"), "v") for any vectors "u" and "v". Then for any orthonormal vectors "v"1, ..., "v"2"k" there is formula_0 There is equality if and only if the span of "v"1, ..., "v"2"k" is closed under the operation of "J". In the language of the comass of a form, the Wirtinger theorem (although without precision about when equality is achieved) can also be phrased as saying that the comass of the form "ω" ∧ ⋅⋅⋅ ∧ "ω" is equal to "k"!. Proof. ==="k" 1=== In the special case "k" 1, the Wirtinger inequality is a special case of the Cauchy–Schwarz inequality: formula_1 According to the equality case of the Cauchy–Schwarz inequality, equality occurs if and only if "J"("v"1) and "v"2 are collinear, which is equivalent to the span of "v"1, "v"2 being closed under J. "k" &gt; 1. Let "v"1, ..., "v"2"k" be fixed, and let T denote their span. Then there is an orthonormal basis "e"1, ..., "e"2"k" of T with dual basis "w"1, ..., "w"2"k" such that formula_2 where "ι" denotes the inclusion map from T into V. This implies formula_3 which in turn implies formula_4 where the inequality follows from the previously-established "k" 1 case. If equality holds, then according to the "k" 1 equality case, it must be the case that "ω"("e"2"i" − 1, "e"2"i") ±1 for each i. This is equivalent to either "ω"("e"2"i" − 1, "e"2"i") 1 or "ω"("e"2"i", "e"2"i" − 1) 1, which in either case (from the "k" 1 case) implies that the span of "e"2"i" − 1, "e"2"i" is closed under "J", and hence that the span of "e"1, ..., "e"2"k" is closed under J. Finally, the dependence of the quantity formula_5 on "v"1, ..., "v"2"k" is only on the quantity "v"1 ∧ ⋅⋅⋅ ∧ "v"2"k", and from the orthonormality condition on "v"1, ..., "v"2"k", this wedge product is well-determined up to a sign. This relates the above work with "e"1, ..., "e"2"k" to the desired statement in terms of "v"1, ..., "v"2"k". Consequences. Given a complex manifold with hermitian metric, the Wirtinger theorem immediately implies that for any 2"k"-dimensional embedded submanifold M, there is formula_6 where "ω" is the Kähler form of the metric. Furthermore, equality is achieved if and only if M is a complex submanifold. In the special case that the hermitian metric satisfies the Kähler condition, this says that "ω""k" is a calibration for the underlying Riemannian metric, and that the corresponding calibrated submanifolds are the complex submanifolds of complex dimension k. This says in particular that every complex submanifold of a Kähler manifold is a minimal submanifold, and is even volume-minimizing among all submanifolds in its homology class. Using the Wirtinger inequality, these facts even extend to the more sophisticated context of currents in Kähler manifolds. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " (\\underbrace{\\omega\\wedge\\cdots\\wedge\\omega}_{k\\text{ times}})(v_1,\\ldots,v_{2k}) \\leq k !." }, { "math_id": 1, "text": "\\omega(v_1,v_2)=g(J(v_1),v_2)\\leq \\|J(v_1)\\|_g\\|v_2\\|_g=1." }, { "math_id": 2, "text": "\\iota^\\ast\\omega=\\sum_{j=1}^k\\omega(e_{2j-1},e_{2j})w_{2j-1}\\wedge w_{2j}," }, { "math_id": 3, "text": "\\underbrace{\\iota^\\ast\\omega\\wedge\\cdots\\wedge \\iota^\\ast\\omega}_{k\\text{ times}}=k!\\prod_{i=1}^k\\omega(e_{2i-1},e_{2i})w_1\\wedge \\cdots\\wedge w_{2k}," }, { "math_id": 4, "text": "(\\underbrace{\\omega\\wedge\\cdots\\wedge\\omega}_{k\\text{ times}})(e_1,\\ldots,e_{2k})=k!\\prod_{i=1}^k\\omega(e_{2i-1},e_{2i})\\leq k!," }, { "math_id": 5, "text": "(\\underbrace{\\omega\\wedge\\cdots\\wedge\\omega}_{k\\text{ times}})(v_1,\\ldots,v_{2k})" }, { "math_id": 6, "text": "\\operatorname{vol}(M)\\geq\\frac{1}{k!}\\int_M \\omega^k," } ]
https://en.wikipedia.org/wiki?curid=12096417
1209759
Temporal difference learning
Computer programming concept &lt;templatestyles src="Machine learning/styles.css"/&gt; Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods. While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known. This is a form of bootstrapping, as illustrated with the following example: Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday – and thus be able to change, say, Saturday's model before Saturday arrives. Temporal difference methods are related to the temporal difference model of animal learning. Mathematical formulation. The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the state value function of a finite-state Markov decision process (MDP) under a policy formula_0. Let formula_1 denote the state value function of the MDP with states formula_2, rewards formula_3 and discount rate formula_4 under the policy formula_5: formula_6 We drop the action from the notation for convenience. formula_1 satisfies the Hamilton-Jacobi-Bellman Equation: formula_7 so formula_8 is an unbiased estimate for formula_9. This observation motivates the following algorithm for estimating formula_1. The algorithm starts by initializing a table formula_10 arbitrarily, with one value for each state of the MDP. A positive learning rate formula_11 is chosen. We then repeatedly evaluate the policy formula_0, obtain a reward formula_12 and update the value function for the current state using the rule: formula_13 where formula_14 and formula_15 are the current and next states, respectively. The value formula_16 is known as the TD target, and formula_17 is known as the TD error. TD-Lambda. TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel. This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players. The lambda (formula_18) parameter refers to the trace decay parameter, with formula_19. Higher settings lead to longer lasting traces; that is, a larger proportion of credit from a reward can be given to more distant states and actions when formula_18 is higher, with formula_20 producing parallel learning to Monte Carlo RL algorithms. In neuroscience. The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appear to mimic the error function in the algorithm. The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. 
When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward. Dopamine cells appear to behave in a similar manner. In one experiment, measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice. Initially, the dopamine cells increased firing rates when the monkey received juice, indicating a difference in expected and actual rewards. Over time, this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Subsequently, the firing rate for the dopamine cells decreased below normal activation when the expected reward was not produced. This closely mimics how the error function in TD is used for reinforcement learning. The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research. It has also been used to study conditions such as schizophrenia or the consequences of pharmacological manipulations of dopamine on learning.
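The tabular TD(0) update described above takes only a few lines of code. The following Python sketch is an added illustration (the random-walk environment, episode count, and parameter values are hypothetical choices, not taken from the references): it estimates state values under a fixed, uniformly random policy by repeatedly applying the update rule with the TD target formula_16.

```python
import random

# A small random-walk Markov reward process: states 1..5 are nonterminal,
# states 0 and 6 are terminal, and the only reward is +1 for reaching state 6.
alpha, gamma = 0.1, 1.0       # learning rate and discount rate
V = [0.0] * 7                 # value table; terminal entries stay at 0

for _ in range(10_000):       # episodes
    s = 3                     # start in the middle state
    while s not in (0, 6):
        s_next = s + random.choice((-1, 1))        # fixed, uniformly random policy
        r = 1.0 if s_next == 6 else 0.0
        # TD(0): move V(s) toward the TD target r + gamma * V(s_next).
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[1:6]])  # approaches the true values 1/6, 2/6, ..., 5/6
```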
[ { "math_id": 0, "text": "\\pi" }, { "math_id": 1, "text": "V^\\pi" }, { "math_id": 2, "text": "(S_t)_{t\\in\\mathbb{N}}" }, { "math_id": 3, "text": "(R_t)_{t\\in\\mathbb{N}}" }, { "math_id": 4, "text": "\\gamma" }, { "math_id": 5, "text": " \\pi " }, { "math_id": 6, "text": "V^\\pi(s) = E_{a \\sim \\pi}\\left\\{\\sum_{t=0}^\\infty \\gamma^tR_{t+1}\\Bigg| S_0=s\\right\\}.\n" }, { "math_id": 7, "text": "V^\\pi(s)=E_{\\pi}\\{R_1 + \\gamma V^\\pi(S_1)|S_0=s\\}," }, { "math_id": 8, "text": "R_1 + \\gamma V^\\pi(S_1)" }, { "math_id": 9, "text": "V^\\pi(s)" }, { "math_id": 10, "text": "V(s)" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": " V(S_t) \\leftarrow (1 - \\alpha) V(S_t) + \\underbrace{\\alpha}_{\\text{learning rate}} [ \\overbrace{R_{t+1} + \\gamma V(S_{t+1})}^{\\text{The TD target}} ]" }, { "math_id": 14, "text": "S_t" }, { "math_id": 15, "text": "S_{t+1}" }, { "math_id": 16, "text": " R_{t+1} + \\gamma V(S_{t+1})" }, { "math_id": 17, "text": " R_{t+1} + \\gamma V(S_{t+1}) - V(S_t)" }, { "math_id": 18, "text": "\\lambda" }, { "math_id": 19, "text": "0 \\leqslant \\lambda \\leqslant 1" }, { "math_id": 20, "text": "\\lambda = 1" } ]
https://en.wikipedia.org/wiki?curid=1209759
1209760
Natural product
Chemical compound or substance produced by a living organism, found in nature A natural product is a natural compound or substance produced by a living organism—that is, found in nature. In the broadest sense, natural products include any substance produced by life. Natural products can also be prepared by chemical synthesis (both semisynthesis and total synthesis) and have played a central role in the development of the field of organic chemistry by providing challenging synthetic targets. The term "natural product" has also been extended for commercial purposes to refer to cosmetics, dietary supplements, and foods produced from natural sources without added artificial ingredients. Within the field of organic chemistry, the definition of natural products is usually restricted to organic compounds isolated from natural sources that are produced by the pathways of secondary metabolism. Within the field of medicinal chemistry, the definition is often further restricted to secondary metabolites. Secondary metabolites (or specialized metabolites) are not essential for survival, but nevertheless provide organisms that produce them an evolutionary advantage. Many secondary metabolites are cytotoxic and have been selected and optimized through evolution for use as "chemical warfare" agents against prey, predators, and competing organisms. Secondary or specialized metabolites are often unique to species, which is contrasted to primary metabolites which have broad use across kingdoms. Secondary metabolites are marked by chemical complexity which is why they are of such interest to chemists. Natural sources may lead to basic research on potential bioactive components for commercial development as lead compounds in drug discovery. Although natural products have inspired numerous drugs, drug development from natural sources has received declining attention in the 21st century by pharmaceutical companies, partly due to unreliable access and supply, intellectual property, cost, and profit concerns, seasonal or environmental variability of composition, and loss of sources due to rising extinction rates. Classes. The broadest definition of natural product is anything that is produced by life, and includes the likes of biotic materials (e.g. wood, silk), bio-based materials (e.g. bioplastics, cornstarch), bodily fluids (e.g. milk, plant exudates), and other natural materials (e.g. soil, coal). Natural products may be classified according to their biological function, biosynthetic pathway, or source. Depending on the sources, the number of known natural product molecules ranges between 300,000 and 400,000. Function. Following Albrecht Kossel's original proposal in 1891, natural products are often divided into two major classes, the primary and secondary metabolites. Primary metabolites have an intrinsic function that is essential to the survival of the organism that produces them. Secondary metabolites in contrast have an extrinsic function that mainly affects other organisms. Secondary metabolites are not essential to survival but do increase the competitiveness of the organism within its environment. Because of their ability to modulate biochemical and signal transduction pathways, some secondary metabolites have useful medicinal properties. Natural products especially within the field of organic chemistry are often defined as primary and secondary metabolites. 
A more restrictive definition limiting natural products to secondary metabolites is commonly used within the fields of medicinal chemistry and pharmacognosy. Primary metabolites. Primary metabolites as defined by Kossel are components of basic metabolic pathways that are required for life. They are associated with essential cellular functions such as nutrient assimilation, energy production, and growth/development. They have a wide species distribution that spans many phyla and frequently more than one kingdom. Primary metabolites include the basic building blocks of life: carbohydrates, lipids, amino acids, and nucleic acids. Primary metabolites that are involved with energy production include respiratory and photosynthetic enzymes. Enzymes in turn are composed of amino acids and often non-peptidic cofactors that are essential for enzyme function. The basic structures of cells and of organisms are also composed of primary metabolites. These include cell membranes (e.g. phospholipids), cell walls (e.g. peptidoglycan, chitin), and cytoskeletons (proteins). Primary metabolite enzymatic cofactors include members of the vitamin B family. Vitamin B1, as thiamine diphosphate, is a coenzyme for pyruvate dehydrogenase, 2-oxoglutarate dehydrogenase, and transketolase, which are all involved in carbohydrate metabolism. Vitamin B2 (riboflavin) is a constituent of FMN and FAD, which are necessary for many redox reactions. Vitamin B3 (nicotinic acid or niacin), synthesized from tryptophan, is a component of the coenzymes NAD+ and NADP+, which in turn are required for electron transport in the Krebs cycle, oxidative phosphorylation, as well as many other redox reactions. Vitamin B5 (pantothenic acid) is a constituent of coenzyme A, a basic component of carbohydrate and amino acid metabolism as well as the biosynthesis of fatty acids and polyketides. Vitamin B6 (pyridoxol, pyridoxal, and pyridoxamine), as pyridoxal 5′-phosphate, is a cofactor for many enzymes, especially transaminases involved in amino acid metabolism. Vitamin B12 (the cobalamins) contains a corrin ring similar in structure to porphyrin and is an essential coenzyme for the catabolism of fatty acids as well as for the biosynthesis of methionine. DNA and RNA, which store and transmit genetic information, are composed of nucleic acid primary metabolites. First messengers are signaling molecules that control metabolism or cellular differentiation. These signaling molecules include hormones and growth factors, which in turn are composed of peptides, biogenic amines, steroid hormones, auxins, gibberellins, etc. These first messengers interact with cellular receptors, which are composed of proteins. Cellular receptors in turn activate second messengers, which are used to relay the extracellular message to intracellular targets. These signaling molecules include the primary metabolites cyclic nucleotides, diacylglycerol, etc. Secondary metabolites. Secondary metabolites, in contrast to primary metabolites, are dispensable and not absolutely required for survival. Furthermore, secondary metabolites typically have a narrow species distribution. Secondary metabolites have a broad range of functions. These include pheromones that act as social signaling molecules with other individuals of the same species, communication molecules that attract and activate symbiotic organisms, agents that solubilize and transport nutrients (siderophores etc.), and competitive weapons (repellants, venoms, toxins etc.) that are used against competitors, prey, and predators.
For many other secondary metabolites, the function is unknown. One hypothesis is that they confer a competitive advantage to the organism that produces them. An alternative view is that, in analogy to the immune system, these secondary metabolites have no specific function, but having the machinery in place to produce these diverse chemical structures is important and a few secondary metabolites are therefore produced and selected for. General structural classes of secondary metabolites include alkaloids, phenylpropanoids, polyketides, and terpenoids. Biosynthesis. The biosynthetic pathways leading to the major classes of natural products are described below. Carbohydrates. Carbohydrates are an essential energy source for most life forms. In addition, polysaccharides formed from simpler carbohydrates are important structural components of many organisms such as the cell walls of bacteria and plants. Carbohydrates are the products of plant photosynthesis and animal gluconeogenesis. Photosynthesis initially produces 3-phosphoglyceraldehyde, a sugar containing three carbon atoms (a triose). This triose in turn may be converted into glucose (a sugar containing six carbon atoms) or a variety of pentoses (sugars containing five carbon atoms) through the Calvin cycle. In animals, the three-carbon precursors lactate or glycerol can be converted into pyruvate, which in turn can be converted into carbohydrates in the liver. Fatty acids and polyketides. Through the process of glycolysis, sugars are broken down into acetyl-CoA. In an ATP-dependent enzymatically catalyzed reaction, acetyl-CoA is carboxylated to form malonyl-CoA. Acetyl-CoA and malonyl-CoA undergo a Claisen condensation with loss of carbon dioxide to form acetoacetyl-CoA. Additional condensation reactions produce successively higher molecular weight poly-β-keto chains, which are then converted into other polyketides. The polyketide class of natural products has diverse structures and functions and includes prostaglandins and macrolide antibiotics. One molecule of acetyl-CoA (the "starter unit") and several molecules of malonyl-CoA (the "extender units") are condensed by fatty acid synthase to produce fatty acids. Fatty acids are essential components of lipid bilayers that form cell membranes as well as fat energy stores in animals. Sources. Natural products may be extracted from the cells, tissues, and secretions of microorganisms, plants and animals. A crude (unfractionated) extract from any one of these sources will contain a range of structurally diverse and often novel chemical compounds. Chemical diversity in nature is based on biological diversity, so researchers collect samples from around the world to analyze and evaluate in drug discovery screens or bioassays. This effort to search for biologically active natural products is known as bioprospecting. Pharmacognosy provides the tools to detect, isolate and identify bioactive natural products that could be developed for medicinal use. When an "active principle" is isolated from a traditional medicine or other biological material, this is known as a "hit". Subsequent scientific and legal work is then performed to validate the hit (e.g. elucidation of mechanism of action, confirmation that there is no intellectual property conflict). This is followed by the hit-to-lead stage of drug discovery, where derivatives of the active compound are produced in an attempt to improve its potency and safety. In this and related ways, modern medicines can be developed directly from natural sources. 
Although traditional medicines and other biological material are considered an excellent source of novel compounds, the extraction and isolation of these compounds can be a slow, expensive and inefficient process. For large scale manufacture therefore, attempts may be made to produce the new compound by total synthesis or semisynthesis. Because natural products are generally secondary metabolites with complex chemical structures, their total/semisynthesis is not always commercially viable. In these cases, efforts can be made to design simpler analogues with comparable potency and safety that are amenable to total/semisynthesis. Prokaryotic. Bacteria. The serendipitous discovery and subsequent clinical success of penicillin prompted a large-scale search for other environmental microorganisms that might produce anti-infective natural products. Soil and water samples were collected from all over the world, leading to the discovery of streptomycin (derived from "Streptomyces griseus"), and the realization that bacteria, not just fungi, represent an important source of pharmacologically active natural products. This, in turn, led to the development of an impressive arsenal of antibacterial and antifungal agents including amphotericin B, chloramphenicol, daptomycin and tetracycline (from "Streptomyces" spp.), the polymyxins (from "Paenibacillus polymyxa"), and the rifamycins (from "Amycolatopsis rifamycinica"). Antiparasitic and antiviral drugs have similarly been derived from bacterial metabolites. Although most of the drugs derived from bacteria are employed as anti-infectives, some have found use in other fields of medicine. Botulinum toxin (from "Clostridium botulinum") and bleomycin (from "Streptomyces verticillus") are two examples. Botulinum, the neurotoxin responsible for botulism, can be injected into specific muscles (such as those controlling the eyelid) to prevent muscle spasm. Also, the glycopeptide bleomycin is used for the treatment of several cancers including Hodgkin's lymphoma, head and neck cancer, and testicular cancer. Newer trends in the field include the metabolic profiling and isolation of natural products from novel bacterial species present in underexplored environments. Examples include symbionts or endophytes from tropical environments, subterranean bacteria found deep underground via mining/drilling, and marine bacteria. Archaea. Because many Archaea have adapted to life in extreme environments such as polar regions, hot springs, acidic springs, alkaline springs, salt lakes, and the high pressure of deep ocean water, they possess enzymes that are functional under quite unusual conditions. These enzymes are of potential use in the food, chemical, and pharmaceutical industries, where biotechnological processes frequently involve high temperatures, extremes of pH, high salt concentrations, and / or high pressure. Examples of enzymes identified to date include amylases, pullulanases, cyclodextrin glycosyltransferases, cellulases, xylanases, chitinases, proteases, alcohol dehydrogenase, and esterases. Archaea represent a source of novel chemical compounds also, for example isoprenyl glycerol ethers 1 and 2 from "Thermococcus" S557 and "Methanocaldococcus jannaschii", respectively. Eukaryotic. Fungi. Several anti-infective medications have been derived from fungi including penicillin and the cephalosporins (antibacterial drugs from "Penicillium rubens" and "Cephalosporium acremonium", respectively) and griseofulvin (an antifungal drug from "Penicillium griseofulvum"). 
Other medicinally useful fungal metabolites include lovastatin (from "Pleurotus ostreatus"), which became a lead for a series of drugs that lower cholesterol levels, cyclosporin (from "Tolypocladium inflatum"), which is used to suppress the immune response after organ transplant operations, and ergometrine (from "Claviceps" spp.), which acts as a vasoconstrictor, and is used to prevent bleeding after childbirth. Asperlicin (from "Aspergillus alliaceus") is another example. Asperlicin is a novel antagonist of cholecystokinin, a neurotransmitter thought to be involved in panic attacks, and could potentially be used to treat anxiety. Plants. Plants are a major source of complex and highly structurally diverse chemical compounds (phytochemicals), this structural diversity attributed in part to the natural selection of organisms producing potent compounds to deter herbivory (feeding deterrents). Major classes of phytochemical include phenols, polyphenols, tannins, terpenes, and alkaloids. Though the number of plants that have been extensively studied is relatively small, many pharmacologically active natural products have already been identified. Clinically useful examples include the anticancer agents paclitaxel and omacetaxine mepesuccinate (from "Taxus brevifolia" and "Cephalotaxus harringtonii", respectively), the antimalarial agent artemisinin (from "Artemisia annua"), and the acetylcholinesterase inhibitor galantamine (from "Galanthus" spp.), used to treat Alzheimer's disease. Other plant-derived drugs, used medicinally and/or recreationally include morphine, cocaine, quinine, tubocurarine, muscarine, and nicotine. Animals. Animals also represent a source of bioactive natural products. In particular, venomous animals such as snakes, spiders, scorpions, caterpillars, bees, wasps, centipedes, ants, toads, and frogs have attracted much attention. This is because venom constituents (peptides, enzymes, nucleotides, lipids, biogenic amines etc.) often have very specific interactions with a macromolecular target in the body (e.g. α-bungarotoxin from cobras). As with plant feeding deterrents, this biological activity is attributed to natural selection, organisms capable of killing or paralyzing their prey and/or defending themselves against predators being more likely to survive and reproduce. Because of these specific chemical-target interactions, venom constituents have proved important tools for studying receptors, ion channels, and enzymes. In some cases, they have also served as leads in the development of novel drugs. For example, teprotide, a peptide isolated from the venom of the Brazilian pit viper "Bothrops jararaca", was a lead in the development of the antihypertensive agents cilazapril and captopril. Also, echistatin, a disintegrin from the venom of the saw-scaled viper "Echis carinatus" was a lead in the development of the antiplatelet drug tirofiban. In addition to the terrestrial animals and amphibians described above, many marine animals have been examined for pharmacologically active natural products, with corals, sponges, tunicates, sea snails, and bryozoans yielding chemicals with interesting analgesic, antiviral, and anticancer activities. Two examples developed for clinical use include ω-conotoxin (from the marine snail "Conus magus") and ecteinascidin 743 (from the tunicate "Ecteinascidia turbinata"). The former, ω-conotoxin, is used to relieve severe and chronic pain, while the latter, ecteinascidin 743 is used to treat metastatic soft tissue sarcoma. 
Other natural products derived from marine animals and under investigation as possible therapies include the antitumour agents discodermolide (from the sponge "Discodermia dissoluta"), eleutherobin (from the coral "Erythropodium caribaeorum"), and the bryostatins (from the bryozoan "Bugula neritina"). Medical uses. Natural products sometimes have pharmacological activity that can be of therapeutic benefit in treating diseases. Moreover, synthetic analogs of natural products with improved potency and safety can be prepared and therefore natural products are often used as starting points for drug discovery. Natural product constituents have inspired numerous drug discovery efforts that eventually gained approval as new drugs. Modern natural product-derived drugs. A large number of currently prescribed drugs have been either directly derived from or inspired by natural products. Some of the oldest natural product-based drugs are analgesics. The bark of the willow tree has been known from antiquity to have pain relieving properties. This is due to the presence of the natural product salicin, which in turn may be hydrolyzed into salicylic acid. A synthetic derivative, acetylsalicylic acid, better known as "aspirin", is a widely used pain reliever. Its mechanism of action is inhibition of the cyclooxygenase (COX) enzyme. Another notable example is opium, which is extracted from the latex of "Papaver somniferum" (a flowering poppy plant). The most potent narcotic component of opium is the alkaloid morphine, which acts as an opioid receptor agonist. A more recent example is the analgesic ziconotide, an N-type calcium channel blocker, which is based on a cyclic peptide cone snail toxin (ω-conotoxin MVIIA) from the species "Conus magus". A significant number of anti-infectives are based on natural products. The first antibiotic to be discovered, penicillin, was isolated from the mold "Penicillium". Penicillin and related beta lactams work by inhibiting the DD-transpeptidase enzyme that is required by bacteria to cross-link peptidoglycan to form the cell wall. Several natural product drugs target tubulin, which is a component of the cytoskeleton. These include the tubulin polymerization inhibitor colchicine isolated from "Colchicum autumnale" (autumn crocus, a flowering plant), which is used to treat gout. Colchicine is biosynthesized from the amino acids phenylalanine and tryptophan. Paclitaxel, in contrast, is a tubulin polymerization stabilizer and is used as a chemotherapeutic drug. Paclitaxel is based on the terpenoid natural product taxol, which is isolated from "Taxus brevifolia" (the Pacific yew tree). A class of drugs widely used to lower cholesterol are the HMG-CoA reductase inhibitors, for example atorvastatin. These were developed from mevastatin, a polyketide produced by the fungus "Penicillium citrinum". Finally, a number of natural product drugs are used to treat hypertension and congestive heart failure. These include the angiotensin-converting enzyme inhibitor captopril. Captopril is based on the peptidic bradykinin potentiating factor isolated from the venom of the Brazilian arrowhead viper ("Bothrops jararaca"). Limiting and enabling factors. Numerous challenges limit the use of natural products for drug discovery, resulting in a 21st-century preference by pharmaceutical companies to dedicate discovery efforts toward high-throughput screening of pure synthetic compounds with shorter timelines to refinement. 
Natural product sources are often unreliable to access and supply, have a high probability of duplication, inherently create intellectual property concerns about patent protection, vary in composition due to sourcing season or environment, and are susceptible to rising extinction rates. The biological resource for drug discovery from natural products remains abundant, with small percentages of microorganisms, plant species, and insects assessed for bioactivity. In enormous numbers, bacteria and marine microorganisms remain unexamined. As of 2008, the field of metagenomics was proposed to examine genes and their function in soil microbes, but most pharmaceutical firms have not exploited this resource fully, choosing instead to develop "diversity-oriented synthesis" from libraries of known drugs or natural sources for lead compounds with higher potential for bioactivity. Isolation and purification. All natural products begin as mixtures with other compounds from the natural source, often very complex mixtures, from which the product of interest must be isolated and purified. The "isolation" of a natural product refers, depending on context, either to the isolation of sufficient quantities of pure chemical matter for chemical structure elucidation, derivatization/degradation chemistry, biological testing, and other research needs (generally milligrams to grams, but historically, often more), or to the isolation of "analytical quantities" of the substance of interest, where the focus is on identification and quantitation of the substance (e.g. in biological tissue or fluid), and where the quantity isolated depends on the analytical method applied (but is generally always sub-microgram in scale). The ease with which the active agent can be isolated and purified depends on the structure, stability, and quantity of the natural product. The methods of isolation applied toward achieving these two distinct scales of product are likewise distinct, but generally involve extraction, precipitation, adsorption, chromatography, and sometimes crystallization. In both cases, the isolated substance is purified to "chemical homogeneity", i.e. specific combined separation and analytical methods such as LC-MS methods are chosen to be "orthogonal" (achieving their separations based on distinct modes of interaction between substance and isolating matrix), with the goal being repeated detection of only a single species present in the putative pure sample. Early isolation is almost inevitably followed by "structure determination", especially if an important pharmacologic activity is associated with the purified natural product. Structure determination refers to methods applied to determine the chemical structure of an isolated, pure natural product, a process that involves an array of chemical and physical methods that have changed markedly over the history of natural products research; in earliest days, these focused on chemical transformation of unknown substances into known substances, and measurement of physical properties such as melting point and boiling point, and related methods for determining molecular weight. In the modern era, methods focus on mass spectrometry and nuclear magnetic resonance methods, often multidimensional, and, when feasible, small molecule crystallography. For instance, the chemical structure of penicillin was determined by Dorothy Crowfoot Hodgkin in 1945, work for which she later received a Nobel Prize in Chemistry (1964). Synthesis. Many natural products have very complex structures. 
The perceived complexity of a natural product is a qualitative matter, consisting of consideration of its molecular mass, the particular arrangements of substructures (functional groups, rings etc.) with respect to one another, the number and density of those functional groups, the stability of those groups and of the molecule as a whole, the number and type of stereochemical elements, the physical properties of the molecule and its intermediates (which bear on the ease of its handling and purification), all of these viewed in the context of the novelty of the structure and whether preceding related synthetic efforts have been successful (see below for details). Some natural products, especially those less complex, are easily and cost-effectively prepared via complete chemical synthesis from readily available, simpler chemical ingredients, a process referred to as total synthesis (especially when the process involves no steps mediated by biological agents). Not all natural products are amenable to total synthesis, cost-effective or otherwise. In particular, those most complex often are not. Many are accessible, but the required routes are simply too expensive to allow synthesis on any practical or industrial scale. However, to be available for further study, all natural products must yield to isolation and purification. This may suffice if isolation provides appropriate quantities of the natural product for the intended purpose (e.g. as a drug to alleviate disease). Drugs such as penicillin, morphine, and paclitaxel proved to be affordably acquired at needed commercial scales solely via isolation procedures (without any significant synthetic chemistry contributing). However, in other cases, needed agents are not available without synthetic chemistry manipulations. Semisynthesis. The process of isolating a natural product from its source can be costly in terms of committed time and material expense, and it may challenge the availability of the relied upon natural resource (or have ecological consequences for the resource). For instance, it has been estimated that the bark of an entire yew tree ("Taxus brevifolia") would have to be harvested to extract enough paclitaxel for just a single dose of therapy. Furthermore, the number of structural analogues obtainable for structure–activity analysis (SAR) simply via harvest (if more than one structural analogue is even present) is limited by the biology at work in the organism, and so outside of the experimentalist's control. In such cases where the ultimate target is harder to come by, or limits SAR, it is sometimes possible to source a middle-to-late stage biosynthetic precursor or analogue from which the ultimate target can be prepared. This is termed semisynthesis or "partial synthesis". With this approach, the related biosynthetic intermediate is harvested and then converted to the final product by conventional procedures of chemical synthesis. This strategy can have two advantages. Firstly, the intermediate may be more easily extracted, and in higher yield, than the ultimate desired product. An example of this is paclitaxel, which can be manufactured by extracting 10-deacetylbaccatin III from "T. brevifolia" needles, then carrying out a four-step synthesis. Secondly, the route designed between semisynthetic starting material and ultimate product may permit analogues of the final product to be synthesized. The newer generation semisynthetic penicillins are an illustration of the benefit of this approach. Total synthesis. 
In general, the total synthesis of natural products is a non-commercial research activity, aimed at deeper understanding of the synthesis of particular natural product frameworks, and the development of fundamental new synthetic methods. Even so, it is of tremendous commercial and societal importance. By providing challenging synthetic targets, for example, it has played a central role in the development of the field of organic chemistry. Prior to the development of analytical chemistry methods in the twentieth century, the structures of natural products were affirmed by total synthesis (so-called "structure proof by synthesis"). Early efforts in natural products synthesis targeted complex substances such as cobalamin (vitamin B12), an essential cofactor in cellular metabolism. Symmetry. Examination of dimerized and trimerized natural products has shown that an element of bilateral symmetry is often present. Bilateral symmetry refers to a molecule or system that contains a C2, Cs, or C2v point group identity. C2 symmetry tends to be much more abundant than other types of bilateral symmetry. This finding sheds light on how these compounds might be mechanistically created, as well as providing insight into the thermodynamic properties that make these compounds more favorable. Density functional theory (DFT), the Hartree–Fock method, and semiempirical calculations also show some favorability for dimerization in natural products due to evolution of more energy per bond than the equivalent trimer or tetramer. This is proposed to be due to steric hindrance at the core of the molecule, as most natural products dimerize and trimerize in a head-to-head fashion rather than head-to-tail. Research and teaching. Research and teaching activities related to natural products fall into a number of diverse academic areas, including organic chemistry, medicinal chemistry, pharmacognosy, ethnobotany, traditional medicine, and ethnopharmacology. Other biological areas include chemical biology, chemical ecology, chemogenomics, systems biology, molecular modeling, chemometrics, and chemoinformatics. Chemistry. Natural products chemistry is a distinct area of chemical research which was important in the development and history of chemistry. Isolating and identifying natural products has been important to source substances for early preclinical drug discovery research, to understand traditional medicine and ethnopharmacology, and to find pharmacologically useful areas of chemical space. To achieve this, many technological advances have been made, such as the evolution of technology associated with chemical separations, and the development of modern methods in chemical structure determination such as NMR. Early attempts to understand the biosynthesis of natural products saw chemists first employ radiolabelling and, more recently, stable isotope labeling combined with NMR experiments. In addition, natural products are prepared by organic synthesis, to provide confirmation of their structure, or to give access to larger quantities of natural products of interest. In this process, the structures of some natural products have been revised, and the challenge of synthesising natural products has led to the development of new synthetic methodology, synthetic strategy, and tactics. 
In this regard, natural products play a central role in the training of new synthetic organic chemists, and are a principal motivation in the development of new variants of old chemical reactions (e.g., the Evans aldol reaction), as well as the discovery of completely new chemical reactions (e.g., the Woodward cis-hydroxylation, Sharpless epoxidation, and Suzuki–Miyaura cross-coupling reactions). History. Foundations of organic and natural product chemistry. The concept of natural products dates back to the early 19th century, when the foundations of organic chemistry were laid. Organic chemistry was regarded at that time as the chemistry of substances that plants and animals are composed of. It was a relatively complex form of chemistry and stood in stark contrast to inorganic chemistry, the principles of which had been established in 1789 by the Frenchman Antoine Lavoisier in his work "Traité Élémentaire de Chimie". Isolation. Lavoisier showed at the end of the 18th century that organic substances consisted of a limited number of elements: primarily carbon and hydrogen and supplemented by oxygen and nitrogen. He quickly focused on the isolation of these substances, often because they had an interesting pharmacological activity. Plants were the main source of such compounds, especially alkaloids and glycosides. It had long been known that opium, a sticky mixture of alkaloids (including codeine, morphine, noscapine, thebaine, and papaverine) from the opium poppy ("Papaver somniferum"), possessed narcotic and, at the same time, mind-altering properties. By 1805, morphine had already been isolated by the German chemist Friedrich Sertürner, and in the 1870s it was discovered that boiling morphine with acetic anhydride produced a substance with a strong pain-suppressive effect: heroin. In 1815, Eugène Chevreul isolated cholesterol, a crystalline substance belonging to the class of steroids, from animal tissue, and in 1819 strychnine, an alkaloid, was isolated. Synthesis. A second important step was the synthesis of organic compounds. Whereas the synthesis of inorganic substances had been known for a long time, the synthesis of organic substances was a difficult hurdle. In 1827 the Swedish chemist Jöns Jacob Berzelius held that an indispensable force of nature for the synthesis of organic compounds, called vital force or life force, was needed. This philosophical idea, vitalism, had many supporters well into the 19th century, even after the introduction of the atomic theory. The idea of vitalism especially fitted in with beliefs in medicine; most traditional healing practices believed that disease was the result of some imbalance in the vital energies that distinguish life from nonlife. A first attempt to break the vitalism idea in science was made in 1828, when the German chemist Friedrich Wöhler succeeded in synthesizing urea, a natural product found in urine, by heating ammonium cyanate, an inorganic substance: formula_0 This reaction showed that there was no need for a life force in order to prepare organic substances. This idea, however, was initially met with a high degree of skepticism, and only 20 years later, with the synthesis of acetic acid from carbon by Adolph Wilhelm Hermann Kolbe, was the idea accepted. Organic chemistry has since developed into an independent area of research dedicated to the study of carbon-containing compounds, since that element in common was detected in a variety of nature-derived substances. 
An important factor in the characterization of organic materials was their physical properties (such as melting point, boiling point, solubility, crystallinity, or color). Structural theories. A third step was the structure elucidation of organic substances: although the elemental composition of pure organic substances (irrespective of whether they were of natural or synthetic origin) could be determined fairly accurately, the molecular structure was still a problem. The urge to do structural elucidation resulted from a dispute between Friedrich Wöhler and Justus von Liebig, who studied two silver salts of the same composition that had different properties. Wöhler studied silver cyanate, a harmless substance, while von Liebig investigated silver fulminate, a salt with explosive properties. The elemental analysis showed that both salts contained equal quantities of silver, carbon, oxygen and nitrogen. According to the then prevailing ideas, both substances should possess the same properties, but this was not the case. This apparent contradiction was later solved by Berzelius's theory of isomers, whereby not only the number and type of elements are of importance to the properties and chemical reactivity, but also the position of atoms within a compound. This was a direct cause for the development of structure theories, such as the radical theory of Jean-Baptiste Dumas and the substitution theory of Auguste Laurent. However, it took until 1858 before August Kekulé formulated a definite structure theory. He posited that carbon is tetravalent and can bind to itself to form carbon chains as they occur in natural products. Expanding the concept. The concept of natural product, which was initially based on organic compounds that could be isolated from plants, was extended to include animal material in the middle of the 19th century by the German Justus von Liebig. In 1884, Hermann Emil Fischer turned his attention to the study of carbohydrates and purines, work for which he was awarded the Nobel Prize in 1902. He also succeeded in synthesizing a variety of carbohydrates in the laboratory, including glucose and mannose. After the discovery of penicillin by Alexander Fleming in 1928, fungi and other micro-organisms were added to the arsenal of sources of natural products. Milestones. By the 1930s, several large classes of natural products were known. Important milestones included: References. Footnotes <templatestyles src="Reflist/styles.css" /> Citations <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\mathrm{NH_4OCN\\ \\xrightarrow {\\ \\ 60^{\\circ}C \\ \\ }\\ H_2NCONH_2}" } ]
https://en.wikipedia.org/wiki?curid=1209760
1209823
Rotating reference frame
Concept in classical mechanics <templatestyles src="Hlist/styles.css"/> A rotating frame of reference is a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame. An everyday example of a rotating reference frame is the surface of the Earth. (This article considers only frames rotating about a fixed axis. For more general rotations, see Euler angles.) Fictitious forces. All non-inertial reference frames exhibit fictitious forces; rotating reference frames are characterized by three: the centrifugal force, the Coriolis force, and, for non-uniformly rotating reference frames, the Euler force. Scientists in a rotating box can measure the rotation speed and axis of rotation by measuring these fictitious forces. For example, Léon Foucault was able to show the Coriolis force that results from Earth's rotation using the Foucault pendulum. If Earth were to rotate many times faster, these fictitious forces could be felt by humans, as they are when on a spinning carousel. Centrifugal force. In classical mechanics, "centrifugal force" is an outward force associated with rotation. Centrifugal force is one of several so-called pseudo-forces (also known as inertial forces), so named because, unlike real forces, they do not originate in interactions with other bodies situated in the environment of the particle upon which they act. Instead, centrifugal force originates in the rotation of the frame of reference within which observations are made. Coriolis force. The mathematical expression for the Coriolis force appeared in an 1835 paper by the French scientist Gaspard-Gustave Coriolis in connection with hydrodynamics, and also in the tidal equations of Pierre-Simon Laplace in 1778. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Perhaps the most commonly encountered rotating reference frame is the Earth. Moving objects on the surface of the Earth experience a Coriolis force, and appear to veer to the right in the northern hemisphere, and to the left in the southern. Movements of air in the atmosphere and water in the ocean are notable examples of this behavior: rather than flowing directly from areas of high pressure to low pressure, as they would on a non-rotating planet, winds and currents tend to flow to the right of this direction north of the equator, and to the left of this direction south of the equator. This effect is responsible for the rotation of large cyclones (see Coriolis effects in meteorology). Euler force. In classical mechanics, the "Euler acceleration" (named for Leonhard Euler), also known as "azimuthal acceleration" or "transverse acceleration", is an acceleration that appears when a non-uniformly rotating reference frame is used for analysis of motion and there is variation in the angular velocity of the reference frame's axis. This article is restricted to a frame of reference that rotates about a fixed axis. The "Euler force" is a fictitious force on a body that is related to the Euler acceleration by "F" = "ma", where "a" is the Euler acceleration and "m" is the mass of the body. Relating rotating frames to stationary frames. The following is a derivation of the formulas for accelerations as well as fictitious forces in a rotating frame. It begins with the relation between a particle's coordinates in a rotating frame and its coordinates in an inertial (stationary) frame. Then, by taking time derivatives, formulas are derived that relate the velocity of the particle as seen in the two frames, and the acceleration relative to each frame. 
Using these accelerations, the fictitious forces are identified by comparing Newton's second law as formulated in the two different frames. Relation between positions in the two frames. To derive these fictitious forces, it's helpful to be able to convert between the coordinates formula_0 of the rotating reference frame and the coordinates formula_1 of an inertial reference frame with the same origin. If the rotation is about the formula_2 axis with a constant angular velocity formula_3 (so formula_4 and formula_5 which implies formula_6 for some constant formula_7 where formula_8 denotes the angle in the formula_9-plane formed at time formula_10 by formula_11 and the formula_12-axis), and if the two reference frames coincide at time formula_13 (meaning formula_14 when formula_15 so take formula_16 or some other integer multiple of formula_17), the transformation from rotating coordinates to inertial coordinates can be written formula_18 formula_19 whereas the reverse transformation is formula_20 formula_21 This result can be obtained from a rotation matrix. Introduce the unit vectors formula_22 representing standard unit basis vectors in the rotating frame. The time-derivatives of these unit vectors are found next. Suppose the frames are aligned at formula_13 and the formula_2-axis is the axis of rotation. Then for a counterclockwise rotation through angle formula_23: formula_24 where the formula_25 components are expressed in the stationary frame. Likewise, formula_26 Thus the time derivative of these vectors, which rotate without changing magnitude, is formula_27 formula_28 where formula_29 This result is the same as found using a vector cross product with the rotation vector formula_30 pointed along the z-axis of rotation formula_31 namely, formula_32 where formula_33 is either formula_34 or formula_35 Time derivatives in the two frames. Introduce unit vectors formula_22, now representing standard unit basis vectors in the general rotating frame. As they rotate they will remain normalized and perpendicular to each other. If they rotate at the speed of formula_36 about an axis along the rotation vector formula_37 then each unit vector formula_33 of the rotating coordinate system (such as formula_38 or formula_39) abides by the following equation: formula_40 So if formula_41 denotes the transformation taking basis vectors of the inertial- to the rotating frame, with matrix columns equal to the basis vectors of the rotating frame, then the cross product multiplication by the rotation vector is given by formula_42. If formula_43 is a vector function that is written as formula_44 and we want to examine its first derivative then (using the product rule of differentiation): formula_45 where formula_46 denotes the rate of change of formula_43 as observed in the rotating coordinate system. As a shorthand the differentiation is expressed as: formula_47 This result is also known as the transport theorem in analytical dynamics and is also sometimes referred to as the "basic kinematic equation". Relation between velocities in the two frames. A velocity of an object is the time-derivative of the object's position, so formula_48 The time derivative of a position formula_49 in a rotating reference frame has two components, one from the explicit time dependence due to motion of the object itself in the rotating reference frame, and another from the frame's own rotation. 
Applying the result of the previous subsection to the displacement formula_50 the velocities in the two reference frames are related by the equation formula_51 where subscript formula_52 means the inertial frame of reference, and formula_53 means the rotating frame of reference. Relation between accelerations in the two frames. Acceleration is the second time derivative of position, or the first time derivative of velocity formula_54 where subscript formula_52 means the inertial frame of reference, formula_53 the rotating frame of reference, and where, again, the expression formula_55 in the bracketed expression on the left is to be interpreted as an operator working onto the bracketed expression on the right. As formula_56, the first time derivatives of formula_57 inside either frame, when expressed with respect to the basis of e.g. the inertial frame, coincide. Carrying out the differentiations and re-arranging some terms yields the acceleration "relative to the rotating" reference frame, formula_58 formula_59 where formula_60 is the apparent acceleration in the rotating reference frame, the term formula_61 represents centrifugal acceleration, and the term formula_62 is the Coriolis acceleration. The last term, formula_63, is the Euler acceleration and is zero in uniformly rotating frames. Newton's second law in the two frames. When the expression for acceleration is multiplied by the mass of the particle, the three extra terms on the right-hand side result in fictitious forces in the rotating reference frame, that is, apparent forces that result from being in a non-inertial reference frame, rather than from any physical interaction between bodies. Using Newton's second law of motion formula_64 we obtain three fictitious forces: the Coriolis force formula_65, the centrifugal force formula_66, and the Euler force formula_67, where formula_68 is the mass of the object being acted upon by these fictitious forces. Notice that all three forces vanish when the frame is not rotating, that is, when formula_69 For completeness, the inertial acceleration formula_70 due to impressed external forces formula_71 can be determined from the total physical force in the inertial (non-rotating) frame (for example, force from physical interactions such as electromagnetic forces) using Newton's second law in the inertial frame: formula_72 Newton's law in the rotating frame then becomes formula_73 In other words, to handle the laws of motion in a rotating reference frame: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Treat the fictitious forces like real forces, and pretend you are in an inertial frame. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Obviously, a rotating frame of reference is a case of a non-inertial frame. Thus the particle in addition to the real force is acted upon by a fictitious force...The particle will move according to Newton's second law of motion if the total force acting on it is taken as the sum of the real and fictitious forces. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;This equation has exactly the form of Newton's second law, "except" that in addition to F, the sum of all forces identified in the inertial frame, there is an extra term on the right...This means we can continue to use Newton's second law in the noninertial frame "provided" we agree that in the noninertial frame we must add an extra force-like term, often called the inertial force. Use in magnetic resonance. It is convenient to consider magnetic resonance in a frame that rotates at the Larmor frequency of the spins. This is illustrated in the animation below.
The rotating wave approximation may also be used. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
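The velocity relation formula_51 and the acceleration relation formula_59 derived above lend themselves to a quick numerical sanity check. The following Python sketch is not part of the original article: it assumes a constant rotation rate about the z-axis and an arbitrary sample trajectory, builds the rotation matrix from the transformation given earlier, differentiates numerically, and confirms that the inertial- and rotating-frame quantities differ by exactly the Coriolis and centrifugal terms.

```python
# Minimal numerical check of the rotating-frame relations, assuming a constant
# rotation rate about the z-axis. The rotation rate, trajectory and evaluation
# time below are arbitrary illustrative choices, not values from the article.
import numpy as np

omega = 0.7                       # rad/s, assumed constant rotation rate
Om = np.array([0.0, 0.0, omega])  # rotation vector along z

def rot(t):
    """Rotation matrix taking rotating-frame components to inertial components."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def r_rot(t):
    """Particle position expressed in the rotating frame (arbitrary sample path)."""
    return np.array([1.0 + 0.3 * t, 0.5 * np.sin(t), 0.2 * t])

def r_in(t):
    """Same point expressed in the inertial frame."""
    return rot(t) @ r_rot(t)

def d1(f, t, h=1e-6):   # central first difference
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):   # central second difference
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

t = 2.0
v_i = d1(r_in, t)                 # inertial velocity
v_r = rot(t) @ d1(r_rot, t)       # rotating-frame velocity, expressed on inertial axes
print(np.allclose(v_i, v_r + np.cross(Om, r_in(t))))          # True: v_i = v_r + Omega x r

a_i = d2(r_in, t)                 # inertial acceleration
a_r = rot(t) @ d2(r_rot, t)       # rotating-frame acceleration, on inertial axes
rhs = a_i - 2.0 * np.cross(Om, v_r) - np.cross(Om, np.cross(Om, r_in(t)))
print(np.allclose(a_r, rhs, atol=1e-5))                       # True: Coriolis + centrifugal terms
```

For a rotation rate that varies in time, the Euler term formula_63 would have to be added to the right-hand side of the final check.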
[ { "math_id": 0, "text": "\\left(x', y', z'\\right)" }, { "math_id": 1, "text": "(x, y, z)" }, { "math_id": 2, "text": "z" }, { "math_id": 3, "text": "\\Omega" }, { "math_id": 4, "text": "z' = z" }, { "math_id": 5, "text": "\\frac{\\mathrm{d} \\theta}{\\mathrm{d} t} \\equiv \\Omega," }, { "math_id": 6, "text": "\\theta(t) = \\Omega t + \\theta_0" }, { "math_id": 7, "text": "\\theta_0" }, { "math_id": 8, "text": "\\theta(t)" }, { "math_id": 9, "text": "x-y" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "\\left(x', y'\\right)" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "t = 0" }, { "math_id": 14, "text": "\\left(x', y', z'\\right) = (x, y, z)" }, { "math_id": 15, "text": "t = 0," }, { "math_id": 16, "text": "\\theta_0 = 0" }, { "math_id": 17, "text": "2\\pi" }, { "math_id": 18, "text": "x = x'\\cos(\\theta(t)) - y'\\sin(\\theta(t))" }, { "math_id": 19, "text": "y = x'\\sin(\\theta(t)) + y'\\cos(\\theta(t))" }, { "math_id": 20, "text": "x' = x\\cos(-\\theta(t)) - y\\sin(-\\theta(t))" }, { "math_id": 21, "text": "y' = x\\sin( -\\theta(t)) + y\\cos(-\\theta(t)) \\ ." }, { "math_id": 22, "text": "\\hat{\\boldsymbol{\\imath}},\\ \\hat{\\boldsymbol{\\jmath}},\\ \\hat{\\boldsymbol{k}}" }, { "math_id": 23, "text": "\\Omega t" }, { "math_id": 24, "text": "\\hat{\\boldsymbol{\\imath}}(t) = (\\cos\\theta(t),\\ \\sin \\theta(t))" }, { "math_id": 25, "text": "(x, y)" }, { "math_id": 26, "text": "\\hat{\\boldsymbol{\\jmath}}(t) = (-\\sin \\theta(t),\\ \\cos \\theta(t)) \\ ." }, { "math_id": 27, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\hat{\\boldsymbol{\\imath}}(t) = \\Omega (-\\sin \\theta(t), \\ \\cos \\theta(t))= \\Omega \\hat{\\boldsymbol{\\jmath}} \\ ; " }, { "math_id": 28, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\hat{\\boldsymbol{\\jmath}}(t) = \\Omega (-\\cos \\theta(t), \\ -\\sin \\theta(t))= - \\Omega \\hat{\\boldsymbol{\\imath}} \\ ," }, { "math_id": 29, "text": "\\Omega \\equiv \\frac{\\mathrm{d}}{\\mathrm{d}t}\\theta(t)." }, { "math_id": 30, "text": "\\boldsymbol{\\Omega}" }, { "math_id": 31, "text": "\\boldsymbol{\\Omega} = (0,\\ 0,\\ \\Omega)," }, { "math_id": 32, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\hat{\\boldsymbol{u}} = \\boldsymbol{\\Omega \\times}\\hat{\\boldsymbol{u}} \\ , " }, { "math_id": 33, "text": "\\hat{\\boldsymbol{u}}" }, { "math_id": 34, "text": "\\hat{\\boldsymbol{\\imath}}" }, { "math_id": 35, "text": "\\hat{\\boldsymbol{\\jmath}}." }, { "math_id": 36, "text": "\\Omega(t)" }, { "math_id": 37, "text": "\\boldsymbol {\\Omega}(t)" }, { "math_id": 38, "text": "\\hat{\\boldsymbol{\\imath}},\\ \\hat{\\boldsymbol{\\jmath}}," }, { "math_id": 39, "text": "\\hat{\\boldsymbol{k}}" }, { "math_id": 40, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\hat{\\boldsymbol{u}} = \\boldsymbol{\\Omega} \\times \\boldsymbol{\\hat{u}} \\ ." 
}, { "math_id": 41, "text": "R(t)" }, { "math_id": 42, "text": "\\boldsymbol{\\Omega}\\times = R'(t)\\cdot R(t)^T" }, { "math_id": 43, "text": "\\boldsymbol{f}" }, { "math_id": 44, "text": "\\boldsymbol{f}(t)=f_1(t) \\hat{\\boldsymbol{\\imath}}+f_2(t) \\hat{\\boldsymbol{\\jmath}}+f_3(t) \\hat{\\boldsymbol{k}}\\ ," }, { "math_id": 45, "text": "\\begin{align}\n \\frac{\\mathrm{d}}{\\mathrm{d}t}\\boldsymbol{f}\n &= \\frac{\\mathrm{d}f_1}{\\mathrm{d}t}\\hat{\\boldsymbol{\\imath}} + \\frac{\\mathrm{d}\\hat{\\boldsymbol{\\imath}}}{\\mathrm{d}t}f_1 + \\frac{\\mathrm{d}f_2}{\\mathrm{d}t}\\hat{\\boldsymbol{\\jmath}} + \\frac{\\mathrm{d}\\hat{\\boldsymbol{\\jmath}}}{\\mathrm{d}t}f_2 + \\frac{\\mathrm{d}f_3}{\\mathrm{d}t}\\hat{\\boldsymbol{k}} + \\frac{\\mathrm{d}\\hat{\\boldsymbol{k}}}{\\mathrm{d}t}f_3 \\\\\n &= \\frac{\\mathrm{d}f_1}{\\mathrm{d}t}\\hat{\\boldsymbol{\\imath}} + \\frac{\\mathrm{d}f_2}{\\mathrm{d}t}\\hat{\\boldsymbol{\\jmath}} + \\frac{\\mathrm{d}f_3}{\\mathrm{d}t}\\hat{\\boldsymbol{k}} + \\left[\\boldsymbol{\\Omega} \\times \\left(f_1 \\hat{\\boldsymbol{\\imath}} + f_2 \\hat{\\boldsymbol{\\jmath}} + f_3 \\hat{\\boldsymbol{k}}\\right)\\right] \\\\\n &= \\left( \\frac{\\mathrm{d}\\boldsymbol{f}}{\\mathrm{d}t}\\right)_{\\mathrm{r}} + \\boldsymbol{\\Omega} \\times \\boldsymbol{f}\n\\end{align}" }, { "math_id": 46, "text": "\\left( \\frac{\\mathrm{d}\\boldsymbol{f}}{\\mathrm{d}t}\\right)_{\\mathrm{r}}" }, { "math_id": 47, "text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\boldsymbol{f} = \\left[ \\left(\\frac{\\mathrm{d}}{\\mathrm{d}t}\\right)_{\\mathrm{r}} + \\boldsymbol{\\Omega} \\times \\right] \\boldsymbol{f} \\ ." }, { "math_id": 48, "text": "\\mathbf{v} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t} \\ ." }, { "math_id": 49, "text": "\\boldsymbol{r}(t)" }, { "math_id": 50, "text": "\\boldsymbol{r}(t)," }, { "math_id": 51, "text": " \n\\mathbf{v_i} \\ \\stackrel{\\mathrm{def}}{=}\\ \n\\left({\\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t}}\\right)_{\\mathrm{i}} \\ \\stackrel{\\mathrm{def}}{=}\\ \n\\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t} = \n\\left[ \\left(\\frac{\\mathrm{d}}{\\mathrm{d}t}\\right)_{\\mathrm{r}} + \\boldsymbol{\\Omega} \\times \\right] \\boldsymbol{r} = \n\\left(\\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t}\\right)_{\\mathrm{r}} + \\boldsymbol\\Omega \\times \\mathbf{r} = \n\\mathbf{v}_{\\mathrm{r}} + \\boldsymbol\\Omega \\times \\mathbf{r} \\ ,\n" }, { "math_id": 52, "text": "\\mathrm{i}" }, { "math_id": 53, "text": "\\mathrm{r}" }, { "math_id": 54, "text": " \n\\mathbf{a}_{\\mathrm{i}} \\ \\stackrel{\\mathrm{def}}{=}\\ \n\\left( \\frac{\\mathrm{d}^{2}\\mathbf{r}}{\\mathrm{d}t^{2}}\\right)_{\\mathrm{i}} = \n\\left( \\frac{\\mathrm{d}\\mathbf{v}}{\\mathrm{d}t} \\right)_{\\mathrm{i}} = \n\\left[ \\left( \\frac{\\mathrm{d}}{\\mathrm{d}t} \\right)_{\\mathrm{r}} + \\boldsymbol\\Omega \\times \\right]\n\\left[\\left( \\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t} \\right)_{\\mathrm{r}} + \\boldsymbol\\Omega \\times \\mathbf{r} \\right] \\ ,\n" }, { "math_id": 55, "text": "\\boldsymbol\\Omega \\times" }, { "math_id": 56, "text": "\\boldsymbol\\Omega\\times\\boldsymbol\\Omega=\\boldsymbol 0" }, { "math_id": 57, "text": "\\boldsymbol\\Omega" }, { "math_id": 58, "text": "\\mathbf{a}_{\\mathrm{r}}" }, { "math_id": 59, "text": " \n\\mathbf{a}_{\\mathrm{r}} = \n\\mathbf{a}_{\\mathrm{i}} -\n2 \\boldsymbol\\Omega \\times \\mathbf{v}_{\\mathrm{r}} -\n\\boldsymbol\\Omega \\times (\\boldsymbol\\Omega \\times \\mathbf{r}) 
-\n\\frac{\\mathrm{d}\\boldsymbol\\Omega}{\\mathrm{d}t} \\times \\mathbf{r}\n" }, { "math_id": 60, "text": "\\mathbf{a}_{\\mathrm{r}} \\ \\stackrel{\\mathrm{def}}{=}\\ \\left( \\tfrac{\\mathrm{d}^{2}\\mathbf{r}}{\\mathrm{d}t^{2}} \\right)_{\\mathrm{r}}" }, { "math_id": 61, "text": "-\\boldsymbol\\Omega \\times (\\boldsymbol\\Omega \\times \\mathbf{r})" }, { "math_id": 62, "text": "-2 \\boldsymbol\\Omega \\times \\mathbf{v}_{\\mathrm{r}}" }, { "math_id": 63, "text": "-\\tfrac{\\mathrm{d}\\boldsymbol\\Omega}{\\mathrm{d}t} \\times \\mathbf{r}" }, { "math_id": 64, "text": "\\mathbf{F}=m\\mathbf{a}," }, { "math_id": 65, "text": "\n\\mathbf{F}_{\\mathrm{Coriolis}} = \n-2m \\boldsymbol\\Omega \\times \\mathbf{v}_{\\mathrm{r}}\n" }, { "math_id": 66, "text": "\n\\mathbf{F}_{\\mathrm{centrifugal}} = \n-m\\boldsymbol\\Omega \\times (\\boldsymbol\\Omega \\times \\mathbf{r})\n" }, { "math_id": 67, "text": "\n\\mathbf{F}_{\\mathrm{Euler}} = \n-m\\frac{\\mathrm{d}\\boldsymbol\\Omega}{\\mathrm{d}t} \\times \\mathbf{r}\n" }, { "math_id": 68, "text": "m" }, { "math_id": 69, "text": "\\boldsymbol{\\Omega} = 0 \\ . " }, { "math_id": 70, "text": "\\mathbf{a}_{\\mathrm{i}}" }, { "math_id": 71, "text": "\\mathbf{F}_{\\mathrm{imp}}" }, { "math_id": 72, "text": "\n\\mathbf{F}_{\\mathrm{imp}} = m \\mathbf{a}_{\\mathrm{i}}\n" }, { "math_id": 73, "text": "\\mathbf{F_{\\mathrm{r}}} = \\mathbf{F}_{\\mathrm{imp}} + \\mathbf{F}_{\\mathrm{centrifugal}} +\\mathbf{F}_{\\mathrm{Coriolis}} + \\mathbf{F}_{\\mathrm{Euler}} = m\\mathbf{a_{\\mathrm{r}}} \\ . " } ]
https://en.wikipedia.org/wiki?curid=1209823
12098816
Damage mechanics
Damage mechanics is concerned with the representation, or modeling, of damage of materials that is suitable for making engineering predictions about the initiation, propagation, and fracture of materials without resorting to a microscopic description that would be too complex for practical engineering analysis. Damage mechanics illustrates the typical engineering approach to model complex phenomena. To quote Dusan Krajcinovic, "It is often argued that the ultimate task of engineering research is to provide not so much a better insight into the examined phenomenon but to supply a rational predictive tool applicable in design". Damage mechanics is a topic of applied mechanics that relies heavily on continuum mechanics. Most of the work on damage mechanics uses state variables to represent the "effects" of damage on the stiffness and remaining life of the material that is being damaged as a result of thermomechanical load and ageing. The state variables may be measurable, e.g., crack density, or inferred from the "effect" they have on some macroscopic property, such as stiffness, coefficient of thermal expansion, remaining life, etc. The state variables have conjugate thermodynamic forces that motivate further damage. Initially the material is pristine, or "intact". A damage activation criterion is needed to predict damage initiation. Damage evolution does not progress spontaneously after initiation, thus requiring a damage evolution model. In plasticity-like formulations, the damage evolution is controlled by a hardening function, but this requires additional phenomenological parameters that must be found through experimentation, which is expensive and time-consuming and is therefore rarely done. On the other hand, micromechanics of damage formulations are able to predict both damage initiation and evolution without additional material properties. Creep Continuum Damage Mechanics. When mechanical structures are exposed to temperatures exceeding one-third of the melting temperature of the material of construction, time-dependent deformation (creep) and associated material degradation mechanisms become dominant modes of structural failure. While these deformation and damage mechanisms originate at the microscale where discrete processes dominate, practical application of failure theories to macroscale components is most readily achieved using the formalism of continuum mechanics. In this context, microscopic damage is idealized as a continuous state variable defined at all points within a structure. State equations are defined which govern the time evolution of damage. These equations may be readily integrated into finite element codes to analyze the damage evolution in complex 3D structures and calculate how long a component may safely be used before failure occurs. Lumped damage state variable. L. M. Kachanov and Y. N. Rabotnov suggested the following evolution equations for the creep strain ε and a lumped damage state variable ω: formula_0 formula_1 where formula_2 is the creep strain rate, formula_3 is the creep-rate multiplier, formula_4 is the applied stress, formula_5 is the creep stress exponent of the material of interest, formula_6 is the rate of damage accumulation, formula_7 is the damage-rate multiplier, and formula_8 is the damage stress exponent. In this simple case, the strain rate is governed by power-law creep with the stress enhanced by the damage state variable as damage accumulates.
The damage term ω is interpreted as a distributed loss of load-bearing area, which results in an increased local stress at the microscale. The time to failure is determined by integrating the damage evolution equation from an initial undamaged state formula_9 to a specified critical damage formula_10. If formula_11 is taken to be 1, this results in the following prediction for a structure loaded under a constant uniaxial stress formula_4: formula_12 Model parameters formula_13 and n are found by fitting the creep strain rate equation at zero damage to minimum creep rate measurements. Model parameters formula_14 and m are found by fitting the above equation to creep rupture life data. Mechanistically informed damage state variables. While easy to apply, the lumped damage model proposed by Kachanov and Rabotnov is limited by the fact that the damage state variable cannot be directly tied to a specific mechanism of strain and damage evolution. Correspondingly, extrapolation of the model beyond the original set of test data is not justified. This limitation was remedied by researchers such as A.C.F. Cocks, M.F. Ashby, and B.F. Dyson, who proposed mechanistically informed strain and damage evolution equations. Extrapolation using such equations is justified if the dominant damage mechanism remains the same at the conditions of interest. Void-growth by Power-Law Creep. In the power-law creep regime, global deformation is controlled by glide and climb of dislocations. If internal voids are present within the microstructure, global structural continuity requires that the voids both elongate and expand laterally, further reducing the local section. When cast in the damage mechanics formalism, the growth of internal voids by power-law creep can be represented by the following equations. formula_15 formula_16 where formula_3 is the creep-rate multiplier, formula_4 is the applied stress, n is the creep stress exponent, formula_17 is the average initial void radius, and d is the grain size. Void-growth by Boundary Diffusion. At very high temperatures and/or low stresses, void growth on grain boundaries is primarily controlled by the diffusive flux of vacancies along the grain boundary. As matter diffuses away from the void and plates onto the adjacent grain boundaries, a roughly spherical void is maintained by rapid diffusion of vacancies along the surface of the void. When cast in the damage mechanics formalism, the growth of internal voids by boundary diffusion can be represented by the following equations. formula_18 formula_19 formula_20 where formula_21 is the creep-rate multiplier, formula_4 is the applied stress, formula_22 is the center-to-center void spacing, formula_23 is the grain size, formula_24 is the grain-boundary diffusion coefficient, formula_25 is the grain boundary thickness, formula_26 is the atomic volume, formula_27 is Boltzmann’s constant, and formula_28 is the absolute temperature. It is noted that factors present in formula_29 are very similar to the Coble creep pre-factors due to the similarity of the two mechanisms. Precipitate Coarsening. Many modern steels and alloys are designed such that precipitates form either within the matrix or along grain boundaries during casting. These precipitates restrict dislocation motion and, if present on grain boundaries, grain boundary sliding during creep. Many precipitates are not thermodynamically stable and grow via diffusion when exposed to elevated temperatures.
As the precipitates coarsen, their ability to restrict dislocation motion decreases because the average spacing between particles increases, thus decreasing the Orowan stress required for bowing. In the case of grain boundary precipitates, precipitate growth means that fewer grain boundaries are impeded from grain boundary sliding. When cast into the damage mechanics formalism, precipitate coarsening and its effect on strain rate may be represented by the following equations. formula_30 formula_31 where formula_32 is the creep-rate multiplier, formula_4 is the applied stress, formula_5 is the creep-rate stress exponent, formula_33 is a parameter linking the precipitation damage to the strain rate, and formula_34 determines the rate of precipitate coarsening. Combining Damage Mechanisms. Multiple damage mechanisms can be combined to represent a broader range of phenomena. For instance, if both void-growth by power-law creep and precipitate coarsening are relevant mechanisms, the following combined set of equations may be used: formula_35 formula_36 formula_37 Note that both damage mechanisms are included in the creep strain rate equation. The precipitate-coarsening damage mechanism influences the void-growth damage mechanism because the void-growth mechanism depends on the global strain rate. The precipitate-growth mechanism is only time- and temperature-dependent and hence does not depend on the void-growth damage formula_38. Multiaxial Effects. The preceding equations are valid under uniaxial tension only. When a multiaxial state of stress is present in the system, each equation must be adapted so that the driving multiaxial stress is considered. For void-growth by power-law creep, the relevant stress is the von Mises stress as this drives the global creep deformation; however, for void-growth by boundary diffusion, the maximum principal stress drives the vacancy flux. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
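To make the lumped Kachanov–Rabotnov model described earlier concrete, the short Python sketch below integrates the two evolution equations under a constant uniaxial stress and compares the resulting rupture time with the closed-form expression formula_12. All parameter values are invented, dimensionless illustration values, not material data from any reference.

```python
# Forward-Euler integration of the Kachanov-Rabotnov lumped-damage equations
# under constant stress, checked against t_f = 1/((m+1)*omega_dot0*sigma**m).
# Every parameter below is an assumed illustration value.
eps_dot0 = 1.0e-10    # creep-rate multiplier (assumed)
omega_dot0 = 1.0e-12  # damage-rate multiplier (assumed)
n, m = 5.0, 4.0       # creep and damage stress exponents (assumed)
sigma = 100.0         # constant uniaxial stress (assumed units)

t, dt = 0.0, 0.25     # time step chosen small relative to the rupture time
eps, omega = 0.0, 0.0
while omega < 0.999:                                    # integrate to near-critical damage
    eps += dt * eps_dot0 * (sigma / (1.0 - omega))**n   # creep strain update
    omega += dt * omega_dot0 * (sigma / (1.0 - omega))**m
    t += dt

t_f_analytic = 1.0 / ((m + 1.0) * omega_dot0 * sigma**m)
print(f"numerical rupture time ~ {t:.4e}")
print(f"analytic rupture time  = {t_f_analytic:.4e}")   # the two agree closely for small dt
```

The same loop extends directly to the combined-mechanism equations by updating the two damage variables of formula_36 and formula_37 alongside the strain-rate expression.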
[ { "math_id": 0, "text": " \\dot \\epsilon = \\dot \\epsilon_0 \\left(\\frac{\\sigma}{1-\\omega}\\right)^n " }, { "math_id": 1, "text": " \\dot \\omega = \\dot \\omega_0 \\left(\\frac{\\sigma}{1-\\omega}\\right)^m " }, { "math_id": 2, "text": " \\dot{\\epsilon} " }, { "math_id": 3, "text": "\\dot \\epsilon_0" }, { "math_id": 4, "text": "\\sigma" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\dot \\omega" }, { "math_id": 7, "text": "\\dot \\omega_0" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "(\\omega = 0)" }, { "math_id": 10, "text": "\\left(\\omega = \\omega_f\\right)" }, { "math_id": 11, "text": "\\omega_f" }, { "math_id": 12, "text": "t_f=\\frac{1}{\\left(m+1\\right)\\dot\\omega_0 \\sigma^m}" }, { "math_id": 13, "text": " \\dot{\\epsilon_0} " }, { "math_id": 14, "text": " \\dot{\\omega_0} " }, { "math_id": 15, "text": "\\dot \\epsilon = \\dot \\epsilon_0 \\sigma^n \\left(1 + \\frac{2 r_h^0}{d}\\left[\\frac{1}{\\left(1-\\omega\\right)^n} - 1\\right] \\right) " }, { "math_id": 16, "text": " \\dot \\omega = \\dot \\epsilon_0 \\sigma^n \\left(\\frac{1}{\\left(1-\\omega\\right)^n} - \\left(1-\\omega\\right) \\right) " }, { "math_id": 17, "text": "r_h^0" }, { "math_id": 18, "text": " \\dot\\epsilon=\\dot\\epsilon_0\\phi_0\\sigma\\frac{2l}{d\\ln\\left(\\frac{1}{\\omega}\\right)}" }, { "math_id": 19, "text": "\\dot\\omega=\\dot\\epsilon_0\\phi_0\\sigma\\frac{1}{\\omega^{1/2}\\ln\\left(\\frac{1}{\\omega}\\right)}" }, { "math_id": 20, "text": "\\phi_0=\\frac{2D_B\\delta_B\\Omega}{kTl^3}\\frac{1}{{\\dot{\\varepsilon}}_0}" }, { "math_id": 21, "text": "\\dot\\epsilon_0" }, { "math_id": 22, "text": "2l" }, { "math_id": 23, "text": "d" }, { "math_id": 24, "text": "D_B" }, { "math_id": 25, "text": "\\delta_B" }, { "math_id": 26, "text": "\\Omega" }, { "math_id": 27, "text": "k" }, { "math_id": 28, "text": "T" }, { "math_id": 29, "text": "\\phi_0" }, { "math_id": 30, "text": "\\dot\\epsilon=\\dot\\epsilon_0\\sigma^n\\left(1+K^{\\prime\\prime}\\omega\\right)^n" }, { "math_id": 31, "text": "\\dot\\omega=\\frac{K^{\\prime}}{3}\\left(1-\\omega\\right)^4" }, { "math_id": 32, "text": "\\ \\dot\\epsilon_0" }, { "math_id": 33, "text": "K^{\\prime\\prime}" }, { "math_id": 34, "text": "K^{\\prime}" }, { "math_id": 35, "text": "\\dot\\epsilon=\\dot\\epsilon_0\\sigma^n\\left(1+\\frac{2r_h^0}{d}\\left[\\frac{1}{\\left(1-\\omega_1\\right)^n}-1\\right]\\right)\\left(1+K^{\\prime\\prime}\\omega_2\\right)^n" }, { "math_id": 36, "text": "\\dot\\omega_1=\\dot\\epsilon_0\\sigma^n\\left(\\frac{1}{\\left(1-\\omega_1\\right)^n}-\\left(1-\\omega_1\\right)\\right)\\left(1+K^{\\prime\\prime}\\omega_2\\right)^n" }, { "math_id": 37, "text": "\\dot\\omega_2=\\frac{K^{\\prime}}{3}\\left(1-\\omega_2\\right)^4" }, { "math_id": 38, "text": "\\omega_1" } ]
https://en.wikipedia.org/wiki?curid=12098816
12099372
Samuel A. Stouffer
American sociologist Samuel Andrew Stouffer (June 6, 1900 – August 24, 1960) was a prominent American sociologist and developer of survey research techniques. Stouffer spent much of his career attempting to answer the fundamental question: How does one measure an attitude? Stouffer served as a professor of sociology at both the University of Chicago and Harvard University, and also directed the Laboratory of Social Relations at Harvard. Biography. Born in Sac City, Iowa, Stouffer received a Bachelor of Arts at Morningside College, Sioux City in 1921, then went on to earn a Master of Arts in English at Harvard University in 1923. He returned to Sac City in 1923 to manage and edit his father's newspaper, the "Sac Sun", until 1926 when he sold it and started his doctoral studies. During that time, he married Ruth McBurney in 1924, with whom he had three children. Stouffer earned his PhD in sociology in 1930 at the University of Chicago. His dissertation was “An Experimental Comparison of Statistical and Case-History Methods of Attitude Research,” supervised by Herbert Blumer. He then served as a professor of sociology, statistics, and social statistics at universities such as the University of Chicago, the University of London, and the University of Wisconsin–Madison. Principal works. "Studies in Social Psychology in World War II: The American Soldier". (Princeton University Press, 1949). Stouffer and a distinguished team of social scientists working for the War Department surveyed over a half million American soldiers during World War II using interviews, over two hundred questionnaires, and other techniques to determine their attitudes on everything from racial integration to their officers’ performance. Their answers, almost always complex and often also counterintuitive, reveal individuals both defining and defined by their society and their primary groups. Stouffer's work in World War II led to the Expert and Combat Infantryman Badges, revision of pay scales, the demobilization point system, and influenced what appeared in "Yank, the Army Weekly, Stars & Stripes", and Frank Capra's “Why We Fight” propaganda films. Additionally, it was Stouffer and his colleagues who, during their research for "The American Soldier", developed the important sociological concept of “relative deprivation”, which, roughly stated, is the idea that one determines his status based on comparison with others. The research was published in four volumes. After Stouffer's death, the punch cards for the unclassified surveys used in "The American Soldier" were digitized by the Roper Center and are now available from the US National Archives; for details, see "A Finding Aid to Records Relating to Personal Participation in World War II ("The American Soldier" Surveys)". Microfilms of the soldiers' handwritten responses to the survey questions are also held by the US National Archives and by 2019 were digitized as images so that they could be transcribed for full-text searching. Historian Edward Gitre wrote of this project: The handwritten commentaries the researchers preserved — photographed in 1947, and amounting to some 65,000 pages — capture for posterity converging and diverging plotlines that ran through the same organization. [... W]ith the indispensable help of volunteer citizen-archivists on the 1.7 million member Zooniverse crowdsourcing platform, the entire collection of now-digitized commentaries are being transcribed, so the public can finally access and read them. A 2013 book by Joseph W.
Ryan, "Samuel Stouffer and the GI Survey: Sociologists and Soldiers during the Second World War" has been recommended "for those seeking an understanding of the World War II roots of modern opinion polling, an examination of the effects the GI Survey had on wartime operations, and an analysis of the place of "The American Soldier" in the historiography of sociology." It is an expanded version of his 2009 thesis ("What Were They Thinking? Samuel A. Stouffer and "The American Soldier"", Ryan 2009). "Communism, Conformity &amp; Civil Liberties: A Cross Section of the Nation Speaks its Mind". (Doubleday &amp; Co., 1955). In the summer of 1954, 500 interviewers under Professor Stouffer's supervision polled a cross section of 6000 Americans to determine their attitudes on nonconformist behavior. Through both anecdotal and highly disciplined research data, Stouffer illuminated the attitudes of Americans to nonconformist behavior in general, and to what liberals considered the intolerance of the McCarthy Era in particular. Although he found no “national neurosis”, what he did find was that Americans remained mostly concerned about their day-to-day existence – an important discovery in the face of an increasingly mass-culture society. He also found differing levels of tolerance based on socio-economic factors. Among his other major works is "Social Research to Test Ideas", (The Free Press, 1962). Activities. Professor Stouffer was a delegate to the International Conference on Population in Paris, 1938, President of the American Sociological Society 1952-3, President of the American Association of Public Opinion Research 1953-54, a member of the American Academy of Arts and Sciences, the American Philosophical Association, the American Philosophical Society, Phi Beta Kappa, the American Statistical Association, the Sociological Research Association, the Institute of Mathematical Statistics, the Population Association of America, the Psychometric Association, the Harvard Club and Cosmos Club. He also consulted with scores of private and public institutes, a partial listing of which includes: Personality. Stouffer is described by his family and those who knew him well as a gentleman of warmth, compassion, restless energy, high standards, depth, and a puckish sense of humor. His academic lectures, through which he often chain-smoked, were littered with allusions and quotations from Shakespeare, and these were often accompanied by baseball statistics. Deeply intellectually curious and impatient for survey results, Stouffer frequently sat by the IBM punched card sifting machine to see the raw answers to his queries. (These traits help to explain how he produced the classic "Communism, Conformity and Civil Liberties" so quickly). In his few free hours he favored Mickey Spillane novels and listening to baseball on the radio. His correspondence reveals a clear thinking pragmatist with a deep sense of responsibility to his society and to his profession. As James Davis writes in the introduction to "Communism, Conformity and Civil Liberties" (reprinted in 1992 by Transaction Publishers, New Brunswick), “Sam was a great sociologist….” Legacy. Samuel Stouffer's influence reaches well beyond military history and sociology. His work is cited in journals as diverse as Child Development Abstract, The Journal of Abnormal and Social Psychology, and Commentary. 
His research has had a lasting effect on polling procedures and analysis, market research and interpretation, race relations, population and nuclear policies, education, and economics. Additionally, his clear, honest writing style, free of unexplained jargon and “bureaucratese”, remains a model of the simple, elegant use of the English language. He also originated "Stouffer's Method" for calculating the significance of a combined result. If N individual p-values are expressed as their equivalent number of standard deviations from the normal distribution, then the combined number of standard deviations is the total divided by formula_0. This appears as an obscure footnote in "The American Soldier: Vol I" but is now in widespread use. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
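The combination rule described above is simple to apply in code. The Python sketch below is a minimal illustration (the three p-values are made up): it converts one-sided p-values to standard-normal deviates with SciPy, divides their sum by formula_0, and converts the result back to a combined p-value.

```python
# A worked example of Stouffer's method: z_i = Phi^-1(1 - p_i), then
# z_combined = sum(z_i) / sqrt(N), then back to a combined one-sided p-value.
# The input p-values are hypothetical, chosen only for illustration.
from math import sqrt
from scipy.stats import norm

p_values = [0.04, 0.10, 0.03]                 # hypothetical one-sided p-values
z_scores = [norm.isf(p) for p in p_values]    # equivalent standard-normal deviates

z_combined = sum(z_scores) / sqrt(len(z_scores))   # Stouffer's combined z
p_combined = norm.sf(z_combined)                   # combined one-sided p-value

print(f"combined z = {z_combined:.3f}, combined p = {p_combined:.4f}")
```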
[ { "math_id": 0, "text": "\\sqrt N" } ]
https://en.wikipedia.org/wiki?curid=12099372
1210
Astronomical unit
Mean distance between Earth and the Sun &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; The astronomical unit (symbol: au or AU) is a unit of length defined to be exactly equal to . Historically, the astronomical unit was conceived as the average Earth-Sun distance (the average of Earth's aphelion and perihelion), before its modern redefinition in 2012. The astronomical unit is used primarily for measuring distances within the Solar System or around other stars. It is also a fundamental component in the definition of another unit of astronomical length, the parsec. One au is equivalent to 499 light-seconds to within 10 parts per million. History of symbol usage. A variety of unit symbols and abbreviations have been in use for the astronomical unit. In a 1976 resolution, the International Astronomical Union (IAU) used the symbol "A" to denote a length equal to the astronomical unit. In the astronomical literature, the symbol AU is common. In 2006, the International Bureau of Weights and Measures (BIPM) recommended ua as the symbol for the unit, from the French "unité astronomique". In the non-normative Annex C to ISO 80000-3:2006 (later withdrawn), the symbol of the astronomical unit was also ua. In 2012, the IAU, noting "that various symbols are presently in use for the astronomical unit", recommended the use of the symbol "au". The scientific journals published by the American Astronomical Society and the Royal Astronomical Society subsequently adopted this symbol. In the 2014 revision and 2019 edition of the SI Brochure, the BIPM used the unit symbol "au". ISO 80000-3:2019, which replaces ISO 80000-3:2006, does not mention the astronomical unit. Development of unit definition. Earth's orbit around the Sun is an ellipse. The semi-major axis of this elliptic orbit is defined to be half of the straight line segment that joins the perihelion and aphelion. The centre of the Sun lies on this straight line segment, but not at its midpoint. Because the ellipse is a well-understood shape, measuring the points of its extremes defined the exact shape mathematically, and made possible calculations for the entire orbit as well as predictions based on observation. In addition, it mapped out exactly the largest straight-line distance that Earth traverses over the course of a year, defining times and places for observing the largest parallax (apparent shifts of position) in nearby stars. Knowing Earth's shift and a star's shift enabled the star's distance to be calculated. But all measurements are subject to some degree of error or uncertainty, and the uncertainties in the length of the astronomical unit only increased uncertainties in the stellar distances. Improvements in precision have always been a key to improving astronomical understanding. Throughout the twentieth century, measurements became increasingly precise and sophisticated, and ever more dependent on accurate observation of the effects described by Einstein's theory of relativity and upon the mathematical tools it used. Improved measurements were continually checked and cross-checked by means of improved understanding of the laws of celestial mechanics, which govern the motions of objects in space. The expected positions and distances of objects at an established time are calculated (in au) from these laws, and assembled into a collection of data called an ephemeris. NASA's Jet Propulsion Laboratory HORIZONS System provides one of several ephemeris computation services.
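The 499-light-second figure and the link to the parsec quoted above follow from simple arithmetic with the defined values of the astronomical unit and the speed of light. The short Python sketch below is an illustration added here, not part of the article; the two constants are the standard exact values recalled for convenience.

```python
# Quick arithmetic check of two figures quoted above, using the exact defined
# values of the astronomical unit and the speed of light.
import math

AU_M = 149_597_870_700    # astronomical unit in metres (2012 IAU definition)
C_M_PER_S = 299_792_458   # speed of light in m/s (exact by definition)

light_seconds = AU_M / C_M_PER_S
print(f"1 au = {light_seconds:.6f} light-seconds")     # ~499.0048 s, i.e. ~8 min 19 s

# One parsec is the distance at which 1 au subtends a parallax of one arcsecond.
arcsec_in_rad = math.radians(1.0 / 3600.0)
parsec_m = AU_M / math.tan(arcsec_in_rad)
print(f"1 pc = {parsec_m:.4e} m = {parsec_m / AU_M:.1f} au")   # ~3.0857e16 m, ~206265 au
```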
In 1976, to establish an even precise measure for the astronomical unit, the IAU formally adopted a new definition. Although directly based on the then-best available observational measurements, the definition was recast in terms of the then-best mathematical derivations from celestial mechanics and planetary ephemerides. It stated that "the astronomical unit of length is that length ("A") for which the Gaussian gravitational constant ("k") takes the value when the units of measurement are the astronomical units of length, mass and time". Equivalently, by this definition, one au is "the radius of an unperturbed circular Newtonian orbit about the sun of a particle having infinitesimal mass, moving with an angular frequency of "; or alternatively that length for which the heliocentric gravitational constant (the product "G"M☉) is equal to ()2 au3/d2, when the length is used to describe the positions of objects in the Solar System. Subsequent explorations of the Solar System by space probes made it possible to obtain precise measurements of the relative positions of the inner planets and other objects by means of radar and telemetry. As with all radar measurements, these rely on measuring the time taken for photons to be reflected from an object. Because all photons move at the speed of light in vacuum, a fundamental constant of the universe, the distance of an object from the probe is calculated as the product of the speed of light and the measured time. However, for precision the calculations require adjustment for things such as the motions of the probe and object while the photons are transiting. In addition, the measurement of the time itself must be translated to a standard scale that accounts for relativistic time dilation. Comparison of the ephemeris positions with time measurements expressed in Barycentric Dynamical Time (TDB) leads to a value for the speed of light in astronomical units per day (of ). By 2009, the IAU had updated its standard measures to reflect improvements, and calculated the speed of light at (TDB). In 1983, the CIPM modified the International System of Units (SI) to make the metre defined as the distance travelled in a vacuum by light in 1 /  s. This replaced the previous definition, valid between 1960 and 1983, which was that the metre equalled a certain number of wavelengths of a certain emission line of krypton-86. (The reason for the change was an improved method of measuring the speed of light.) The speed of light could then be expressed exactly as "c"0 = , a standard also adopted by the IERS numerical standards. From this definition and the 2009 IAU standard, the time for light to traverse an astronomical unit is found to be "τ"A = , which is slightly more than 8 minutes 19 seconds. By multiplication, the best IAU 2009 estimate was "A" = "c"0"τ"A = , based on a comparison of Jet Propulsion Laboratory and IAA–RAS ephemerides. In 2006, the BIPM reported a value of the astronomical unit as . In the 2014 revision of the SI Brochure, the BIPM recognised the IAU's 2012 redefinition of the astronomical unit as . This estimate was still derived from observation and measurements subject to error, and based on techniques that did not yet standardize all relativistic effects, and thus were not constant for all observers. In 2012, finding that the equalization of relativity alone would make the definition overly complex, the IAU simply used the 2009 estimate to redefine the astronomical unit as a conventional unit of length directly tied to the metre (exactly ). 
The new definition recognizes as a consequence that the astronomical unit has reduced importance, limited in use to a convenience in some applications. This definition makes the speed of light, defined as exactly , equal to exactly  ×  ÷  or about  au/d, some 60 parts per trillion less than the 2009 estimate. Usage and significance. With the definitions used before 2012, the astronomical unit was dependent on the heliocentric gravitational constant, that is the product of the gravitational constant, "G", and the solar mass, M☉. Neither "G" nor M☉ can be measured to high accuracy separately, but the value of their product is known very precisely from observing the relative positions of planets (Kepler's third law expressed in terms of Newtonian gravitation). Only the product is required to calculate planetary positions for an ephemeris, so ephemerides are calculated in astronomical units and not in SI units. The calculation of ephemerides also requires a consideration of the effects of general relativity. In particular, time intervals measured on Earth's surface (Terrestrial Time, TT) are not constant when compared with the motions of the planets: the terrestrial second (TT) appears to be longer near January and shorter near July when compared with the "planetary second" (conventionally measured in TDB). This is because the distance between Earth and the Sun is not fixed (it varies between and ) and, when Earth is closer to the Sun (perihelion), the Sun's gravitational field is stronger and Earth is moving faster along its orbital path. As the metre is defined in terms of the second and the speed of light is constant for all observers, the terrestrial metre appears to change in length compared with the "planetary metre" on a periodic basis. The metre is defined to be a unit of proper length. Indeed, the International Committee for Weights and Measures (CIPM) notes that "its definition applies only within a spatial extent sufficiently small that the effects of the non-uniformity of the gravitational field can be ignored". As such, a distance within the Solar System without specifying the frame of reference for the measurement is problematic. The 1976 definition of the astronomical unit was incomplete because it did not specify the frame of reference in which to apply the measurement, but proved practical for the calculation of ephemerides: a fuller definition that is consistent with general relativity was proposed, and "vigorous debate" ensued until August 2012 when the IAU adopted the current definition of 1 astronomical unit = metres. The astronomical unit is typically used for stellar system scale distances, such as the size of a protostellar disk or the heliocentric distance of an asteroid, whereas other units are used for other distances in astronomy. The astronomical unit is too small to be convenient for interstellar distances, where the parsec and light-year are widely used. The parsec (parallax arcsecond) is defined in terms of the astronomical unit, being the distance of an object with a parallax of . The light-year is often used in popular works, but is not an approved non-SI unit and is rarely used by professional astronomers. When simulating a numerical model of the Solar System, the astronomical unit provides an appropriate scale that minimizes (overflow, underflow and truncation) errors in floating point calculations. History. 
The book "On the Sizes and Distances of the Sun and Moon", which is ascribed to Aristarchus, says the distance to the Sun is 18 to 20 times the distance to the Moon, whereas the true ratio is about . The latter estimate was based on the angle between the half-moon and the Sun, which he estimated as (the true value being close to ). Depending on the distance that van Helden assumes Aristarchus used for the distance to the Moon, his calculated distance to the Sun would fall between and Earth radii. According to Eusebius in the "Praeparatio evangelica" (Book XV, Chapter 53), Eratosthenes found the distance to the Sun to be "σταδιων μυριαδας τετρακοσιας και οκτωκισμυριας" (literally "of "stadia" myriads 400 and ) but with the additional note that in the Greek text the grammatical agreement is between "myriads" (not "stadia") on the one hand and both "400" and "" on the other: all three are accusative plural, while σταδιων is genitive plural ("of stadia") . All three words (or all four including "stadia") are inflected. This has been translated either as "stadia" (1903 translation by Edwin Hamilton Gifford), or as "stadia" (edition of Édourad des Places, dated 1974–1991). Using the Greek stadium of 185 to 190 metres, the former translation comes to to , which is far too low, whereas the second translation comes to 148.7 to 152.8 billion metres (accurate within 2%). Hipparchus also gave an estimate of the distance of Earth from the Sun, quoted by Pappus as equal to 490 Earth radii. According to the conjectural reconstructions of Noel Swerdlow and G. J. Toomer, this was derived from his assumption of a "least perceptible" solar parallax of . A Chinese mathematical treatise, the "Zhoubi Suanjing" (c. 1st century BCE), shows how the distance to the Sun can be computed geometrically, using the different lengths of the noontime shadows observed at three places "li" apart and the assumption that Earth is flat. In the 2nd century CE, Ptolemy estimated the mean distance of the Sun as times Earth's radius. To determine this value, Ptolemy started by measuring the Moon's parallax, finding what amounted to a horizontal lunar parallax of 1° 26′, which was much too large. He then derived a maximum lunar distance of Earth radii. Because of cancelling errors in his parallax figure, his theory of the Moon's orbit, and other factors, this figure was approximately correct. He then measured the apparent sizes of the Sun and the Moon and concluded that the apparent diameter of the Sun was equal to the apparent diameter of the Moon at the Moon's greatest distance, and from records of lunar eclipses, he estimated this apparent diameter, as well as the apparent diameter of the shadow cone of Earth traversed by the Moon during a lunar eclipse. Given these data, the distance of the Sun from Earth can be trigonometrically computed to be Earth radii. This gives a ratio of solar to lunar distance of approximately 19, matching Aristarchus's figure. Although Ptolemy's procedure is theoretically workable, it is very sensitive to small changes in the data, so much so that changing a measurement by a few per cent can make the solar distance infinite. After Greek astronomy was transmitted to the medieval Islamic world, astronomers made some changes to Ptolemy's cosmological model, but did not greatly change his estimate of the Earth–Sun distance. 
For example, in his introduction to Ptolemaic astronomy, al-Farghānī gave a mean solar distance of Earth radii, whereas in his "zij", al-Battānī used a mean solar distance of Earth radii. Subsequent astronomers, such as al-Bīrūnī, used similar values. Later in Europe, Copernicus and Tycho Brahe also used comparable figures ( and Earth radii), and so Ptolemy's approximate Earth–Sun distance survived through the 16th century. Johannes Kepler was the first to realize that Ptolemy's estimate must be significantly too low (according to Kepler, at least by a factor of three) in his "Rudolphine Tables" (1627). Kepler's laws of planetary motion allowed astronomers to calculate the relative distances of the planets from the Sun, and rekindled interest in measuring the absolute value for Earth (which could then be applied to the other planets). The invention of the telescope allowed far more accurate measurements of angles than is possible with the naked eye. Flemish astronomer Godefroy Wendelin repeated Aristarchus’ measurements in 1635, and found that Ptolemy's value was too low by a factor of at least eleven. A somewhat more accurate estimate can be obtained by observing the transit of Venus. By measuring the transit in two different locations, one can accurately calculate the parallax of Venus and from the relative distance of Earth and Venus from the Sun, the solar parallax α (which cannot be measured directly due to the brightness of the Sun). Jeremiah Horrocks had attempted to produce an estimate based on his observation of the 1639 transit (published in 1662), giving a solar parallax of , similar to Wendelin's figure. The solar parallax is related to the Earth–Sun distance as measured in Earth radii by formula_0 The smaller the solar parallax, the greater the distance between the Sun and Earth: a solar parallax of is equivalent to an Earth–Sun distance of Earth radii. Christiaan Huygens believed that the distance was even greater: by comparing the apparent sizes of Venus and Mars, he estimated a value of about Earth radii, equivalent to a solar parallax of . Although Huygens' estimate is remarkably close to modern values, it is often discounted by historians of astronomy because of the many unproven (and incorrect) assumptions he had to make for his method to work; the accuracy of his value seems to be based more on luck than good measurement, with his various errors cancelling each other out. Jean Richer and Giovanni Domenico Cassini measured the parallax of Mars between Paris and Cayenne in French Guiana when Mars was at its closest to Earth in 1672. They arrived at a figure for the solar parallax of , equivalent to an Earth–Sun distance of about Earth radii. They were also the first astronomers to have access to an accurate and reliable value for the radius of Earth, which had been measured by their colleague Jean Picard in 1669 as "toises". This same year saw another estimate for the astronomical unit by John Flamsteed, which accomplished it alone by measuring the martian diurnal parallax. Another colleague, Ole Rømer, discovered the finite speed of light in 1676: the speed was so great that it was usually quoted as the time required for light to travel from the Sun to the Earth, or "light time per unit distance", a convention that is still followed by astronomers today. A better method for observing Venus transits was devised by James Gregory and published in his "Optica Promata" (1663). 
It was strongly advocated by Edmond Halley and was applied to the transits of Venus observed in 1761 and 1769, and then again in 1874 and 1882. Transits of Venus occur in pairs, but less than one pair every century, and observing the transits in 1761 and 1769 was an unprecedented international scientific operation including observations by James Cook and Charles Green from Tahiti. Despite the Seven Years' War, dozens of astronomers were dispatched to observing points around the world at great expense and personal danger: several of them died in the endeavour. The various results were collated by Jérôme Lalande to give a figure for the solar parallax of . Karl Rudolph Powalky had made an estimate of in 1864. Another method involved determining the constant of aberration. Simon Newcomb gave great weight to this method when deriving his widely accepted value of for the solar parallax (close to the modern value of ), although Newcomb also used data from the transits of Venus. Newcomb also collaborated with A. A. Michelson to measure the speed of light with Earth-based equipment; combined with the constant of aberration (which is related to the light time per unit distance), this gave the first direct measurement of the Earth–Sun distance in metres. Newcomb's value for the solar parallax (and for the constant of aberration and the Gaussian gravitational constant) were incorporated into the first international system of astronomical constants in 1896, which remained in place for the calculation of ephemerides until 1964. The name "astronomical unit" appears first to have been used in 1903. The discovery of the near-Earth asteroid 433 Eros and its passage near Earth in 1900–1901 allowed a considerable improvement in parallax measurement. Another international project to measure the parallax of 433 Eros was undertaken in 1930–1931. Direct radar measurements of the distances to Venus and Mars became available in the early 1960s. Along with improved measurements of the speed of light, these showed that Newcomb's values for the solar parallax and the constant of aberration were inconsistent with one another. Developments. The unit distance A (the value of the astronomical unit in metres) can be expressed in terms of other astronomical constants: formula_1 where G is the Newtonian constant of gravitation, M☉ is the solar mass, k is the numerical value of Gaussian gravitational constant and D is the time period of one day. The Sun is constantly losing mass by radiating away energy, so the orbits of the planets are steadily expanding outward from the Sun. This has led to calls to abandon the astronomical unit as a unit of measurement. As the speed of light has an exact defined value in SI units and the Gaussian gravitational constant k is fixed in the astronomical system of units, measuring the light time per unit distance is exactly equivalent to measuring the product G×M☉ in SI units. Hence, it is possible to construct ephemerides entirely in SI units, which is increasingly becoming the norm. A 2004 analysis of radiometric measurements in the inner Solar System suggested that the secular increase in the unit distance was much larger than can be accounted for by solar radiation, + metres per century. The measurements of the secular variations of the astronomical unit are not confirmed by other authors and are quite controversial. Furthermore, since 2010, the astronomical unit has not been estimated by the planetary ephemerides. Examples. 
The following table contains some distances given in astronomical units. It includes some examples with distances that are normally not given in astronomical units, because they are either too short or far too long. Distances normally change over time. Examples are listed by increasing distance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
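The relation formula_1 from the Developments section and the parallax relation formula_0 from the History section can both be evaluated directly. The Python sketch below does so; the numerical constants are standard reference values recalled here for illustration rather than taken from the text above, so the printed figures should be read as approximate checks, not definitive values.

```python
# Evaluating the unit distance from A^3 = G*M_sun*D^2/k^2, and the Earth-Sun
# distance in Earth radii implied by the modern solar parallax. The constants
# below are standard reference values quoted from memory (treat as approximate).
import math

GM_SUN = 1.32712440018e20   # heliocentric gravitational constant, m^3/s^2
DAY_S = 86400.0             # one day in seconds
K_GAUSS = 0.01720209895     # Gaussian gravitational constant

A = (GM_SUN * DAY_S**2 / K_GAUSS**2) ** (1.0 / 3.0)
print(f"unit distance A ~ {A:.5e} m")                 # ~1.49598e11 m, i.e. ~1 au

# Solar parallax relation: A (in Earth radii) ~ 1 radian / alpha, alpha ~ 8.794 arcsec
alpha_rad = math.radians(8.794 / 3600.0)
print(f"Earth-Sun distance ~ {1.0 / alpha_rad:.0f} Earth radii")   # ~23,455
```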
[ { "math_id": 0, "text": "A = \\cot\\alpha \\approx 1\\,\\textrm{radian}/\\alpha." }, { "math_id": 1, "text": "A^3 = \\frac{G M_\\odot D^2}{k^2}," } ]
https://en.wikipedia.org/wiki?curid=1210
12100
Graviton
Hypothetical elementary particle that mediates gravity In theories of quantum gravity, the graviton is the hypothetical quantum of gravity, an elementary particle that mediates the force of gravitational interaction. There is no complete quantum field theory of gravitons due to an outstanding mathematical problem with renormalization in general relativity. In string theory, believed by some to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string. If it exists, the graviton is expected to be massless because the gravitational force has a very long range, and appears to propagate at the speed of light. The graviton must be a spin-2 boson because the source of gravitation is the stress–energy tensor, a second-order tensor (compared with electromagnetism's spin-1 photon, the source of which is the four-current, a first-order tensor). Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way gravitational interactions do. This result suggests that, if a massless spin-2 particle is discovered, it must be the graviton. Theory. It is hypothesized that gravitational interactions are mediated by an as yet undiscovered elementary particle, dubbed the "graviton". The three other known forces of nature are mediated by elementary particles: electromagnetism by the photon, the strong interaction by gluons, and the weak interaction by the W and Z bosons. All three of these forces appear to be accurately described by the Standard Model of particle physics. In the classical limit, a successful theory of gravitons would reduce to general relativity, which itself reduces to Newton's law of gravitation in the weak-field limit. History. Albert Einstein discussed quantized gravitational radiation in 1916, the year following his publication of general relativity. The term "graviton" was coined in 1934 by Soviet physicists Dmitry Blokhintsev and Fyodor Galperin. Paul Dirac reintroduced the term in a number of lectures in 1959, noting that the energy of the gravitational field should come in quanta. A mediation of the gravitational interaction by particles was anticipated by Pierre-Simon Laplace. Just like Newton's anticipation of photons, Laplace's anticipated "gravitons" had a greater speed than the speed of light in vacuum formula_0, the speed of gravitons expected in modern theories, and were not connected to quantum mechanics or special relativity, since these theories didn't yet exist during Laplace's lifetime. Gravitons and renormalization. When describing graviton interactions, the classical theory of Feynman diagrams and semiclassical corrections such as one-loop diagrams behave normally. However, Feynman diagrams with at least two loops lead to ultraviolet divergences. These infinite results cannot be removed because quantized general relativity is not perturbatively renormalizable, unlike quantum electrodynamics and models such as the Yang–Mills theory. Therefore, incalculable answers are found from the perturbation method by which physicists calculate the probability of a particle to emit or absorb gravitons, and the theory loses predictive veracity. Those problems and the complementary approximation framework are grounds to show that a theory more unified than quantized general relativity is required to describe the behavior near the Planck scale. Comparison with other forces. 
Like the force carriers of the other forces (see photon, gluon, W and Z bosons), the graviton plays a role in general relativity, in defining the spacetime in which events take place. In some descriptions energy modifies the "shape" of spacetime itself, and gravity is a result of this shape, an idea which at first glance may appear hard to match with the idea of a force acting between particles. Because the diffeomorphism invariance of the theory does not allow any particular space-time background to be singled out as the "true" space-time background, general relativity is said to be background-independent. In contrast, the Standard Model is "not" background-independent, with Minkowski space enjoying a special status as the fixed background space-time. A theory of quantum gravity is needed in order to reconcile these differences. Whether this theory should be background-independent is an open question. The answer to this question will determine the understanding of what specific role gravitation plays in the fate of the universe. Energy and wavelength. While gravitons are presumed to be massless, they would still carry energy, as does any other quantum particle. Photon energy and gluon energy are also carried by massless particles. It is unclear which variables might determine graviton energy, the amount of energy carried by a single graviton. Alternatively, if gravitons are massive at all, the analysis of gravitational waves yielded a new upper bound on the mass of gravitons. The graviton's Compton wavelength is at least , or about 1.6 light-years, corresponding to a graviton mass of no more than . This relation between wavelength and mass-energy is calculated with the Planck–Einstein relation, the same formula that relates electromagnetic wavelength to photon energy. Experimental observation. Unambiguous detection of individual gravitons, though not prohibited by any fundamental law, is impossible with any physically reasonable detector. The reason is the extremely low cross section for the interaction of gravitons with matter. For example, a detector with the mass of Jupiter and 100% efficiency, placed in close orbit around a neutron star, would only be expected to observe one graviton every 10 years, even under the most favorable conditions. It would be impossible to discriminate these events from the background of neutrinos, since the dimensions of the required neutrino shield would ensure collapse into a black hole. LIGO and Virgo collaborations' observations have directly detected gravitational waves. Others have postulated that graviton scattering yields gravitational waves as particle interactions yield coherent states. Although these experiments cannot detect individual gravitons, they might provide information about certain properties of the graviton. For example, if gravitational waves were observed to propagate slower than "c" (the speed of light in vacuum), that would imply that the graviton has mass (however, gravitational waves must propagate slower than "c" in a region with non-zero mass density if they are to be detectable). Recent observations of gravitational waves have put an upper bound of on the graviton's mass. Astronomical observations of the kinematics of galaxies, especially the galaxy rotation problem and modified Newtonian dynamics, might point toward gravitons having non-zero mass. Difficulties and outstanding issues. Most theories containing gravitons suffer from severe problems. 
Attempts to extend the Standard Model or other quantum field theories by adding gravitons run into serious theoretical difficulties at energies close to or above the Planck scale. This is because of infinities arising due to quantum effects; technically, gravitation is not renormalizable. Since classical general relativity and quantum mechanics seem to be incompatible at such energies, from a theoretical point of view, this situation is not tenable. One possible solution is to replace particles with strings. String theories are quantum theories of gravity in the sense that they reduce to classical general relativity plus field theory at low energies, but are fully quantum mechanical, contain a graviton, and are thought to be mathematically consistent. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
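As a concrete illustration of the Energy and wavelength discussion above, the mass scale implied by a given Compton wavelength can be recovered from m = h/(λc), the Compton/Planck–Einstein relation mentioned in the text. The short Python sketch below uses the approximate 1.6 light-year wavelength quoted above as an illustrative input; it is a back-of-the-envelope check, not a reproduction of the published experimental bound.

# Back-of-the-envelope check: mass corresponding to a Compton wavelength of ~1.6 light-years.
h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electronvolt
light_year = 9.4607e15    # metres

lam = 1.6 * light_year            # assumed Compton wavelength (figure quoted in the text), m
m_kg = h / (lam * c)              # Compton relation: m = h / (lambda * c)
m_eV = m_kg * c**2 / eV           # equivalent mass in eV/c^2
print(f"{m_eV:.1e} eV/c^2")       # prints roughly 8e-23, i.e. of order 10^-22 eV/c^2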
[ { "math_id": 0, "text": "c" } ]
https://en.wikipedia.org/wiki?curid=12100
12100059
HydroGeoSphere
HydroGeoSphere (HGS) is a 3D control-volume finite element groundwater model, and is based on a rigorous conceptualization of the hydrologic system consisting of surface and subsurface flow regimes. The model is designed to take into account all key components of the hydrologic cycle. For each time step, the model solves surface and subsurface flow, solute and energy transport equations simultaneously, and provides a complete water and solute balance. History. The original name for the code was FRAC3DVS, which was created by René Therrien in 1992. The code was further developed jointly at the University of Waterloo and the Laval University, and was primarily used for academic research. It was renamed to HydroGeoSphere in 2002 with the implementation of 2D surface water flow and transport. In 2012, the software became commercialized under the support and management of Aquanty Inc. Governing equations. In order to accomplish the integrated analysis, HydroGeoSphere utilizes a rigorous, mass conservative modeling approach that fully couples the surface flow and transport equations with the 3-D, variably saturated subsurface flow and transport equations. This approach is significantly more robust than previous conjunctive approaches that rely on linkage of separate surface and subsurface modeling codes. Groundwater Flow. HydroGeoSphere assumes that the subsurface flow equation in a porous medium is always solved during a simulation, either for fully saturated or variably saturated flow conditions. The subsurface flow equation can be expanded to incorporate discrete fractures, a second interacting porous continuum, wells, tile drains and surface flow. The following assumptions are made for subsurface flow: The Richards’ equation is used to describe three-dimensional transient subsurface flow in a variably saturated porous medium: formula_0 The fluid flux, formula_1, is represented by the Darcy's law shown as: formula_2 where formula_3 is the volumetric fraction of the total porosity occupied by the porous medium, formula_4 is the internal fluid exchange rate (e.g. surface water, wells, and tile drains), formula_5 is the external fluid outside of the model domain, formula_6 is the saturated water content, formula_7 is the degree of saturation, formula_8 is the hydraulic conductivity tensor, formula_9 is the relative permeability of the medium calculated as a function of saturation, formula_10 is the pressure head, and formula_11 is the elevation head. Surface water flow. Areal surface water flow is represented in HydroGeoSphere by a two-dimensional depth-averaged flow equation, which is the diffusion-wave approximation of the Saint Venant equation for surface water flow. HydroGeoSphere's surface water flow component is implemented with the following assumptions: The surface flow components are solved by the following three equations, which are given by the following mass balance equation: formula_12 coupled with the momentum equations, neglecting inertia terms, for the x-direction: formula_13 and for the y-direction: formula_14 where formula_15 is the surface flow domain porosity, formula_16 is the water surface elevation, formula_17 and formula_18 are the vertically averaged flow velocities in the x and y directions, formula_19 is the depth of surface water flow, formula_20 is the internal fluid exchange, and formula_21 is the external fluid exchange. The surface conductances, formula_22 and formula_23 are approximated by either the Manning or Chezy equation. Solute transport. 
Three-dimensional transport of solutes is described by the modified reactive transport advective-dispersion equation: formula_24 where formula_25 is the solute concentration, formula_26 is the first-order decay constant, formula_27 is the external source or sink term, formula_28 is the internal solute transfer between domains, formula_29 is the retardation factor, formula_30 is the diffusion coefficient, and formula_31 designates parent species for the case of a decay chain. Heat transport. Graf [2005] incorporated heat transport within the saturated-zone flow regime into HydroGeoSphere together with temperature-dependent fluid properties, such as viscosity and density. The model’s capability was successfully demonstrated for the case of thermohaline flow and transport in porous and fractured porous media [Graf and Therrien, 2007]. This work extends the model’s capability to include thermal energy transport in the unsaturated zone and in the surface water, which is considered a key step in the linkage between the atmospheric and hydrologic systems. Surface heat fluxes from atmospheric inputs are an important source/sink of thermal energy, especially to the surface water system. As such, surface heat fluxes across the land surface were also incorporated into HydroGeoSphere. A complete description of the physical processes and governing flow and solute transport equations that form the basis of HydroGeoSphere can be found in Therrien et al. [2007] and therefore will not be presented here. The general equation for variably saturated subsurface thermal energy transport following Molson et al. [1992] is given by: formula_32 where formula_33 is the density, formula_34 is the heat capacity, formula_35 is the temperature of the bulk subsurface, formula_36 is the thermal conductivity, formula_37 is the thermal dispersion term, formula_38 is the thermal source/sink, formula_39 is the thermal interactions between the surface and subsurface, and formula_38 is the external thermal interactions. Surface-subsurface coupling. The 2-D areal surface flow modules of HydroGeoSphere follow the same conventions for spatial and temporal discretizations as those used by the subsurface modules. The surface flow equation is solved on a 2-D finite-element mesh stacked upon a subsurface grid when solving for both domains (i.e. the x- and y-locations of nodes are the same for each layer of nodes). For superposition, the grid generated for the subsurface domain is mirrored areally for the surface flow nodes, with surface flow node elevations corresponding to the top elevation of the topmost active layer of the subsurface grid. Note that surface flow node elevations may vary substantially to conform with topography. However, the assumptions of small slope inherent in the diffusion-wave equation will not allow for modeling of inertial effects. The discretized surface equation is coupled with the 3-D subsurface flow equation via superposition (common node approach) or via leakage through a surficial skin layer (dual node approach). For both approaches, fully implicit coupling of the surface and subsurface flow regimes provides an integral view of the movement of water, as opposed to the traditional division of surface and subsurface regimes. Flux across the land surface is, therefore, a natural internal process allowing water to move between the surface and subsurface flow systems as governed by local flow hydrodynamics, instead of using physically artificial boundary conditions at the interface. 
When the subsurface connection is provided via superposition, HydroGeoSphere adds the surface flow equation terms for the 2-D surface mesh to those of the top layer of subsurface nodes. In that case, the fluid exchange flux, which contains the leakance term, does not need to be explicitly defined. Features. The HGS model is a three-dimensional control-volume finite element simulator which is designed to simulate the entire terrestrial portion of the hydrologic cycle. It uses a globally implicit approach to simultaneously solve the 2D diffusive-wave equation and the 3D form of Richards’ equation. HGS also dynamically integrates key components of the hydrologic cycle such as evaporation from bare soil and water bodies, vegetation-dependent transpiration with root uptake, snowmelt and soil freeze/thaw. Features such as macropores, fractures, and tile drains can be incorporated either discretely or using a dual-porosity, dual-permeability formulation. Additionally, HydroGeoSphere has been linked to Weather Research and Forecasting, a mesoscale atmospheric model, for fully coupled subsurface, surface, and atmospheric simulations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
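To make the Darcy flux expression above (formula_2) concrete, here is a minimal Python/NumPy sketch for a one-dimensional vertical column. All numbers are hypothetical, and the van Genuchten–Mualem relations used for the saturation and relative permeability are one common closure chosen for illustration, not necessarily the constitutive relations used in a given HydroGeoSphere run.

import numpy as np

# Hypothetical 1-D vertical column above a water table at z = 0 (all values illustrative).
z   = np.linspace(0.0, 2.0, 21)       # elevation head, m
psi = -1.5 * z                        # assumed pressure-head profile, drier than hydrostatic, m
K   = 1e-5                            # saturated hydraulic conductivity, m/s

# Illustrative van Genuchten-Mualem closure for saturation and k_r (alpha, n are made-up parameters).
alpha, n = 2.0, 1.8
m = 1.0 - 1.0 / n
Se = np.where(psi < 0.0, (1.0 + (alpha * np.abs(psi)) ** n) ** (-m), 1.0)  # effective saturation
kr = np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2               # relative permeability

h = psi + z                           # total hydraulic head psi + z
q = -K * kr * np.gradient(h, z)       # Darcy flux q = -K k_r d(psi + z)/dz, m/s
print(q[0], q[-1])                    # positive values indicate upward, capillary-driven flow here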
[ { "math_id": 0, "text": "-\\nabla \\cdot(w_m\\textbf{q})+\\sum \\Gamma_{eq} \\pm Q = w_m \\frac{\\partial}{\\partial t} (\\theta_s S_w) " }, { "math_id": 1, "text": "\\textbf{q}" }, { "math_id": 2, "text": " \\textbf{q} = -\\textbf{K} \\cdot k_r \\nabla (\\psi+z) " }, { "math_id": 3, "text": "w_m" }, { "math_id": 4, "text": "\\Gamma_{ex}" }, { "math_id": 5, "text": "Q" }, { "math_id": 6, "text": "\\theta_s" }, { "math_id": 7, "text": "S_w" }, { "math_id": 8, "text": "\\textbf{K}" }, { "math_id": 9, "text": "k_r" }, { "math_id": 10, "text": "\\psi" }, { "math_id": 11, "text": "z" }, { "math_id": 12, "text": "\\frac{\\partial \\phi_o h_o}{\\partial t}+\\frac{\\partial \\bar{v}_{xo} d_o}{\\partial x}+\\frac{\\partial \\bar{v}_{yo} d_o}{\\partial y}+ d_o \\Gamma_o \\pm Q_o = 0 " }, { "math_id": 13, "text": "\\bar{v}_{ox} = -K_{ox} \\frac{\\partial h_o}{\\partial x} " }, { "math_id": 14, "text": "\\bar{v}_{oy} = -K_{oy} \\frac{\\partial h_o}{\\partial y} " }, { "math_id": 15, "text": "\\phi_o" }, { "math_id": 16, "text": "h_o" }, { "math_id": 17, "text": "\\bar{v}_{xo}" }, { "math_id": 18, "text": "\\bar{v}_{yo}" }, { "math_id": 19, "text": "d_o" }, { "math_id": 20, "text": "\\Gamma_o" }, { "math_id": 21, "text": "Q_o" }, { "math_id": 22, "text": "K_{ox}" }, { "math_id": 23, "text": "K_{oy}" }, { "math_id": 24, "text": " -\\nabla \\cdot w_m (\\textbf{q} C-\\theta_s S_w \\textbf{D} \\nabla C)+[w_m \\theta_s S_w R \\lambda C]_{par} + \\sum{\\Omega_{ex} + Q_c} = w_m \\left[\\frac{\\partial(\\theta_s S_w R C)}{\\partial t} + \\theta_s S_w R \\lambda C \\right] " }, { "math_id": 25, "text": "C" }, { "math_id": 26, "text": "\\lambda" }, { "math_id": 27, "text": "Q_c" }, { "math_id": 28, "text": "\\Omega" }, { "math_id": 29, "text": "R" }, { "math_id": 30, "text": "\\textbf{D}" }, { "math_id": 31, "text": "par" }, { "math_id": 32, "text": "-\\nabla \\cdot \\Big(\\textbf{q} \\rho_w c_w T - (k_b + c_b \\rho_b \\textbf{D}) \\nabla T\\Big) + \\Omega_o \\pm Q_T = \\frac{\\partial \\rho_b c_b T}{\\partial t} " }, { "math_id": 33, "text": "\\rho" }, { "math_id": 34, "text": "c" }, { "math_id": 35, "text": "T" }, { "math_id": 36, "text": "k" }, { "math_id": 37, "text": "D" }, { "math_id": 38, "text": "Q_T" }, { "math_id": 39, "text": "\\Omega_o" } ]
https://en.wikipedia.org/wiki?curid=12100059
12101027
Holstein–Primakoff transformation
In quantum mechanics, the Holstein–Primakoff transformation is a mapping from boson creation and annihilation operators to the spin operators, effectively truncating their infinite-dimensional Fock space to finite-dimensional subspaces. One important aspect of quantum mechanics is the occurrence of—in general—non-commuting operators which represent observables, quantities that can be measured. A standard example of a set of such operators are the three components of the angular momentum operators, which are crucial in many quantum systems. These operators are complicated, and one would like to find a simpler representation, which can be used to generate approximate calculational schemes. The transformation was developed in 1940 by Theodore Holstein, a graduate student at the time, and Henry Primakoff. This method has found widespread applicability and has been extended in many different directions. There is a close link to other methods of boson mapping of operator algebras: in particular, the (non-Hermitian) Dyson–Maleev technique, and to a lesser extent the Jordan–Schwinger map. There is, furthermore, a close link to the theory of (generalized) coherent states in Lie algebras. Description. The basic idea can be illustrated for the basic example of spin operators of quantum mechanics. For any set of right-handed orthogonal axes, define the components of this vector operator as formula_0, formula_1 and formula_2, which are mutually noncommuting, i.e., formula_3 and its cyclic permutations. In order to uniquely specify the states of a spin, one may diagonalise any set of commuting operators. Normally one uses the SU(2) Casimir operators formula_4 and formula_2, which leads to states with the quantum numbers formula_5, formula_6 formula_7 The projection quantum number formula_8 takes on all the values formula_9. Consider a single particle of spin s (i.e., look at a single irreducible representation of SU(2)). Now take the state with maximal projection formula_10, the extremal weight state as a vacuum for a set of boson operators, and each subsequent state with lower projection quantum number as a boson excitation of the previous one, formula_11 Each additional boson then corresponds to a decrease of "ħ" in the spin projection. Thus, the spin raising and lowering operators formula_12 and formula_13, so that formula_14, correspond (in the sense detailed below) to the bosonic annihilation and creation operators, respectively. The precise relations between the operators must be chosen to ensure the correct commutation relations for the spin operators, such that they act on a finite-dimensional space, unlike the original Fock space. The resulting Holstein–Primakoff transformation can be written as formula_15 The transformation is particularly useful in the case where s is large, when the square roots can be expanded as Taylor series, to give an expansion in decreasing powers of s. Alternatively to a Taylor expansion there has been recent progress with a resummation of the series that made expressions possible that are polynomial in bosonic operators but still mathematically exact (on the physical subspace). The first method develops a resummation method that is exact for spin formula_16, while the latter employs a Newton series (a finite difference) expansion with an identical result, as shown below formula_17 While the expression above is not exact for spins higher than 1/2 it is an improvement over the Taylor series. Exact expressions also exist for higher spins and include formula_18 terms. 
Much like the result above, the expressions for higher spins also satisfy formula_19, and therefore the resummation is Hermitian. There also exists a non-Hermitian Dyson–Maleev (by Freeman Dyson and S. V. Maleev) variant realization J, related to the above and valid for all spins, formula_20 satisfying the same commutation relations and characterized by the same Casimir invariant. The technique can be further extended to the Witt algebra, which is the centerless Virasoro algebra. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
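The transformation can also be checked numerically on the finite-dimensional physical subspace. The following Python/NumPy sketch (with ħ set to 1; a check for illustration, not part of the original presentation) builds the boson operators on the (2s+1)-dimensional subspace, forms the Holstein–Primakoff operators as in formula_15, and confirms the commutation relations [S_z, S_+] = S_+ and [S_+, S_-] = 2 S_z to machine precision.

import numpy as np

def hp_spin_operators(s):
    # Holstein-Primakoff S+, S-, Sz on the truncated (2s+1)-dimensional boson space |n>, n = 0..2s.
    dim = int(round(2 * s)) + 1
    n = np.arange(dim)
    a = np.diag(np.sqrt(n[1:].astype(float)), k=1)      # a|n> = sqrt(n) |n-1>
    number = np.diag(n.astype(float))                   # number operator a†a
    root = np.diag(np.sqrt(1.0 - n / (2.0 * s)))        # sqrt(1 - a†a/(2s)), diagonal in this basis
    Sp = np.sqrt(2.0 * s) * root @ a                    # S+ = sqrt(2s) sqrt(1 - a†a/2s) a
    Sm = Sp.conj().T                                    # S- = (S+)†
    Sz = s * np.eye(dim) - number                       # Sz = s - a†a
    return Sp, Sm, Sz

for s in (0.5, 1.0, 2.5):
    Sp, Sm, Sz = hp_spin_operators(s)
    err1 = np.max(np.abs(Sz @ Sp - Sp @ Sz - Sp))       # [Sz, S+] - S+
    err2 = np.max(np.abs(Sp @ Sm - Sm @ Sp - 2 * Sz))   # [S+, S-] - 2 Sz
    print(s, err1, err2)                                # both errors are of order 1e-16

On the truncated subspace the square roots are finite diagonal matrices, which is why the check holds exactly up to floating-point rounding.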
[ { "math_id": 0, "text": "S_x" }, { "math_id": 1, "text": "S_y" }, { "math_id": 2, "text": "S_z" }, { "math_id": 3, "text": "\\left[S_x,S_y\\right] = i\\hbar S_z" }, { "math_id": 4, "text": "S^2" }, { "math_id": 5, "text": "\\left|s,m_s\\right\\rangle" }, { "math_id": 6, "text": "S^2\\left|s,m_s\\right\\rangle=\\hbar^2 s(s+1) \\left|s,m_s\\right\\rangle," }, { "math_id": 7, "text": "S_z\\left|s,m_s\\right\\rangle=\\hbar m_s\\left|s,m_s\\right\\rangle." }, { "math_id": 8, "text": "m_s" }, { "math_id": 9, "text": " (-s, -s+1, \\ldots ,s-1, s) " }, { "math_id": 10, "text": "\\left|s,m_s= +s\\right\\rangle" }, { "math_id": 11, "text": "\\left|s,s-n\\right\\rangle\\mapsto \\frac{1}{\\sqrt{n!}}\\left(a^\\dagger\\right)^n|0\\rangle_B ~." }, { "math_id": 12, "text": "S_+= S_x + i S_y" }, { "math_id": 13, "text": "S_- = S_x - i S_y" }, { "math_id": 14, "text": "[S_+,S_-]=2\\hbar S_z" }, { "math_id": 15, "text": "S_+ = \\hbar \\sqrt{2s} \\sqrt{1-\\frac{a^\\dagger a}{2s}}\\, a ~, \\qquad\nS_- = \\hbar \\sqrt{2s} a^\\dagger\\, \\sqrt{1-\\frac{a^\\dagger a}{2s}} ~, \\qquad \nS_z = \\hbar(s - a^\\dagger a) ~." }, { "math_id": 16, "text": "s=1/2" }, { "math_id": 17, "text": "S_+^{(1/2)}= \\hbar \\sqrt{2s}\\left[1+\\left(\\sqrt{1-\\frac{1}{2 s}}-1\\right)a^\\dagger a\\right]a, \\qquad\nS_-^{(1/2)} = (S_+^{(1/2)})^\\dagger, \\qquad \nS_z^{(1/2)} = \\hbar(s - a^\\dagger a) ~." }, { "math_id": 18, "text": "2s+1" }, { "math_id": 19, "text": "S_+ = S_-^\\dagger" }, { "math_id": 20, "text": "\nJ_+ = \\hbar \\, a ~, \\qquad\nJ_-= S_- ~ \\sqrt{2s-a^\\dagger a} = \\hbar a^\\dagger\\, (2s-a^\\dagger a)~, \\qquad\nJ_z=S_z = \\hbar(s - a^\\dagger a) ~,\n" } ]
https://en.wikipedia.org/wiki?curid=12101027
12101596
Convolution power
In mathematics, the convolution power is the "n"-fold iteration of the convolution with itself. Thus if formula_0 is a function on Euclidean space R"d" and formula_1 is a natural number, then the convolution power is defined by formula_2 where ∗ denotes the convolution operation of functions on R"d" and δ0 is the Dirac delta distribution. This definition makes sense if "x" is an integrable function (in L1), a rapidly decreasing distribution (in particular, a compactly supported distribution) or is a finite Borel measure. If "x" is the distribution function of a random variable on the real line, then the "n"th convolution power of "x" gives the distribution function of the sum of "n" independent random variables with identical distribution "x". The central limit theorem states that if "x" is in L1 and L2 with mean zero and variance σ2, then formula_3 where Φ is the cumulative standard normal distribution on the real line. Equivalently, formula_4 tends weakly to the standard normal distribution. In some cases, it is possible to define powers "x"*"t" for arbitrary real "t" &gt; 0. If μ is a probability measure, then μ is infinitely divisible provided there exists, for each positive integer "n", a probability measure μ1/"n" such that formula_5 That is, a measure is infinitely divisible if it is possible to define all "n"th roots. Not every probability measure is infinitely divisible, and a characterization of infinitely divisible measures is of central importance in the abstract theory of stochastic processes. Intuitively, a measure should be infinitely divisible provided it has a well-defined "convolution logarithm." The natural candidate for measures having such a logarithm are those of (generalized) Poisson type, given in the form formula_6 In fact, the Lévy–Khinchin theorem states that a necessary and sufficient condition for a measure to be infinitely divisible is that it must lie in the closure, with respect to the vague topology, of the class of Poisson measures . Many applications of the convolution power rely on being able to define the analog of analytic functions as formal power series with powers replaced instead by the convolution power. Thus if formula_7 is an analytic function, then one would like to be able to define formula_8 If "x" ∈ "L"1(R"d") or more generally is a finite Borel measure on R"d", then the latter series converges absolutely in norm provided that the norm of "x" is less than the radius of convergence of the original series defining "F"("z"). In particular, it is possible for such measures to define the convolutional exponential formula_9 It is not generally possible to extend this definition to arbitrary distributions, although a class of distributions on which this series still converges in an appropriate weak sense is identified by . Properties. If "x" is itself suitably differentiable, then from the properties of convolution, one has formula_10 where formula_11 denotes the derivative operator. Specifically, this holds if "x" is a compactly supported distribution or lies in the Sobolev space "W"1,1 to ensure that the derivative is sufficiently regular for the convolution to be well-defined. Applications. In the configuration random graph, the size distribution of connected components can be expressed via the convolution power of the excess degree distribution (): formula_12 Here, formula_13 is the size distribution for connected components, formula_14 is the excess degree distribution, and formula_15 denotes the degree distribution. 
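For a discrete distribution the convolution power is easy to compute directly, which gives a quick numerical illustration of the statement above that x^{*n} is the distribution of a sum of n independent copies. The Python/NumPy sketch below (the fair-die distribution is just an arbitrary example) iterates numpy.convolve and checks that the total mass stays 1 while the mean and variance scale as nμ and nσ².

import numpy as np

def convolution_power(x, n):
    # n-fold convolution power of a discrete distribution on 0, 1, 2, ...; x^{*0} is delta_0.
    out = np.array([1.0])
    for _ in range(n):
        out = np.convolve(out, x)
    return out

die = np.array([0.0] + [1.0 / 6.0] * 6)   # fair die supported on 1..6 (index = value)
n = 10
p = convolution_power(die, n)             # distribution of the sum of 10 independent dice

support = np.arange(p.size)
mean = np.sum(support * p)
var = np.sum((support - mean) ** 2 * p)
print(p.sum(), mean, var)                 # ~1.0, 10 * 3.5 = 35.0, 10 * 35/12 ≈ 29.17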
As convolution algebras are special cases of Hopf algebras, the convolution power is a special case of the (ordinary) power in a Hopf algebra. In applications to quantum field theory, the convolution exponential, convolution logarithm, and other analytic functions based on the convolution are constructed as formal power series in the elements of the algebra . If, in addition, the algebra is a Banach algebra, then convergence of the series can be determined as above. In the formal setting, familiar identities such as formula_16 continue to hold. Moreover, by the permanence of functional relations, they hold at the level of functions, provided all expressions are well-defined in an open set by convergent series.
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": " x^{*n} = \\underbrace{x * x * x * \\cdots * x * x}_n,\\quad x^{*0}=\\delta_0 " }, { "math_id": 3, "text": "P\\left(\\frac{x^{*n}}{\\sigma\\sqrt{n}} < \\beta\\right) \\to \\Phi(\\beta)\\quad\\rm{as}\\ n\\to\\infty" }, { "math_id": 4, "text": "x^{*n}/\\sigma\\sqrt{n}" }, { "math_id": 5, "text": "\\mu_{1/n}^{* n} = \\mu." }, { "math_id": 6, "text": "\\pi_{\\alpha,\\mu} = e^{-\\alpha}\\sum_{n=0}^\\infty \\frac{\\alpha^n}{n!}\\mu^{*n}." }, { "math_id": 7, "text": "\\textstyle{F(z) = \\sum_{n=0}^\\infty a_n z^n}" }, { "math_id": 8, "text": "F^*(x) = a_0\\delta_0 + \\sum_{n=1}^\\infty a_n x^{*n}." }, { "math_id": 9, "text": "\\exp^*(x) = \\delta_0 + \\sum_{n=1}^\\infty \\frac{x^{*n}}{n!}." }, { "math_id": 10, "text": "\\mathcal{D}\\big\\{x^{*n}\\big\\} = (\\mathcal{D}x) * x^{*(n-1)} = x * \\mathcal{D}\\big\\{x^{*(n-1)}\\big\\}" }, { "math_id": 11, "text": "\\mathcal{D}" }, { "math_id": 12, "text": "\nw(n)=\\begin{cases}\n\\frac{\\mu_1}{n-1} u_1^{*n}(n-2),& n>1, \\\\\nu(0) & n=1.\n\\end{cases}\n" }, { "math_id": 13, "text": "w(n)" }, { "math_id": 14, "text": "\nu_1(k) = \\frac{k+1}{\\mu_1} u(k+1),\n" }, { "math_id": 15, "text": "u(k)" }, { "math_id": 16, "text": "x = \\log^*(\\exp^*x) = \\exp^*(\\log^*x)" } ]
https://en.wikipedia.org/wiki?curid=12101596
12104271
Pochhammer k-symbol
Term in the mathematical theory of special functions In the mathematical theory of special functions, the Pochhammer "k"-symbol and the "k"-gamma function, introduced by Rafael Díaz and Eddy Pariguan are generalizations of the Pochhammer symbol and gamma function. They differ from the Pochhammer symbol and gamma function in that they can be related to a general arithmetic progression in the same manner as those are related to the sequence of consecutive integers. Definition. The Pochhammer "k"-symbol ("x")"n,k" is defined as formula_0 and the "k"-gamma function Γ"k", with "k" &gt; 0, is defined as formula_1 When "k" = 1 the standard Pochhammer symbol and gamma function are obtained. Díaz and Pariguan use these definitions to demonstrate a number of properties of the hypergeometric function. Although Díaz and Pariguan restrict these symbols to "k" &gt; 0, the Pochhammer "k"-symbol as they define it is well-defined for all real "k," and for negative "k" gives the falling factorial, while for "k" = 0 it reduces to the power "xn". The Díaz and Pariguan paper does not address the many analogies between the Pochhammer "k"-symbol and the power function, such as the fact that the binomial theorem can be extended to Pochhammer "k"-symbols. It is true, however, that many equations involving the power function "xn" continue to hold when "xn" is replaced by ("x")"n,k". Continued Fractions, Congruences, and Finite Difference Equations. Jacobi-type J-fractions for the "ordinary" generating function of the Pochhammer k-symbol, denoted in slightly different notation by formula_2 for fixed formula_3 and some indeterminate parameter formula_4, are considered in in the form of the next infinite continued fraction expansion given by formula_5 The rational formula_6 convergent function, formula_7, to the full generating function for these products expanded by the last equation is given by formula_8 where the component convergent function sequences, formula_9 and formula_10, are given as closed-form sums in terms of the ordinary Pochhammer symbol and the Laguerre polynomials by formula_11 The rationality of the formula_6 convergent functions for all formula_12, combined with known enumerative properties of the J-fraction expansions, imply the following finite difference equations both exactly generating formula_13 for all formula_14, and generating the symbol modulo formula_15 for some fixed integer formula_16: formula_17 The rationality of formula_7 also implies the next exact expansions of these products given by formula_18 where the formula is expanded in terms of the special zeros of the Laguerre polynomials, or equivalently, of the confluent hypergeometric function, defined as the finite (ordered) set formula_19 and where formula_20 denotes the partial fraction decomposition of the rational formula_6 convergent function. Additionally, since the denominator convergent functions, formula_10, are expanded exactly through the Laguerre polynomials as above, we can exactly generate the Pochhammer k-symbol as the series coefficients formula_21 for any prescribed integer formula_22. Special Cases. 
Special cases of the Pochhammer k-symbol, formula_23, correspond to the following special cases of the falling and rising factorials, including the Pochhammer symbol, and the generalized cases of the multiple factorial functions (multifactorial functions), or the formula_24-factorial functions studied in the last two references by Schmidt: the Pochhammer symbol (rising factorial), formula_25; the falling factorial, formula_26; the single factorial function, formula_27; the double factorial function, formula_28; and the multifactorial (formula_24-factorial) functions, defined recursively by formula_29 for formula_30, which for formula_31 satisfy formula_32 and, in terms of the k-symbol, formula_33. The expansions of these "k-symbol-related" products, considered termwise with respect to the coefficients of the powers of formula_34 (formula_35) for each finite formula_14, are defined in the article on generalized Stirling numbers of the first kind and generalized Stirling (convolution) polynomials.
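The defining product and the special cases listed above are straightforward to verify numerically. The Python sketch below (a direct implementation of the defining product, not a library routine) checks the rising- and falling-factorial cases, the single and double factorials, and the k = 0 power case mentioned earlier.

from math import factorial, prod

def pochhammer_k(x, n, k):
    # Pochhammer k-symbol (x)_{n,k} = x (x + k) (x + 2k) ... (x + (n-1)k).
    return prod(x + i * k for i in range(n))

n = 6
assert pochhammer_k(3, 4, 1) == 3 * 4 * 5 * 6                  # k = 1: rising factorial (3)_4
assert pochhammer_k(7, 4, -1) == 7 * 6 * 5 * 4                 # k = -1: falling factorial
assert pochhammer_k(1, n, 1) == factorial(n)                   # n! = (1)_{n,1}
assert pochhammer_k(n, n, -1) == factorial(n)                  # n! = (n)_{n,-1}
assert pochhammer_k(1, n, 2) == prod(range(1, 2 * n, 2))       # (2n-1)!! = (1)_{n,2}
assert pochhammer_k(2, 5, 0) == 2 ** 5                         # k = 0 reduces to the power x^n
print("all identities hold")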
[ { "math_id": 0, "text": " \n\\begin{align}\n(x)_{n,k} & = x(x + k)(x + 2k) \\cdots (x + (n-1)k)=\\prod_{i=1}^n (x+(i-1)k) \\\\ \n & = k^n \\times \\left(\\frac{x}{k}\\right)_n,\\, \n\\end{align} \n" }, { "math_id": 1, "text": "\\Gamma_k(x) = \\lim_{n\\to\\infty} \\frac{n!k^n (nk)^{x/k - 1}}{(x)_{n,k}}. " }, { "math_id": 2, "text": "p_n(\\alpha, R) := R(R+\\alpha)\\cdots(R+(n-1)\\alpha)" }, { "math_id": 3, "text": "\\alpha > 0" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "\n\\begin{align} \n\\text{Conv}_h(\\alpha, R; z) & := \n \\cfrac{1}{1 - R \\cdot z - \n \\cfrac{\\alpha R \\cdot z^2}{ \n 1 - (R+2\\alpha) \\cdot z -\n \\cfrac{2\\alpha (R + \\alpha) \\cdot z^2}{ \n 1 - (R + 4\\alpha) \\cdot z - \n \\cfrac{3\\alpha (R + 2\\alpha) \\cdot z^2}{ \n \\cdots}}}}.\n\\end{align} \n" }, { "math_id": 6, "text": "h^{th}" }, { "math_id": 7, "text": "\\text{Conv}_h(\\alpha, R; z)" }, { "math_id": 8, "text": "\n\\begin{align} \n\\text{Conv}_h(\\alpha, R; z) & := \n \\cfrac{1}{1 - R \\cdot z - \n \\cfrac{\\alpha R \\cdot z^2}{ \n 1 - (R+2\\alpha) \\cdot z -\n \\cfrac{2\\alpha (R + \\alpha) \\cdot z^2}{ \n 1 - (R + 4\\alpha) \\cdot z - \n \\cfrac{3\\alpha (R + 2\\alpha) \\cdot z^2}{ \n \\cfrac{\\cdots}{1 - (R + 2 (h-1) \\alpha) \\cdot z}}}}} \\\\ \n & = \n \\frac{\\text{FP}_h(\\alpha, R; z)}{\\text{FQ}_h(\\alpha, R; z)} = \n \\sum_{n=0}^{2h-1} p_n(\\alpha, R) z^n + \n \\sum_{n=2h}^{\\infty} \\widetilde{e}_{h,n}(\\alpha, R) z^n, \n\\end{align} \n" }, { "math_id": 9, "text": "\\text{FP}_h(\\alpha, R; z)" }, { "math_id": 10, "text": "\\text{FQ}_h(\\alpha, R; z)" }, { "math_id": 11, "text": "\n\\begin{align} \n\\text{FP}_h(\\alpha, R; z) & = \\sum_{n=0}^{h-1}\\left[\\sum_{i=0}^n \\binom{h}{i} (1-h-R/\\alpha)_i (R/\\alpha)_{n-i}\\right] (\\alpha z)^n \\\\ \n\\text{FQ}_h(\\alpha, R; z) & = \\sum_{i=0}^h \\binom{h}{i} (R/\\alpha+h-i)_i(-\\alpha z)^i \\\\ \n & = (-\\alpha z)^h \\cdot h! \\cdot L_h^{(R/\\alpha-1)}\\left((\\alpha z)^{-1}\\right). \n\\end{align} \n" }, { "math_id": 12, "text": "h \\geq 2" }, { "math_id": 13, "text": "(x)_{n,\\alpha}" }, { "math_id": 14, "text": "n \\geq 1" }, { "math_id": 15, "text": "h \\alpha^t" }, { "math_id": 16, "text": "0 \\leq t \\leq h" }, { "math_id": 17, "text": " \n\\begin{align} \n(x)_{n,\\alpha} & = \\sum_{0 \\leq k < n} \\binom{n}{k+1} (-1)^k (x+(n-1)\\alpha)_{k+1,-\\alpha} (x)_{n-1-k,\\alpha} \\\\ \n(x)_{n,\\alpha} & \\equiv \\sum_{0 \\leq k \\leq n} \\binom{h}{k} \\alpha^{n+(t+1)k} (1-h-x/\\alpha)_k (x/\\alpha)_{n-k} && \\pmod{h \\alpha^t}. \n\\end{align} \n" }, { "math_id": 18, "text": "(x)_{n,\\alpha} = \\sum_{j=1}^h c_{h,j}(\\alpha, x) \\times \\ell_{h,j}(\\alpha, x)^n, " }, { "math_id": 19, "text": "\\left(\\ell_{h,j}(\\alpha, x)\\right)_{j=1}^h = \\left\\{ z_j : \\alpha^h \\times U\\left(-h, \\frac{x}{\\alpha}, \\frac{z}{\\alpha}\\right) = 0,\\ 1 \\leq j \\leq h \\right\\}, " }, { "math_id": 20, "text": "\\text{Conv}_h(\\alpha, R; z) := \\sum_{j=1}^h c_{h,j}(\\alpha, x) / (1-\\ell_{h,j}(\\alpha, x))" }, { "math_id": 21, "text": "(x)_{n,\\alpha} = \\alpha^n \\cdot [w^n]\\left(\\sum_{i=0}^{n+n_0-1} \\binom{\\frac{x}{\\alpha}+i-1}{i} \\times \\frac{(-1/w)}{(i+1) L_i^{(x/\\alpha-1)}(1/w) L_{i+1}^{(x/\\alpha-1)}(1/w)}\\right), " }, { "math_id": 22, "text": "n_0 \\geq 0" }, { "math_id": 23, "text": "(x)_{n,k}" }, { "math_id": 24, "text": "\\alpha" }, { "math_id": 25, "text": "(x)_{n,1} \\equiv (x)_n" }, { "math_id": 26, "text": "(x)_{n,-1} \\equiv x^{\\underline{n}}" }, { "math_id": 27, "text": "n! 
= (1)_{n,1} = (n)_{n,-1}" }, { "math_id": 28, "text": "(2n-1)!! = (1)_{n,2} = (2n-1)_{n,-2}" }, { "math_id": 29, "text": "n!_{(\\alpha)} = n \\cdot (n-\\alpha)!_{(\\alpha)}" }, { "math_id": 30, "text": "\\alpha \\in \\mathbb{Z}^{+}" }, { "math_id": 31, "text": "0 \\leq d < \\alpha" }, { "math_id": 32, "text": "(\\alpha n-d)!_{(\\alpha)} = (\\alpha-d)_{n,\\alpha} = (\\alpha n-d)_{n,-\\alpha}" }, { "math_id": 33, "text": "n!_{(\\alpha)} = (n)_{\\lfloor (n+\\alpha-1) / \\alpha \\rfloor,-\\alpha}" }, { "math_id": 34, "text": "x^k" }, { "math_id": 35, "text": "1 \\leq k \\leq n" } ]
https://en.wikipedia.org/wiki?curid=12104271
12106314
Jenkins–Traub algorithm
The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. They gave two variants, one for general polynomials with complex coefficients, commonly known as the "CPOLY" algorithm, and a more complicated variant for the special case of polynomials with real coefficients, commonly known as the "RPOLY" algorithm. The latter is "practically a standard in black-box polynomial root-finders". This article describes the complex variant. Given a polynomial "P", formula_0 with complex coefficients it computes approximations to the "n" zeros formula_1 of "P"("z"), one at a time in roughly increasing order of magnitude. After each root is computed, its linear factor is removed from the polynomial. Using this "deflation" guarantees that each root is computed only once and that all roots are found. The real variant follows the same pattern, but computes two roots at a time, either two real roots or a pair of conjugate complex roots. By avoiding complex arithmetic, the real variant can be faster (by a factor of 4) than the complex variant. The Jenkins–Traub algorithm has stimulated considerable research on theory and software for methods of this type. Overview. The Jenkins–Traub algorithm calculates all of the roots of a polynomial with complex coefficients. The algorithm starts by checking the polynomial for the occurrence of very large or very small roots. If necessary, the coefficients are rescaled by a rescaling of the variable. In the algorithm, proper roots are found one by one and generally in increasing size. After each root is found, the polynomial is deflated by dividing off the corresponding linear factor. Indeed, the factorization of the polynomial into the linear factor and the remaining deflated polynomial is already a result of the root-finding procedure. The root-finding procedure has three stages that correspond to different variants of the inverse power iteration. See Jenkins and Traub. A description can also be found in Ralston and Rabinowitz p. 383. The algorithm is similar in spirit to the two-stage algorithm studied by Traub. Root-finding procedure. Starting with the current polynomial "P"("X") of degree "n", the aim is to compute the smallest root formula_2 of "P(x)". The polynomial can then be split into a linear factor and the remaining polynomial factor formula_3 Other root-finding methods strive primarily to improve the root and thus the first factor. The main idea of the Jenkins-Traub method is to incrementally improve the second factor. To that end, a sequence of so-called "H" polynomials is constructed. These polynomials are all of degree "n" − 1 and are supposed to converge to the factor formula_4 of "P"("X") containing (the linear factors of) all the remaining roots. The sequence of "H" polynomials occurs in two variants, an unnormalized variant that allows easy theoretical insights and a normalized variant of formula_5 polynomials that keeps the coefficients in a numerically sensible range. The construction of the "H" polynomials formula_6 is guided by a sequence of complex numbers formula_7 called shifts. These shifts themselves depend, at least in the third stage, on the previous "H" polynomials. The "H" polynomials are defined as the solution to the implicit recursion formula_8 and formula_9 A direct solution to this implicit equation is formula_10 where the polynomial division is exact. 
Algorithmically, one would use long division by the linear factor as in the Horner scheme or Ruffini rule to evaluate the polynomials at formula_11 and obtain the quotients at the same time. With the resulting quotients "p"("X") and "h"("X") as intermediate results the next "H" polynomial is obtained as formula_12 Since the highest degree coefficient is obtained from "P(X)", the leading coefficient of formula_13 is formula_14. If this is divided out the normalized "H" polynomial is formula_15 Stage one: no-shift process. For formula_16 set formula_17. Usually "M=5" is chosen for polynomials of moderate degrees up to "n" = 50. This stage is not necessary from theoretical considerations alone, but is useful in practice. It emphasizes in the "H" polynomials the cofactor(s) (of the linear factor) of the smallest root(s). Stage two: fixed-shift process. The shift for this stage is determined as some point close to the smallest root of the polynomial. It is quasi-randomly located on the circle with the inner root radius, which in turn is estimated as the positive solution of the equation formula_18 Since the left side is a convex function and increases monotonically from zero to infinity, this equation is easy to solve, for instance by Newton's method. Now choose formula_19 on the circle of this radius. The sequence of polynomials formula_20, formula_21, is generated with the fixed shift value formula_22. This creates an asymmetry relative to the previous stage which increases the chance that the "H" polynomial moves towards the cofactor of a single root. During this iteration, the current approximation for the root formula_23 is traced. The second stage is terminated as successful if the conditions formula_24 and formula_25 are simultaneously met. This limits the relative step size of the iteration, ensuring that the approximation sequence stays in the range of the smaller roots. If there was no success after some number of iterations, a different random point on the circle is tried. Typically one uses a number of 9 iterations for polynomials of moderate degree, with a doubling strategy for the case of multiple failures. Stage three: variable-shift process. The formula_13 polynomials are now generated using the variable shifts formula_26 which are generated by formula_27 being the last root estimate of the second stage and formula_28 where formula_29 is the normalized "H" polynomial, that is formula_30 divided by its leading coefficient. If the step size in stage three does not fall fast enough to zero, then stage two is restarted using a different random point. If this does not succeed after a small number of restarts, the number of steps in stage two is doubled. Convergence. It can be shown that, provided "L" is chosen sufficiently large, "s"λ always converges to a root of "P". The algorithm converges for any distribution of roots, but may fail to find all roots of the polynomial. Furthermore, the convergence is slightly faster than the quadratic convergence of the Newton–Raphson method, however, it uses one-and-half as many operations per step, two polynomial evaluations for Newton vs. three polynomial evaluations in the third stage. What gives the algorithm its power? Compare with the Newton–Raphson iteration formula_31 The iteration uses the given "P" and formula_32. In contrast the third-stage of Jenkins–Traub formula_33 is precisely a Newton–Raphson iteration performed on certain rational functions. 
More precisely, Newton–Raphson is being performed on a sequence of rational functions formula_34 For formula_35 sufficiently large, formula_36 is as close as desired to a first degree polynomial formula_37 where formula_38 is one of the zeros of formula_39. Even though Stage 3 is precisely a Newton–Raphson iteration, differentiation is not performed. Analysis of the "H" polynomials. Let formula_40 be the roots of "P"("X"). The so-called Lagrange factors of "P(X)" are the cofactors of these roots, formula_41 If all roots are different, then the Lagrange factors form a basis of the space of polynomials of degree at most "n" − 1. By analysis of the recursion procedure one finds that the "H" polynomials have the coordinate representation formula_42 Each Lagrange factor has leading coefficient 1, so that the leading coefficient of the H polynomials is the sum of the coefficients. The normalized H polynomials are thus formula_43 Convergence orders. If the condition formula_44 holds for almost all iterates, the normalized H polynomials will converge at least geometrically towards formula_45. Under the condition that formula_46 one gets the asymptotic estimates for Interpretation as inverse power iteration. All stages of the Jenkins–Traub complex algorithm may be represented as the linear algebra problem of determining the eigenvalues of a special matrix. This matrix is the coordinate representation of a linear map in the "n"-dimensional space of polynomials of degree "n" − 1 or less. The principal idea of this map is to interpret the factorization formula_54 with a root formula_55 and formula_56 the remaining factor of degree "n" − 1 as the eigenvector equation for the multiplication with the variable "X", followed by remainder computation with divisor "P"("X"), formula_57 This maps polynomials of degree at most "n" − 1 to polynomials of degree at most "n" − 1. The eigenvalues of this map are the roots of "P"("X"), since the eigenvector equation reads formula_58 which implies that formula_59, that is, formula_60 is a linear factor of "P"("X"). In the monomial basis the linear map formula_61 is represented by a companion matrix of the polynomial "P", as formula_62 the resulting transformation matrix is formula_63 To this matrix the inverse power iteration is applied in the three variants of no shift, constant shift and generalized Rayleigh shift in the three stages of the algorithm. It is more efficient to perform the linear algebra operations in polynomial arithmetic and not by matrix operations, however, the properties of the inverse power iteration remain the same. Real coefficients. The Jenkins–Traub algorithm described earlier works for polynomials with complex coefficients. The same authors also created a three-stage algorithm for polynomials with real coefficients. See Jenkins and Traub A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration. The algorithm finds either a linear or quadratic factor working completely in real arithmetic. If the complex and real algorithms are applied to the same real polynomial, the real algorithm is about four times as fast. The real algorithm always converges and the rate of convergence is greater than second order. A connection with the shifted QR algorithm. There is a surprising connection with the shifted QR algorithm for computing matrix eigenvalues. See Dekker and Traub The shifted QR algorithm for Hermitian matrices. 
Again the shifts may be viewed as Newton-Raphson iteration on a sequence of rational functions converging to a first degree polynomial. Software and testing. The software for the Jenkins–Traub algorithm was published as Jenkins and Traub Algorithm 419: Zeros of a Complex Polynomial. The software for the real algorithm was published as Jenkins Algorithm 493: Zeros of a Real Polynomial. The methods have been extensively tested by many people. As predicted they enjoy faster than quadratic convergence for all distributions of zeros. However, there are polynomials which can cause loss of precision as illustrated by the following example. The polynomial has all its zeros lying on two half-circles of different radii. Wilkinson recommends that it is desirable for stable deflation that smaller zeros be computed first. The second-stage shifts are chosen so that the zeros on the smaller half circle are found first. After deflation the polynomial with the zeros on the half circle is known to be ill-conditioned if the degree is large; see Wilkinson, p. 64. The original polynomial was of degree 60 and suffered severe deflation instability. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
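To make the three-stage procedure described above concrete, here is a deliberately simplified Python/NumPy sketch that locates a single (typically smallest) root of a monic polynomial via the H-polynomial recursion: a no-shift stage, a fixed-shift stage at a quasi-random point on a Cauchy-type lower-bound circle, and the variable-shift Newton-like stage. It omits the restart and failure-handling logic, coefficient rescaling, deflation, and the real-coefficient RPOLY machinery of the published algorithm, and the stage lengths M and L are simply the typical values mentioned in the text.

import numpy as np

def jt_smallest_root(P, M=5, L=9, tol=1e-12, max_iter=200, seed=0):
    # Simplified sketch: one root of the monic polynomial P (coefficients highest degree first).
    P = np.asarray(P, dtype=complex)
    rng = np.random.default_rng(seed)

    def advance(H, s):
        # H_next(z) = [ H(z) - (H(s)/P(s)) P(z) ] / (z - s); the division is exact.
        t = np.polyval(H, s) / np.polyval(P, s)
        quotient, _ = np.polydiv(np.polysub(H, t * P), np.array([1.0, -s]))
        return quotient

    H = np.polyder(P)                                   # H^(0) = P'
    for _ in range(M):                                  # stage one: no-shift iterations (s = 0)
        H = advance(H, 0.0)

    # Rough inner-root-radius estimate: positive root of a Cauchy-type bound polynomial
    # built from the coefficient moduli with a negated constant term (this sketch's choice).
    bound = np.concatenate(([1.0], np.abs(P[1:-1]), [-np.abs(P[-1])]))
    R = min(r.real for r in np.roots(bound) if abs(r.imag) < 1e-9 and r.real > 0)
    s = R * np.exp(2j * np.pi * rng.random())           # quasi-random point on that circle

    for _ in range(L):                                  # stage two: fixed-shift iterations
        H = advance(H, s)

    for _ in range(max_iter):                           # stage three: variable-shift iterations
        if abs(np.polyval(P, s)) < tol:
            return s
        H = advance(H, s)                               # assumes H keeps a nonzero leading coefficient
        s_next = s - np.polyval(P, s) / np.polyval(H / H[0], s)
        if abs(s_next - s) <= tol * max(1.0, abs(s_next)):
            return s_next
        s = s_next
    return s

# Example: P(z) = z^3 - 6 z^2 + 11 z - 6 = (z - 1)(z - 2)(z - 3); the sketch returns the root near 1,
# and np.polydiv(P, [1, -root]) would then give the deflated quadratic for the remaining roots.
print(jt_smallest_root([1.0, -6.0, 11.0, -6.0]))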
[ { "math_id": 0, "text": "P(z) = \\sum_{i=0}^na_iz^{n-i}, \\quad a_0=1,\\quad a_n\\ne 0" }, { "math_id": 1, "text": "\\alpha_1,\\alpha_2,\\dots,\\alpha_n" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "P(X)=(X-\\alpha)\\bar H(X)" }, { "math_id": 4, "text": "\\bar H(X)" }, { "math_id": 5, "text": "\\bar H" }, { "math_id": 6, "text": "\\left(H^{(\\lambda)}(z)\\right)_{\\lambda=0,1,2,\\dots}" }, { "math_id": 7, "text": "(s_\\lambda)_{\\lambda=0,1,2,\\dots}" }, { "math_id": 8, "text": "\n H^{(0)}(z)=P^\\prime(z)\n" }, { "math_id": 9, "text": "\n (X-s_\\lambda)\\cdot H^{(\\lambda+1)}(X)\\equiv H^{(\\lambda)}(X)\\pmod{P(X)}\\ .\n" }, { "math_id": 10, "text": "\n H^{(\\lambda+1)}(X)\n =\\frac1{X-s_\\lambda}\\cdot\n \\left(\n H^{(\\lambda)}(X)-\\frac{H^{(\\lambda)}(s_\\lambda)}{P(s_\\lambda)}P(X) \n \\right)\\,,\n" }, { "math_id": 11, "text": "s_\\lambda" }, { "math_id": 12, "text": "\n\\left.\\begin{align}\nP(X)&=p(X)\\cdot(X-s_\\lambda)+P(s_\\lambda)\\\\\nH^{(\\lambda)}(X)&=h(X)\\cdot(X-s_\\lambda)+H^{(\\lambda)}(s_\\lambda)\\\\\n\\end{align}\\right\\}\n\\implies H^{(\\lambda+1)}(z)=h(z)-\\frac{H^{(\\lambda)}(s_\\lambda)}{P(s_\\lambda)}p(z). \n" }, { "math_id": 13, "text": "H^{(\\lambda+1)}(X)" }, { "math_id": 14, "text": "-\\tfrac{H^{(\\lambda)}(s_\\lambda)}{P(s_\\lambda)}" }, { "math_id": 15, "text": "\\begin{align}\n \\bar H^{(\\lambda+1)}(X)\n &=\\frac1{X-s_\\lambda}\\cdot\n \\left(\n P(X)-\\frac{P(s_\\lambda)}{H^{(\\lambda)}(s_\\lambda)}H^{(\\lambda)}(X) \n \\right)\\\\[1em]\n &=\\frac1{X-s_\\lambda}\\cdot\n \\left(\n P(X)-\\frac{P(s_\\lambda)}{\\bar H^{(\\lambda)}(s_\\lambda)}\\bar H^{(\\lambda)}(X) \n \\right)\\,.\\end{align}\n" }, { "math_id": 16, "text": "\\lambda = 0,1,\\dots, M-1" }, { "math_id": 17, "text": "s_\\lambda=0" }, { "math_id": 18, "text": "\nR^n+|a_{n-1}|\\,R^{n-1}+\\dots+|a_{1}|\\,R=|a_0|\\,.\n" }, { "math_id": 19, "text": "s=R\\cdot \\exp(i\\,\\phi_\\text{random})" }, { "math_id": 20, "text": "H^{(\\lambda+1)}(z)" }, { "math_id": 21, "text": "\\lambda=M,M+1,\\dots,L-1" }, { "math_id": 22, "text": "s_\\lambda = s" }, { "math_id": 23, "text": "t_\\lambda=s-\\frac{P(s)}{\\bar H^{(\\lambda)}(s)}" }, { "math_id": 24, "text": "\n |t_{\\lambda+1}-t_\\lambda|<\\tfrac12\\,|t_\\lambda|\n" }, { "math_id": 25, "text": "\n |t_\\lambda-t_{\\lambda-1}|<\\tfrac12\\,|t_{\\lambda-1}|\n" }, { "math_id": 26, "text": "s_{\\lambda},\\quad\\lambda=L,L+1,\\dots" }, { "math_id": 27, "text": "s_L = t_L = s- \\frac{P(s)}{\\bar H^{(L)}(s)}" }, { "math_id": 28, "text": "s_{\\lambda+1}=s_\\lambda- \\frac{P(s_\\lambda)}{\\bar H^{(\\lambda+1)}(s_\\lambda)}, \\quad \\lambda=L,L+1,\\dots," }, { "math_id": 29, "text": "\\bar H^{(\\lambda+1)}(z)" }, { "math_id": 30, "text": "H^{(\\lambda)}(z)" }, { "math_id": 31, "text": "z_{i+1}=z_i - \\frac{P(z_i)}{P^{\\prime}(z_i)}." }, { "math_id": 32, "text": "\\scriptstyle P^{\\prime}" }, { "math_id": 33, "text": "\ns_{\\lambda+1}\n =s_\\lambda- \\frac{P(s_\\lambda)}{\\bar H^{\\lambda+1}(s_\\lambda)}\n =s_\\lambda-\\frac{W^\\lambda(s_\\lambda)}{(W^\\lambda)'(s_\\lambda)}\n" }, { "math_id": 34, "text": "W^\\lambda(z)=\\frac{P(z)}{H^\\lambda(z)}." }, { "math_id": 35, "text": "\\lambda" }, { "math_id": 36, "text": "\\frac{P(z)}{\\bar H^{\\lambda}(z)}=W^\\lambda(z)\\,LC(H^{\\lambda})" }, { "math_id": 37, "text": "z-\\alpha_1, \\," }, { "math_id": 38, "text": "\\alpha_1" }, { "math_id": 39, "text": "P" }, { "math_id": 40, "text": "\\alpha_1,\\dots,\\alpha_n" }, { "math_id": 41, "text": "P_m(X)=\\frac{P(X)-P(\\alpha_m)}{X-\\alpha_m}." 
}, { "math_id": 42, "text": "\nH^{(\\lambda)}(X)\n =\\sum_{m=1}^n\n \\left[\n \\prod_{\\kappa=0}^{\\lambda-1}(\\alpha_m-s_\\kappa)\n \\right]^{-1}\\,P_m(X)\\ .\n" }, { "math_id": 43, "text": "\n\\bar H^{(\\lambda)}(X)\n =\\frac{\\sum_{m=1}^n\n \\left[\n \\prod_{\\kappa=0}^{\\lambda-1}(\\alpha_m-s_\\kappa)\n \\right]^{-1}\\,P_m(X)\n }{\n \\sum_{m=1}^n\n \\left[\n \\prod_{\\kappa=0}^{\\lambda-1}(\\alpha_m-s_\\kappa)\n \\right]^{-1}\n }\n= \\frac{P_1(X)+\\sum_{m=2}^n\n \\left[\n \\prod_{\\kappa=0}^{\\lambda-1}\\frac{\\alpha_1-s_\\kappa}{\\alpha_m-s_\\kappa}\n \\right]\\,P_m(X)\n }{\n 1+\\sum_{m=1}^n\n \\left[\n \\prod_{\\kappa=0}^{\\lambda-1}\\frac{\\alpha_1-s_\\kappa}{\\alpha_m-s_\\kappa}\n \\right]\n }\\ .\n" }, { "math_id": 44, "text": "|\\alpha_1-s_\\kappa|<\\min{}_{m=2,3,\\dots,n}|\\alpha_m-s_\\kappa|" }, { "math_id": 45, "text": "P_1(X)" }, { "math_id": 46, "text": "|\\alpha_1|<|\\alpha_2|=\\min{}_{m=2,3,\\dots,n}|\\alpha_m|" }, { "math_id": 47, "text": "\n H^{(\\lambda)}(X)\n =P_1(X)+O\\left(\\left|\\frac{\\alpha_1}{\\alpha_2}\\right|^\\lambda\\right).\n" }, { "math_id": 48, "text": "\n H^{(\\lambda)}(X)\n = P_1(X)\n +O\\left(\n \\left|\\frac{\\alpha_1}{\\alpha_2}\\right|^M\n \\cdot\n \\left|\\frac{\\alpha_1-s}{\\alpha_2-s}\\right|^{\\lambda-M}\\right)\n" }, { "math_id": 49, "text": "\n s-\\frac{P(s)}{\\bar H^{(\\lambda)}(s)}\n = \\alpha_1+O\\left(\\ldots\\cdot|\\alpha_1-s|\\right)." }, { "math_id": 50, "text": "\n H^{(\\lambda)}(X)\n =P_1(X)\n +O\\left(\\prod_{\\kappa=0}^{\\lambda-1}\n \\left|\\frac{\\alpha_1-s_\\kappa}{\\alpha_2-s_\\kappa}\\right|\n \\right)\n" }, { "math_id": 51, "text": "\n s_{\\lambda+1}=\n s_\\lambda-\\frac{P(s)}{\\bar H^{(\\lambda+1)}(s_\\lambda)}\n =\\alpha_1+O\\left(\\prod_{\\kappa=0}^{\\lambda-1}\n \\left|\\frac{\\alpha_1-s_\\kappa}{\\alpha_2-s_\\kappa}\\right|\n \\cdot\n \\frac{|\\alpha_1-s_\\lambda|^2}{|\\alpha_2-s_\\lambda|}\n \\right)\n" }, { "math_id": 52, "text": "\\phi^2=1+\\phi\\approx 2.61" }, { "math_id": 53, "text": "\\phi=\\tfrac12(1+\\sqrt5)" }, { "math_id": 54, "text": "P(X)=(X-\\alpha_1)\\cdot P_1(X)" }, { "math_id": 55, "text": "\\alpha_1\\in\\C" }, { "math_id": 56, "text": "P_1(X) = P(X) / (X-\\alpha_1)" }, { "math_id": 57, "text": "M_X(H) = (X\\cdot H(X)) \\bmod P(X)\\,." }, { "math_id": 58, "text": "0 = (M_X-\\alpha\\cdot id)(H)=((X-\\alpha)\\cdot H) \\bmod P\\,," }, { "math_id": 59, "text": "(X-\\alpha)\\cdot H)=C\\cdot P(X)" }, { "math_id": 60, "text": "(X-\\alpha)" }, { "math_id": 61, "text": "M_X" }, { "math_id": 62, "text": " M_X(H) = \\sum_{m=0}^{n-1}H_mX^{m+1}-H_{n-1}\\left(X^n+\\sum_{m=0}^{n-1}a_mX^m\\right) = \\sum_{m=1}^{n-1}(H_{m-1}-a_{m}H_{n-1})X^m-a_0H_{n-1}\\,," }, { "math_id": 63, "text": "A=\\begin{pmatrix}\n0 & 0 & \\dots & 0 & -a_0 \\\\\n1 & 0 & \\dots & 0 & -a_1 \\\\\n0 & 1 & \\dots & 0 & -a_2 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n0 & 0 & \\dots & 1 & -a_{n-1}\n\\end{pmatrix}\\,." } ]
https://en.wikipedia.org/wiki?curid=12106314
12106740
Random geometric graph
In graph theory, the mathematically simplest spatial network In graph theory, a random geometric graph (RGG) is the mathematically simplest spatial network, namely an undirected graph constructed by randomly placing "N" nodes in some metric space (according to a specified probability distribution) and connecting two nodes by a link if and only if their distance is in a given range, e.g. smaller than a certain neighborhood radius, "r". Random geometric graphs resemble real human social networks in a number of ways. For instance, they spontaneously demonstrate community structure - clusters of nodes with high modularity. Other random graph generation algorithms, such as those generated using the Erdős–Rényi model or Barabási–Albert (BA) model do not create this type of structure. Additionally, random geometric graphs display degree assortativity according to their spatial dimension: "popular" nodes (those with many links) are particularly likely to be linked to other popular nodes. A real-world application of RGGs is the modeling of ad hoc networks. Furthermore they are used to perform benchmarks for graph algorithms. Definition. In the following, let  "G" = ("V", "E") denote an undirected Graph with a set of vertices V and a set of edges E ⊆ V × V. The set sizes are denoted by |"V"| = n and |"E"| = m. Additionally, if not noted otherwise, the metric space [0,1)d with the euclidean distance is considered, i.e. for any points formula_0 the euclidean distance of x and y is defined as formula_1. A random geometric graph (RGG) is an undirected geometric graph with nodes randomly sampled from the uniform distribution of the underlying space [0,1)d. Two vertices p, q ∈ V are connected if, and only if, their distance is less than a previously specified parameter r ∈ (0,1), excluding any loops. Thus, the parameters r and n fully characterize a RGG. Algorithms. Naive algorithm. The naive approach is to calculate the distance of every vertex to every other vertex. As there are formula_2possible connections that are checked, the time complexity of the naive algorithm is formula_3. The samples are generated by using a random number generator (RNG) on formula_4. Practically, one can implement this using d random number generators on formula_5, one RNG for every dimension. Pseudocode. "V" := generateSamples("n") "// Generates n samples in the unit cube." for each "p" ∈ "V" do for each "q" ∈ "V"\{p} do if distance("p", "q") ≤ "r" then addConnection("p", "q") "// Add the edge (p, q) to the edge data structure." end if end for end for As this algorithm is not scalable (every vertex needs information of every other vertex), Holtgrewe et al. and Funke et al. have introduced new algorithms for this problem. Distributed algorithms. Holtgrewe et al.. This algorithm, which was proposed by Holtgrewe et al., was the first distributed RGG generator algorithm for dimension 2. It partitions the unit square into equal sized cells with side length of at least formula_6. For a given number formula_7of processors, each processor is assigned formula_8cells, where formula_9For simplicity, formula_10 is assumed to be a square number, but this can be generalized to any number of processors. Each processor then generates formula_11vertices, which are then distributed to their respective owners. Then the vertices are sorted by the cell number they fall into, for example with Quicksort. 
Next, each processor then sends their adjacent processors the information about the vertices in the border cells, such that each processing unit can calculate the edges in their partition independent of the other units. The expected running time is formula_12. An upper bound for the communication cost of this algorithm is given by formula_13, where formula_14denotes the time for an all-to-all communication with messages of length l bits to c communication partners. formula_15is the time taken for a point-to-point communication for a message of length l bits. Since this algorithm is not communication free, Funke et al. proposed a scalable distributed RGG generator for higher dimensions, which works without any communication between the processing units. Funke et al.. The approach used in this algorithm is similar to the approach in Holtgrewe: Partition the unit cube into equal sized chunks with side length of at least r. So in d = 2 this will be squares, in d = 3 this will be cubes. As there can only fit at most formula_16 chunks per dimension, the number of chunks is capped at formula_17. As before, each processor is assigned formula_18chunks, for which it generates the vertices. To achieve a communication free process, each processor then generates the same vertices in the adjacent chunks by exploiting pseudorandomization of seeded hash functions. This way, each processor calculates the same vertices and there is no need for exchanging vertex information. For dimension 3, Funke et al. showed that the expected running time is formula_19, without any cost for communication between processing units. Properties. Isolated vertices and connectivity. The probability that a single vertex is isolated in a RGG is formula_20. Let formula_21 be the random variable counting how many vertices are isolated. Then the expected value of formula_21 is formula_22. The term formula_23provides information about the connectivity of the RGG. For formula_24, the RGG is asymptotically almost surely connected. For formula_25, the RGG is asymptotically almost surely disconnected. And for formula_26, the RGG has a giant component that covers more than formula_27vertices and formula_28 is Poisson distributed with parameter formula_29. It follows that if formula_26, the probability that the RGG is connected is formula_30and the probability that the RGG is not connected is formula_31. For any formula_32-Norm ( formula_33) and for any number of dimensions formula_34, a RGG possesses a sharp threshold of connectivity at formula_35with constant formula_36. In the special case of a two-dimensional space and the euclidean norm (formula_37 and formula_38) this yields formula_39. Hamiltonicity. It has been shown, that in the two-dimensional case, the threshold formula_39also provides information about the existence of a Hamiltonian cycle (Hamiltonian Path). For any formula_40, if formula_41, then the RGG has asymptotically almost surely no Hamiltonian cycle and if formula_42for any formula_43, then the RGG has asymptotically almost surely a Hamiltonian cycle. Clustering coefficient. The clustering coefficient of RGGs only depends on the dimension d of the underlying space [0,1)d. The clustering coefficient is formula_44for even formula_45 and formula_46for odd formula_45 whereformula_47For large formula_45, this simplifies to formula_48. Generalized random geometric graphs. In 1988 Waxman generalised the standard RGG by introducing a probabilistic connection function as opposed to the deterministic one suggested by Gilbert. 
The example introduced by Waxman was a stretched exponential where two nodes formula_49 and formula_50 connect with probability given by formula_51 where formula_52 is the euclidean separation and formula_53, formula_54 are parameters determined by the system. This type of RGG with probabilistic connection function is often referred to as a soft random geometric graph, which now has two sources of randomness: the location of nodes (vertices) and the formation of links (edges). This connection function has been generalized further in the literature formula_55 which is often used to study wireless networks without interference. The parameter formula_56 represents how the signal decays with distance: formula_57 corresponds to free space, formula_58 models a more cluttered environment like a town (η = 6 models cities like New York), whilst formula_59 models highly reflective environments. We notice that formula_60 gives the Waxman model, whilst as formula_61 and formula_62 we recover the standard RGG. Intuitively, these types of connection function model how the probability of a link being made decays with distance. Overview of some results for Soft RGG. In the high density limit for a network with exponential connection function, the number of isolated nodes is Poisson distributed, and the resulting network contains a unique giant component and isolated nodes only. Therefore, by ensuring there are no isolated nodes, in the dense regime the network is asymptotically almost surely fully connected, similar to the results shown for the disk model. Often the properties of these networks, such as betweenness centrality and connectivity, are studied in the limit as the density formula_63, which often means border effects become negligible. However, in real life, where networks are finite (although they can still be extremely dense), border effects will impact on full connectivity; indeed it has been shown that full connectivity, with an exponential connection function, is greatly impacted by boundary effects, as nodes near the corner/face of a domain are less likely to connect compared with those in the bulk. As a result, full connectivity can be expressed as a sum of the contributions from the bulk and the geometry's boundaries. A more general analysis of the connection functions in wireless networks has shown that the probability of full connectivity can be well approximated by an expression involving a few moments of the connection function and the region's geometry.
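The naive construction described in the Algorithms section, together with the soft connection functions above, can be illustrated with a short script. The following is a minimal sketch, not the distributed generators of Holtgrewe et al. or Funke et al.; the function name, the choice of Python and the parameter values are illustrative only. Passing beta and r0 switches from the hard disk rule to the probabilistic connection function, with eta = 1 giving the Waxman model.

import itertools
import math
import random

def random_geometric_graph(n, r, d=2, beta=None, r0=None, eta=1.0, seed=None):
    # Naive O(n^2) construction of a hard or soft random geometric graph.
    # Hard RGG: nodes i, j are linked iff their euclidean distance is below r.
    # Soft RGG: nodes are linked with probability beta * exp(-(dist / r0) ** eta).
    rng = random.Random(seed)
    # n samples drawn uniformly from the unit cube [0, 1)^d.
    points = [tuple(rng.random() for _ in range(d)) for _ in range(n)]
    edges = []
    for i, j in itertools.combinations(range(n), 2):
        dist = math.dist(points[i], points[j])
        if beta is None:                      # hard (disk) model
            connect = dist < r
        else:                                 # soft / probabilistic model
            connect = rng.random() < beta * math.exp(-(dist / r0) ** eta)
        if connect:
            edges.append((i, j))
    return points, edges

# Example: 500 nodes with r near the 2-D connectivity threshold sqrt(ln(n) / (pi * n)).
n = 500
r = math.sqrt(math.log(n) / (math.pi * n))
points, edges = random_geometric_graph(n, r, seed=1)
print(len(edges), "edges")

For large n the quadratic pair loop is the bottleneck, which is exactly what the cell-based distributed generators described above are designed to avoid.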
[ { "math_id": 0, "text": "x, y \\in [0, 1)^d" }, { "math_id": 1, "text": "d(x, y) = ||x - y||_2 = \\sqrt{\\sum_{i=1}^d (x_{i} - y_{i})^2}" }, { "math_id": 2, "text": "\\frac{n(n-1)}{2}" }, { "math_id": 3, "text": "\\Theta(n^2)" }, { "math_id": 4, "text": "[0, 1)^d" }, { "math_id": 5, "text": "[0, 1)" }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "P = p^2" }, { "math_id": 8, "text": "{k \\over p} \\times {k \\over p}" }, { "math_id": 9, "text": "k = \\left \\lfloor {1/r} \\right \\rfloor." }, { "math_id": 10, "text": "P" }, { "math_id": 11, "text": "\\frac{n}{P}" }, { "math_id": 12, "text": "O(\\frac{n}{P}\\log{\\frac{n}{P}})" }, { "math_id": 13, "text": "T_{all-to-all}(n/P, P) + T_{all-to-all}(1, P) + T_{point-to-point}(n/(k\\cdot{P}) + 2)" }, { "math_id": 14, "text": "T_{all-to-all}(l, c)" }, { "math_id": 15, "text": "T_{point-to-point}(l)" }, { "math_id": 16, "text": "{\\left \\lfloor {1/r} \\right \\rfloor}" }, { "math_id": 17, "text": "{\\left \\lfloor {1/r} \\right \\rfloor}^d" }, { "math_id": 18, "text": "{\\left \\lfloor {1/r} \\right \\rfloor}^d \\over P" }, { "math_id": 19, "text": "O(\\frac{m+n}{P} + \\log{P})" }, { "math_id": 20, "text": "(1- \\pi r^2)^{n-1}" }, { "math_id": 21, "text": "X" }, { "math_id": 22, "text": "E(X) = n(1- \\pi r^2)^{n-1} = ne^{-\\pi r^2 n}-O(r^4n)" }, { "math_id": 23, "text": "\\mu = ne^{-\\pi r^2 n}" }, { "math_id": 24, "text": "\\mu \\longrightarrow 0" }, { "math_id": 25, "text": "\\mu \\longrightarrow \\infin" }, { "math_id": 26, "text": "\\mu = \\Theta(1)" }, { "math_id": 27, "text": "\\frac{n} {2}" }, { "math_id": 28, "text": "X" }, { "math_id": 29, "text": "\\mu" }, { "math_id": 30, "text": "P[X=0] \\sim e^{-\\mu}" }, { "math_id": 31, "text": "P[X>0] \\sim 1-e^{-\\mu}" }, { "math_id": 32, "text": "l_p" }, { "math_id": 33, "text": "1 \\leq p \\leq \\infin" }, { "math_id": 34, "text": "d>2" }, { "math_id": 35, "text": "r \\sim\\left({\\ln (n) \\over \\alpha_{p,d}n}\\right)^{1 \\over d}" }, { "math_id": 36, "text": "\\alpha_{p,d}" }, { "math_id": 37, "text": "d=2" }, { "math_id": 38, "text": "p=2" }, { "math_id": 39, "text": "r \\sim \\sqrt{{\\ln (n) \\over \\pi n}}" }, { "math_id": 40, "text": "\\epsilon>0" }, { "math_id": 41, "text": "r \\sim \\sqrt{{\\ln (n) \\over (\\pi + \\epsilon) n}}" }, { "math_id": 42, "text": "r \\sim \\sqrt{{\\ln (n) \\over (\\pi - \\epsilon) n}}" }, { "math_id": 43, "text": "\\epsilon>0" }, { "math_id": 44, "text": "C_d = 1-H_d(1)" }, { "math_id": 45, "text": "d" }, { "math_id": 46, "text": "C_d = {3 \\over 2} -H_d({1 \\over 2})" }, { "math_id": 47, "text": "H_d(x) = {1 \\over \\sqrt{\\pi}} \\sum_{i=x}^{d \\over 2}\n{\\Gamma(i) \\over \\Gamma(i + {1 \\over 2})} \\left({3 \\over 4}\\right)^{i+{1 \\over 2}}" }, { "math_id": 48, "text": "C_d \\sim 3 \\sqrt{2 \\over \\pi d} \\left({3 \\over 4}\\right)^{d+1 \\over 2}" }, { "math_id": 49, "text": "i" }, { "math_id": 50, "text": "j" }, { "math_id": 51, "text": "H_{ij}=\\beta e^{-{r_{ij} \\over r_0}}" }, { "math_id": 52, "text": "r_{ij}" }, { "math_id": 53, "text": "\\beta" }, { "math_id": 54, "text": "r_0" }, { "math_id": 55, "text": "H_{ij}=\\beta e^{-\\left({r_{ij} \\over r_0}\\right)^\\eta}" }, { "math_id": 56, "text": "\\eta" }, { "math_id": 57, "text": "\\eta =2" }, { "math_id": 58, "text": "\\eta >2" }, { "math_id": 59, "text": "\\eta <2" }, { "math_id": 60, "text": "\\eta =1" }, { "math_id": 61, "text": "\\eta \\to \\infin" }, { "math_id": 62, "text": "\\beta =1" }, { "math_id": 63, "text": "\\to \\infin" } ]
https://en.wikipedia.org/wiki?curid=12106740
12106854
Statistical benchmarking
Method of using auxiliary information for better results In statistics, benchmarking is a method of using auxiliary information to adjust the sampling weights used in an estimation process, in order to yield more accurate estimates of totals. Suppose we have a population where each unit formula_0 has a "value" formula_1 associated with it. For example, formula_1 could be a wage of an employee formula_0, or the cost of an item formula_0. Suppose we want to estimate the sum formula_2 of all the formula_1. So we take a sample of the formula_0, get a sampling weight "W"("k") for all sampled formula_0, and then sum up formula_3 for all sampled formula_0. One property usually common to the weights formula_4 described here is that if we sum them over all sampled formula_0, then this sum is an estimate of the total number of units formula_0 in the population (for example, the total employment, or the total number of items). Because we have a sample, this estimate of the total number of units in the population will differ from the true population total. Similarly, the estimate of total formula_2 (where we sum formula_3 for all sampled formula_0) will also differ from true population total. We do not know what the true population total formula_2 value is (if we did, there would be no point in sampling!). Yet often we do know what the sum of the formula_4 are over all units in the population. For example, we may not know the total earnings of the population or the total cost of the population, but often we know the total employment or total volume of sales. And even if we don't know these exactly, there often are surveys done by other organizations or at earlier times, with very accurate estimates of these auxiliary quantities. One important function of a population census is to provide data that can be used for benchmarking smaller surveys. The benchmarking procedure begins by first breaking the population into benchmarking cells. Cells are formed by grouping units together that share common characteristics, for example, similar formula_1, yet anything can be used that enhances the accuracy of the final estimates. For each cell formula_5, we let formula_6 be the sum of all formula_4, where the sum is taken over all sampled formula_0 in the cell formula_5. For each cell formula_5, we let formula_7 be the auxiliary value for cell formula_5, which is commonly called the "benchmark target" for cell formula_5. Next, we compute a benchmark factor formula_8. Then, we adjust all weights formula_4 by multiplying it by its benchmark factor formula_9, for its cell formula_5. The net result is that the estimated formula_10 [formed by summing formula_11] will now equal the benchmark target total formula_12. But the more important benefit is that the estimate of the total of formula_2 [formed by summing formula_13] will tend to be more accurate. Relationship to stratified sampling. Benchmarking is sometimes referred to as 'post-stratification' because of its similarities to stratified sampling. The difference between the two is that in stratified sampling, we decide "in advance" how many units will be sampled from each stratum (equivalent to benchmarking cells); in benchmarking, we select units from the broader population, and the number chosen from each cell is a matter of chance. The advantage of stratified sampling is that the sample numbers in each stratum can be controlled for desired accuracy outcomes. 
Without this control, we may end up with too much sample in one stratum and not enough in another – indeed, it's possible that a sample will contain "no" members from a certain cell, in which case benchmarking fails because formula_14, leading to a divide-by-zero problem. In such cases, it is necessary to 'collapse' cells together so that each remaining cell has an adequate sample size. For this reason, benchmarking is generally used in situations where stratified sampling is impractical. For instance, when selecting people from a telephone directory, we can't tell what age they are so we can't easily stratify the sample by age. However, we can collect this information from the people sampled, allowing us to benchmark against demographic information.
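The adjustment itself is only a few lines of arithmetic. The following is a minimal sketch of the procedure described above, assuming the benchmarking cells have already been formed (and collapsed where necessary); the record layout, function name and numbers are purely illustrative.

from collections import defaultdict

def benchmark_weights(sample, targets):
    # sample  : list of (cell, W, Y) records -- cell label, sampling weight, value.
    # targets : benchmark target T(C) for each cell C.
    # Returns the adjusted weights F(C) * W(k) and the benchmarked estimate of Y.
    w_cell = defaultdict(float)
    for cell, w, _ in sample:
        w_cell[cell] += w                      # W(C): sum of sampling weights per cell
    # Benchmark factor F(C) = T(C) / W(C); this fails when a cell has no sample
    # (W(C) = 0), which is why empty cells must be collapsed first, as noted above.
    factor = {c: targets[c] / w_cell[c] for c in targets}
    adjusted = [(cell, factor[cell] * w, y) for cell, w, y in sample]
    y_estimate = sum(w * y for _, w, y in adjusted)
    return adjusted, y_estimate

# Two illustrative cells whose true unit counts (the benchmark targets) are known.
sample = [("under40", 10.0, 520.0), ("under40", 12.0, 480.0),
          ("over40", 8.0, 610.0), ("over40", 9.0, 640.0)]
targets = {"under40": 25.0, "over40": 20.0}
adjusted, y_hat = benchmark_weights(sample, targets)
print(round(y_hat, 1))

By construction the adjusted weights now sum to the benchmark target in every cell, while the weighted sum of the Y(k) gives the benchmarked estimate of the total.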
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "Y(k)" }, { "math_id": 2, "text": "Y" }, { "math_id": 3, "text": "W(k) \cdot Y(k)" }, { "math_id": 4, "text": "W(k)" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "W(C)" }, { "math_id": 7, "text": "T(C)" }, { "math_id": 8, "text": "F(C) = T(C) / W(C)" }, { "math_id": 9, "text": "F(C)" }, { "math_id": 10, "text": "W" }, { "math_id": 11, "text": "F(C) \cdot W(k)" }, { "math_id": 12, "text": "T" }, { "math_id": 13, "text": "F(C) \cdot W(k) \cdot Y(k)" }, { "math_id": 14, "text": "W(C)=0" } ]
https://en.wikipedia.org/wiki?curid=12106854
1211056
Abstract polytope
Poset representing certain properties of a polytope In mathematics, an abstract polytope is an algebraic partially ordered set which captures the dyadic property of a traditional polytope without specifying purely geometric properties such as points and lines. A geometric polytope is said to be a "realization" of an abstract polytope in some real N-dimensional space, typically Euclidean. This abstract definition allows more general combinatorial structures than traditional definitions of a polytope, thus allowing new objects that have no counterpart in traditional theory. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Introductory concepts. Traditional versus abstract polytopes. In Euclidean geometry, two shapes that are not similar can nonetheless share a common structure. For example, a square and a trapezoid both comprise an alternating chain of four vertices and four sides, which makes them quadrilaterals. They are said to be isomorphic or “structure preserving”. This common structure may be represented in an underlying abstract polytope, a purely algebraic partially ordered set which captures the pattern of connections (or "incidences)" between the various structural elements. The measurable properties of traditional polytopes such as angles, edge-lengths, skewness, straightness and convexity have no meaning for an abstract polytope. What is true for traditional polytopes (also called classical or geometric polytopes) may not be so for abstract ones, and vice versa. For example, a traditional polytope is regular if all its facets and vertex figures are regular, but this is not necessarily so for an abstract polytope. Realizations. A traditional polytope is said to be a "realization" of the associated abstract polytope. A realization is a mapping or injection of the abstract object into a real space, typically Euclidean, to construct a traditional polytope as a real geometric figure. The six quadrilaterals shown are all distinct realizations of the abstract quadrilateral, each with different geometric properties. Some of them do not conform to traditional definitions of a quadrilateral and are said to be "unfaithful" realizations. A conventional polytope is a faithful realization. Faces, ranks and ordering. In an abstract polytope, each structural element (vertex, edge, cell, etc.) is associated with a corresponding member of the set. The term "face" is used to refer to any such element e.g. a vertex (0-face), edge (1-face) or a general "k"-face, and not just a polygonal 2-face. The faces are "ranked" according to their associated real dimension: vertices have rank 0, edges rank 1 and so on. Incident faces of different ranks, for example, a vertex F of an edge G, are ordered by the relation F &lt; G. F is said to be a "subface" of G. F, G are said to be "incident" if either F = G or F &lt; G or G &lt; F. This usage of "incidence" also occurs in finite geometry, although it differs from traditional geometry and some other areas of mathematics. For example, in the square "ABCD", edges "AB" and "BC" are not abstractly incident (although they are both incident with vertex B). A polytope is then defined as a set of faces P with an order relation &lt;. Formally, P (with &lt;) will be a (strict) partially ordered set, or "poset". Least and greatest faces. Just as the number zero is necessary in mathematics, so also every set has the empty set ∅ as a subset. In an abstract polytope ∅ is by convention identified as the "least" or "null" face and is a subface of all the others. 
Since the least face is one level below the vertices or 0-faces, its rank is −1 and it may be denoted as "F"−1. Thus F−1 ≡ ∅ and the abstract polytope also contains the empty set as an element. It is not usually realized. There is also a single face of which all the others are subfaces. This is called the "greatest" face. In an "n"-dimensional polytope, the greatest face has rank = "n" and may be denoted as "F""n". It is sometimes realized as the interior of the geometric figure. These least and greatest faces are sometimes called "improper" faces, with all others being "proper" faces. A simple example. The faces of the abstract quadrilateral or square are shown in the table below: The relation &lt; comprises a set of pairs, which here include "F"−1−1&lt;X, ... , "F"−1&lt;G, ... , b&lt;Y, ... , c&lt;G, ... , Z&lt;G. Order relations are transitive, i.e. F &lt; G and G &lt; H implies that F &lt; H. Therefore, to specify the hierarchy of faces, it is not necessary to give every case of F &lt; H, only the pairs where one is the successor of the other, i.e. where F &lt; H and no G satisfies F &lt; G &lt; H. The edges W, X, Y and Z are sometimes written as ab, ad, bc, and cd respectively, but such notation is not always appropriate. All four edges are structurally similar and the same is true of the vertices. The figure therefore has the symmetries of a square and is usually referred to as the square. The Hasse diagram. Smaller posets, and polytopes in particular, are often best visualized in a Hasse diagram, as shown. By convention, faces of equal rank are placed on the same vertical level. Each "line" between faces, say F, G, indicates an ordering relation &lt; such that F &lt; G where F is below G in the diagram. The Hasse diagram defines the unique poset and therefore fully captures the structure of the polytope. Isomorphic polytopes give rise to isomorphic Hasse diagrams, and vice versa. The same is not generally true for the graph representation of polytopes. Rank. The "rank" of a face F is defined as ("m" − 2), where "m" is the maximum number of faces in any chain (F', F", ... , F) satisfying F' &lt; F" &lt; ... &lt; F. F' is always the least face, F−1. The "rank" of an abstract polytope P is the maximum rank n of any face. It is always the rank of the greatest face Fn. The rank of a face or polytope usually corresponds to the "dimension" of its counterpart in traditional theory. For some ranks, their face-types are named in the following table. † Traditionally "face" has meant a rank 2 face or 2-face. In abstract theory the term "face" denotes a face of "any" rank. Flags. In geometry, a flag is a maximal chain of faces, i.e. a (totally) ordered set Ψ of faces, each a subface of the next (if any), and such that Ψ is not a subset of any larger chain. Given any two distinct faces F, G in a flag, either F &lt; G or F &gt; G. For example, {ø, a, ab, abc} is a flag in the triangle abc. For a given polytope, all flags contain the same number of faces. Other posets do not, in general, satisfy this requirement. Sections. Any subset P' of a poset P is a poset (with the same relation &lt;, restricted to P'). In an abstract polytope, given any two faces "F", "H" of P with "F" ≤ "H", the set {"G" | "F" ≤ "G" ≤ "H"} is called a section of "P", and denoted "H"/"F". (In order theory, a section is called a closed interval of the poset and denoted ["F", "H"]. For example, in the prism abcxyz (see diagram) the section xyz/ø (highlighted green) is the triangle {ø, x, y, z, xy, xz, yz, xyz}. 
A "k"-section is a section of rank "k". P is thus a section of itself. This concept of section "does not" have the same meaning as in traditional geometry. Facets. The facet for a given "j"-face "F" is the ("j"−"1")-section "F"/∅, where "F""j" is the greatest face. For example, in the triangle abc, the facet at ab is ab/∅ = {∅, a, b, ab}, which is a line segment. The distinction between "F" and "F"/∅ is not usually significant and the two are often treated as identical. Vertex figures. The vertex figure at a given vertex "V" is the ("n"−1)-section "F""n"/"V", where "F""n" is the greatest face. For example, in the triangle abc, the vertex figure at b is abc/b = {b, ab, bc, abc}, which is a line segment. The vertex figures of a cube are triangles. Connectedness. A poset P is connected if P has rank ≤ 1, or, given any two proper faces F and G, there is a sequence of proper faces H1, H2, ... ,Hk such that F = H1, G = Hk, and each Hi, i &lt; k, is incident with its successor. The above condition ensures that a pair of disjoint triangles abc and xyz is "not" a (single) polytope. A poset P is strongly connected if every section of P (including P itself) is connected. With this additional requirement, two pyramids that share just a vertex are also excluded. However, two square pyramids, for example, "can", be "glued" at their square faces - giving an octahedron. The "common face" is "not" then a face of the octahedron. Formal definition. An abstract polytope is a partially ordered set, whose elements we call "faces", satisfying the 4 axioms: An "n"-polytope is a polytope of rank "n". The abstract polytope associated with a real convex polytope is also referred to as its face lattice. The simplest polytopes. Rank &lt; 1. There is just one poset for each rank −1 and 0. These are, respectively, the null face and the point. These are not always considered to be valid abstract polytopes. Rank 1: the line segment. There is only one polytope of rank 1, which is the line segment. It has a least face, just two 0-faces and a greatest face, for example {ø, a, b, ab}. It follows that the vertices a and b have rank 0, and that the greatest face ab, and therefore the poset, both have rank 1. Rank 2: polygons. For each "p", 3 ≤ "p" &lt; formula_0, we have (the abstract equivalent of) the traditional polygon with "p" vertices and "p" edges, or a "p"-gon. For p = 3, 4, 5, ... we have the triangle, square, pentagon, ... For "p" = 2, we have the digon, and "p" = formula_0 we get the apeirogon. The digon. A digon is a polygon with just 2 edges. Unlike any other polygon, both edges have the same two vertices. For this reason, it is "degenerate" in the Euclidean plane. Faces are sometimes described using "vertex notation" - e.g. {ø, a, b, c, ab, ac, bc, abc} for the triangle abc. This method has the advantage of "implying" the &lt; relation. With the digon this vertex notation "cannot be used". It is necessary to give the faces individual symbols and specify the subface pairs F &lt; G. Thus, a digon is defined as a set {ø, a, b, E', E", G} with the relation &lt; given by {ø&lt;/ref&gt; Two realizations are called congruent if the natural bijection between their sets of vertices is induced by an isometry of their ambient Euclidean spaces. If an abstract "n"-polytope is realized in "n"-dimensional space, such that the geometrical arrangement does not break any rules for traditional polytopes (such as curved faces, or ridges of zero size), then the realization is said to be "faithful". 
In general, only a restricted set of abstract polytopes of rank "n" may be realized faithfully in any given "n"-space. The characterization of this effect is an outstanding problem. For a regular abstract polytope, if the combinatorial automorphisms of the abstract polytope are realized by geometric symmetries then the geometric figure will be a regular polytope. Moduli space. The group "G" of symmetries of a realization "V" of an abstract polytope "P" is generated by two reflections, the product of which translates each vertex of "P" to the next. The product of the two reflections can be decomposed as a product of a non-zero translation, finitely many rotations, and possibly trivial reflection. Generally, the moduli space of realizations of an abstract polytope is a convex cone of infinite dimension. The realization cone of the abstract polytope has uncountably infinite algebraic dimension and cannot be closed in the Euclidean topology. The amalgamation problem and universal polytopes. An important question in the theory of abstract polytopes is the "amalgamation problem". This is a series of questions such as For given abstract polytopes "K" and "L", are there any polytopes "P" whose facets are "K" and whose vertex figures are "L" ? If so, are they all finite ? What finite ones are there ? For example, if "K" is the square, and "L" is the triangle, the answers to these questions are Yes, there are polytopes "P" with square faces, joined three per vertex (that is, there are polytopes of type {4,3}). Yes, they are all finite, specifically, There is the cube, with six square faces, twelve edges and eight vertices, and the hemi-cube, with three faces, six edges and four vertices. It is known that if the answer to the first question is 'Yes' for some regular "K" and "L", then there is a unique polytope whose facets are "K" and whose vertex figures are "L", called the universal polytope with these facets and vertex figures, which covers all other such polytopes. That is, suppose "P" is the universal polytope with facets "K" and vertex figures "L". Then any other polytope "Q" with these facets and vertex figures can be written "Q"="P"/"N", where "Q"="P"/"N" is called a quotient of "P", and we say "P" covers "Q". Given this fact, the search for polytopes with particular facets and vertex figures usually goes as follows: These two problems are, in general, very difficult. Returning to the example above, if "K" is the square, and "L" is the triangle, the universal polytope {"K","L"} is the cube (also written {4,3}). The hemicube is the quotient {4,3}/"N", where "N" is a group of symmetries (automorphisms) of the cube with just two elements - the identity, and the symmetry that maps each corner (or edge or face) to its opposite. If "L" is, instead, also a square, the universal polytope {"K","L"} (that is, {4,4}) is the tessellation of the Euclidean plane by squares. This tessellation has infinitely many quotients with square faces, four per vertex, some regular and some not. Except for the universal polytope itself, they all correspond to various ways to tessellate either a torus or an infinitely long cylinder with squares. The 11-cell and the 57-cell. The 11-cell, discovered independently by H. S. M. Coxeter and Branko Grünbaum, is an abstract 4-polytope. Its facets are hemi-icosahedra. Since its facets are, topologically, projective planes instead of spheres, the 11-cell is not a tessellation of any manifold in the usual sense. Instead, the 11-cell is a "locally" projective polytope. 
It is self-dual and universal: it is the "only" polytope with hemi-icosahedral facets and hemi-dodecahedral vertex figures. The 57-cell is also self-dual, with hemi-dodecahedral facets. It was discovered by H. S. M. Coxeter shortly after the discovery of the 11-cell. Like the 11-cell, it is also universal, being the only polytope with hemi-dodecahedral facets and hemi-icosahedral vertex figures. On the other hand, there are many other polytopes with hemi-dodecahedral facets and Schläfli type {5,3,5}. The universal polytope with hemi-dodecahedral facets and icosahedral (not hemi-icosahedral) vertex figures is finite, but very large, with 10006920 facets and half as many vertices. Local topology. The amalgamation problem has, historically, been pursued according to "local topology". That is, rather than restricting "K" and "L" to be particular polytopes, they are allowed to be any polytope with a given topology, that is, any polytope tessellating a given manifold. If "K" and "L" are "spherical" (that is, tessellations of a topological sphere), then "P" is called "locally spherical" and corresponds itself to a tessellation of some manifold. For example, if "K" and "L" are both squares (and so are topologically the same as circles), "P" will be a tessellation of the plane, torus or Klein bottle by squares. A tessellation of an "n"-dimensional manifold is actually a rank "n" + 1 polytope. This is in keeping with the common intuition that the Platonic solids are three dimensional, even though they can be regarded as tessellations of the two-dimensional surface of a ball. In general, an abstract polytope is called "locally X" if its facets and vertex figures are, topologically, either spheres or "X", but not both spheres. The 11-cell and 57-cell are examples of rank 4 (that is, four-dimensional) "locally projective" polytopes, since their facets and vertex figures are tessellations of real projective planes. There is a weakness in this terminology however. It does not allow an easy way to describe a polytope whose facets are tori and whose vertex figures are projective planes, for example. Worse still if different facets have different topologies, or no well-defined topology at all. However, much progress has been made on the complete classification of the locally toroidal regular polytopes Exchange maps. Let "Ψ" be a flag of an abstract "n"-polytope, and let −1 &lt; "i" &lt; "n". From the definition of an abstract polytope, it can be proven that there is a unique flag differing from "Ψ" by a rank "i" element, and the same otherwise. If we call this flag "Ψ"("i"), then this defines a collection of maps on the polytopes flags, say "φ""i". These maps are called exchange maps, since they swap pairs of flags : ("Ψφ""i")"φ""i" = "Ψ" always. Some other properties of the exchange maps : The exchange maps and the flag action in particular can be used to prove that "any" abstract polytope is a quotient of some regular polytope. Incidence matrices. A polytope can also be represented by tabulating its incidences. The following incidence matrix is that of a triangle: The table shows a 1 wherever a face is a subface of another, "or vice versa" (so the table is symmetric about the diagonal)- so in fact, the table has "redundant information"; it would suffice to show only a 1 when the row face ≤ the column face. Since both the body and the empty set are incident with all other elements, the first row and column as well as the last row and column are trivial and can conveniently be omitted. Square pyramid. 
Further information is gained by counting each occurrence. This numerative usage enables a symmetry grouping, as in the Hasse Diagram of the square pyramid: If vertices B, C, D, and E are considered symmetrically equivalent within the abstract polytope, then edges f, g, h, and j will be grouped together, and also edges k, l, m, and n, And finally also the triangles P, Q, R, and S. Thus the corresponding incidence matrix of this abstract polytope may be shown as: In this accumulated incidence matrix representation the diagonal entries represent the total counts of either element type. Elements of different type of the same rank clearly are never incident so the value will always be 0; however, to help distinguish such relationships, an asterisk (*) is used instead of 0. The sub-diagonal entries of each row represent the incidence counts of the relevant sub-elements, while the super-diagonal entries represent the respective element counts of the vertex-, edge- or whatever -figure. Already this simple square pyramid shows that the symmetry-accumulated incidence matrices are no longer symmetrical. But there is still a simple entity-relation (beside the generalised Euler formulae for the diagonal, respectively the sub-diagonal entities of each row, respectively the super-diagonal elements of each row - those at least whenever no holes or stars etc. are considered), as for any such incidence matrix formula_1 holds: formula_2 History. In the 1960s Branko Grünbaum issued a call to the geometric community to consider generalizations of the concept of regular polytopes that he called "polystromata". He developed a theory of polystromata, showing examples of new objects including the 11-cell. The 11-cell is a self-dual 4-polytope whose facets are not icosahedra, but are "hemi-icosahedra" — that is, they are the shape one gets if one considers opposite faces of the icosahedra to be actually the "same" face (Grünbaum, 1977). A few years after Grünbaum's discovery of the 11-cell, H.S.M. Coxeter discovered a similar polytope, the 57-cell (Coxeter 1982, 1984), and then independently rediscovered the 11-cell. With the earlier work by Branko Grünbaum, H. S. M. Coxeter and Jacques Tits having laid the groundwork, the basic theory of the combinatorial structures now known as abstract polytopes was first described by Egon Schulte in his 1980 PhD dissertation. In it he defined "regular incidence complexes" and "regular incidence polytopes". Subsequently, he and Peter McMullen developed the basics of the theory in a series of research articles that were later collected into a book. Numerous other researchers have since made their own contributions, and the early pioneers (including Grünbaum) have also accepted Schulte's definition as the "correct" one. Since then, research in the theory of abstract polytopes has focused mostly on "regular" polytopes, that is, those whose automorphism groups act transitively on the set of flags of the polytope. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
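The entity relation above is easy to check mechanically. The following is a minimal sketch that rebuilds the proper faces of the square pyramid from their vertex sets, groups them into the same symmetry classes used in the discussion above, forms the accumulated incidence matrix and asserts the relation for every pair of classes; the class labels and the use of Python are illustrative only, and taking the counts from a single representative face relies on each class being a genuine symmetry class.

from itertools import combinations

# Proper faces of the square pyramid with apex A over base BCDE, given by vertex
# sets and grouped into symmetry classes.
classes = {
    "apex":          [{"A"}],
    "base vertices": [{"B"}, {"C"}, {"D"}, {"E"}],
    "lateral edges": [{"A", "B"}, {"A", "C"}, {"A", "D"}, {"A", "E"}],
    "base edges":    [{"B", "C"}, {"C", "D"}, {"D", "E"}, {"E", "B"}],
    "triangles":     [{"A", "B", "C"}, {"A", "C", "D"}, {"A", "D", "E"}, {"A", "E", "B"}],
    "square":        [{"B", "C", "D", "E"}],
}

def incident(F, G):
    # Two faces are incident when one vertex set contains the other.
    return F <= G or G <= F

names = list(classes)
I = {}
for ci in names:
    rep = classes[ci][0]        # any representative works within a symmetry class
    for cj in names:
        I[ci, cj] = (len(classes[ci]) if ci == cj
                     else sum(incident(rep, G) for G in classes[cj]))

# Check I_ii * I_ij == I_ji * I_jj for every pair of classes.
for ci, cj in combinations(names, 2):
    assert I[ci, ci] * I[ci, cj] == I[cj, ci] * I[cj, cj], (ci, cj)
print("entity relation verified for every pair of classes")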
[ { "math_id": 0, "text": "\\infty" }, { "math_id": 1, "text": "I=(I_{ij})" }, { "math_id": 2, "text": "I_{ii} \\cdot I_{ij} = I_{ji} \\cdot I_{jj} \\ \\ (i<j)." } ]
https://en.wikipedia.org/wiki?curid=1211056
1211473
Dot gain
Phenomenon in offset lithography Dot gain, or tonal value increase, is a phenomenon in offset lithography and some other forms of printing which causes printed material to look darker than intended. It is caused by halftone dots growing in area between the original printing film and the final printed result. In practice, this means that an image that has not been adjusted to account for dot gain will appear too dark when it is printed. Dot gain calculations are often an important part of a CMYK color model. Definition. It is defined as the increase in the area fraction (of the inked or colored region) of a halftone dot during the prepress and printing processes. Total dot gain is the difference between the dot size on the film negative and the corresponding printed dot size. For example, a dot pattern that covers 30% of the image area on film, but covers 50% when printed, is said to show a total dot gain of 20%. However, with today's computer-to-plate imaging systems, which eliminates film completely, the measure of "film" is the original digital source "dot". Therefore, dot gain is now measured as the original digital dot versus the actual measured ink dot on paper. Mathematically, dot gain is defined as: formula_0 where "a"print is the ink area fraction of the print, and "a"form is the prepress area fraction to be inked. The latter may be the fraction of opaque material on a film positive (or transparent material on a film negative), or the relative command value in a digital prepress system. Causes. Dot gain is caused by ink spreading around halftone dots. Several factors can contribute to the increase in halftone dot area. Different paper types have different ink absorption rates; uncoated papers can absorb more ink than coated ones, and thus can show more gain. As printing pressure can squeeze the ink out of its dot shape causing gain, ink viscosity is a contributing factor with coated papers; higher viscosity inks can resist the pressure better. Halftone dots can also be surrounded by a small circumference of ink, in an effect called "rimming". Each halftone dot has a microscopic relief, and ink will fall off the edge before being eliminated entirely by the fountain solution (in the case of offset printing). Finally, halation of the printing film during exposure can contribute to dot gain. Yule–Nielsen effect and "optical dot gain". The Yule–Nielsen effect, sometimes known as "optical dot gain", is a phenomenon caused by absorption and scattering of light by the substrate. Light becomes diffused around dots, darkening the apparent tone. As a result, dots absorb more light than their size would suggest. The Yule–Nielsen effect is not strictly speaking a type of dot gain, because the size of the dot does not change, just its relative absorbance. Some densitometers automatically compute the absorption of a halftone relative to the absorption of a solid print using the Murray–Davies formula. Controlling dot gain. Not all halftone dots show the same amount of gain. The area of greatest gain is in midtones (40–60%); above this, as the dots contact one another, the perimeter available for dot gain is reduced. Dot gain becomes more noticeable with finer screen ruling, and is one of the factors affecting the choice of screen. Dot gain can be measured using a densitometer and color bars in absolute percentages. Dot gain is usually measured with 40% and 80% tones as reference values. A common value for dot gain is around 23% in the 40% tone for a 150 lines per inch screen and coated paper. 
Thus a dot gain of 19% means that a tint area of 40% will result in a 59% tone in the actual print. Modern prepress software usually includes utility to achieve the desired dot gain values using special compensation curves for each machine -- a tone reproduction curve (TRC). Computing the area of a halftone pattern. The inked area (coverage) fraction of the dot may be computed using the Yule-Nielsen model. This requires the optical densities of the substrate, the solid-covered area, and the halftone tint, as well as the value of the Yule-Nielsen parameter, "n". Pearson has suggested a value of 1.7 be used in absence of more specific information. However, it will tend to be larger when the halftone pattern in finer and when the substrate has a wider point spread function. Models for dot gain. Another factor upon which dot gain depends is the dot's area fraction. Dots with relatively large perimeters will tend to have greater dot gain than dots with smaller perimeters. This makes it useful to have a model for the amount of dot gain as a function of prepress dot area fraction. An early model. Tollenaar and Ernst tacitly suggested a model in their 1963 IARIGAI paper. It was formula_1 where "avf", the shadow critical area fraction, is the area fraction on the form at which the halftone pattern just appears solid on the print. This model, while simple, has dots with relatively small perimeter (in the shadows) exhibiting greater gain than dots with relatively larger perimeter (in the midtones). Haller's model. Karl Haller, of FOGRA in Munich, proposed a different model, one in which dots with larger perimeters tended to exhibit greater dot gain than those with smaller perimeters. One result derivable from his work is that dot gains depend on the shape of the halftone dots. The GRL model. Viggiano suggested an alternate model, based on the radius (or other fundamental dimension) of the dot growing in relative proportion to the perimeter of the dot, with empirical correction the duplicated areas which result when the corners of adjacent dots join. Mathematically, his model is: formula_2 where Δ0,50 is the dot gain when the input area fraction is &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2; the highlight critical printing area, "awf", is computed as: formula_3 and the shadow critical printing area, "avf", is computed according to formula_4 Note that, unless Δ0,50 = 0, either the highlight critical printing fraction, "awf", will be nonzero, or the shadow critical printing fraction, "avf" will not be 1, depending on the sign of Δ0,50. In instances in which both critical printing fractions are non-trivial, Viggiano recommended that a cascade of two (or possibly more) applications of the dot gain model be applied. Empirical models. Sometimes the exact form of a dot gain curve is difficult to model on the basis of geometry, and empirical modeling is used instead. To a certain extent, the models described above are empirical, as their parameters cannot be accurately determined from physical aspects of image microstructure and first principles. However, polynomials, cubic splines, and interpolation are completely empirical, and do not involve any image-related parameters. Such models were used by Pearson and Pobboravsky, for example, in their program to compute dot area fractions needed to produce a particular color in lithography. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
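The GRL model above is straightforward to evaluate. The following is a minimal sketch that implements total dot gain and Viggiano's gain curve as given in the Models section; the 19% midtone gain is the example value used earlier in the article, and the tint values are illustrative.

import math

def grl_dot_gain(a_form, delta_50):
    # Viggiano's GRL model: dot gain as a function of the prepress area fraction
    # a_form, parameterised by the gain at a 50% tint (delta_50).
    d2 = 4.0 * delta_50 ** 2
    a_wf = d2 / (1.0 + d2) if delta_50 < 0 else 0.0    # highlight critical printing fraction
    a_vf = 1.0 if delta_50 <= 0 else 1.0 / (1.0 + d2)  # shadow critical printing fraction
    if a_form <= a_wf:
        return a_form - a_wf
    if a_form >= a_vf:
        return a_form - a_vf
    return 2.0 * delta_50 * math.sqrt(a_form * (1.0 - a_form))

# With roughly 19% gain at midtone, a 40% tint prints at close to 59%.
for a_form in (0.10, 0.40, 0.50, 0.80):
    gain = grl_dot_gain(a_form, delta_50=0.19)
    a_print = a_form + gain                            # DG = a_print - a_form
    print(f"form {a_form:.0%} -> print {a_print:.1%} (gain {gain:.1%})")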
[ { "math_id": 0, "text": "DG=a_{\\text{print}}-a_{\\text{form}}" }, { "math_id": 1, "text": "\\mathrm{gain}_{\\mathit{TE}}=a_{\\mathrm{form}} \\,\\left(1 - a_{\\mathit{vf}}\\right)" }, { "math_id": 2, "text": "\\mathrm{gain}_{\\mathit{GRL}}=\\begin{cases}\na_{\\mathrm{form}}-a_{\\mathit{wf}}, & \\mathrm{for}\\ a_{\\mathrm{form}}\\leq a_{\\mathit{wf}}\\\\[6pt]\n2\\,\\Delta_{0,50}\\sqrt{a_{\\mathrm{form}}\\left(1-a_{\\mathrm{form}}\\right)}, & \\mathrm{for}\\ a_{\\mathit{wf}}<a_{\\mathrm{form}}<a_{\\mathit{vf}}\\\\[6pt]\na_{\\mathrm{form}}-a_{\\mathit{vf}}, & \\mathrm{for}\\ a_{\\mathrm{form}}\\geq a_{\\mathit{vf}}\\end{cases}" }, { "math_id": 3, "text": "a_{\\mathit{wf}}=\\begin{cases}\n\\dfrac{4\\Delta_{0,50}^{2}}{1+4\\Delta_{0,50}^{2}}, & \\mathrm{for}\\ \\Delta_{0,50}<0\\\\[6pt]\n0, & \\mathrm{for}\\ \\Delta_{0,50}\\geq0\\end{cases}" }, { "math_id": 4, "text": "a_{\\mathit{vf}}=\\begin{cases}\n1, & \\mathrm{for}\\ \\Delta_{0,50}\\leq0\\\\[6pt]\n\\dfrac{1}{1+4\\Delta_{0,50}^{2}}, & \\mathrm{for}\\ \\Delta_{0,50}>0\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=1211473
1211474
Heat current
A heat current or thermal current is a kinetic exchange rate between molecules, relative to the material in which the kinesis occurs. It is defined as the net rate of flow of heat. The SI unit of heat current is the Watt, which is the flow of heat across a surface at the rate of one Joule per second. For conduction, heat current is defined by Fourier's law as formula_0 where formula_1 is the amount of heat transferred per unit time [W] and formula_2 is an oriented surface area element [m²]. The above differential equation, when integrated for a homogeneous material of 1-D geometry between two endpoints at constant temperature, gives the heat flow rate as: formula_3 where "A" is the cross-sectional surface area, formula_4 is the temperature difference between the ends, formula_5 is the distance between the ends. For thermal radiation, heat current is defined as formula_6 where the constant of proportionality formula_7 is the Stefan–Boltzmann constant, formula_8 is the radiating surface area, and formula_9 is temperature. Heat current can also be thought of as the total phonon distribution multiplied by the energy of one phonon, times the group velocity of the phonons. The phonon distribution of a particular phonon mode is given by the Bose-Einstein factor, which is dependent on temperature and phonon energy. References. <templatestyles src="Reflist/styles.css" />
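Both expressions lend themselves to a direct numerical check. The following is a minimal sketch; the material constants and geometry are illustrative round numbers (the conductivity is of the order of that of copper), not values taken from the text above.

STEFAN_BOLTZMANN = 5.670374419e-8      # W m^-2 K^-4

def conduction_rate(k, area, delta_t, length):
    # 1-D integrated Fourier law: heat flow rate = k * A * dT / dx,
    # taken positive for flow from the hot end to the cold end.
    return k * area * delta_t / length

def radiation_rate(area, temperature):
    # Stefan-Boltzmann radiation: W = sigma * A * T^4.
    return STEFAN_BOLTZMANN * area * temperature ** 4

# A 0.5 m bar of 1 cm^2 cross-section spanning 100 K, with k ~ 400 W/(m K):
print(conduction_rate(k=400.0, area=1e-4, delta_t=100.0, length=0.5))   # 8.0 W
# A 1 m^2 surface radiating at 300 K:
print(radiation_rate(area=1.0, temperature=300.0))                      # about 459 W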
[ { "math_id": 0, "text": " \\frac{\\partial Q}{\\partial t} = -k \\oint_S{\\overrightarrow{\\nabla} T \\cdot \\,\\overrightarrow{dS}} " }, { "math_id": 1, "text": "\\big. \\frac{\\partial Q}{\\partial t}\\big." }, { "math_id": 2, "text": "\\overrightarrow{dS}" }, { "math_id": 3, "text": " \\big. \\frac{\\Delta Q}{\\Delta t} = -k A \\frac{\\Delta T}{\\Delta x} " }, { "math_id": 4, "text": "\\Delta T" }, { "math_id": 5, "text": "\\Delta x" }, { "math_id": 6, "text": "W = \\sigma \\cdot A \\cdot T^4" }, { "math_id": 7, "text": "\\sigma" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=1211474
1211714
Lens mount
Interface between a camera body and lens A lens mount is an interface – mechanical and often also electrical – between a photographic camera body and a lens. It is a feature of camera systems where the body allows interchangeable lenses, most usually the rangefinder camera, single lens reflex type, single lens mirrorless type or any movie camera of 16 mm or higher gauge. Lens mounts are also used to connect optical components in instrumentation that may not involve a camera, such as the modular components used in optical laboratory prototyping which join via C-mount or T-mount elements. Mount types. A lens mount may be a screw-threaded type, a bayonet-type, or a breech-lock (friction lock) type. Modern still camera lens mounts are of the bayonet type, because the bayonet mechanism precisely aligns mechanical and electrical features between lens and body. Screw-threaded mounts are fragile and do not align the lens in a reliable rotational position, yet types such as the C-mount interface are still widely in use for other applications like video cameras and optical instrumentation. Bayonet mounts generally have a number of tabs (often three) around the base of the lens, which fit into appropriately sized recesses in the lens mounting plate on the front of the camera. The tabs are often "keyed" in some way to ensure that the lens is inserted in only one orientation, often by making one tab a different size. Once inserted the lens is fastened by turning it a small amount. It is then locked in place by a spring-loaded pin, which can be operated to remove the lens. Lens mounts of competing manufacturers (Sony, Nikon, Canon, Contax/Yashica, Pentax, etc.) are almost always incompatible. In addition to the mechanical and electrical interface variations, the flange focal distance from the lens mount to the film or sensor can also be different. Many allege that these incompatibilities are due to the desire of manufacturers to "lock in" consumers to their brand. In movie cameras, the two most popular mounts in current usage on professional digital cinematography cameras are Arri's PL-mount and Panavision's PV-mount. The PL-Mount is used both on Arri and RED digital cinematography cameras, which as of 2012[ [update]] are the most used cameras for films shot in digital. The Panavision mounts are exclusively used with Panavision lenses, and thus are only available on Panaflex cameras or third-party cameras "Panavised" by a Panavision rental house, whereas the PL-mount style is favored with most other cameras and cine lens manufacturers. Both of these mounts are held in place with locating pins and friction locking rings. Other mounts which are now largely historical or a minority in relation to current practices are listed below. List of lens mounts. For small camera modules, used in e.g. CCTV systems and machine vision, a range of metric thread mounts exists. The smallest ones can be found also in e.g. cellphones and endoscopes. The most common by far is the M12x0.5, followed by M8x0.5 and M10x0.5. Focusing lens mount. The axial adjustment range for focusing Ultra wide angle lenses and some Wide-angle lenses in large format cameras is usually very small. So some manufacturers (e.g. Linhof) offered special focusing lens mounts, so-called wide-angle focusing accessories for their cameras. With such a device, the lens could be focused precisely without moving the entire front standard. Secondary lens mount. 
Secondary lens refers to a multi-element lens mounted either in front of a camera's primary lens, or in between the camera body and the primary lens. (D)SLR camera &amp; interchangeable-lens manufacturers offer lens accessories like extension tubes and secondary lenses like teleconverters, which mount in between the camera body and the primary lens, both using and providing a primary lens mount. Various lensmakers also offer optical accessories that mount in front of the lens; these may include wide-angle, telephoto, fisheye, and close-up or macro adapters. Canon PowerShot A and Canon PowerShot G cameras have a built-in or non-interchangeable primary (zoom) lens, and Canon has "conversion tube" accessories available for some Canon PowerShot camera models which provide either a 52mm or 58mm "accessory/filter" screw thread. Canon's close-up, wide- (WC-DC), and tele-conversion (TC-DC) lenses have 2, 3, and 4-element lenses respectively, so they are multi-element lenses and not diopter "filters". Lens mount adapters. Lens mount adapters are designed to attach a lens to a camera body with non-matching mounts. Generally, a lens can be easily adapted to a camera body with a smaller flange focal distance by simply adding space between the camera and the lens. When attempting to adapt a lens to a camera body with a larger flange focal distance, the adapter must include a secondary lens in order to compensate. This has the side effect of decreasing the amount of light that reaches the sensor, as well as adding a crop factor to the lens. Without the secondary lens, these adapters will function as an extension tube and will not be able to focus to infinity. Notes. ^ A: The authoritative normative source for 4/3 standards information is Four-Thirds.Org and not 3rd-party reviews. 4/3's published facts: So: NOTE: Some published reviews of 4/3 instead cite the (female) "outside diameter" of the lens or mount as ~50mm (and micro-4/3 as ~44mm), and not the appropriate "major" diameter (D) ~44mm which is the camera body's female mount inside-diameter and the lens's male mount outside-diameter (micro-4/3 ~38mm). References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\mathrm{(21.63\ mm)^2 = (17.3\ mm)^2 + (12.98\ mm)^2}" }, { "math_id": 1, "text": "5^2 = 4^2 + 3^2" } ]
https://en.wikipedia.org/wiki?curid=1211714
12118054
Laplace expansion (potential)
In physics, the Laplace expansion of potentials that are directly proportional to the inverse of the distance (formula_0), such as Newton's gravitational potential or Coulomb's electrostatic potential, expresses them in terms of the spherical Legendre polynomials. In quantum mechanical calculations on atoms the expansion is used in the evaluation of integrals of the inter-electronic repulsion. Formulation. The Laplace expansion is in fact the expansion of the inverse distance between two points. Let the points have position vectors formula_1 and formula_2, then the Laplace expansion is formula_3 Here formula_1 has the spherical polar coordinates formula_4 and formula_2 has formula_5 with homogeneous polynomials of degree formula_6. Further "r"&lt; is min("r", "r"′) and "r"&gt; is max("r", "r"′). The function formula_7 is a normalized spherical harmonic function. The expansion takes a simpler form when written in terms of solid harmonics, formula_8 Derivation. The derivation of this expansion is simple. By the law of cosines, formula_9 We find here the generating function of the Legendre polynomials formula_10: formula_11 Use of the spherical harmonic addition theorem formula_12 gives the desired result. Neumann expansion. A similar equation has been derived by Carl Gottfried Neumann that allows expression of formula_13 in prolate spheroidal coordinates as a series: formula_14 where formula_15 and formula_16 are associated Legendre functions of the first and second kind, respectively, defined such that they are real for formula_17. In analogy to the spherical coordinate case above, the relative sizes of the radial coordinates are important, as formula_18 and formula_19. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
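The expansion is easy to verify numerically in the Legendre form used in the derivation above (which avoids spherical harmonics altogether). The following is a minimal sketch; the two position vectors are arbitrary illustrative values, and the truncation order is chosen so that the remaining powers of r_< / r_> are negligible.

import math

def legendre(l, x):
    # Legendre polynomial P_l(x) via the Bonnet recurrence.
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def inverse_distance_expansion(r_vec, rp_vec, l_max=60):
    # Truncated sum (1 / r_>) * sum_l (r_< / r_>)**l * P_l(cos gamma).
    r = math.sqrt(sum(c * c for c in r_vec))
    rp = math.sqrt(sum(c * c for c in rp_vec))
    cos_gamma = sum(a * b for a, b in zip(r_vec, rp_vec)) / (r * rp)
    r_lt, r_gt = min(r, rp), max(r, rp)
    return sum((r_lt / r_gt) ** l * legendre(l, cos_gamma)
               for l in range(l_max + 1)) / r_gt

r_vec, rp_vec = (0.9, 0.2, -0.4), (0.1, -0.3, 0.2)
print(1.0 / math.dist(r_vec, rp_vec))             # direct evaluation
print(inverse_distance_expansion(r_vec, rp_vec))  # agrees essentially to machine precision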
[ { "math_id": 0, "text": "1 / r " }, { "math_id": 1, "text": "\\textbf{r} " }, { "math_id": 2, "text": "\\textbf{r}' " }, { "math_id": 3, "text": "\n\\frac{1}{\\|\\mathbf{r}-\\mathbf{r}'\\|} = \\sum_{\\ell=0}^\\infty \\frac{4\\pi}{2\\ell+1} \\sum_{m=-\\ell}^{\\ell} (-1)^m \\frac{r_{{\\scriptscriptstyle<}}^\\ell }{r_{\\scriptscriptstyle>}^{\\ell+1} } Y^{-m}_\\ell(\\theta, \\varphi) Y^m_\\ell(\\theta', \\varphi').\n" }, { "math_id": 4, "text": "(r, \\theta, \\varphi) " }, { "math_id": 5, "text": "(r', \\theta', \\varphi') " }, { "math_id": 6, "text": "\\ell " }, { "math_id": 7, "text": "Y^m_\\ell" }, { "math_id": 8, "text": "\n\\frac{1}{\\|\\mathbf{r}-\\mathbf{r}'\\|} = \\sum_{\\ell=0}^\\infty \n\\sum_{m=-\\ell}^\\ell (-1)^m I^{-m}_\\ell(\\mathbf{r}) R^{m}_\\ell(\\mathbf{r}')\\quad\\text{with}\\quad \\|\\mathbf{r}\\| > \\|\\mathbf{r}'\\|.\n" }, { "math_id": 9, "text": "\n\\frac{1}{\\|\\mathbf{r}-\\mathbf{r}'\\|} = \\frac{1}{\\sqrt{r^2 + (r')^2 - 2 r r' \\cos\\gamma}} = \n\\frac{1}{r\\sqrt{1 + h^2 - 2 h \\cos\\gamma}} \\quad\\hbox{with}\\quad h := \\frac{r'}{r} . \n" }, { "math_id": 10, "text": "P_\\ell(\\cos\\gamma)" }, { "math_id": 11, "text": "\n\\frac{1}{\\sqrt{1 + h^2 - 2 h \\cos\\gamma}} = \\sum_{\\ell=0}^\\infty h^\\ell P_\\ell(\\cos\\gamma).\n" }, { "math_id": 12, "text": "\nP_{\\ell}(\\cos \\gamma) = \\frac{4\\pi}{2\\ell + 1} \\sum_{m=-\\ell}^\\ell (-1)^m Y^{-m}_\\ell(\\theta, \\varphi) Y^m_\\ell (\\theta', \\varphi')\n" }, { "math_id": 13, "text": "1/r" }, { "math_id": 14, "text": "\\frac{1}{|\\mathbf{r}-\\mathbf{r}'|} = \\frac{4\\pi}{a} \\sum_{\\ell=0}^\\infty \\sum_{m=-\\ell}^\\ell (-1)^m \\frac{(\\ell-|m|)!}{(\\ell+|m|)!} \\mathcal{P}_\\ell^{|m|}(\\sigma_{<}) \\mathcal{Q}_\\ell^{|m|}(\\sigma_{>}) Y_\\ell^m(\\arccos\\tau,\\varphi) Y_\\ell^{m*}(\\arccos\\tau',\\varphi') " }, { "math_id": 15, "text": "\\mathcal{P}_\\ell^{m}(z)" }, { "math_id": 16, "text": "\\mathcal{Q}_\\ell^{m}(z)" }, { "math_id": 17, "text": "z\\in(1, \\infty)" }, { "math_id": 18, "text": "\\sigma_{<}=\\min(\\sigma, \\sigma')" }, { "math_id": 19, "text": "\\sigma_{>}=\\max(\\sigma, \\sigma')" } ]
https://en.wikipedia.org/wiki?curid=12118054
1211913
Least fixed point
Smallest fixed point of a function from a poset In order theory, a branch of mathematics, the least fixed point (lfp or LFP, sometimes also smallest fixed point) of a function from a partially ordered set ("poset" for short) to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique. Examples. With the usual order on the real numbers, the least fixed point of the real function "f"("x") = "x"2 is "x" = 0 (since the only other fixed point is 1 and 0 &lt; 1). In contrast, "f"("x") = "x" + 1 has no fixed points at all, so has no least one, and "f"("x") = "x" has infinitely many fixed points, but has no least one. Let formula_0 be a directed graph and formula_1 be a vertex. The set of vertices accessible from formula_1 can be defined as the least fixed-point of the function formula_2, defined as formula_3 The set of vertices which are co-accessible from formula_1 is defined by a similar least fix-point. The strongly connected component of formula_1 is the intersection of those two least fixed-points. Let formula_4 be a context-free grammar. The set formula_5 of symbols which produces the empty string formula_6 can be obtained as the least fixed-point of the function formula_2, defined as formula_7, where formula_8 denotes the power set of formula_9. Applications. Many fixed-point theorems yield algorithms for locating the least fixed point. Least fixed points often have desirable properties that arbitrary fixed points do not. Denotational semantics. In computer science, the "denotational semantics" approach uses least fixed points to obtain from a given program text a corresponding mathematical function, called its semantics. To this end, an artificial mathematical object, formula_10, is introduced, denoting the exceptional value "undefined". Given e.g. the program datatype codice_0, its mathematical counterpart is defined as formula_11 it is made a partially ordered set by defining formula_12 for each formula_13 and letting any two different members formula_14 be uncomparable w.r.t. formula_15, see picture. The semantics of a program definition codice_1 is some mathematical function formula_16 If the program definition codice_2 does not terminate for some input codice_3, this can be expressed mathematically as formula_17 The set of all mathematical functions is made partially ordered by defining formula_18 if, for each formula_19 the relation formula_20 holds, that is, if formula_21 is less defined or equal to formula_22 For example, the semantics of the expression codice_4 is less defined than that of codice_5, since the former, but not the latter, maps formula_23 to formula_24 and they agree otherwise. Given some program text codice_2, its mathematical counterpart is obtained as least fixed point of some mapping from functions to functions that can be obtained by "translating" codice_2. For example, the C definition is translated to a mapping formula_25 defined as formula_26 The mapping formula_27 is defined in a non-recursive way, although codice_8 was defined recursively. Under certain restrictions (see Kleene fixed-point theorem), which are met in the example, formula_27 necessarily has a least fixed point, formula_28, that is formula_29 for all formula_30. It is possible to show that formula_31 A larger fixed point of formula_27 is e.g. 
the function formula_32 defined by formula_33 however, this function does not correctly reflect the behavior of the above program text for negative formula_34 e.g. the call codice_9 will not terminate at all, let alone return codice_10. Only the "least" fixed point, formula_35 can reasonably be used as a mathematical program semantic. Descriptive complexity. Immerman and Vardi independently showed the descriptive complexity result that the polynomial-time computable properties of linearly ordered structures are definable in FO(LFP), i.e. in first-order logic with a least fixed point operator. However, FO(LFP) is too weak to express all polynomial-time properties of unordered structures (for instance that a structure has even size). Greatest fixed points. The greatest fixed point of a function can be defined analogously to the least fixed point, as the fixed point which is greater than any other fixed point, according to the order of the poset. In computer science, greatest fixed points are much less commonly used than least fixed points. Specifically, the posets found in domain theory usually do not have a greatest element, hence for a given function, there may be multiple, mutually incomparable maximal fixed points, and the greatest fixed point of that function may not exist. To address this issue, the "optimal fixed point" has been defined as the most-defined fixed point compatible with all other fixed points. The optimal fixed point always exists, and is the greatest fixed point if the greatest fixed point exists. The optimal fixed point allows formal study of recursive and corecursive functions that do not converge with the least fixed point. Unfortunately, whereas Kleene's recursion theorem shows that the least fixed point is effectively computable, the optimal fixed point of a computable function may be a non-computable function. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
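On a finite powerset lattice, the least fixed point of a monotone function can be reached by Kleene iteration from the least element. The following is a minimal sketch applied to the accessible-vertices example given earlier; the graph, the vertex names and the use of Python are illustrative only.

def least_fixed_point(f, bottom=frozenset()):
    # Repeatedly apply f starting from the least element until nothing changes.
    # For a monotone f on a finite lattice this terminates at the least fixed point.
    x = bottom
    while True:
        next_x = f(x)
        if next_x == x:
            return x
        x = next_x

# Vertices accessible from v, as the least fixed point of
# f(X) = {v} union {x : some w in X has an arc to x}.
arcs = {("v", "a"), ("a", "b"), ("b", "a"), ("c", "d")}
v = "v"

def f(X):
    return frozenset({v} | {x for (w, x) in arcs if w in X})

print(sorted(least_fixed_point(f)))   # ['a', 'b', 'v']; c and d are not accessible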
[ { "math_id": 0, "text": "G = (V, A)" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "f: \\wp(V) \\to \\wp(V)" }, { "math_id": 3, "text": "f(X) = \\{ v \\} \\cup \\{ x \\in V: \\text{ for some } w \\in X \\text{ there is an arc from } w \\text{ to } x \\} ." }, { "math_id": 4, "text": "G = (V, \\Sigma, R, S_0)" }, { "math_id": 5, "text": "E" }, { "math_id": 6, "text": "\\varepsilon" }, { "math_id": 7, "text": "f ( X ) = \\{ S \\in V: \\; S \\in X \\text{ or } (S \\to \\varepsilon) \\in R \\text{ or } (S \\to S^1 \\dots S^n) \\in R \\text{ and } S^i \\in X \\text{, for all } i \\}" }, { "math_id": 8, "text": "\\wp(V)" }, { "math_id": 9, "text": "V" }, { "math_id": 10, "text": "\\bot" }, { "math_id": 11, "text": "\\mathbb{Z}_\\bot = \\mathbb{Z} \\cup \\{ \\bot \\} ;" }, { "math_id": 12, "text": "\\bot \\sqsubset n" }, { "math_id": 13, "text": "n \\in \\mathbb{Z}" }, { "math_id": 14, "text": "n,m \\in \\mathbb{Z}" }, { "math_id": 15, "text": "\\sqsubset" }, { "math_id": 16, "text": "f: \\mathbb{Z}_\\bot \\to \\mathbb{Z}_\\bot ." }, { "math_id": 17, "text": "f(n) = \\bot ." }, { "math_id": 18, "text": "f \\sqsubseteq g" }, { "math_id": 19, "text": "n ," }, { "math_id": 20, "text": "f(n) \\sqsubseteq g(n)" }, { "math_id": 21, "text": "f(n)" }, { "math_id": 22, "text": "g(n) ." }, { "math_id": 23, "text": "0" }, { "math_id": 24, "text": "\\bot ," }, { "math_id": 25, "text": "F: (\\mathbb{Z}_\\bot \\to \\mathbb{Z}_\\bot) \\to (\\mathbb{Z}_\\bot \\to \\mathbb{Z}_\\bot) ," }, { "math_id": 26, "text": "(F(f))(n) = \\begin{cases} 1 & \\text{if } n = 0, \\\\ n \\cdot f(n-1) & \\text{if } n \\neq \\bot \\text{ and } n \\neq 0, \\\\ \\bot & \\text{if } n = \\bot. \\\\ \\end{cases}" }, { "math_id": 27, "text": "F" }, { "math_id": 28, "text": "\\operatorname{fact}" }, { "math_id": 29, "text": "(F(\\operatorname{fact}))(n) = \\operatorname{fact}(n)" }, { "math_id": 30, "text": "n \\in \\mathbb{Z}_\\bot" }, { "math_id": 31, "text": "\\operatorname{fact}(n) = \\begin{cases} n! & \\text{if } n \\geq 0, \\\\ \\bot & \\text{if } n < 0 \\text{ or } n = \\bot. \\end{cases}" }, { "math_id": 32, "text": "\\operatorname{fact}_0 ," }, { "math_id": 33, "text": "\\operatorname{fact}_0(n) = \\begin{cases} n! & \\text{if } n \\geq 0, \\\\ 0 & \\text{if } n < 0, \\\\ \\bot & \\text{if } n = \\bot, \\end{cases}" }, { "math_id": 34, "text": "n ;" }, { "math_id": 35, "text": "\\operatorname{fact} ," } ]
https://en.wikipedia.org/wiki?curid=1211913
1211923
Bryant Tuckerman
American mathematician (1915–2002) Louis Bryant Tuckerman, III (November 28, 1915 – May 19, 2002) was an American mathematician born in Lincoln, Nebraska. He was a member of the team that developed the Data Encryption Standard (DES). He studied topology at Princeton, where he invented the Tuckerman traverse method for revealing all the faces of a flexagon. On March 4, 1971, he discovered the 24th Mersenne prime, a titanic prime, with a value of formula_0. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "2^{19937}-1" } ]
https://en.wikipedia.org/wiki?curid=1211923
1211986
Virtual work
Work done by a force to move a particle along a virtual displacement &lt;templatestyles src="Hlist/styles.css"/&gt; In mechanics, virtual work arises in the application of the "principle of least action" to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action. The work of a force on a particle along a virtual displacement is known as the virtual work. Historically, virtual work and the associated calculus of variations were formulated to analyze systems of rigid bodies, but they have also been developed for the study of the mechanics of deformable bodies. History. The principle of virtual work had always been used in some form since antiquity in the study of statics. It was used by the Greeks, medieval Arabs and Latins, and Renaissance Italians as "the law of lever". The idea of virtual work was invoked by many notable physicists of the 17th century, such as Galileo, Descartes, Torricelli, Wallis, and Huygens, in varying degrees of generality, when solving problems in statics. Working with Leibnizian concepts, Johann Bernoulli systematized the virtual work principle and made explicit the concept of infinitesimal displacement. He was able to solve problems for both rigid bodies as well as fluids. Bernoulli's version of virtual work law appeared in his letter to Pierre Varignon in 1715, which was later published in Varignon's second volume of "Nouvelle mécanique ou Statique" in 1725. This formulation of the principle is today known as the principle of virtual velocities and is commonly considered as the prototype of the contemporary virtual work principles. In 1743 D'Alembert published his "Traité de Dynamique" where he applied the principle of virtual work, based on Bernoulli's work, to solve various problems in dynamics. His idea was to convert a dynamical problem into static problem by introducing "inertial force". In 1768, Lagrange presented the virtual work principle in a more efficient form by introducing generalized coordinates and presented it as an alternative principle of mechanics by which all problems of equilibrium could be solved. A systematic exposition of Lagrange's program of applying this approach to all of mechanics, both static and dynamic, essentially D'Alembert's principle, was given in his "Mécanique Analytique" of 1788. Although Lagrange had presented his version of least action principle prior to this work, he recognized the virtual work principle to be more fundamental mainly because it could be assumed alone as the foundation for all mechanics, unlike the modern understanding that least action does not account for non-conservative forces. Overview. If a force acts on a particle as it moves from point formula_0 to point formula_1, then, for each possible trajectory that the particle may take, it is possible to compute the total work done by the force along the path. The "principle of virtual work", which is the form of the principle of least action applied to these systems, states that the path actually followed by the particle is the one for which the difference between the work along this path and other nearby paths is zero (to the first order). 
The formal procedure for computing the difference of functions evaluated on nearby paths is a generalization of the derivative known from differential calculus, and is termed "the calculus of variations". Consider a point particle that moves along a path which is described by a function formula_2 from point formula_0, where formula_3, to point formula_1, where formula_4. It is possible that the particle moves from formula_0 to formula_1 along a nearby path described by formula_5, where formula_6 is called the variation of formula_2. The variation formula_6 satisfies the requirement formula_7. The scalar components of the variation formula_8, formula_9 and formula_10 are called virtual displacements. This can be generalized to an arbitrary mechanical system defined by the generalized coordinates formula_11, formula_12. In which case, the variation of the trajectory formula_13 is defined by the virtual displacements formula_14, formula_12. Virtual work is the total work done by the applied forces and the inertial forces of a mechanical system as it moves through a set of virtual displacements. When considering forces applied to a body in static equilibrium, the principle of least action requires the virtual work of these forces to be zero. Mathematical treatment. Consider a particle "P" that moves from a point "A" to a point "B" along a trajectory r("t"), while a force F(r("t")) is applied to it. The work done by the force F is given by the integral formula_15 where "d"r is the differential element along the curve that is the trajectory of "P", and v is its velocity. It is important to notice that the value of the work "W" depends on the trajectory r("t"). Now consider particle "P" that moves from point "A" to point "B" again, but this time it moves along the nearby trajectory that differs from r("t") by the variation "δr("t") = "εh("t"), where "ε" is a scaling constant that can be made as small as desired and h("t") is an arbitrary function that satisfies h("t"0) = h("t"1) = 0. Suppose the force F(r("t") + "ε"h("t")) is the same as F(r("t")). The work done by the force is given by the integral formula_16 The variation of the work "δW" associated with this nearby path, known as the "virtual work", can be computed to be formula_17 If there are no constraints on the motion of "P", then 3 parameters are needed to completely describe "P"'s position at any time "t". If there are "k" ("k" ≤ 3) constraint forces, then "n" = (3 − "k") parameters are needed. Hence, we can define "n" generalized coordinates "q""i" ("t") ("i" = 1...,"n"), and express r("t") and "δr = "εh("t") in terms of the generalized coordinates. That is, formula_18 formula_19 Then, the derivative of the variation "δr = "εh("t") is given by formula_20 then we have formula_21 The requirement that the virtual work be zero for an arbitrary variation "δr("t") = "εh("t") is equivalent to the set of requirements formula_22 The terms "Qi" are called the "generalized forces" associated with the virtual displacement "δ"r. Static equilibrium. Static equilibrium is a state in which the net force and net torque acted upon the system is zero. In other words, both linear momentum and angular momentum of the system are conserved. The principle of virtual work states that "the virtual work of the applied forces is zero for all virtual movements of the system from static equilibrium". 
This principle can be generalized such that three dimensional rotations are included: the virtual work of the applied forces and applied moments is zero for all virtual movements of the system from static equilibrium. That is formula_23 where Fi" , "i" = 1, 2, ..., "m" and Mj" , "j" = 1, 2, ..., "n" are the applied forces and applied moments, respectively, and "δri" , "i" = 1, 2, ..., "m" and "δφj", "j" = 1, 2, ..., "n" are the virtual displacements and virtual rotations, respectively. Suppose the system consists of "N" particles, and it has "f" ("f" ≤ 6"N") degrees of freedom. It is sufficient to use only "f" coordinates to give a complete description of the motion of the system, so "f" generalized coordinates "qk" , "k" = 1, 2, ..., "f" are defined such that the virtual movements can be expressed in terms of these generalized coordinates. That is, formula_24 formula_25 The virtual work can then be reparametrized by the generalized coordinates: formula_26 where the generalized forces "Qk" are defined as formula_27 Kane shows that these generalized forces can also be formulated in terms of the ratio of time derivatives. That is, formula_28 The principle of virtual work requires that the virtual work done on a system by the forces F"i" and moments M"j" vanishes if it is in equilibrium. Therefore, the generalized forces "Q""k" are zero, that is formula_29 Constraint forces. An important benefit of the principle of virtual work is that only forces that do work as the system moves through a virtual displacement are needed to determine the mechanics of the system. There are many forces in a mechanical system that do no work during a virtual displacement, which means that they need not be considered in this analysis. The two important examples are (i) the internal forces in a rigid body, and (ii) the constraint forces at an ideal joint. Lanczos presents this as the postulate: "The virtual work of the forces of reaction is always zero for any virtual displacement which is in harmony with the given kinematic constraints." The argument is as follows. The principle of virtual work states that in equilibrium the virtual work of the forces applied to a system is zero. Newton's laws state that at equilibrium the applied forces are equal and opposite to the reaction, or constraint forces. This means the virtual work of the constraint forces must be zero as well. Law of the lever. A lever is modeled as a rigid bar connected to a ground frame by a hinged joint called a fulcrum. The lever is operated by applying an input force F"A" at a point "A" located by the coordinate vector r"A" on the bar. The lever then exerts an output force F"B" at the point "B" located by r"B". The rotation of the lever about the fulcrum "P" is defined by the rotation angle "θ". Let the coordinate vector of the point "P" that defines the fulcrum be r"P", and introduce the lengths formula_30 which are the distances from the fulcrum to the input point "A" and to the output point "B", respectively. Now introduce the unit vectors e"A" and e"B" from the fulcrum to the point "A" and "B", so formula_31 This notation allows us to define the velocity of the points "A" and "B" as formula_32 where e"A"⊥ and e"B"⊥ are unit vectors perpendicular to e"A" and e"B", respectively. 
The angle "θ" is the generalized coordinate that defines the configuration of the lever, therefore using the formula above for forces applied to a one degree-of-freedom mechanism, the generalized force is given by formula_33 Now, denote as "F""A" and "F""B" the components of the forces that are perpendicular to the radial segments "PA" and "PB". These forces are given by formula_34 This notation and the principle of virtual work yield the formula for the generalized force as formula_35 The ratio of the output force "F""B" to the input force "F""A" is the mechanical advantage of the lever, and is obtained from the principle of virtual work as formula_36 This equation shows that if the distance "a" from the fulcrum to the point "A" where the input force is applied is greater than the distance "b" from fulcrum to the point "B" where the output force is applied, then the lever amplifies the input force. If the opposite is true that the distance from the fulcrum to the input point "A" is less than from the fulcrum to the output point "B", then the lever reduces the magnitude of the input force. This is the "law of the lever", which was proven by Archimedes using geometric reasoning. Gear train. A gear train is formed by mounting gears on a frame so that the teeth of the gears engage. Gear teeth are designed to ensure the pitch circles of engaging gears roll on each other without slipping, this provides a smooth transmission of rotation from one gear to the next. For this analysis, we consider a gear train that has one degree-of-freedom, which means the angular rotation of all the gears in the gear train are defined by the angle of the input gear. The size of the gears and the sequence in which they engage define the ratio of the angular velocity "ωA" of the input gear to the angular velocity "ωB" of the output gear, known as the speed ratio, or gear ratio, of the gear train. Let "R" be the speed ratio, then formula_37 The input torque "T""A" acting on the input gear "G""A" is transformed by the gear train into the output torque "T""B" exerted by the output gear "G""B". If we assume, that the gears are rigid and that there are no losses in the engagement of the gear teeth, then the principle of virtual work can be used to analyze the static equilibrium of the gear train. Let the angle "θ" of the input gear be the generalized coordinate of the gear train, then the speed ratio "R" of the gear train defines the angular velocity of the output gear in terms of the input gear, that is formula_38 The formula above for the principle of virtual work with applied torques yields the generalized force formula_39 The mechanical advantage of the gear train is the ratio of the output torque "T""B" to the input torque "T""A", and the above equation yields formula_40 Thus, the speed ratio of a gear train also defines its mechanical advantage. This shows that if the input gear rotates faster than the output gear, then the gear train amplifies the input torque. And, if the input gear rotates slower than the output gear, then the gear train reduces the input torque. Dynamic equilibrium for rigid bodies. 
If the principle of virtual work for applied forces is used on individual particles of a rigid body, the principle can be generalized for a rigid body: "When a rigid body that is in equilibrium is subject to virtual compatible displacements, the total virtual work of all external forces is zero; and conversely, if the total virtual work of all external forces acting on a rigid body is zero then the body is in equilibrium". If a system is not in static equilibrium, D'Alembert showed that by introducing the acceleration terms of Newton's laws as inertia forces, this approach is generalized to define dynamic equilibrium. The result is D'Alembert's form of the principle of virtual work, which is used to derive the equations of motion for a mechanical system of rigid bodies. The expression "compatible displacements" means that the particles remain in contact and displace together so that the work done by pairs of action/reaction inter-particle forces cancel out. Various forms of this principle have been credited to Johann (Jean) Bernoulli (1667–1748) and Daniel Bernoulli (1700–1782). Generalized inertia forces. Let a mechanical system be constructed from n rigid bodies, Bi, i=1...,n, and let the resultant of the applied forces on each body be the force-torque pairs, Fi and Ti, "i" = 1...,"n". Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocities ωi, "i"=1...,"n", for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom. Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by formula_41 This inertia force can be computed from the kinetic energy of the rigid body, formula_42 by using the formula formula_43 A system of n rigid bodies with m generalized coordinates has the kinetic energy formula_44 which can be used to calculate the m generalized inertia forces formula_45 D'Alembert's form of the principle of virtual work. D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that formula_46 for any set of virtual displacements "δqj". This condition yields "m" equations, formula_47 which can also be written as formula_48 The result is a set of m equations of motion that define the dynamics of the rigid body system, known as Lagrange's equations or the generalized equations of motion. If the generalized forces Qj are derivable from a potential energy "V"("q"1...,"q""m"), then these equations of motion take the form formula_49 In this case, introduce the Lagrangian, "L" = "T" − "V", so these equations of motion become formula_50 These are known as the Euler-Lagrange equations for a system with m degrees of freedom, or Lagrange's equations of the second kind. Virtual work principle for a deformable body. Consider now the free body diagram of a deformable body, which is composed of an infinite number of differential cubes. 
Let's define two unrelated states for the body: The superscript * emphasizes that the two states are unrelated. Other than the above stated conditions, there is no need to specify if any of the states are real or virtual. Imagine now that the forces and stresses in the formula_51-State undergo the displacements and deformations in the formula_52-State: We can compute the total virtual (imaginary) work done by all forces acting on the faces of all cubes in two different ways: Equating the two results leads to the principle of virtual work for a deformable body: where the total external virtual work is done by T and f. Thus, The right-hand-side of (d,e) is often called the internal virtual work. The principle of virtual work then states: "External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains". It includes the principle of virtual work for rigid bodies as a special case where the internal virtual work is zero. Proof of equivalence between the principle of virtual work and the equilibrium equation. We start by looking at the total work done by surface traction on the body going through the specified deformation: formula_60 Applying divergence theorem to the right hand side yields: formula_61 Now switch to indicial notation for the ease of derivation. formula_62 To continue our derivation, we substitute in the equilibrium equation formula_63. Then formula_64 The first term on the right hand side needs to be broken into a symmetric part and a skew part as follows: formula_65 where formula_66 is the strain that is consistent with the specified displacement field. The 2nd to last equality comes from the fact that the stress matrix is symmetric and that the product of a skew matrix and a symmetric matrix is zero. Now recap. We have shown through the above derivation that formula_67 Move the 2nd term on the right hand side of the equation to the left: formula_68 The physical interpretation of the above equation is, "the External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains". For practical applications: These two general scenarios give rise to two often stated variational principles. They are valid irrespective of material behaviour. Principle of virtual displacements. Depending on the purpose, we may specialize the virtual work equation. For example, to derive the principle of virtual displacements in variational notations for supported bodies, we specify: The virtual work equation then becomes the principle of virtual displacements: This relation is equivalent to the set of equilibrium equations written for a differential element in the deformable body as well as of the stress boundary conditions on the part formula_71 of the surface. Conversely, (f) can be reached, albeit in a non-trivial manner, by starting with the differential equilibrium equations and the stress boundary conditions on formula_71, and proceeding in the manner similar to (a) and (b). Since virtual displacements are automatically compatible when they are expressed in terms of continuous, single-valued functions, we often mention only the need for consistency between strains and displacements. The virtual work principle is also valid for large real displacements; however, Eq.(f) would then be written using more complex measures of stresses and strains. Principle of virtual forces. 
Here, we specify: The virtual work equation becomes the principle of virtual forces: This relation is equivalent to the set of strain-compatibility equations as well as of the displacement boundary conditions on the part formula_72. It has another name: the principle of complementary virtual work. Alternative forms. A specialization of the principle of virtual forces is the unit dummy force method, which is very useful for computing displacements in structural systems. According to D'Alembert's principle, inclusion of inertial forces as additional body forces will give the virtual work equation applicable to dynamical systems. More generalized principles can be derived by: These are described in some of the references. Among the many energy principles in structural mechanics, the virtual work principle deserves a special place due to its generality that leads to powerful applications in structural analysis, solid mechanics, and finite element method in structural mechanics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
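The lever and gear-train results derived above are easy to check numerically. The short Python sketch below is an independent illustration (the function names and the numeric values are made up for the example); it solves the virtual-work balances formula_35 and formula_39 for the output force and torque, and verifies that the virtual work of a small virtual rotation then vanishes, reproducing the mechanical advantages formula_36 and formula_40.

```python
def lever_output_force(a, b, F_A):
    # Generalized force for the lever: Q = a*F_A - b*F_B.
    # Setting Q = 0 (principle of virtual work) gives F_B = (a/b)*F_A.
    return a * F_A / b

def gear_output_torque(R, T_A):
    # Generalized force for the gear train: Q = T_A - T_B/R.
    # Setting Q = 0 gives T_B = R*T_A, so the mechanical advantage equals R.
    return R * T_A

# Lever: input arm a = 0.6 m, output arm b = 0.2 m, input force 100 N.
a, b, F_A = 0.6, 0.2, 100.0
F_B = lever_output_force(a, b, F_A)
delta_theta = 1e-3                                   # small virtual rotation
virtual_work = a * F_A * delta_theta - b * F_B * delta_theta
print(F_B, abs(virtual_work) < 1e-9)                 # ~300.0 True (MA = a/b = 3)

# Gear train: speed ratio R = 4, input torque 10 N*m.
print(gear_output_torque(4.0, 10.0))                 # 40.0 (MA = R)
```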
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "\\mathbf{r}(t)" }, { "math_id": 3, "text": "\\mathbf{r}(t=t_0)" }, { "math_id": 4, "text": "\\mathbf{r}(t=t_1)" }, { "math_id": 5, "text": "\\mathbf{r}(t) + \\delta \\mathbf{r}(t)" }, { "math_id": 6, "text": "\\delta \\mathbf{r}(t)" }, { "math_id": 7, "text": "\\delta \\mathbf{r}(t_0) = \\delta \\mathbf{r}(t_1) = 0" }, { "math_id": 8, "text": "\\delta r_1(t)" }, { "math_id": 9, "text": "\\delta r_2(t)" }, { "math_id": 10, "text": "\\delta r_3(t)" }, { "math_id": 11, "text": "q_i" }, { "math_id": 12, "text": "i = 1,2,...,n" }, { "math_id": 13, "text": "q_i(t)" }, { "math_id": 14, "text": "\\delta q_i" }, { "math_id": 15, "text": " W = \\int_{\\mathbf{r}(t_0)=A}^{\\mathbf{r}(t_1)=B} \\mathbf{F} \\cdot d\\mathbf{r} = \\int_{t_0}^{t_1} \\mathbf{F} \\cdot \\frac{d\\mathbf{r}}{dt}~dt = \\int_{t_0}^{t_1}\\mathbf{F} \\cdot \\mathbf{v} ~ dt," }, { "math_id": 16, "text": "\\bar{W} = \\int_{\\mathbf{r}(t_0)=A}^{\\mathbf{r}(t_1)=B} \\mathbf{F} \\cdot d(\\mathbf{r}+\\varepsilon \\mathbf{h}) = \\int_{t_0}^{t_1} \\mathbf{F} \\cdot \\frac{d(\\mathbf{r}(t) + \\varepsilon\\mathbf{h}(t))}{dt}~ dt = \\int_{t_0}^{t_1}\\mathbf{F} \\cdot (\\mathbf{v} + \\varepsilon \\dot{\\mathbf{h}}) ~ dt ." }, { "math_id": 17, "text": " \\delta W = \\bar{W}-W = \\int_{t_0}^{t_1} (\\mathbf{F} \\cdot \\varepsilon \\dot{\\mathbf{h}}) ~dt." }, { "math_id": 18, "text": "\\mathbf{r}(t) = \\mathbf{r}(q_1,q_2,\\dots,q_n;t)," }, { "math_id": 19, "text": "\\mathbf{h}(t) = \\mathbf{h}(q_1,q_2,\\dots,q_n;t)." }, { "math_id": 20, "text": " \\frac{d}{dt} \\delta \\mathbf{r} = \\frac{d}{dt} \\varepsilon\\mathbf{h} = \\sum_{i=1}^n \\frac{\\partial \\mathbf{h}}{\\partial q_i} \\varepsilon \\dot{q}_i," }, { "math_id": 21, "text": " \\delta W = \\int_{t_0}^{t_1} \\left(\\sum_{i=1}^n \\mathbf{F} \\cdot \\frac{\\partial\\mathbf{h}}{\\partial q_i} \\varepsilon \\dot{q}_i\\right) dt = \\sum_{i=1}^n \\left(\\int_{t_0}^{t_1} \\mathbf{F} \\cdot \\frac{\\partial\\mathbf{h}}{\\partial q_i} \\varepsilon\\dot{q}_i ~dt\\right)." }, { "math_id": 22, "text": " Q_i = \\mathbf{F} \\cdot \\frac{\\partial \\mathbf{h}}{\\partial q_i} = 0, \\quad i=1, \\ldots, n." }, { "math_id": 23, "text": " \\delta W = \\sum_{i=1}^m \\mathbf{F}_i \\cdot \\delta\\mathbf{r}_i + \\sum_{j=1}^n \\mathbf{M}_j \\cdot \\delta\\mathbf{\\phi}_j = 0 ," }, { "math_id": 24, "text": " \\delta \\mathbf{r}_i (q_1, q_2, \\dots, q_f; t), \\quad i = 1, 2, \\dots, m ; " }, { "math_id": 25, "text": " \\delta \\phi_j (q_1, q_2, \\dots, q_f; t), \\quad j = 1, 2, \\dots, n . " }, { "math_id": 26, "text": " \\delta W = \\sum_{k=1}^f \\left[ \\left( \\sum_{i=1}^m \\mathbf{F}_i \\cdot \\frac{\\partial \\mathbf{r}_i}{\\partial q_k} + \\sum_{j=1}^n \\mathbf{M}_j \\cdot \\frac{\\partial \\mathbf{\\phi}_j}{\\partial q_k} \\right) \\delta q_k \\right] = \\sum_{k=1}^f Q_k \\delta q_k ," }, { "math_id": 27, "text": " Q_k = \\sum_{i=1}^m \\mathbf{F}_i \\cdot \\frac{\\partial \\mathbf{r}_i}{\\partial q_k} + \\sum_{j=1}^n \\mathbf{M}_j \\cdot \\frac{\\partial \\mathbf{\\phi}_j}{\\partial q_k} , \\quad k = 1, 2, \\dots, f ." }, { "math_id": 28, "text": " Q_k = \\sum_{i=1}^m \\mathbf{F}_i \\cdot \\frac{\\partial \\mathbf{v}_i}{\\partial \\dot{q}_k} + \\sum_{j=1}^n \\mathbf{M}_j \\cdot \\frac{\\partial \\mathbf{\\omega}_j}{\\partial \\dot{q}_k} , \\quad k = 1, 2, \\dots, f . " }, { "math_id": 29, "text": " \\delta W=0 \\quad \\Rightarrow \\quad Q_k = 0 \\quad k =1, 2, \\dots, f . 
" }, { "math_id": 30, "text": " a = |\\mathbf{r}_A - \\mathbf{r}_P|, \\quad b = |\\mathbf{r}_B - \\mathbf{r}_P|, " }, { "math_id": 31, "text": " \\mathbf{r}_A - \\mathbf{r}_P = a\\mathbf{e}_A, \\quad \\mathbf{r}_B - \\mathbf{r}_P = b\\mathbf{e}_B." }, { "math_id": 32, "text": " \\mathbf{v}_A = \\dot{\\theta} a \\mathbf{e}_A^\\perp, \\quad \\mathbf{v}_B = \\dot{\\theta} b \\mathbf{e}_B^\\perp," }, { "math_id": 33, "text": " Q = \\mathbf{F}_A \\cdot \\frac{\\partial\\mathbf{v}_A}{\\partial\\dot{\\theta}} - \\mathbf{F}_B \\cdot \\frac{\\partial\\mathbf{v}_B}{\\partial\\dot{\\theta}} = a(\\mathbf{F}_A \\cdot \\mathbf{e}_A^\\perp) - b(\\mathbf{F}_B \\cdot \\mathbf{e}_B^\\perp)." }, { "math_id": 34, "text": " F_A = \\mathbf{F}_A \\cdot \\mathbf{e}_A^\\perp, \\quad F_B = \\mathbf{F}_B \\cdot \\mathbf{e}_B^\\perp." }, { "math_id": 35, "text": " Q = a F_A - b F_B = 0. " }, { "math_id": 36, "text": " MA = \\frac{F_B}{F_A} = \\frac{a}{b}." }, { "math_id": 37, "text": " \\frac{\\omega_A}{\\omega_B} = R." }, { "math_id": 38, "text": " \\omega_A = \\omega, \\quad \\omega_B = \\omega/R." }, { "math_id": 39, "text": " Q = T_A \\frac{\\partial\\omega_A}{\\partial\\omega} - T_B \\frac{\\partial \\omega_B}{\\partial\\omega} = T_A - T_B/R = 0." }, { "math_id": 40, "text": " MA = \\frac{T_B}{T_A} = R." }, { "math_id": 41, "text": " Q^* = -(M\\mathbf{A}) \\cdot \\frac{\\partial \\mathbf{V}}{\\partial \\dot{q}} - ([I_R]\\alpha+ \\omega\\times[I_R]\\omega) \\cdot \\frac{\\partial \\boldsymbol{\\omega}}{\\partial \\dot{q}}." }, { "math_id": 42, "text": " T = \\frac{1}{2} M \\mathbf{V} \\cdot \\mathbf{V} + \\frac{1}{2} \\boldsymbol{\\omega} \\cdot [I_R] \\boldsymbol{\\omega}," }, { "math_id": 43, "text": " Q^* = -\\left(\\frac{d}{dt} \\frac{\\partial T}{\\partial \\dot{q}} -\\frac{\\partial T}{\\partial q}\\right)." }, { "math_id": 44, "text": "T = \\sum_{i=1}^n \\left(\\frac{1}{2} M \\mathbf{V}_i \\cdot \\mathbf{V}_i + \\frac{1}{2} \\boldsymbol{\\omega}_i \\cdot [I_R] \\boldsymbol{\\omega}_i\\right)," }, { "math_id": 45, "text": " Q^*_j = -\\left(\\frac{d}{dt} \\frac{\\partial T}{\\partial \\dot{q}_j} -\\frac{\\partial T}{\\partial q_j}\\right), \\quad j=1, \\ldots, m." }, { "math_id": 46, "text": " \\delta W = (Q_1 + Q^*_1)\\delta q_1 + \\dots + (Q_m + Q^*_m)\\delta q_m = 0," }, { "math_id": 47, "text": " Q_j + Q^*_j = 0, \\quad j=1, \\ldots, m," }, { "math_id": 48, "text": " \\frac{d}{dt} \\frac{\\partial T}{\\partial \\dot{q}_j} -\\frac{\\partial T}{\\partial q_j} = Q_j, \\quad j=1,\\ldots,m." }, { "math_id": 49, "text": " \\frac{d}{dt} \\frac{\\partial T}{\\partial \\dot{q}_j} -\\frac{\\partial T}{\\partial q_j} = -\\frac{\\partial V}{\\partial q_j}, \\quad j=1,\\ldots,m." }, { "math_id": 50, "text": " \\frac{d}{dt} \\frac{\\partial L}{\\partial \\dot{q}_j} - \\frac{\\partial L}{\\partial q_j} = 0 \\quad j=1,\\ldots,m." 
}, { "math_id": 51, "text": " \\boldsymbol{\\sigma} " }, { "math_id": 52, "text": " \\boldsymbol{\\epsilon} " }, { "math_id": 53, "text": " \\mathbf {u}^* " }, { "math_id": 54, "text": " \\boldsymbol{\\epsilon}^* " }, { "math_id": 55, "text": " F_A " }, { "math_id": 56, "text": " F_B " }, { "math_id": 57, "text": " F_B \\left( u^* + \\frac{ \\partial u^*}{\\partial x} dx \\right ) - F_A u^* \\approx \\frac{ \\partial u^* }{\\partial x} \\sigma dV + u^* \\frac{ \\partial \\sigma }{\\partial x} dV = \\epsilon^* \\sigma dV - u^* f dV " }, { "math_id": 58, "text": " \\frac{ \\partial \\sigma }{\\partial x}+f=0 " }, { "math_id": 59, "text": "\\int_{V} \\boldsymbol{\\epsilon}^{*T} \\boldsymbol{\\sigma} \\, dV " }, { "math_id": 60, "text": " \\int_{S} \\mathbf u \\cdot \\mathbf T dS = \\int_{S} \\mathbf u \\cdot \\boldsymbol \\sigma \\cdot \\mathbf n dS " }, { "math_id": 61, "text": " \\int_S \\mathbf{u \\cdot \\boldsymbol \\sigma \\cdot n} dS = \\int_V \\nabla \\cdot \\left( \\mathbf{u} \\cdot \\boldsymbol \\sigma \\right) dV " }, { "math_id": 62, "text": "\\begin{align}\n\\int_V \\nabla \\cdot \\left( \\mathbf{u} \\cdot \\boldsymbol \\sigma \\right) dV \n &= \\int_V \\frac{\\partial}{\\partial x_j} \\left( u_i \\sigma_{ij} \\right) dV \\\\\n &= \\int_V \\left( \\frac{\\partial u_i}{\\partial x_j} \\sigma_{ij} + u_i \\frac{\\partial \\sigma_{ij}}{\\partial x_j}\\right) dV\n\\end{align}" }, { "math_id": 63, "text": " \\frac{\\partial \\sigma_{ij}}{\\partial x_j} + f_i = 0 " }, { "math_id": 64, "text": "\\int_V \\left(\\frac{\\partial u_i}{\\partial x_j} \\sigma_{ij} + u_i \\frac{\\partial \\sigma_{ij}}{\\partial x_j}\\right) dV\n = \\int_V \\left(\\frac{\\partial u_i}{\\partial x_j} \\sigma_{ij} - u_i f_i\\right) dV" }, { "math_id": 65, "text": "\\begin{align}\n\\int_V\\left( \\frac{\\partial u_i}{\\partial x_j} \\sigma_{ij} - u_i f_i\\right) dV\n &= \\int_V\\left( \\frac12 \\left[ \\left( \\frac{\\partial u_i}{\\partial x_j} + \\frac{\\partial u_j}{\\partial x_i} \\right) \n + \\left( \\frac{\\partial u_i}{\\partial x_j} - \\frac{\\partial u_j}{\\partial x_i} \\right) \\right] \\sigma_{ij} - u_i f_i \\right) dV \\\\\n &= \\int_V \\left( \\left[ \\epsilon_{ij} \n + \\frac12 \\left( \\frac{\\partial u_i}{\\partial x_j} - \\frac{\\partial u_j}{\\partial x_i} \\right) \\right] \\sigma_{ij} - u_i f_i\\right) dV \\\\\n &= \\int_V\\left( \\epsilon_{ij} \\sigma_{ij} - u_i f_i \\right) dV\\\\\n &= \\int_V \\left( \\boldsymbol\\epsilon : \\boldsymbol\\sigma - \\mathbf u \\cdot \\mathbf f \\right) dV\n\\end{align}" }, { "math_id": 66, "text": " \\boldsymbol\\epsilon " }, { "math_id": 67, "text": " \\int_{S} \\mathbf{u \\cdot T} dS = \\int_V \\boldsymbol\\epsilon : \\boldsymbol\\sigma dV - \\int_V \\mathbf u \\cdot \\mathbf f dV " }, { "math_id": 68, "text": " \\int_{S} \\mathbf{u \\cdot T} dS + \\int_V \\mathbf u \\cdot \\mathbf f dV = \\int_V \\boldsymbol\\epsilon : \\boldsymbol\\sigma dV " }, { "math_id": 69, "text": " \\delta\\ \\mathbf {u} \\equiv \\mathbf{u}^* " }, { "math_id": 70, "text": " \\delta\\ \\boldsymbol {\\epsilon} \\equiv \\boldsymbol {\\epsilon}^* " }, { "math_id": 71, "text": " S_t " }, { "math_id": 72, "text": " S_u " } ]
https://en.wikipedia.org/wiki?curid=1211986
1212009
Descriptive complexity theory
Branch of mathematical logic Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and the logic of finite structures allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them. Specifically, each logical system produces a set of queries expressible in it. The queries – when restricted to finite structures – correspond to the computational problems of traditional complexity theory. The first main result of descriptive complexity was Fagin's theorem, shown by Ronald Fagin in 1974. It established that NP is precisely the set of languages expressible by sentences of existential second-order logic; that is, second-order logic excluding universal quantification over relations, functions, and subsets. Many other classes were later characterized in such a manner. The setting. When we use the logic formalism to describe a computational problem, the input is a finite structure, and the elements of that structure are the domain of discourse. Usually the input is either a string (of bits or over an alphabet) and the elements of the logical structure represent positions of the string, or the input is a graph and the elements of the logical structure represent its vertices. The length of the input will be measured by the size of the respective structure. Whatever the structure is, we can assume that there are relations that can be tested, for example "formula_0 is true if and only if there is an edge from x to y" (in case of the structure being a graph), or "formula_1 is true if and only if the nth letter of the string is 1." These relations are the predicates for the first-order logic system. We also have constants, which are special elements of the respective structure, for example if we want to check reachability in a graph, we will have to choose two constants "s" (start) and "t" (terminal). In descriptive complexity theory we often assume that there is a total order over the elements and that we can check equality between elements. This lets us consider elements as numbers: the element x represents the number n if and only if there are formula_2 elements y with formula_3. Thanks to this we also may have the primitive predicate "bit", where formula_4 is true if only the kth bit of the binary expansion of x is 1. (We can replace addition and multiplication by ternary relations such that formula_5 is true if and only if formula_6 and formula_7 is true if and only if formula_8). Overview of characterisations of complexity classes. If we restrict ourselves to ordered structures with a successor relation and basic arithmetical predicates, then we get the following characterisations: Sub-polynomial time. FO without any operators. In circuit complexity, first-order logic with arbitrary predicates can be shown to be equal to AC0, the first class in the AC hierarchy. Indeed, there is a natural translation from FO's symbols to nodes of circuits, with formula_9 being formula_10 and formula_11 of size n. 
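The encoding just described can be made concrete with a small, purely illustrative Python sketch (the function names are invented here, and the formula is an arbitrary example). A bit string is treated as a structure whose elements are its positions, with the predicate formula_1 and the usual order; the first-order sentence "there exist positions x &lt; y with P(x) and P(y)" is then evaluated by brute force, with each existential quantifier becoming an OR over positions and a universal quantifier an AND, mirroring the translation into constant-depth, polynomial-fan-in circuits mentioned above.

```python
def P(word, n):
    """The predicate 'the nth letter of the string is 1'."""
    return word[n] == "1"

def has_two_ones(word):
    """Evaluate the FO sentence  exists x exists y ( x < y and P(x) and P(y) )
    over the word structure; quantifiers range over positions."""
    positions = range(len(word))
    return any(                      # exists x  -> OR over positions
        any(                         # exists y  -> OR over positions
            x < y and P(word, x) and P(word, y)
            for y in positions)
        for x in positions)

print(has_two_ones("01001"))   # True  (ones at positions 1 and 4)
print(has_two_ones("00100"))   # False (only one 1)
```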
First-order logic in a signature with arithmetical predicates characterises the restriction of the AC0 family of circuits to those constructible in alternating logarithmic time. First-order logic in a signature with only the order relation corresponds to the set of star-free languages. Transitive closure logic. First-order logic gains substantially in expressive power when it is augmented with an operator that computes the transitive closure of a binary relation. The resulting transitive closure logic is known to characterise non-deterministic logarithmic space (NL) on ordered structures. This was used by Immerman to show that NL is closed under complement (i.e. that NL = co-NL). When restricting the transitive closure operator to deterministic transitive closure, the resulting logic exactly characterises logarithmic space on ordered structures. Second-order Krom formulae. On structures that have a successor function, NL can also be characterised by second-order Krom formulae. SO-Krom is the set of boolean queries definable with second-order formulae in conjunctive normal form such that the first-order quantifiers are universal and the quantifier-free part of the formula is in Krom form, which means that the first-order formula is a conjunction of disjunctions, and in each "disjunction" there are at most two variables. Every second-order Krom formula is equivalent to an existential second-order Krom formula. SO-Krom characterises NL on structures with a successor function. Polynomial time. On ordered structures, first-order least fixed-point logic captures PTIME: First-order least fixed-point logic. FO[LFP] is the extension of first-order logic by a least fixed-point operator, which expresses the fixed-point of a monotone expression. This augments first-order logic with the ability to express recursion. The Immerman–Vardi theorem, proved independently by Immerman and Vardi, shows that FO[LFP] characterises PTIME on ordered structures. As of 2022, it is still open whether there is a natural logic characterising PTIME on unordered structures. The Abiteboul–Vianu theorem states that FO[LFP]=FO[PFP] on all structures if and only if FO[LFP]=FO[PFP] on ordered structures, and hence if and only if P=PSPACE. This result has been extended to other fixpoints. Second-order Horn formulae. In the presence of a successor function, PTIME can also be characterised by second-order Horn formulae. SO-Horn is the set of boolean queries definable with SO formulae in conjunctive normal form such that the first-order quantifiers are all universal and the quantifier-free part of the formula is in Horn form, which means that it is a big AND of OR, and in each "OR" every variable except possibly one is negated. This class is equal to P on structures with a successor function. Those formulae can be transformed into prenex formulas in existential second-order Horn logic. Non-deterministic polynomial time. Fagin's theorem. Ronald Fagin's 1974 proof that the complexity class NP was characterised exactly by those classes of structures axiomatizable in existential second-order logic was the starting point of descriptive complexity theory. Since the complement of an existential formula is a universal formula, it follows immediately that co-NP is characterized by universal second-order logic. SO, unrestricted second-order logic, is equal to the Polynomial hierarchy PH.
More precisely, we have the following generalisation of Fagin's theorem: The set of formulae in prenex normal form where existential and universal quantifiers of second order alternate "k" times characterise the "k"th level of the polynomial hierarchy. Unlike most other characterisations of complexity classes, Fagin's theorem and its generalisation do not presuppose a total ordering on the structures. This is because existential second-order logic is itself sufficiently expressive to refer to the possible total orders on a structure using second-order variables. Beyond NP. Partial fixed point is PSPACE. The class of all problems computable in polynomial space, PSPACE, can be characterised by augmenting first-order logic with a more expressive partial fixed-point operator. Partial fixed-point logic, FO[PFP], is the extension of first-order logic with a partial fixed-point operator, which expresses the fixed-point of a formula if there is one and returns 'false' otherwise. Partial fixed-point logic characterises PSPACE on ordered structures. Transitive closure is PSPACE. Second-order logic can be extended by a transitive closure operator in the same way as first-order logic, resulting in SO[TC]. The TC operator can now also take second-order variables as argument. SO[TC] characterises PSPACE. Since ordering can be referenced in second-order logic, this characterisation does not presuppose ordered structures. Elementary functions. The time complexity class ELEMENTARY of elementary functions can be characterised by HO, the complexity class of structures that can be recognized by formulas of higher-order logic. Higher-order logic is an extension of first-order logic and second-order logic with higher-order quantifiers. There is a relation between the formula_12th order and non-deterministic algorithms the time of which is bounded by formula_13 levels of exponentials. Definition. We define higher-order variables. A variable of order formula_14 has an arity formula_15 and represents any set of formula_15-tuples of elements of order formula_13. They are usually written in upper-case and with a natural number as exponent to indicate the order. Higher-order logic is the set of first-order formulae where we add quantification over higher-order variables; hence we will use the terms defined in the FO article without defining them again. HOformula_16 is the set of formulae with variables of order at most formula_12. HOformula_17 is the subset of formulae of the form formula_18, where formula_19 is a quantifier and formula_20 means that formula_21 is a tuple of variable of order formula_12 with the same quantification. So HOformula_17 is the set of formulae with formula_22 alternations of quantifiers of order formula_12, beginning with formula_23, followed by a formula of order formula_13. Using the standard notation of the tetration, formula_24 and formula_25. formula_26 with formula_12 times formula_27 Normal form. Every formula of order formula_12th is equivalent to a formula in prenex normal form, where we first write quantification over variable of formula_12th order and then a formula of order formula_13 in normal form. Relation to complexity classes. HO is equal to the class ELEMENTARY of elementary functions. To be more precise, formula_28, meaning a tower of formula_29 2s, ending with formula_30, where formula_31 is a constant. A special case of this is that formula_32, which is exactly Fagin's theorem. Using oracle machines in the polynomial hierarchy, formula_33 Notes. 
<templatestyles src="Reflist/styles.css" />
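Fagin's characterisation of NP, discussed above, can also be illustrated with a small brute-force model checker. The Python sketch below is only an illustration (the function name and the example graphs are invented here): 3-colourability of a graph is expressed in the existential second-order style, with an existentially quantified "colour" assignment followed by a first-order condition ("every edge joins differently coloured vertices"). The first-order check runs in polynomial time; only the search over the second-order witness is expensive, which is the shape of computation that Fagin's theorem associates with NP.

```python
from itertools import product

def is_3_colourable(vertices, edges):
    # Existential second-order part: guess a colouring C : V -> {0, 1, 2}.
    for colours in product(range(3), repeat=len(vertices)):
        C = dict(zip(vertices, colours))
        # First-order part: for all edges (u, v), C(u) != C(v).
        if all(C[u] != C[v] for (u, v) in edges):
            return True
    return False

# A 4-cycle is 3-colourable; the complete graph K4 is not.
print(is_3_colourable([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))       # True
print(is_3_colourable([0, 1, 2, 3],
                      [(u, v) for u in range(4) for v in range(u + 1, 4)]))  # False
```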
[ { "math_id": 0, "text": "E(x,y)" }, { "math_id": 1, "text": "P(n)" }, { "math_id": 2, "text": "(n-1)" }, { "math_id": 3, "text": "y<x" }, { "math_id": 4, "text": "bit(x,k)" }, { "math_id": 5, "text": "plus(x,y,z)" }, { "math_id": 6, "text": "x+y=z" }, { "math_id": 7, "text": "times(x,y,z)" }, { "math_id": 8, "text": "x*y=z" }, { "math_id": 9, "text": "\\forall, \\exists" }, { "math_id": 10, "text": "\\land" }, { "math_id": 11, "text": "\\lor" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "i-1" }, { "math_id": 14, "text": "i>1" }, { "math_id": 15, "text": "k" }, { "math_id": 16, "text": "^i" }, { "math_id": 17, "text": "^i_j" }, { "math_id": 18, "text": "\\phi=\\exists \\overline{X^i_1}\\forall\\overline{X_2^i}\\dots Q \\overline{X_j^i}\\psi" }, { "math_id": 19, "text": "Q" }, { "math_id": 20, "text": "Q \\overline{X^i}" }, { "math_id": 21, "text": "\\overline{X^i}" }, { "math_id": 22, "text": "j" }, { "math_id": 23, "text": "\\exists" }, { "math_id": 24, "text": "\\exp_2^0(x)=x" }, { "math_id": 25, "text": " \\exp_2^{i+1}(x)=2^{\\exp_2^{i}(x)}" }, { "math_id": 26, "text": " \\exp_2^{i+1}(x)=2^{2^{2^{2^{\\dots^{2^{x}}}}}}" }, { "math_id": 27, "text": "2" }, { "math_id": 28, "text": "\\mathsf{HO}^i_0 = \\mathsf{NTIME}(\\exp_2^{i-2}(n^{O(1)}))" }, { "math_id": 29, "text": "(i-2)" }, { "math_id": 30, "text": "n^c" }, { "math_id": 31, "text": "c" }, { "math_id": 32, "text": "\\exists\\mathsf{SO}=\\mathsf{HO}^2_0=\\mathsf{NTIME}(n^{O(1)})={\\color{Blue}\\mathsf{NP}}" }, { "math_id": 33, "text": "\\mathsf{HO}^i_j={\\color{Blue}\\mathsf{NTIME}}(\\exp_2^{i-2}(n^{O(1)})^{\\Sigma_j^{\\mathsf P}})" } ]
https://en.wikipedia.org/wiki?curid=1212009
12120792
Legendre wavelet
Type of wavelet In functional analysis, compactly supported wavelets derived from Legendre polynomials are termed Legendre wavelets or spherical harmonic wavelets. Legendre functions have widespread applications in which spherical coordinate system is appropriate. As with many wavelets there is no nice analytical formula for describing these harmonic spherical wavelets. The low-pass filter associated to Legendre multiresolution analysis is a finite impulse response (FIR) filter. Wavelets associated to FIR filters are commonly preferred in most applications. An extra appealing feature is that the Legendre filters are "linear phase" FIR (i.e. multiresolution analysis associated with linear phase filters). These wavelets have been implemented on MATLAB (wavelet toolbox). Although being compactly supported wavelet, legdN are not orthogonal (but for "N" = 1). Legendre multiresolution filters. Associated Legendre polynomials are the colatitudinal part of the spherical harmonics which are common to all separations of Laplace's equation in spherical polar coordinates. The radial part of the solution varies from one potential to another, but the harmonics are always the same and are a consequence of spherical symmetry. Spherical harmonics formula_0 are solutions of the Legendre formula_1-order differential equation, "n" integer: formula_2 formula_3 polynomials can be used to define the smoothing filter formula_4 of a multiresolution analysis (MRA). Since the appropriate boundary conditions for an MRA are formula_5 and formula_6, the smoothing filter of an MRA can be defined so that the magnitude of the low-pass formula_7 can be associated to Legendre polynomials according to: formula_8 formula_9 Illustrative examples of filter transfer functions for a Legendre MRA are shown in figure 1, for formula_10 A low-pass behaviour is exhibited for the filter "H", as expected. The number of zeroes within formula_11 is equal to the degree of the Legendre polynomial. Therefore, the roll-off of side-lobes with frequency is easily controlled by the parameter formula_12. The low-pass filter transfer function is given by formula_13 The transfer function of the high-pass analysing filter formula_14 is chosen according to Quadrature mirror filter condition, yielding: formula_15 Indeed, formula_16 and formula_17, as expected. Legendre multiresolution filter coefficients. A suitable phase assignment is done so as to properly adjust the transfer function formula_18 to the form formula_19 The filter coefficients formula_20 are given by: formula_21 from which the symmetry: formula_22 follows. There are just formula_23 non-zero filter coefficients on formula_24, so that the Legendre wavelets have compact support for every odd integer formula_12. "Table I - Smoothing Legendre FIR filter coefficients for formula_25 (formula_26 is the wavelet order.)" N.B. The minus signal can be suppressed. MATLAB implementation of Legendre wavelets. Legendre wavelets can be easily loaded into the MATLAB wavelet toolbox—The m-files to allow the computation of Legendre wavelet transform, details and filter are (freeware) available. The finite support width Legendre family is denoted by legd (short name). Wavelets: 'legdN'. The parameter "N" in the legdN family is found according to formula_27 (length of the MRA filters). Legendre wavelets can be derived from the low-pass reconstruction filter by an iterative procedure (the cascade algorithm). The wavelet has compact support and finite impulse response AMR filters (FIR) are used (table 1). 
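The filter coefficients above are straightforward to generate. The following Python sketch is an independent illustration (not the MATLAB toolbox code; the function name is invented here): it evaluates formula_21 for ν = 1, 3, 5 and checks the symmetry formula_22 and the normalisation |H_ν(0)| = 1, which amounts to the coefficients summing to −√2 under the sign convention noted below Table I.

```python
import math

def legendre_filter(nu):
    """Smoothing (low-pass) filter coefficients for odd nu:
    h_k = -(sqrt(2) / 2**(2*nu)) * C(2k, k) * C(2*nu - 2k, nu - k)."""
    return [-math.sqrt(2) / 2 ** (2 * nu)
            * math.comb(2 * k, k) * math.comb(2 * nu - 2 * k, nu - k)
            for k in range(nu + 1)]

for nu in (1, 3, 5):
    h = legendre_filter(nu)
    assert all(math.isclose(h[k], h[nu - k]) for k in range(nu + 1))  # linear-phase symmetry
    assert math.isclose(sum(h), -math.sqrt(2))                        # |H(0)| = 1
    print(nu, [round(c, 4) for c in h])
# 1 [-0.7071, -0.7071]                                   (the Haar filter)
# 3 [-0.4419, -0.2652, -0.2652, -0.4419]
# 5 [-0.348, -0.1933, -0.1657, -0.1657, -0.1933, -0.348]
```

The ν = 1 case reproduces the Haar filter, consistent with the remark below that the first wavelet of the Legendre family is the Haar wavelet.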
The first wavelet of the Legendre family is exactly the well-known Haar wavelet. Figure 2 shows an emerging pattern that progressively looks like the wavelet's shape. The Legendre wavelet shape can be visualised using the wavemenu command of MATLAB. Figure 3 shows the legd8 wavelet displayed using MATLAB. Legendre polynomials are also associated with window function families. Legendre wavelet packets. Wavelet packet (WP) systems derived from Legendre wavelets can also be easily constructed. Figure 5 illustrates the WP functions derived from legd2. References. <templatestyles src="Reflist/styles.css" />
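The low-pass behaviour claimed earlier for formula_9 (with P_ν(1) = 1 in the denominator) can also be checked numerically. The sketch below is illustrative only and uses NumPy's Legendre evaluation rather than the wavelet toolbox; it confirms |H_ν(0)| = 1 and |H_ν(π)| = 0 for odd ν, as in figure 1.

```python
import numpy as np
from numpy.polynomial import legendre

def H_mag(nu, omega):
    """|H_nu(omega)| = |P_nu(cos(omega/2))| for the Legendre smoothing filter."""
    coeffs = np.zeros(nu + 1)
    coeffs[nu] = 1.0                 # select the degree-nu Legendre polynomial
    return np.abs(legendre.legval(np.cos(omega / 2.0), coeffs))

for nu in (1, 3, 5):
    print(nu, round(H_mag(nu, 0.0), 6), round(H_mag(nu, np.pi), 6))
# 1 1.0 0.0
# 3 1.0 0.0
# 5 1.0 0.0
```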
[ { "math_id": 0, "text": "P_n(z)" }, { "math_id": 1, "text": "2^{nd}" }, { "math_id": 2, "text": "\\left (1-z^2 \\right ) \\frac {d^2y} {dz^2} - 2z \\frac {dy} {dz} + n(n+1)y=0." }, { "math_id": 3, "text": "P_n(\\cos(\\theta))" }, { "math_id": 4, "text": "H(\\omega)" }, { "math_id": 5, "text": "|H(0)|=1" }, { "math_id": 6, "text": "|H(\\pi)|=0" }, { "math_id": 7, "text": "|H(\\omega)|" }, { "math_id": 8, "text": "\\nu=2n+1." }, { "math_id": 9, "text": "|H_{\\nu}(\\omega)|= \\left | \\frac {P_{\\nu} \\left ( \\cos \\left ( \\frac{\\omega}{2} \\right ) \\right ) } {P_{\\nu} \\cos (0)} \\right |" }, { "math_id": 10, "text": "\\nu=1,3,5." }, { "math_id": 11, "text": "- \\pi < \\omega < \\pi" }, { "math_id": 12, "text": "\\nu" }, { "math_id": 13, "text": "H_{\\nu} (\\omega)=-e^{-j \\nu \\frac {\\omega - \\pi} {2}} P_{\\nu} \\left ( \\cos \\left ( \\tfrac{\\omega}{2} \\right ) \\right )" }, { "math_id": 14, "text": "G_{\\nu} (\\omega)" }, { "math_id": 15, "text": "H_{\\nu} (\\omega)=-e^{-j {(\\nu-2)} \\frac {\\omega} {2}} P_{\\nu} \\left ( \\sin \\left ( \\tfrac{\\omega}{2} \\right ) \\right )" }, { "math_id": 16, "text": "|G_{\\nu}(0)|=0" }, { "math_id": 17, "text": "|G_{\\nu}( \\pi)|=1" }, { "math_id": 18, "text": "H_{\\nu} (\\omega)" }, { "math_id": 19, "text": "H_{\\nu} (\\omega)= \\frac {1} {\\sqrt {2}} \\sum_{k \\in Z} h_k^{\\nu} e^{-j \\omega k}" }, { "math_id": 20, "text": "\\{ h_k \\}_{k \\in \\Z}" }, { "math_id": 21, "text": "h_k^{\\nu}= - \\frac {\\sqrt {2}} {2^{2 \\nu}} \\binom{2k}{k} \\binom{2 \\nu -2k}{\\nu -k}" }, { "math_id": 22, "text": "{h_k^{\\nu}}={h_{\\nu -k}^{\\nu}}," }, { "math_id": 23, "text": "\\nu+1" }, { "math_id": 24, "text": "H_n (\\omega)" }, { "math_id": 25, "text": "\\nu=1,3,5" }, { "math_id": 26, "text": "N" }, { "math_id": 27, "text": "2N = \\nu+1" } ]
https://en.wikipedia.org/wiki?curid=12120792
12123967
Heat loss due to linear thermal bridging
Used to calculate the energy performance of buildings The heat loss due to linear thermal bridging (formula_0) is a physical quantity used when calculating the energy performance of buildings. It appears in both United Kingdom and Irish methodologies. Calculation. The calculation of the heat loss due to linear thermal bridging is relatively simple, given by the formula below: formula_1 In the formula, formula_2 if Accredited Construction Details are used, and formula_3 otherwise, and formula_4 is the sum of all the exposed areas of the building envelope. References. <templatestyles src="Reflist/styles.css" />
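The calculation of formula_1 is easy to mechanise. The short Python sketch below is an illustration only, not an official implementation of the UK or Irish methodology, and the function name and the example areas are invented; it simply applies the formula with y = 0.08 when Accredited Construction Details are used and y = 0.15 otherwise.

```python
def heat_loss_linear_thermal_bridging(exposed_areas_m2, accredited_details=False):
    """H_TB = y * sum(A_exp), with y = 0.08 when Accredited Construction
    Details are used and y = 0.15 otherwise (result in W/K for areas in m^2)."""
    y = 0.08 if accredited_details else 0.15
    return y * sum(exposed_areas_m2)

# Example: wall, roof, floor and glazing areas of a small dwelling (m^2).
areas = [85.0, 48.0, 48.0, 12.5]
print(heat_loss_linear_thermal_bridging(areas, accredited_details=True))   # 15.48
print(heat_loss_linear_thermal_bridging(areas))                            # 29.025
```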
[ { "math_id": 0, "text": "H_{TB}" }, { "math_id": 1, "text": "H_{TB} = y \\sum A_{exp}" }, { "math_id": 2, "text": "y = 0.08" }, { "math_id": 3, "text": "y = 0.15" }, { "math_id": 4, "text": "\\sum A_{exp}" } ]
https://en.wikipedia.org/wiki?curid=12123967
1212524
Bloom syndrome
Genetic disorder Medical condition Bloom syndrome (often abbreviated as BS in literature) is a rare autosomal recessive genetic disorder characterized by short stature, predisposition to the development of cancer, and genomic instability. BS is caused by mutations in the "BLM" gene which is a member of the RecQ DNA helicase family. Mutations in genes encoding other members of this family, namely "WRN" and "RECQL4", are associated with the clinical entities Werner syndrome and Rothmund–Thomson syndrome, respectively. More broadly, Bloom syndrome is a member of a class of clinical entities that are characterized by chromosomal instability, genomic instability, or both and by cancer predisposition. Cells from a person with Bloom syndrome exhibit a striking genomic instability that includes excessive crossovers between homologous chromosomes and sister chromatid exchanges (SCEs). The condition was discovered and first described by New York dermatologist Dr. David Bloom in 1954. Bloom syndrome has also appeared in the older literature as Bloom–Torre–Machacek syndrome. Presentation. The most prominent feature of Bloom syndrome is proportional small size. The small size is apparent in utero. At birth, neonates exhibit rostral to caudal lengths, head circumferences, and birth weights that are typically below the third percentile. The second most commonly noted feature is a rash on the face that develops early in life as a result of sun exposure. The facial rash appears most prominently on the cheeks, nose, and around the lips. It is described as erythematous, that is red and inflamed, and telangiectatic, that is characterized by dilated blood vessels at the skin's surface. The rash commonly also affects the backs of the hands and neck, and it can develop on any other sun-exposed areas of the skin. The rash is variably expressed, being present in a majority but not all persons with Bloom syndrome, and it is on average less severe in females than in males. Moreover, the sun sensitivity can resolve in adulthood. There are other dermatologic changes, including hypo-pigmented and hyper-pigmented areas, cafe-au-lait spots, and telangiectasias, which can appear on the face and on the ocular surface. There is a characteristic facial appearance that includes a long, narrow face; prominent nose, cheeks, and ears; and micrognathism or undersized jaw. The voice is high-pitched and squeaky. There are a variety of other features that are commonly associated with Bloom syndrome. There is a moderate immune deficiency, characterized by deficiency in certain immunoglobulin classes and a generalized proliferative defect of B and T cells. The immune deficiency is thought to be the cause of recurrent pneumonia and middle ear infections in persons with the syndrome. Infants can exhibit frequent gastrointestinal upsets, with reflux, vomiting, and diarrhea, and there is a remarkable lack in interest in food. There are endocrine disturbances, particularly abnormalities of carbohydrate metabolism, insulin resistance and susceptibility to type 2 diabetes, dyslipidemia, and compensated hypothyroidism. Persons with Bloom syndrome exhibit a paucity of subcutaneous fat. There is reduced fertility, characterized by a failure in males to produce sperm (azoospermia) and premature cessation of menses (premature menopause) in females. Despite these reductions, several women with Bloom syndrome have had children, and there is a single report of a male with Bloom syndrome bearing children. 
Although some persons with Bloom syndrome can struggle in school with subjects that require abstract thought, there is no evidence that intellectual disability is more common in Bloom syndrome than in other people. The most serious and frequent complication of Bloom syndrome is cancer. In the 281 persons followed by the Bloom Syndrome Registry, 145 persons (51.6%) have been diagnosed with a malignant neoplasm, and there have been 227 malignancies. The types of cancer and the anatomic sites at which they develop resemble the cancers that affect persons in the general population. The age of diagnosis for these cancers is earlier than for the same cancer in normal persons, and many persons with Bloom syndrome have been diagnosed with multiple cancers. The average life span is approximately 27 years. The most common cause of death in Bloom syndrome is from cancer. Other complications of the disorder include chronic obstructive lung disease and type 2 diabetes. There are a variety of excellent sources for more detailed clinical information about Bloom syndrome. There is a closely related entity that is now referred to as Bloom-syndrome-like disorder (BSLD) which is caused by mutations in components of the same protein complex to which the "BLM" gene product belongs, including "TOP3A", which encodes the type I topoisomerase, topoisomerase 3 alpha, "RMI1", and "RMI2". The features of BSLD include small size and dermatologic findings, such as cafe-au-lait spots, and the presence of the once pathognomonic elevated SCEs is reported for persons with mutations in "TOP3A" and "RMI1". Bloom syndrome shares some features with Fanconi anemia possibly because there is overlap in the function of the proteins mutated in this related disorder. Genetics. Bloom syndrome is an autosomal recessive disorder, caused by mutations in the maternally- and paternally-derived copies of the gene "BLM". As in other autosomal recessive conditions, the parents of an individual with Bloom syndrome do not necessarily exhibit any features of the syndrome. The mutations in BLM associated with Bloom syndrome are nulls and missense mutations that are catalytically inactive. The cells from persons with Bloom syndrome exhibit a striking genomic instability that is characterized by hyper-recombination and hyper-mutation. Human BLM cells are sensitive to DNA damaging agents such as UV and methyl methanesulfonate, indicating deficient repair capability. At the level of the chromosomes, the rate of sister chromatid exchange in Bloom's syndrome is approximately 10 fold higher than normal and quadriradial figures, which are the cytologic manifestations of crossing-over between homologous chromosome, are highly elevated. Other chromosome manifestations include chromatid breaks and gaps, telomere associations, and fragmented chromosomes. The hyper-recombination can also be detected by molecular assays The "BLM" gene encodes a member of the protein family referred to as RecQ helicases. The diffusion of BLM has been measured to 1.34 formula_0 in nucleoplasm and 0.13 formula_1 at nucleoli DNA helicases are enzymes that attach to DNA and temporarily unravel the double helix of the DNA molecule. DNA helicases function in DNA replication and DNA repair. BLM very likely functions in DNA replication, as cells from persons with Bloom syndrome exhibit multiple defects in DNA replication, and they are sensitive to agents that obstruct DNA replication. 
The BLM helicase is a member of a protein complex with topoisomerase III alpha, RMI1 and RMI2, also known as BTRR, the Bloom Syndrome complex, or the dissolvasome. Disruption of the proper assembly of the Bloom Syndrome complex leads to genomic instability, genetic dependence on the cellular nucleases GEN1 and MUS81, and loss of normal cell growth. Bloom-like phenotypes have been associated with mutations in the topoisomerase III alpha, RMI1 and RMI2 genes. Relationship to cancer and aging. As noted above, there is a greatly elevated rate of mutation in Bloom syndrome, and the genomic instability is associated with a high risk of cancer in affected individuals. The cancer predisposition is characterized by 1) a broad spectrum of cancers, including leukemias, lymphomas, and carcinomas, 2) an early age of onset relative to the same cancer in the general population, and 3) multiplicity, that is, synchronous or metachronous cancers. There is at least one person with Bloom syndrome who had five independent primary cancers. Persons with Bloom syndrome may develop cancer at any age. The average age at cancer diagnosis in the cohort is approximately 26 years. Pathophysiology. When a cell prepares to divide to form two cells, the chromosomes are duplicated so that each new cell will get a complete set of chromosomes. The duplication process is called DNA replication. Errors made during DNA replication can lead to mutations. The BLM protein is important in maintaining the stability of the DNA during the replication process. Lack of BLM protein or of its activity leads to an increase in mutations; however, the molecular mechanisms by which BLM maintains the stability of the chromosomes are still a very active area of research. Persons with Bloom syndrome have an enormous increase in exchange events between homologous chromosomes or sister chromatids (the two DNA molecules that are produced by the DNA replication process), and there are increases in chromosome breakage and rearrangement compared to persons who do not have Bloom syndrome. Direct connections between the molecular processes in which BLM operates and the chromosomes themselves are under investigation. The relationships between molecular defects in Bloom syndrome cells, the chromosome mutations that accumulate in somatic cells (the cells of the body), and the many clinical features seen in Bloom syndrome are also areas of intense research. Diagnosis. Bloom syndrome is diagnosed using any of three tests: the presence of quadriradials (Qr, four-armed chromatid interchanges) in cultured blood lymphocytes, elevated levels of sister chromatid exchange in cells of any type, and/or mutations in the BLM gene. The US Food and Drug Administration (FDA) announced on February 19, 2015, that it had authorized marketing of a direct-to-consumer genetic test from 23andMe. The test is designed to identify healthy individuals who carry a "BLM" mutation that could cause Bloom syndrome in their offspring. Treatment. Bloom syndrome has no specific treatment; however, avoiding sun exposure and using sunscreens can help prevent some of the cutaneous changes associated with photosensitivity. Efforts to minimize exposure to other known environmental mutagens are also advisable. Epidemiology. Bloom syndrome is an extremely rare disorder, and its frequency has not been measured in most populations. However, it is relatively more common amongst people of Central and Eastern European Ashkenazi Jewish background.
Approximately 1 in 48,000 Ashkenazi Jews is affected by Bloom syndrome; affected Ashkenazi Jews account for about one-third of affected individuals worldwide. Bloom's Syndrome Registry. The Bloom's Syndrome Registry lists 283 individuals reported to have this rare disorder (as of 2020), collected from the time the condition was first recognized in 1954. The registry was developed as a surveillance mechanism to observe the effects of cancer in these patients, and it has shown that 122 individuals have been diagnosed with cancer. It also serves as a report of current findings and data on all aspects of the disorder. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\tfrac{\\mathrm{\\mu m}^2}{\\mathrm{s}} " }, { "math_id": 1, "text": " \\textstyle \\tfrac{\\mathrm{\\mu m}^2}{\\mathrm{s}} " } ]
https://en.wikipedia.org/wiki?curid=1212524
12130843
Dynkin's formula
Theorem in stochastic analysis In mathematics — specifically, in stochastic analysis — Dynkin's formula is a theorem giving the expected value of any suitably smooth function applied to a Feller process at a stopping time. It may be seen as a stochastic generalization of the (second) fundamental theorem of calculus. It is named after the Russian mathematician Eugene Dynkin. Statement of the theorem. Let formula_0 be a Feller process with infinitesimal generator formula_1. For a point formula_2 in the state-space of formula_0, let formula_3 denote the law of formula_0 given initial datum formula_4, and let formula_5 denote expectation with respect to formula_3. Then for any function formula_6 in the domain of formula_1, and any stopping time formula_7 with formula_8, Dynkin's formula holds: formula_9 Example: Itô diffusions. Let formula_0 be the formula_10-valued Itô diffusion solving the stochastic differential equation formula_11 The infinitesimal generator formula_1 of formula_0 is defined by its action on compactly-supported formula_12 (twice differentiable with continuous second derivative) functions formula_13 as formula_14 or, equivalently, formula_15 Since this formula_0 is a Feller process, Dynkin's formula holds. In fact, if formula_7 is the first exit time of a bounded set formula_16 with formula_8, then Dynkin's formula holds for all formula_12 functions formula_6, without the assumption of compact support. Application: Brownian motion exiting the ball. Dynkin's formula can be used to find the expected first exit time formula_17 of a Brownian motion formula_18 from the closed ball formula_19 which, when formula_18 starts at a point formula_20 in the interior of formula_21, is given by formula_22 This is shown as follows. Fix an integer "j". The strategy is to apply Dynkin's formula with formula_23, formula_24, and a compactly-supported formula_25 with formula_26 on formula_21. The generator of Brownian motion is formula_27, where formula_28 denotes the Laplacian operator. Therefore, by Dynkin's formula, formula_29 Hence, for any formula_30, formula_31 Now let formula_32 to conclude that formula_33 almost surely, and so formula_34 as claimed. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources
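The exit-time identity just derived lends itself to a direct numerical check. The following minimal Python sketch (assuming NumPy; the dimension, radius, starting point, time step and number of sample paths are illustrative choices, not values from the article) simulates discretized Brownian paths until they leave the ball and compares the average exit time with (R^2 − |a|^2)/n.

import numpy as np

# Monte Carlo check of E^a[tau_K] = (R^2 - |a|^2)/n for Brownian motion
# started at a inside the closed ball of radius R in R^n.  Dimension,
# radius, starting point, step size and path count are illustrative choices.
rng = np.random.default_rng(0)
n, R = 3, 1.0
a = np.array([0.5, 0.0, 0.0])      # starting point with |a| < R
dt, n_paths = 1e-4, 1000

exit_times = np.empty(n_paths)
for i in range(n_paths):
    x, t = a.copy(), 0.0
    while x @ x < R**2:            # step until the path leaves the ball
        x += np.sqrt(dt) * rng.standard_normal(n)
        t += dt
    exit_times[i] = t

print("simulated E[tau] :", exit_times.mean())
print("(R^2 - |a|^2)/n  :", (R**2 - a @ a) / n)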
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "\\mathbf P^x" }, { "math_id": 4, "text": "X_0=x" }, { "math_id": 5, "text": "\\mathbf E^x" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "\\tau" }, { "math_id": 8, "text": "\\mathbf E[\\tau]<+\\infty" }, { "math_id": 9, "text": "\n\\mathbf{E}^{x} [f(X_{\\tau})] = f(x) + \\mathbf{E}^{x} \\left[ \\int_{0}^{\\tau} A f (X_{s}) \\, \\mathrm{d} s \\right].\n" }, { "math_id": 10, "text": "\\mathbf R^n" }, { "math_id": 11, "text": "\\mathrm{d} X_{t} = b(X_{t}) \\, \\mathrm{d} t + \\sigma (X_{t}) \\, \\mathrm{d} B_{t}." }, { "math_id": 12, "text": "C^2" }, { "math_id": 13, "text": "f:\\mathbf R^n \\to \\mathbf R" }, { "math_id": 14, "text": "A f (x) = \\lim_{t \\downarrow 0} \\frac{\\mathbf{E}^{x} [f(X_{t})] - f(x)}{t}" }, { "math_id": 15, "text": "A f (x) = \\sum_{i} b_{i} (x) \\frac{\\partial f}{\\partial x_{i}} (x) + \\frac1{2} \\sum_{i, j} \\big( \\sigma \\sigma^{\\top} \\big)_{i, j} (x) \\frac{\\partial^{2} f}{\\partial x_{i}\\, \\partial x_{j}} (x)." }, { "math_id": 16, "text": "B\\subset\\mathbf R^n" }, { "math_id": 17, "text": "\\tau_K" }, { "math_id": 18, "text": "B" }, { "math_id": 19, "text": "K= \\{ x \\in \\mathbf{R}^{n} : \\, | x | \\leq R \\}," }, { "math_id": 20, "text": "a" }, { "math_id": 21, "text": "K" }, { "math_id": 22, "text": "\\mathbf{E}^{a} [\\tau_{K}] = \\frac1{n} \\big( R^{2} - | a |^{2} \\big)." }, { "math_id": 23, "text": "X=B" }, { "math_id": 24, "text": "\\tau=\\sigma_j=\\min\\{j,\\tau_K\\}" }, { "math_id": 25, "text": "f\\in C^2" }, { "math_id": 26, "text": "f(x)=|x|^2" }, { "math_id": 27, "text": "\\Delta/2" }, { "math_id": 28, "text": "\\Delta" }, { "math_id": 29, "text": "\\begin{align}\n\\mathbf{E}^{a} \\left[ f \\big( B_{\\sigma_{j}} \\big) \\right]\n&= f(a) + \\mathbf{E}^{a} \\left[ \\int_{0}^{\\sigma_{j}} \\frac1{2} \\Delta f (B_{s}) \\, \\mathrm{d} s \\right] \\\\\n&= | a |^{2} + \\mathbf{E}^{a} \\left[ \\int_{0}^{\\sigma_{j}} n \\, \\mathrm{d} s \\right]\n= | a |^{2} + n \\mathbf{E}^{a} [\\sigma_{j}].\n\\end{align}" }, { "math_id": 30, "text": "j" }, { "math_id": 31, "text": "\\mathbf{E}^{a} [\\sigma_{j}] \\leq \\frac1{n} \\big( R^{2} - | a |^{2} \\big)." }, { "math_id": 32, "text": "j\\to+\\infty" }, { "math_id": 33, "text": "\\tau_K=\\lim_{j\\to+\\infty}\\sigma_j<+\\infty" }, { "math_id": 34, "text": "\\mathbf{E}^{a} [\\tau_{K}] =( R^{2} - | a |^{2})/n" } ]
https://en.wikipedia.org/wiki?curid=12130843
12131888
Bloch oscillation
Bloch oscillation is a phenomenon in solid-state physics. It describes the oscillation of a particle (e.g. an electron) confined in a periodic potential when a constant force is acting on it. It was first pointed out by Felix Bloch and Clarence Zener while studying the electrical properties of crystals. In particular, they predicted that the motion of electrons in a perfect crystal under the action of a constant electric field would be oscillatory instead of uniform. While in natural crystals this phenomenon is extremely hard to observe due to the scattering of electrons by lattice defects, it has been observed in semiconductor superlattices and in different physical systems such as cold atoms in an optical potential and ultrasmall Josephson junctions. Derivation. The one-dimensional equation of motion for an electron with wave vector formula_0 in a constant electric field formula_1 is: formula_2 which has the solution formula_3 The group velocity formula_4 of the electron is given by formula_5 where formula_6 denotes the dispersion relation for the given energy band. Suppose that the latter has the (tight-binding) form formula_7 where formula_8 is the lattice parameter and formula_9 is a constant. Then formula_10 is given by formula_11 and the electron position formula_12 can be computed as a function of time: formula_13 This shows that the electron oscillates in real space. The angular frequency of the oscillations is given by formula_14. Discovery and experimental realizations. Bloch oscillations were predicted by Nobel laureate Felix Bloch in 1929. However, they were not experimentally observed for a long time, because in natural solid-state bodies formula_15 is (even with very high electric field strengths) not large enough to allow for full oscillations of the charge carriers within the scattering and tunneling times, due to the relatively small lattice periods. The development of semiconductor technology has led to the fabrication of artificial superlattice structures whose periods are sufficiently large. In such structures the oscillation period is shorter than the scattering time of the electrons, so several oscillations can be observed within the time window set by scattering. The first experimental observation of Bloch oscillations in such superlattices, at very low temperatures, was reported by Jochen Feldmann and Karl Leo in 1992. Other realizations include cold atoms in an optical potential and ultrasmall Josephson junctions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
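To give a sense of the scales involved, the following short Python sketch evaluates the Bloch frequency, the Bloch period and the real-space oscillation amplitude for the tight-binding band used in the derivation above. It assumes SciPy for the physical constants, and the superlattice period, field strength and band parameter are illustrative values, not numbers taken from the article.

import math
from scipy.constants import e, hbar

# Bloch frequency omega_B = a*e*|E|/hbar, the corresponding period, and the
# real-space oscillation amplitude A/(e*E) for the tight-binding band
# E(k) = A*cos(a*k).  The superlattice period, field strength and band
# parameter A below are illustrative values only.
a = 10e-9            # superlattice period (m)
E_field = 1.0e6      # electric field strength (V/m)
A = 0.010 * e        # band parameter A, here 10 meV expressed in joules

omega_B = a * e * E_field / hbar       # Bloch angular frequency (rad/s)
T_B = 2 * math.pi / omega_B            # Bloch period (s)
x_amplitude = A / (e * E_field)        # amplitude of the cosine term in x(t) (m)

print(f"omega_B     = {omega_B:.3e} rad/s")
print(f"T_B         = {T_B:.3e} s")
print(f"x amplitude = {x_amplitude:.3e} m")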
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "\\frac{dp}{dt} = \\hbar \\frac{dk}{dt} = -eE," }, { "math_id": 3, "text": "k(t) = k(0) - \\frac{eE}{\\hbar} t." }, { "math_id": 4, "text": "v" }, { "math_id": 5, "text": "v(k)=\\frac{1}{\\hbar}\\frac{d\\mathcal{E}}{dk}," }, { "math_id": 6, "text": "\\mathcal{E}(k)" }, { "math_id": 7, "text": "\\mathcal{E}(k)= A \\cos{ak} ," }, { "math_id": 8, "text": "a" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "v(k)" }, { "math_id": 11, "text": "v(k) = \\frac{1}{\\hbar} \\frac{d\\mathcal{E}}{dk} = -\\frac{Aa}{\\hbar} \\sin{ak}," }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "x(t) = \\int_0^t {v(k(t'))}{dt'} = x(0) + \\frac{A}{eE} \\cos\\left(\\frac{aeE}{\\hbar}t\\right)." }, { "math_id": 14, "text": "\\omega_B = ae|E| / \\hbar" }, { "math_id": 15, "text": "\\omega_B" } ]
https://en.wikipedia.org/wiki?curid=12131888
12135521
Method of simulated moments
In econometrics, the method of simulated moments (MSM) (also called the simulated method of moments) is a structural estimation technique introduced by Daniel McFadden. It extends the generalized method of moments to cases where theoretical moment functions cannot be evaluated directly, such as when moment functions involve high-dimensional integrals. MSM's earliest and principal applications have been to research in industrial organization, after its development by Ariel Pakes, David Pollard, and others, though applications in consumption are emerging. Although the method requires the user to specify the distribution from which the simulations are to be drawn, this requirement can be relaxed through the use of an entropy-maximizing distribution. GMM vs. MSM. The GMM estimator is formula_0 where formula_1 is the moment condition and W is a weighting matrix; using the optimal W matrix leads to an efficient estimator. The MSM estimator replaces the theoretical moments with simulated ones: formula_2 where formula_3 is the simulated moment condition, which satisfies formula_4 MSM vs. Indirect Inference. MSM is a special case of Indirect Inference. While Indirect Inference allows the researcher to use any of the features of sample statistics as a basis for comparison of moments and data, the name MSM applies only when those statistics are moments of the data, i.e. averages, across the sample, of functions defined for a single sample element. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
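A toy numerical sketch of the idea, assuming NumPy and SciPy are available: the mean and standard deviation of a normal sample are estimated by matching the first two data moments against moments computed from simulated draws. The identity weighting matrix, the sample sizes and the reuse of a fixed set of random draws across objective evaluations are illustrative choices, not prescriptions from the article.

import numpy as np
from scipy.optimize import minimize

# Toy MSM estimation: recover the mean and standard deviation of a normal
# sample by matching the first two sample moments against moments computed
# from simulated draws.  W = identity and the fixed set of draws reused
# across evaluations (common random numbers) are illustrative choices.
rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=5_000)        # "observed" sample
data_moments = np.array([data.mean(), (data**2).mean()])

shocks = rng.standard_normal(50_000)    # draws held fixed across evaluations

def simulated_moments(theta):
    mu, sigma = theta
    sim = mu + abs(sigma) * shocks      # abs() guards against negative proposals
    return np.array([sim.mean(), (sim**2).mean()])

def msm_objective(theta, W=np.eye(2)):
    g = data_moments - simulated_moments(theta)   # simulated moment conditions
    return g @ W @ g

res = minimize(msm_objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
print("MSM estimate (mu, sigma):", res.x)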
[ { "math_id": 0, "text": "\\hat{\\beta}_{GMM}=\\operatorname{argmin}\\,m(x,\\beta)'Wm(x,\\beta)" }, { "math_id": 1, "text": "m(x,\\beta)" }, { "math_id": 2, "text": "\\hat{\\beta}_{MSM}=\\operatorname{argmin}\\,\\hat{m}(x,\\beta)'W\\hat{m}(x,\\beta)" }, { "math_id": 3, "text": "\\hat{m}(x,\\beta)" }, { "math_id": 4, "text": "E[\\hat{m}(x,\\beta)]=m(x,\\beta)" } ]
https://en.wikipedia.org/wiki?curid=12135521
12139198
Equilibrium fractionation
Partial separation of isotopes in chemical equilibrium Equilibrium isotope fractionation is the partial separation of isotopes between two or more substances in chemical equilibrium. Equilibrium fractionation is strongest at low temperatures, and (along with kinetic isotope effects) forms the basis of the most widely used isotopic paleothermometers (or climate proxies): D/H and 18O/16O records from ice cores, and 18O/16O records from calcium carbonate. It is thus important for the construction of geologic temperature records. Isotopic fractionations attributed to equilibrium processes have been observed in many elements, from hydrogen (D/H) to uranium (238U/235U). In general, the light elements (especially hydrogen, boron, carbon, nitrogen, oxygen and sulfur) are most susceptible to fractionation, and their isotopes tend to be separated to a greater degree than heavier elements. Definition. Most equilibrium fractionations are thought to result from the reduction in vibrational energy (especially zero-point energy) when a more massive isotope is substituted for a less massive one. This leads to higher concentrations of the massive isotopes in substances where the vibrational energy is most sensitive to isotope substitution, i.e., those with the highest bond force constants. In a reaction involving the exchange of two isotopes, lX and hX, of element "X" in molecules AX and BX, &lt;chem&gt;{A^\mathit{l} X} + B^\mathit{h} X &lt;=&gt; {A^\mathit{h} X} + B^\mathit{l} X&lt;/chem&gt; each reactant molecule is identical to a product except for the distribution of isotopes (i.e., they are isotopologues). The amount of isotopic fractionation in an exchange reaction can be expressed as a fractionation factor: formula_0 formula_1 indicates that the isotopes are distributed evenly between AX and BX, with no isotopic fractionation. formula_2 indicates that hX is concentrated in substance AX, and formula_3 indicates hX is concentrated in substance BX. α is closely related to the equilibrium constant (Keq): formula_4 where formula_5 is the product of the rotational symmetry numbers of the products (right side of the exchange reaction), formula_6 is the product of the rotational symmetry numbers of the reactants (left side of the exchange reaction), and n is the number of atoms exchanged. An example of equilibrium isotope fractionation is the concentration of heavy isotopes of oxygen in liquid water, relative to water vapor, &lt;chem&gt;{H2{^{16}O}{(l)}} + {H2{^{18}O}{(g)}} &lt;=&gt; {H2{^{18}O}{(l)}} + {H2{^{16}O}{(g)}}&lt;/chem&gt; At 20 °C, the equilibrium fractionation factor for this reaction is formula_7 Equilibrium fractionation is a type of mass-dependent isotope fractionation, while mass-independent fractionation is usually assumed to be a non-equilibrium process. For non-equilibrium reactions, isotopic effects are better described by the GEBIK and GEBIF equations for transient kinetic isotope fractionation, which generalize non-steady isotopic effects in any chemical and biochemical reactions. Example. When water vapor condenses (an equilibrium fractionation), the heavier water isotopes (H218O and 2H2O) become enriched in the liquid phase while the lighter isotopes (H216O and 1H2O) tend toward the vapor phase. References. Chacko T., Cole D.R., and Horita J. (2001) Equilibrium oxygen, hydrogen and carbon isotope fractionation factors applicable to geologic systems. Reviews in Mineralogy and Geochemistry, v. 43, p. 1-81. Horita J. and Wesolowski D.J. 
(1994) Liquid-vapor fractionation of oxygen and hydrogen isotopes of water from the freezing to the critical temperature. Geochimica et Cosmochimica Acta, v. 58, p. 3425-3437. External links. AlphaDelta: Stable Isotope fractionation calculator - http://www2.ggl.ulaval.ca/cgi-bin/isotope/generisotope.cgi
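As a small worked example of the liquid-vapor fractionation factor quoted above (α = 1.0098 for 18O/16O at 20 °C), the following Python snippet converts α into the corresponding depletion of the vapor phase. The δ value assumed for the liquid and the δ-notation identity α = (1000 + δ_liquid)/(1000 + δ_vapor) are standard background assumptions, not material defined in this article.

# Liquid-vapor 18O fractionation at 20 C using the alpha quoted above.
# delta_liquid is an arbitrary example input (0 per mil, i.e. close to the
# reference standard); the identity alpha = (1000 + d_liq)/(1000 + d_vap)
# is the standard delta-notation relation.
alpha = 1.0098          # (18O/16O)_liquid / (18O/16O)_vapor at 20 C
delta_liquid = 0.0      # per mil

delta_vapor = (1000.0 + delta_liquid) / alpha - 1000.0
enrichment = (alpha - 1.0) * 1000.0     # approximate liquid-vapor enrichment

print(f"delta 18O of vapor      : {delta_vapor:.2f} per mil")
print(f"liquid-vapor enrichment : ~{enrichment:.1f} per mil")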
[ { "math_id": 0, "text": "\\alpha = \\frac{(^h\\ce{X}/^l\\ce{X})_\\ce{AX}}{(^h\\ce{X}/^l\\ce{X})_\\ce{BX}}" }, { "math_id": 1, "text": "\\alpha = 1" }, { "math_id": 2, "text": "\\alpha > 1" }, { "math_id": 3, "text": "\\alpha < 1 " }, { "math_id": 4, "text": "\\alpha = (K_{eq} \\cdot \\Pi \\sigma_{Products}/ \\Pi \\sigma_{Reactants})^{1/n}" }, { "math_id": 5, "text": "\\Pi\\sigma_{Products}" }, { "math_id": 6, "text": "\\Pi\\sigma_{Reactants}" }, { "math_id": 7, "text": "\\alpha = \\frac\\ce{(^{18}O/^{16}O)_{Liquid}}\\ce{(^{18}O/^{16}O)_{Vapor}} = 1.0098 " } ]
https://en.wikipedia.org/wiki?curid=12139198
12139471
Tanaka equation
In mathematics, Tanaka's equation is an example of a stochastic differential equation which admits a weak solution but has no strong solution. It is named after the Japanese mathematician Hiroshi Tanaka (Tanaka Hiroshi). Tanaka's equation is the one-dimensional stochastic differential equation formula_0 driven by canonical Brownian motion "B", with initial condition "X"0 = 0, where sgn denotes the sign function formula_1 (Note the unconventional value for sgn(0).) The signum function does not satisfy the Lipschitz continuity condition required for the usual theorems guaranteeing existence and uniqueness of strong solutions. The Tanaka equation has no strong solution, i.e. one for which the version "B" of Brownian motion is given in advance and the solution "X" is adapted to the filtration generated by "B" and the initial conditions. However, the Tanaka equation does have a weak solution, one for which the process "X" and version of Brownian motion are both specified as part of the solution, rather than the Brownian motion being given "a priori". In this case, simply choose "X" to be any Brownian motion formula_2 and define formula_3 by formula_4 i.e. formula_5 Hence, formula_6 and so "X" is a weak solution of the Tanaka equation. Furthermore, this solution is weakly unique, i.e. any other weak solution must have the same law. Another counterexample of this type is Tsirelson's stochastic differential equation.
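A brief Euler–Maruyama sketch in Python (NumPy assumed; the step size, horizon and number of paths are arbitrary choices) illustrates the weak-solution property: because any weak solution of Tanaka's equation has the law of a Brownian motion, the simulated values of "X" at time "T" should have mean close to 0 and variance close to "T".

import numpy as np

# Euler-Maruyama simulation of dX = sgn(X) dB with X_0 = 0, using the
# article's convention sgn(0) = +1.  Step size and path count are arbitrary.
rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 1_000, 20_000
dt = T / n_steps

X = np.zeros(n_paths)
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)
    sign = np.where(X >= 0, 1.0, -1.0)     # sgn with sgn(0) = +1
    X += sign * dB

print("sample mean of X_T    :", X.mean())   # should be close to 0
print("sample variance of X_T:", X.var())    # should be close to T = 1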
[ { "math_id": 0, "text": "\\mathrm{d} X_t = \\sgn (X_t) \\, \\mathrm{d} B_t," }, { "math_id": 1, "text": "\\sgn (x) = \\begin{cases} +1, & x \\geq 0; \\\\ -1, & x < 0. \\end{cases}" }, { "math_id": 2, "text": "\\hat{B}" }, { "math_id": 3, "text": "\\tilde{B}" }, { "math_id": 4, "text": "\\tilde{B}_t = \\int_0^t \\sgn \\big( \\hat{B}_s \\big) \\, \\mathrm{d} \\hat{B}_s = \\int_0^t \\sgn \\big( X_s \\big) \\, \\mathrm{d} X_s," }, { "math_id": 5, "text": "\\mathrm{d} \\tilde{B}_t = \\sgn (X_t) \\, \\mathrm{d} X_t." }, { "math_id": 6, "text": "\\mathrm{d} X_t = \\sgn (X_t) \\, \\mathrm{d} \\tilde{B}_{t}," } ]
https://en.wikipedia.org/wiki?curid=12139471
12139922
Itô diffusion
Solution to a specific type of stochastic differential equation In mathematics – specifically, in stochastic analysis – an Itô diffusion is a solution to a specific type of stochastic differential equation. That equation is similar to the Langevin equation used in physics to describe the Brownian motion of a particle subjected to a potential in a viscous fluid. Itô diffusions are named after the Japanese mathematician Kiyosi Itô. Overview. A (time-homogeneous) Itô diffusion in "n"-dimensional Euclidean space formula_0 is a process "X" : [0, +∞) × Ω → R"n" defined on a probability space (Ω, Σ, P) and satisfying a stochastic differential equation of the form formula_1 where "B" is an "m"-dimensional Brownian motion and "b" : R"n" → R"n" and σ : R"n" → R"n"×"m" satisfy the usual Lipschitz continuity condition formula_2 for some constant "C" and all "x", "y" ∈ R"n"; this condition ensures the existence of a unique strong solution "X" to the stochastic differential equation given above. The vector field "b" is known as the drift coefficient of "X"; the matrix field σ is known as the diffusion coefficient of "X". It is important to note that "b" and σ do not depend upon time; if they were to depend upon time, "X" would be referred to only as an "Itô process", not a diffusion. Itô diffusions have a number of nice properties, which include In particular, an Itô diffusion is a continuous, strongly Markovian process such that the domain of its characteristic operator includes all twice-continuously differentiable functions, so it is a "diffusion" in the sense defined by Dynkin (1965). Continuity. Sample continuity. An Itô diffusion "X" is a sample continuous process, i.e., for almost all realisations "Bt"(ω) of the noise, "Xt"(ω) is a continuous function of the time parameter, "t". More accurately, there is a "continuous version" of "X", a continuous process "Y" so that formula_3 This follows from the standard existence and uniqueness theory for strong solutions of stochastic differential equations. Feller continuity. In addition to being (sample) continuous, an Itô diffusion "X" satisfies the stronger requirement to be a Feller-continuous process. For a point "x" ∈ R"n", let P"x" denote the law of "X" given initial datum "X"0 = "x", and let E"x" denote expectation with respect to P"x". Let "f" : R"n" → R be a Borel-measurable function that is bounded below and define, for fixed "t" ≥ 0, "u" : R"n" → R by formula_4 The behaviour of the function "u" above when the time "t" is varied is addressed by the Kolmogorov backward equation, the Fokker–Planck equation, etc. (See below.) The Markov property. The Markov property. An Itô diffusion "X" has the important property of being "Markovian": the future behaviour of "X", given what has happened up to some time "t", is the same as if the process had been started at the position "Xt" at time 0. The precise mathematical formulation of this statement requires some additional notation: Let Σ∗ denote the natural filtration of (Ω, Σ) generated by the Brownian motion "B": for "t" ≥ 0, formula_5 It is easy to show that "X" is adapted to Σ∗ (i.e. each "Xt" is Σ"t"-measurable), so the natural filtration "F"∗ = "F"∗"X" of (Ω, Σ) generated by "X" has "Ft" ⊆ Σ"t" for each "t" ≥ 0. Let "f" : R"n" → R be a bounded, Borel-measurable function. 
Then, for all "t" and "h" ≥ 0, the conditional expectation conditioned on the σ-algebra Σ"t" and the expectation of the process "restarted" from "Xt" satisfy the Markov property: formula_6 In fact, "X" is also a Markov process with respect to the filtration "F"∗, as the following shows: formula_7 The strong Markov property. The strong Markov property is a generalization of the Markov property above in which "t" is replaced by a suitable random time τ : Ω → [0, +∞] known as a stopping time. So, for example, rather than "restarting" the process "X" at time "t" = 1, one could "restart" whenever "X" first reaches some specified point "p" of R"n". As before, let "f" : R"n" → R be a bounded, Borel-measurable function. Let τ be a stopping time with respect to the filtration Σ∗ with τ &lt; +∞ almost surely. Then, for all "h" ≥ 0, formula_8 The generator. Definition. Associated to each Itô diffusion, there is a second-order partial differential operator known as the "generator" of the diffusion. The generator is very useful in many applications and encodes a great deal of information about the process "X". Formally, the infinitesimal generator of an Itô diffusion "X" is the operator "A", which is defined to act on suitable functions "f" : R"n" → R by formula_9 The set of all functions "f" for which this limit exists at a point "x" is denoted "DA"("x"), while "DA" denotes the set of all "f" for which the limit exists for all "x" ∈ R"n". One can show that any compactly-supported "C"2 (twice differentiable with continuous second derivative) function "f" lies in "DA" and that formula_10 or, in terms of the gradient and scalar and Frobenius inner products, formula_11 An example. The generator "A" for standard "n"-dimensional Brownian motion "B", which satisfies the stochastic differential equation d"Xt" = d"Bt", is given by formula_12, i.e., "A" = Δ/2, where Δ denotes the Laplace operator. The Kolmogorov and Fokker–Planck equations. The generator is used in the formulation of Kolmogorov's backward equation. Intuitively, this equation tells us how the expected value of any suitably smooth statistic of "X" evolves in time: it must solve a certain partial differential equation in which time "t" and the initial position "x" are the independent variables. More precisely, if "f" ∈ "C"2(R"n"; R) has compact support and "u" : [0, +∞) × R"n" → R is defined by formula_13 then "u"("t", "x") is differentiable with respect to "t", "u"("t", ·) ∈ "DA" for all "t", and "u" satisfies the following partial differential equation, known as Kolmogorov's backward equation: formula_14 The Fokker–Planck equation (also known as "Kolmogorov's forward equation") is in some sense the "adjoint" to the backward equation, and tells us how the probability density functions of "Xt" evolve with time "t". Let ρ("t", ·) be the density of "Xt" with respect to Lebesgue measure on R"n", i.e., for any Borel-measurable set "S" ⊆ R"n", formula_15 Let "A"∗ denote the Hermitian adjoint of "A" (with respect to the "L"2 inner product). Then, given that the initial position "X"0 has a prescribed density ρ0, ρ("t", "x") is differentiable with respect to "t", ρ("t", ·) ∈ "DA"* for all "t", and ρ satisfies the following partial differential equation, known as the Fokker–Planck equation: formula_16 The Feynman–Kac formula. The Feynman–Kac formula is a useful generalization of Kolmogorov's backward equation. Again, "f" is in "C"2(R"n"; R) and has compact support, and "q" : R"n" → R is taken to be a continuous function that is bounded below. 
Define a function "v" : [0, +∞) × R"n" → R by formula_17 The Feynman–Kac formula states that "v" satisfies the partial differential equation formula_18 Moreover, if "w" : [0, +∞) × R"n" → R is "C"1 in time, "C"2 in space, bounded on "K" × R"n" for all compact "K", and satisfies the above partial differential equation, then "w" must be "v" as defined above. Kolmogorov's backward equation is the special case of the Feynman–Kac formula in which "q"("x") = 0 for all "x" ∈ R"n". The characteristic operator. Definition. The characteristic operator of an Itô diffusion "X" is a partial differential operator closely related to the generator, but somewhat more general. It is more suited to certain problems, for example in the solution of the Dirichlet problem. The characteristic operator formula_19 of an Itô diffusion "X" is defined by formula_20 where the sets "U" form a sequence of open sets "Uk" that decrease to the point "x" in the sense that formula_21 and formula_22 is the first exit time from "U" for "X". formula_23 denotes the set of all "f" for which this limit exists for all "x" ∈ R"n" and all sequences {"Uk"}. If E"x"[τ"U"] = +∞ for all open sets "U" containing "x", define formula_24 Relationship with the generator. The characteristic operator and infinitesimal generator are very closely related, and even agree for a large class of functions. One can show that formula_25 and that formula_26 In particular, the generator and characteristic operator agree for all "C"2 functions "f", in which case formula_27 Application: Brownian motion on a Riemannian manifold. Above, the generator (and hence characteristic operator) of Brownian motion on R"n" was calculated to be Δ, where Δ denotes the Laplace operator. The characteristic operator is useful in defining Brownian motion on an "m"-dimensional Riemannian manifold ("M", "g"): a Brownian motion on "M" is defined to be a diffusion on "M" whose characteristic operator formula_19 in local coordinates "xi", 1 ≤ "i" ≤ "m", is given by ΔLB, where ΔLB is the Laplace-Beltrami operator given in local coordinates by formula_28 where ["gij"] = ["gij"]−1 in the sense of the inverse of a square matrix. The resolvent operator. In general, the generator "A" of an Itô diffusion "X" is not a bounded operator. However, if a positive multiple of the identity operator I is subtracted from "A" then the resulting operator is invertible. The inverse of this operator can be expressed in terms of "X" itself using the resolvent operator. For α &gt; 0, the resolvent operator "R"α, acting on bounded, continuous functions "g" : R"n" → R, is defined by formula_29 It can be shown, using the Feller continuity of the diffusion "X", that "R"α"g" is itself a bounded, continuous function. Also, "R"α and αI − "A" are mutually inverse operators: formula_30 formula_31 Invariant measures. Sometimes it is necessary to find an invariant measure for an Itô diffusion "X", i.e. a measure on R"n" that does not change under the "flow" of "X": i.e., if "X"0 is distributed according to such an invariant measure μ∞, then "Xt" is also distributed according to μ∞ for any "t" ≥ 0. 
The Fokker–Planck equation offers a way to find such a measure, at least if it has a probability density function ρ∞: if "X"0 is indeed distributed according to an invariant measure μ∞ with density ρ∞, then the density ρ("t", ·) of "Xt" does not change with "t", so ρ("t", ·) = ρ∞, and so ρ∞ must solve the (time-independent) partial differential equation formula_32 This illustrates one of the connections between stochastic analysis and the study of partial differential equations. Conversely, a given second-order linear partial differential equation of the form Λ"f" = 0 may be hard to solve directly, but if Λ = "A"∗ for some Itô diffusion "X", and an invariant measure for "X" is easy to compute, then that measure's density provides a solution to the partial differential equation. Invariant measures for gradient flows. An invariant measure is comparatively easy to compute when the process "X" is a stochastic gradient flow of the form formula_33 where β &gt; 0 plays the role of an inverse temperature and Ψ : R"n" → R is a scalar potential satisfying suitable smoothness and growth conditions. In this case, the Fokker–Planck equation has a unique stationary solution ρ∞ (i.e. "X" has a unique invariant measure μ∞ with density ρ∞) and it is given by the Gibbs distribution: formula_34 where the partition function "Z" is given by formula_35 Moreover, the density ρ∞ satisfies a variational principle: it minimizes over all probability densities ρ on R"n" the free energy functional "F" given by formula_36 where formula_37 plays the role of an energy functional, and formula_38 is the negative of the Gibbs-Boltzmann entropy functional. Even when the potential Ψ is not well-behaved enough for the partition function "Z" and the Gibbs measure μ∞ to be defined, the free energy "F"[ρ("t", ·)] still makes sense for each time "t" ≥ 0, provided that the initial condition has "F"[ρ(0, ·)] &lt; +∞. The free energy functional "F" is, in fact, a Lyapunov function for the Fokker–Planck equation: "F"[ρ("t", ·)] must decrease as "t" increases. Thus, "F" is an "H"-function for the "X"-dynamics. Example. Consider the Ornstein-Uhlenbeck process "X" on R"n" satisfying the stochastic differential equation formula_39 where "m" ∈ R"n" and β, κ &gt; 0 are given constants. In this case, the potential Ψ is given by formula_40 and so the invariant measure for "X" is a Gaussian measure with density ρ∞ given by formula_41. Heuristically, for large "t", "Xt" is approximately normally distributed with mean "m" and variance (βκ)−1. The expression for the variance may be interpreted as follows: large values of κ mean that the potential well Ψ has "very steep sides", so "Xt" is unlikely to move far from the minimum of Ψ at "m"; similarly, large values of β mean that the system is quite "cold" with little noise, so, again, "Xt" is unlikely to move far away from "m". The martingale property. In general, an Itô diffusion "X" is not a martingale. However, for any "f" ∈ "C"2(R"n"; R) with compact support, the process "M" : [0, +∞) × Ω → R defined by formula_42 where "A" is the generator of "X", is a martingale with respect to the natural filtration "F"∗ of (Ω, Σ) by "X". The proof is quite simple: it follows from the usual expression of the action of the generator on smooth enough functions "f" and Itô's lemma (the stochastic chain rule) that formula_43 Since Itô integrals are martingales with respect to the natural filtration Σ∗ of (Ω, Σ) by "B", for "t" &gt; "s", formula_44 Hence, as required, formula_45 since "Ms" is "Fs"-measurable. 
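Returning to the Ornstein–Uhlenbeck example above, the convergence to the Gaussian invariant measure can be checked numerically. The following Python sketch (assuming NumPy; the values of κ, "m", β, the step size and the time horizon are illustrative choices) evolves many paths with an Euler–Maruyama scheme and compares the empirical mean and variance with "m" and 1/(βκ).

import numpy as np

# Euler-Maruyama check that dX = -kappa (X - m) dt + sqrt(2/beta) dB relaxes
# to the Gaussian invariant measure with mean m and variance 1/(beta*kappa).
# All numerical values below are illustrative choices.
rng = np.random.default_rng(3)
kappa, m, beta = 2.0, 1.0, 4.0
dt, n_steps, n_paths = 1e-3, 20_000, 5_000

X = np.zeros(n_paths)                  # start every path at 0
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)
    X += -kappa * (X - m) * dt + np.sqrt(2.0 / beta) * dB

print("empirical mean    :", X.mean(), "  target:", m)
print("empirical variance:", X.var(), "  target:", 1.0 / (beta * kappa))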
Dynkin's formula. Dynkin's formula, named after Eugene Dynkin, gives the expected value of any suitably smooth statistic of an Itô diffusion "X" (with generator "A") at a stopping time. Precisely, if τ is a stopping time with E"x"[τ] &lt; +∞, and "f" : R"n" → R is "C"2 with compact support, then formula_46 Dynkin's formula can be used to calculate many useful statistics of stopping times. For example, canonical Brownian motion on the real line starting at 0 exits the interval (−"R", +"R") at a random time τ"R" with expected value formula_47 Dynkin's formula provides information about the behaviour of "X" at a fairly general stopping time. For more information on the distribution of "X" at a hitting time, one can study the "harmonic measure" of the process. Associated measures. The harmonic measure. In many situations, it is sufficient to know when an Itô diffusion "X" will first leave a measurable set "H" ⊆ R"n". That is, one wishes to study the first exit time formula_48 Sometimes, however, one also wishes to know the distribution of the points at which "X" exits the set. For example, canonical Brownian motion "B" on the real line starting at 0 exits the interval (−1, 1) at −1 with probability and at 1 with probability , so "B"τ(−1, 1) is uniformly distributed on the set {−1, 1}. In general, if "G" is compactly embedded within R"n", then the harmonic measure (or hitting distribution) of "X" on the boundary ∂"G" of "G" is the measure μ"G""x" defined by formula_49 for "x" ∈ "G" and "F" ⊆ ∂"G". Returning to the earlier example of Brownian motion, one can show that if "B" is a Brownian motion in R"n" starting at "x" ∈ R"n" and "D" ⊂ R"n" is an open ball centred on "x", then the harmonic measure of "B" on ∂"D" is invariant under all rotations of "D" about "x" and coincides with the normalized surface measure on ∂"D". The harmonic measure satisfies an interesting mean value property: if "f" : R"n" → R is any bounded, Borel-measurable function and φ is given by formula_50 then, for all Borel sets "G" ⊂⊂ "H" and all "x" ∈ "G", formula_51 The mean value property is very useful in the solution of partial differential equations using stochastic processes. The Green measure and Green formula. Let "A" be a partial differential operator on a domain "D" ⊆ R"n" and let "X" be an Itô diffusion with "A" as its generator. Intuitively, the Green measure of a Borel set "H" is the expected length of time that "X" stays in "H" before it leaves the domain "D". That is, the Green measure of "X" with respect to "D" at "x", denoted "G"("x", ·), is defined for Borel sets "H" ⊆ R"n" by formula_52 or for bounded, continuous functions "f" : "D" → R by formula_53 The name "Green measure" comes from the fact that if "X" is Brownian motion, then formula_54 where "G"("x", "y") is Green's function for the operator Δ on the domain "D". Suppose that E"x"[τ"D"] &lt; +∞ for all "x" ∈ "D". Then the Green formula holds for all "f" ∈ "C"2(R"n"; R) with compact support: formula_55 In particular, if the support of "f" is compactly embedded in "D", formula_56
[ { "math_id": 0, "text": "\\boldsymbol{\\textbf{R}}^n" }, { "math_id": 1, "text": "\\mathrm{d} X_{t} = b(X_t) \\, \\mathrm{d} t + \\sigma (X_{t}) \\, \\mathrm{d} B_{t}," }, { "math_id": 2, "text": "| b(x) - b(y) | + | \\sigma (x) - \\sigma (y) | \\leq C | x - y |" }, { "math_id": 3, "text": "\\mathbf{P} [ X_t = Y_t] = 1 \\mbox{ for all } t." }, { "math_id": 4, "text": "u(x) = \\mathbf{E}^{x}[ f(X_t) ]." }, { "math_id": 5, "text": "\\Sigma_{t} = \\Sigma_{t}^{B} = \\sigma \\left \\{ B_{s}^{-1} (A) \\subseteq \\Omega \\ : \\ 0 \\leq s \\leq t, A \\subseteq \\mathbf{R}^{n} \\mbox{ Borel} \\right\\}." }, { "math_id": 6, "text": "\\mathbf{E}^{x} \\big[ f(X_{t+h}) \\big| \\Sigma_{t} \\big] (\\omega) = \\mathbf{E}^{X_{t} (\\omega)}[ f(X_{h})]." }, { "math_id": 7, "text": "\\begin{align}\n\\mathbf{E}^{x} \\left [ f(X_{t+h}) \\big| F_{t} \\right ] &= \\mathbf{E}^{x} \\left [ \\mathbf{E}^{x} \\left [ f(X_{t+h}) \\big| \\Sigma_{t} \\right] \\big| F_{t} \\right] \\\\\n&= \\mathbf{E}^{x} \\left [ \\mathbf{E}^{X_{t}} \\left [ f(X_{h}) \\right] \\big| F_{t} \\right] \\\\\n&= \\mathbf{E}^{X_{t}} \\left [ f(X_{h}) \\right ].\n\\end{align}" }, { "math_id": 8, "text": "\\mathbf{E}^{x} \\big[ f(X_{\\tau+h}) \\big| \\Sigma_{\\tau} \\big] = \\mathbf{E}^{X_{\\tau}} \\big[ f(X_{h}) \\big]." }, { "math_id": 9, "text": "A f (x) = \\lim_{t \\downarrow 0} \\frac{\\mathbf{E}^{x} [f(X_{t})] - f(x)}{t}." }, { "math_id": 10, "text": "Af(x) = \\sum_{i} b_{i} (x) \\frac{\\partial f}{\\partial x_i} (x) + \\tfrac{1}{2} \\sum_{i, j} \\left( \\sigma (x) \\sigma (x)^{\\top} \\right)_{i, j} \\frac{\\partial^{2} f}{\\partial x_i \\, \\partial x_{j}} (x)," }, { "math_id": 11, "text": "A f (x) = b(x) \\cdot \\nabla_{x} f(x) + \\tfrac1{2} \\left( \\sigma(x) \\sigma(x)^{\\top} \\right ) : \\nabla_{x} \\nabla_{x} f(x)." }, { "math_id": 12, "text": "A f (x) = \\tfrac1{2} \\sum_{i, j} \\delta_{ij} \\frac{\\partial^{2} f}{\\partial x_{i} \\, \\partial x_{j}} (x) = \\tfrac1{2} \\sum_{i} \\frac{\\partial^{2} f}{\\partial x_{i}^{2}} (x)" }, { "math_id": 13, "text": "u(t, x) = \\mathbf{E}^{x} [ f(X_t)]," }, { "math_id": 14, "text": "\\begin{cases} \\dfrac{\\partial u}{\\partial t}(t, x) = A u (t, x), & t > 0, x \\in \\mathbf{R}^{n}; \\\\ u(0, x) = f(x), & x \\in \\mathbf{R}^{n}. \\end{cases}" }, { "math_id": 15, "text": "\\mathbf{P} \\left [ X_t \\in S \\right ] = \\int_{S} \\rho(t, x) \\, \\mathrm{d} x." }, { "math_id": 16, "text": "\\begin{cases} \\dfrac{\\partial \\rho}{\\partial t}(t, x) = A^{*} \\rho (t, x), & t > 0, x \\in \\mathbf{R}^{n}; \\\\ \\rho(0, x) = \\rho_{0} (x), & x \\in \\mathbf{R}^{n}. \\end{cases}" }, { "math_id": 17, "text": "v(t, x) = \\mathbf{E}^{x} \\left[ \\exp \\left( - \\int_{0}^{t} q(X_{s}) \\, \\mathrm{d} s \\right) f(X_{t}) \\right]." }, { "math_id": 18, "text": "\\begin{cases} \\dfrac{\\partial v}{\\partial t}(t, x) = A v (t, x) - q(x) v(t, x), & t > 0, x \\in \\mathbf{R}^{n}; \\\\ v(0, x) = f(x), & x \\in \\mathbf{R}^{n}. \\end{cases}" }, { "math_id": 19, "text": "\\mathcal{A}" }, { "math_id": 20, "text": "\\mathcal{A} f (x) = \\lim_{U \\downarrow x} \\frac{\\mathbf{E}^{x} \\left [ f(X_{\\tau_{U}}) \\right ] - f(x)}{\\mathbf{E}^{x} [\\tau_{U}]}," }, { "math_id": 21, "text": "U_{k + 1} \\subseteq U_{k} \\mbox{ and } \\bigcap_{k = 1}^{\\infty} U_{k} = \\{ x \\}," }, { "math_id": 22, "text": "\\tau_{U} = \\inf \\{ t \\geq 0 \\ : \\ X_{t} \\not \\in U \\}" }, { "math_id": 23, "text": "D_{\\mathcal{A}}" }, { "math_id": 24, "text": "\\mathcal{A} f (x) = 0." 
}, { "math_id": 25, "text": "D_{A} \\subseteq D_{\\mathcal{A}}" }, { "math_id": 26, "text": "A f = \\mathcal{A} f \\mbox{ for all } f \\in D_{A}." }, { "math_id": 27, "text": "\\mathcal{A} f(x) = \\sum_i b_i (x) \\frac{\\partial f}{\\partial x_{i}} (x) + \\tfrac1{2} \\sum_{i, j} \\left( \\sigma (x) \\sigma (x)^{\\top} \\right)_{i, j} \\frac{\\partial^{2} f}{\\partial x_{i} \\, \\partial x_{j}} (x)." }, { "math_id": 28, "text": "\\Delta_{\\mathrm{LB}} = \\frac1{\\sqrt{\\det(g)}} \\sum_{i = 1}^{m} \\frac{\\partial}{\\partial x_{i}} \\left( \\sqrt{\\det(g)} \\sum_{j = 1}^{m} g^{ij} \\frac{\\partial}{\\partial x_{j}} \\right)," }, { "math_id": 29, "text": "R_{\\alpha} g (x) = \\mathbf{E}^{x} \\left[ \\int_{0}^{\\infty} e^{- \\alpha t} g(X_{t}) \\, \\mathrm{d} t \\right]." }, { "math_id": 30, "text": "R_{\\alpha} (\\alpha \\mathbf{I} - A) f = f;" }, { "math_id": 31, "text": "(\\alpha \\mathbf{I} - A) R_{\\alpha} g = g." }, { "math_id": 32, "text": "A^{*} \\rho_{\\infty} (x) = 0, \\quad x \\in \\mathbf{R}^{n}." }, { "math_id": 33, "text": "\\mathrm{d} X_{t} = - \\nabla \\Psi (X_{t}) \\, \\mathrm{d} t + \\sqrt{2 \\beta^{-1}} \\, \\mathrm{d} B_{t}," }, { "math_id": 34, "text": "\\rho_{\\infty} (x) = Z^{-1} \\exp ( - \\beta \\Psi (x) )," }, { "math_id": 35, "text": "Z = \\int_{\\mathbf{R}^{n}} \\exp ( - \\beta \\Psi (x) ) \\, \\mathrm{d} x." }, { "math_id": 36, "text": "F[\\rho] = E[\\rho] + \\frac1{\\beta} S[\\rho]," }, { "math_id": 37, "text": "E[\\rho] = \\int_{\\mathbf{R}^{n}} \\Psi(x) \\rho(x) \\, \\mathrm{d} x" }, { "math_id": 38, "text": "S[\\rho] = \\int_{\\mathbf{R}^{n}} \\rho(x) \\log \\rho(x) \\, \\mathrm{d} x" }, { "math_id": 39, "text": "\\mathrm{d} X_{t} = - \\kappa ( X_{t} - m) \\, \\mathrm{d} t + \\sqrt{2 \\beta^{-1}} \\, \\mathrm{d} B_{t}," }, { "math_id": 40, "text": "\\Psi(x) = \\tfrac{1}{2} \\kappa |x - m|^2," }, { "math_id": 41, "text": "\\rho_{\\infty} (x) = \\left( \\frac{\\beta \\kappa}{2 \\pi} \\right)^{\\frac{n}{2}} \\exp \\left( - \\frac{\\beta \\kappa | x - m |^{2}}{2} \\right)" }, { "math_id": 42, "text": "M_{t} = f(X_{t}) - \\int_{0}^{t} A f(X_{s}) \\, \\mathrm{d} s," }, { "math_id": 43, "text": "f(X_{t}) = f(x) + \\int_{0}^{t} A f(X_{s}) \\, \\mathrm{d} s + \\int_{0}^{t} \\nabla f(X_{s})^{\\top} \\sigma(X_{s}) \\, \\mathrm{d} B_{s}." }, { "math_id": 44, "text": "\\mathbf{E}^{x} \\big[ M_{t} \\big| \\Sigma_{s} \\big] = M_{s}." }, { "math_id": 45, "text": "\\mathbf{E}^{x}[M_t | F_s] = \\mathbf{E}^{x} \\left[ \\mathbf{E}^{x} \\big[ M_{t} \\big| \\Sigma_{s} \\big] \\big| F_{s} \\right] = \\mathbf{E}^{x} \\big[ M_{s} \\big| F_{s} \\big] = M_{s}," }, { "math_id": 46, "text": "\\mathbf{E}^{x} [f(X_{\\tau})] = f(x) + \\mathbf{E}^{x} \\left[ \\int_{0}^{\\tau} A f (X_{s}) \\, \\mathrm{d} s \\right]." }, { "math_id": 47, "text": "\\mathbf{E}^{0} [\\tau_{R}] = R^{2}." }, { "math_id": 48, "text": "\\tau_{H} (\\omega) = \\inf \\{ t \\geq 0 | X_{t} \\not \\in H \\}." }, { "math_id": 49, "text": "\\mu_{G}^{x} (F) = \\mathbf{P}^{x} \\left [ X_{\\tau_{G}} \\in F \\right ]" }, { "math_id": 50, "text": "\\varphi (x) = \\mathbf{E}^{x} \\left [ f(X_{\\tau_{H}}) \\right]," }, { "math_id": 51, "text": "\\varphi (x) = \\int_{\\partial G} \\varphi (y) \\, \\mathrm{d} \\mu_{G}^{x} (y)." }, { "math_id": 52, "text": "G(x, H) = \\mathbf{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} \\chi_{H} (X_{s}) \\, \\mathrm{d} s \\right]," }, { "math_id": 53, "text": "\\int_{D} f(y) \\, G(x, \\mathrm{d} y) = \\mathbf{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} f(X_{s}) \\, \\mathrm{d} s \\right]." 
}, { "math_id": 54, "text": "G(x, H) = \\int_{H} G(x, y) \\, \\mathrm{d} y," }, { "math_id": 55, "text": "f(x) = \\mathbf{E}^{x} \\left[ f \\left( X_{\\tau_{D}} \\right) \\right] - \\int_{D} A f (y) \\, G(x, \\mathrm{d} y)." }, { "math_id": 56, "text": "f(x) = - \\int_{D} A f (y) \\, G(x, \\mathrm{d} y)." } ]
https://en.wikipedia.org/wiki?curid=12139922
12141074
Harmonic measure
In mathematics, especially potential theory, harmonic measure is a concept related to the theory of harmonic functions that arises from the solution of the classical Dirichlet problem. In probability theory, the harmonic measure of a subset of the boundary of a bounded domain in Euclidean space formula_0, formula_1 is the probability that a Brownian motion started inside the domain hits that subset of the boundary. More generally, the harmonic measure of an Itō diffusion "X" describes the distribution of "X" as it hits the boundary of "D". In the complex plane, harmonic measure can be used to estimate the modulus of an analytic function inside a domain "D" given bounds on the modulus on the boundary of the domain; a special case of this principle is Hadamard's three-circle theorem. On simply connected planar domains, there is a close connection between harmonic measure and the theory of conformal maps. The term "harmonic measure" was introduced by Rolf Nevanlinna in 1928 for planar domains, although Nevanlinna notes the idea appeared implicitly in earlier work by Johansson, F. Riesz, M. Riesz, Carleman, Ostrowski and Julia (original order cited). The connection between harmonic measure and Brownian motion was first identified by Kakutani ten years later in 1944. Definition. Let "D" be a bounded, open domain in "n"-dimensional Euclidean space R"n", "n" ≥ 2, and let ∂"D" denote the boundary of "D". Any continuous function "f" : ∂"D" → R determines a unique harmonic function "H""f" that solves the Dirichlet problem formula_2 If a point "x" ∈ "D" is fixed, by the Riesz–Markov–Kakutani representation theorem and the maximum principle "H""f"("x") determines a probability measure "ω"("x", "D") on ∂"D" by formula_3 The measure "ω"("x", "D") is called the harmonic measure (of the domain "D" with pole at "x"). For any Borel subset "E" of ∂"D" it satisfies formula_4 and formula_5. Hence, for each "x" and "D", "ω"("x", "D") is a probability measure on ∂"D". Properties. Since explicit formulas for harmonic measure are not typically available, we are interested in determining conditions which guarantee that a set has harmonic measure zero. The harmonic measure of a diffusion. Consider an R"n"-valued Itō diffusion "X" starting at some point "x" in the interior of a domain "D", with law P"x". Suppose that one wishes to know the distribution of the points at which "X" exits "D". For example, canonical Brownian motion "B" on the real line starting at 0 exits the interval (−1, +1) at −1 with probability 1/2 and at +1 with probability 1/2, so "B""τ"(−1, +1) is uniformly distributed on the set {−1, +1}. In general, if "G" is compactly embedded within R"n", then the harmonic measure (or hitting distribution) of "X" on the boundary ∂"G" of "G" is the measure "μ""G""x" defined by formula_39 for "x" ∈ "G" and "F" ⊆ ∂"G". Returning to the earlier example of Brownian motion, one can show that if "B" is a Brownian motion in R"n" starting at "x" ∈ R"n" and "D" ⊂ R"n" is an open ball centred on "x", then the harmonic measure of "B" on ∂"D" is invariant under all rotations of "D" about "x" and coincides with the normalized surface measure on ∂"D".
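The disk example can also be probed numerically. The following Python sketch (assuming NumPy; the starting point, boundary arc, step size and number of paths are illustrative choices, and the crude Euler stepping slightly overshoots the boundary) estimates the probability that planar Brownian motion started at x = (0.5, 0) leaves the unit disk through a given arc, and compares it with the integral over that arc of the Poisson kernel (1 − |x|^2)/(2π|x − Q|^2), which appears among the formulas of this article.

import numpy as np

# Monte Carlo estimate of the harmonic measure of a boundary arc of the unit
# disk for planar Brownian motion started at x0, compared with the integral
# of the Poisson kernel (1 - |x0|^2) / (2*pi*|x0 - Q|^2) over the same arc.
rng = np.random.default_rng(4)
x0 = np.array([0.5, 0.0])
arc = (-np.pi / 4, np.pi / 4)          # boundary arc whose measure we estimate
dt, n_paths = 1e-3, 2000

hits = 0
for _ in range(n_paths):
    x = x0.copy()
    while x @ x < 1.0:                 # run until the path leaves the disk
        x += np.sqrt(dt) * rng.standard_normal(2)
    theta = np.arctan2(x[1], x[0])
    if arc[0] <= theta <= arc[1]:
        hits += 1

# Trapezoidal integration of the Poisson kernel over the arc.
thetas = np.linspace(arc[0], arc[1], 2001)
Q = np.column_stack([np.cos(thetas), np.sin(thetas)])
kernel = (1.0 - x0 @ x0) / (2.0 * np.pi * np.sum((x0 - Q) ** 2, axis=1))
analytic = np.sum((kernel[:-1] + kernel[1:]) / 2) * (thetas[1] - thetas[0])

print("Monte Carlo estimate :", hits / n_paths)
print("Poisson kernel value :", analytic)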
[ { "math_id": 0, "text": "R^n" }, { "math_id": 1, "text": "n\\geq 2" }, { "math_id": 2, "text": "\\begin{cases} - \\Delta H_{f} (x) = 0, & x \\in D; \\\\ H_{f} (x) = f(x), & x \\in \\partial D. \\end{cases}" }, { "math_id": 3, "text": "H_{f} (x) = \\int_{\\partial D} f(y) \\, \\mathrm{d} \\omega(x, D) (y)." }, { "math_id": 4, "text": "0 \\leq \\omega(x, D)(E) \\leq 1;" }, { "math_id": 5, "text": "1 - \\omega(x, D)(E) = \\omega(x, D)(\\partial D \\setminus E);" }, { "math_id": 6, "text": "y \\mapsto\\omega(y,D)(E)" }, { "math_id": 7, "text": "D\\subset\\mathbb{R}^2" }, { "math_id": 8, "text": "H^1(\\partial D)<\\infty" }, { "math_id": 9, "text": "E\\subset\\partial D" }, { "math_id": 10, "text": "\\omega(X,D)(E)=0" }, { "math_id": 11, "text": "H^1(E)=0" }, { "math_id": 12, "text": "H^s(E)=0" }, { "math_id": 13, "text": "s<1" }, { "math_id": 14, "text": "\\omega(x,D)(E)=0" }, { "math_id": 15, "text": "D\\subset\\mathbb{R}^n" }, { "math_id": 16, "text": "H^{n-1}(E)=0" }, { "math_id": 17, "text": "\\mathbb{D}=\\{X\\in\\mathbb{R}^2:|X|<1\\}" }, { "math_id": 18, "text": "\\mathbb{D}" }, { "math_id": 19, "text": "\\omega(0,\\mathbb{D})(E)=|E|/2\\pi" }, { "math_id": 20, "text": "E\\subset S^1" }, { "math_id": 21, "text": "|E|" }, { "math_id": 22, "text": "E" }, { "math_id": 23, "text": "X\\in \\mathbb{D}" }, { "math_id": 24, "text": "\\omega(X,\\mathbb{D})(E)=\\int_E \\frac{1-|X|^2}{|X-Q|^2}\\frac{dH^1(Q)}{2\\pi}" }, { "math_id": 25, "text": "H^1" }, { "math_id": 26, "text": "d\\omega(X,\\mathbb{D})/dH^1" }, { "math_id": 27, "text": "\\mathbb{B}^n=\\{X\\in\\mathbb{R}^n:|X|<1\\}" }, { "math_id": 28, "text": " X\\in \\mathbb{B}^n" }, { "math_id": 29, "text": "\\omega(X,\\mathbb{B}^n)(E)=\\int_E \\frac{1-|X|^2}{|X-Q|^n}\\frac{dH^{n-1}(Q)}{\\sigma_{n-1}}" }, { "math_id": 30, "text": "E\\subset S^{n-1}" }, { "math_id": 31, "text": "H^{n-1}" }, { "math_id": 32, "text": "S^{n-1}" }, { "math_id": 33, "text": "H^{n-1}(S^{n-1})=\\sigma_{n-1}" }, { "math_id": 34, "text": "\\in" }, { "math_id": 35, "text": "\\omega(X,D)(E)=|f^{-1}(E)|/2\\pi" }, { "math_id": 36, "text": "f:\\mathbb{D}\\rightarrow D" }, { "math_id": 37, "text": "f(0)=X" }, { "math_id": 38, "text": "\\omega(X,D)(E)=1" }, { "math_id": 39, "text": "\\mu_{G}^{x} (F) = \\mathbf{P}^{x} \\big[ X_{\\tau_{G}} \\in F \\big]" } ]
https://en.wikipedia.org/wiki?curid=12141074
1214226
Block LU decomposition
In linear algebra, a Block LU decomposition is a matrix decomposition of a block matrix into a lower block triangular matrix "L" and an upper block triangular matrix "U". This decomposition is used in numerical analysis to reduce the complexity of the block matrix formula. formula_0 Block Cholesky decomposition. Consider a block matrix: formula_1 where the matrix formula_2 is assumed to be non-singular, formula_3 is an identity matrix with proper dimension, and formula_4 is a matrix whose elements are all zero. We can also rewrite the above equation using the half matrices: formula_5 where the Schur complement of formula_2 in the block matrix is defined by formula_6 and the half matrices can be calculated by means of Cholesky decomposition or LDL decomposition. The half matrices satisfy that formula_7 Thus, we have formula_8 where formula_9 The matrix formula_10 can be decomposed in an algebraic manner into formula_11 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
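The factorization can be verified numerically. The following Python sketch (assuming NumPy; the block sizes and the random test matrix are arbitrary choices) builds the three block factors of the first identity above, using the Schur complement formula_6, and checks that their product reproduces the original block matrix.

import numpy as np

# Numerical check of the block factorization
#   [[A, B], [C, D]] = [[I, 0], [C A^-1, I]] [[A, 0], [0, S]] [[I, A^-1 B], [0, I]],
# where S = D - C A^-1 B is the Schur complement of A.  Block sizes and the
# random test matrix are arbitrary choices.
rng = np.random.default_rng(5)
p, q = 3, 2
A = rng.standard_normal((p, p)) + 5 * np.eye(p)   # keep A safely non-singular
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q))

M = np.block([[A, B], [C, D]])
Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                               # Schur complement of A

lower = np.block([[np.eye(p), np.zeros((p, q))], [C @ Ainv, np.eye(q)]])
diag = np.block([[A, np.zeros((p, q))], [np.zeros((q, p)), S]])
upper = np.block([[np.eye(p), Ainv @ B], [np.zeros((q, p)), np.eye(q)]])

print("max reconstruction error:", np.max(np.abs(lower @ diag @ upper - M)))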
[ { "math_id": 0, "text": "\n\\begin{pmatrix}\n A & B \\\\\n C & D \n\\end{pmatrix}\n=\n\\begin{pmatrix}\nI & 0 \\\\\nC A^{-1} & I\n\\end{pmatrix}\n\\begin{pmatrix}\nA & 0 \\\\\n0 & D-C A^{-1} B\n\\end{pmatrix}\n\\begin{pmatrix}\nI & A^{-1} B \\\\\n0 & I\n\\end{pmatrix}\n" }, { "math_id": 1, "text": "\n\\begin{pmatrix}\n A & B \\\\\n C & D \n\\end{pmatrix}\n=\n\\begin{pmatrix}\nI \\\\\nC A^{-1}\n\\end{pmatrix}\n\\,A\\,\n\\begin{pmatrix}\nI & A^{-1}B\n\\end{pmatrix}\n+\n\\begin{pmatrix}\n0 & 0 \\\\\n0 & D-C A^{-1} B\n\\end{pmatrix},\n" }, { "math_id": 2, "text": "\\begin{matrix}A\\end{matrix}" }, { "math_id": 3, "text": "\\begin{matrix}I\\end{matrix}" }, { "math_id": 4, "text": "\\begin{matrix}0\\end{matrix}" }, { "math_id": 5, "text": "\n\\begin{pmatrix}\n A & B \\\\\n C & D \n\\end{pmatrix}\n=\n\\begin{pmatrix}\nA^{\\frac{1}{2}} \\\\\nC A^{-\\frac{*}{2}}\n\\end{pmatrix}\n\\begin{pmatrix}\nA^{\\frac{*}{2}} & A^{-\\frac{1}{2}}B\n\\end{pmatrix}\n+\n\\begin{pmatrix}\n0 & 0 \\\\\n0 & Q^{\\frac{1}{2}}\n\\end{pmatrix}\n\\begin{pmatrix}\n0 & 0 \\\\\n0 & Q^{\\frac{*}{2}}\n\\end{pmatrix}\n," }, { "math_id": 6, "text": "\n\\begin{matrix}\nQ = D - C A^{-1} B\n\\end{matrix}\n" }, { "math_id": 7, "text": "\n\\begin{matrix}\nA^{\\frac{1}{2}}\\,A^{\\frac{*}{2}}=A;\n\\end{matrix}\n\\qquad\n\\begin{matrix}\nA^{\\frac{1}{2}}\\,A^{-\\frac{1}{2}}=I;\n\\end{matrix}\n\\qquad\n\\begin{matrix}\nA^{-\\frac{*}{2}}\\,A^{\\frac{*}{2}}=I;\n\\end{matrix}\n\\qquad\n\\begin{matrix}\nQ^{\\frac{1}{2}}\\,Q^{\\frac{*}{2}}=Q.\n\\end{matrix}" }, { "math_id": 8, "text": "\n\\begin{pmatrix}\n A & B \\\\\n C & D \n\\end{pmatrix}\n=\nLU,\n" }, { "math_id": 9, "text": "\nLU =\n\\begin{pmatrix}\nA^{\\frac{1}{2}} & 0 \\\\\nC A^{-\\frac{*}{2}} & 0\n\\end{pmatrix}\n\\begin{pmatrix}\nA^{\\frac{*}{2}} & A^{-\\frac{1}{2}}B \\\\\n0 & 0\n\\end{pmatrix}\n+\n\\begin{pmatrix}\n0 & 0 \\\\\n0 & Q^{\\frac{1}{2}}\n\\end{pmatrix}\n\\begin{pmatrix}\n0 & 0 \\\\\n0 & Q^{\\frac{*}{2}}\n\\end{pmatrix}.\n" }, { "math_id": 10, "text": "\\begin{matrix}LU\\end{matrix}" }, { "math_id": 11, "text": "L = \n\\begin{pmatrix}\nA^{\\frac{1}{2}} & 0 \\\\\nC A^{-\\frac{*}{2}} & Q^{\\frac{1}{2}}\n\\end{pmatrix}\n\\mathrm{~~and~~}\nU =\n\\begin{pmatrix}\nA^{\\frac{*}{2}} & A^{-\\frac{1}{2}}B \\\\\n0 & Q^{\\frac{*}{2}}\n\\end{pmatrix}.\n" } ]
https://en.wikipedia.org/wiki?curid=1214226
12144610
Increment theorem
In nonstandard analysis, a field of mathematics, the increment theorem states the following: Suppose a function "y" = "f"("x") is differentiable at x and that Δ"x" is infinitesimal. Then formula_0 for some infinitesimal ε, where formula_1 If formula_2 then we may write formula_3 which implies that formula_4, or in other words that formula_5 is infinitely close to formula_6, or formula_6 is the standard part of formula_5. A similar theorem exists in standard Calculus. Again assume that "y" = "f"("x") is differentiable, but now let Δ"x" be a nonzero standard real number. Then the same equation formula_0 holds with the same definition of Δ"y", but instead of ε being infinitesimal, we have formula_7 (treating x and "f" as given so that ε is a function of Δ"x" alone).
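The standard-calculus version stated in the last paragraph can be illustrated numerically, since genuine infinitesimals cannot be represented in floating point. In the short Python sketch below, the function f(x) = x^2, the point x = 1 and the sequence of increments are arbitrary choices; the error term ε = Δy/Δx − f′(x) is seen to shrink as Δx does, as the theorem asserts.

# Standard-calculus illustration: the error term eps = dy/dx - f'(x)
# vanishes as dx -> 0.  The choice f(x) = x**2 at x = 1 is arbitrary.
def f(x):
    return x * x

def fprime(x):
    return 2.0 * x

x = 1.0
for dx in [1e-1, 1e-2, 1e-3, 1e-4]:
    dy = f(x + dx) - f(x)
    eps = dy / dx - fprime(x)          # so that dy = f'(x)*dx + eps*dx
    print(f"dx = {dx:.0e}   eps = {eps:.6f}")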
[ { "math_id": 0, "text": "\\Delta y = f'(x)\\,\\Delta x + \\varepsilon\\, \\Delta x" }, { "math_id": 1, "text": "\\Delta y=f(x+\\Delta x)-f(x)." }, { "math_id": 2, "text": "\\Delta x \\neq 0" }, { "math_id": 3, "text": "\\frac{\\Delta y}{\\Delta x} = f'(x) + \\varepsilon," }, { "math_id": 4, "text": "\\frac{\\Delta y}{\\Delta x}\\approx f'(x)" }, { "math_id": 5, "text": " \\frac{\\Delta y}{\\Delta x}" }, { "math_id": 6, "text": " f'(x)" }, { "math_id": 7, "text": " \\lim_{\\Delta x \\to 0} \\varepsilon = 0 " } ]
https://en.wikipedia.org/wiki?curid=12144610
1214618
Mean radiant temperature
Type of temperature The concept of mean radiant temperature (MRT) is used to quantify the exchange of radiant heat between a human and their surrounding environment, with a view to understanding the influence of surface temperatures on personal comfort. Mean radiant temperature has been both qualitatively defined and quantitatively evaluated for both indoor and outdoor environments. MRT has been defined as the uniform temperature of an imaginary enclosure in which the radiant heat transfer from the human body is equal to the radiant heat transfer in the actual non-uniform enclosure. MRT is a useful concept as the net exchange of radiant energy between two objects is approximately proportional to the product of their temperature difference multiplied by their emissivity (ability to emit and absorb heat). The MRT is simply the area weighted mean temperature of all the objects surrounding the body. This is meaningful as long as the temperature differences of the objects are small compared to their absolute temperatures, allowing linearization of the Stefan-Boltzmann Law in the relevant temperature range. MRT also has a strong influence on thermophysiological comfort indexes such as physiological equivalent temperature (PET) or predicted mean vote (PMV). What we experience and feel relating to thermal comfort in a building is related to the influence of both the air temperature and the temperature of surfaces in that space, represented by the mean radiant temperature. The MRT is controlled by enclosure performances. The operative temperature, which is a more functional measure of thermal comfort in a building, is calculated from air temperature, mean radiant temperature and air speed. Maintaining a balance between the operative temperature and the mean radiant temperature can create a more comfortable space. This is done with effective design of the building, interior and with the use of high temperature radiant cooling and low temperature radiant heating. In outdoor settings, mean radiant temperature is affected by air temperature but also by the radiation of absorbed heat from the materials used in sidewalks, streets, and buildings. It can be mitigated by tree cover and green space, which act as sources of shade and promote evaporative cooling. The experienced mean radiant temperature outdoors can vary widely depending on local conditions. For example, measurements taken across Chapel Hill, North Carolina to examine urban heat island exposure ranged from . Calculation. There are different ways to estimate the mean radiant temperature, either applying its definition and using equations to calculate it, or measuring it with particular thermometers or sensors. Since the amount of radiant heat lost or received by human body is the algebraic sum of all radiant fluxes exchanged by its exposed parts with the surrounding sources, MRT can be calculated from the measured temperature of surrounding walls and surfaces and their positions with respect to the person. Therefore, it is necessary to measure those temperatures and the angle factors between the person and the surrounding surfaces. Most building materials have a high emittance ε, so all surfaces in the room can be assumed to be black. Because the sum of the angle factors is unity, the fourth power of MRT equals the mean value of the surrounding surface temperatures to the fourth power, weighted by the respective angle factors. 
The following equation is used: formula_0 where: formula_1 is the mean radiant temperature; formula_2 is the temperature of surface "n", in kelvins; formula_3 is the angle factor between a person and surface "n". If relatively small temperature differences exist between the surfaces of the enclosure, the equation can be simplified to the following linear form: formula_4 This linear formula tends to give a lower value of MRT, but in many cases the difference is small. In general, angle factors are difficult to determine, and they normally depend on the position and orientation of the person. Furthermore, this method becomes complex and time-consuming as the number of surfaces increases and they have elaborate shapes. There is currently no way to effectively collect this data. For this reason, an easier way to determine the MRT is by measuring it with a particular thermometer. Measurement. The MRT can be estimated using a black-globe thermometer. The black-globe thermometer consists of a black globe in the center of which is placed a temperature sensor such as the bulb of a mercury thermometer, a thermocouple or a resistance probe. The globe can in theory have any diameter, but as the formulae used in the calculation of the mean radiant temperature depend on the diameter of the globe, a diameter of 150 mm, specified for use with these formulae, is generally recommended. The smaller the diameter of the globe, the greater the effect of the air temperature and air velocity, thus causing a reduction in the accuracy of the measurement of the mean radiant temperature. So that the external surface of the globe absorbs the radiation from the walls of the enclosure, the surface of the globe shall be darkened, either by means of an electro-chemical coating or, more generally, by means of a layer of matte black paint. This thermometer actually measures the globe temperature (GT), tending towards thermal balance under the effect of convection and radiation coming from the different heat sources in the enclosure. Thanks to this principle, knowing GT allows the mean radiant temperature MRT to be determined. According to the ISO 7726 standard, the equation that is used most frequently (forced convection) is the following: formula_5 When the air velocity is less than 1 m/s (natural convection), the equation is the following: formula_6 where: formula_1 is the mean radiant temperature (°C); formula_7 is the globe temperature (°C); formula_8 is the air velocity at the level of the globe (m/s); formula_9 is the emissivity of the globe (no dimension); formula_10 is the diameter of the globe (m); formula_11 is the air temperature (°C). For the standard globe (D = 0.150 m, formula_9 = 0.95): formula_12 The measurement is affected by air movement because the measured GT depends on both convection and radiation transfer. By effectively increasing the size of the thermometer bulb, the convection transfer coefficient is reduced and the effect of radiation is proportionally increased. Because of local convective air currents, GT typically lies between the air temperature and the MRT. The faster the air moves over the globe thermometer, the closer GT approaches the air temperature. Moreover, since the MRT is defined with respect to the human body, the shape of the sensor is also a factor.
The spherical shape of the globe thermometer gives a reasonable approximation of a seated person; for people who are standing, the globe, in a radiant nonuniform environment, overestimates the radiation from floor or ceiling, so an ellipsoid sensor gives a better approximation. There are several other precautions to be taken when using a black-globe thermometer, depending on the conditions of the measurement. Furthermore, there are different measuring methods, such as the two-sphere radiometer and the constant-air-temperature sensor.
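The two calculation routes described above translate into a few lines of code. The following Python sketch is illustrative only: the function names and the sample readings are mine, and the globe-based routine is the forced-convection ISO 7726 expression quoted in the text with the standard-globe parameters as defaults.

def mrt_from_surfaces(surface_temps_k, angle_factors):
    """Mean radiant temperature (K) from the surrounding surface temperatures (K)
    and the angle factors between the person and each surface (summing to 1)."""
    fourth_power = sum(f * t**4 for t, f in zip(surface_temps_k, angle_factors))
    return fourth_power ** 0.25

def mrt_from_globe(gt_c, ta_c, va, emissivity=0.95, diameter=0.15):
    """Mean radiant temperature (degrees C) from a black-globe reading under
    forced convection, following the ISO 7726 expression given above."""
    gt_k = gt_c + 273.15
    mrt4 = gt_k**4 + (1.1e8 * va**0.6) / (emissivity * diameter**0.4) * (gt_c - ta_c)
    return mrt4 ** 0.25 - 273.15

# Hypothetical readings: globe at 30 degrees C, air at 25 degrees C, 0.5 m/s air speed.
print(round(mrt_from_globe(30.0, 25.0, 0.5), 1))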
[ { "math_id": 0, "text": "MRT^4 = T_1^4 F_{p-1} + T_2^4 F_{p-2} + ... + T_n^4 F_{p-n}" }, { "math_id": 1, "text": "MRT" }, { "math_id": 2, "text": "T_n" }, { "math_id": 3, "text": "F_{p-n}" }, { "math_id": 4, "text": "MRT = T_1 F_{p-1} + T_2 F_{p-2} + ... + T_n F_{p-n}" }, { "math_id": 5, "text": "MRT = \\left[ \\left(GT+273.15 \\right)^4 + \\frac{1.1 \\cdot 10^8 \\cdot v_a^{0.6}} {\\varepsilon \\cdot D^{0.4}}(GT - T_a) \\right]^{1/4} - 273.15" }, { "math_id": 6, "text": "MRT = \\left[ \\left(GT+273.15 \\right)^4 + \\frac{0.25 \\cdot 10^8} {\\varepsilon} \\left(\\frac{|GT - T_a|} {D} \\right)^{1/4} (GT - T_a) \\right]^{1/4} - 273.15" }, { "math_id": 7, "text": "GT" }, { "math_id": 8, "text": "v_a" }, { "math_id": 9, "text": "\\varepsilon" }, { "math_id": 10, "text": "D" }, { "math_id": 11, "text": "T_a" }, { "math_id": 12, "text": "MRT = \\left[ \\left(GT+273.15 \\right)^4 + 2.5 \\cdot 10^8 \\cdot v_a^{0.6}(GT - T_a) \\right]^{1/4} - 273.15" } ]
https://en.wikipedia.org/wiki?curid=1214618
12146395
Proximity effect (electromagnetism)
Magnetically-induced effect in AC conductors In electromagnetics, proximity effect is a redistribution of electric current occurring in nearby parallel electrical conductors carrying alternating current (AC), caused by magnetic effects. In adjacent conductors carrying AC current in the same direction, it causes the current in the conductor to concentrate on the side away from the nearby conductor. In conductors carrying AC current in opposite directions, it causes the current in the conductor to concentrate on the side adjacent to the nearby conductor. Proximity effect is caused by eddy currents induced within a conductor by the time-varying magnetic field of the other conductor, by electromagnetic induction. For example, in a coil of wire carrying alternating current with multiple turns of wire lying next to each other, the current in each wire will be concentrated in a strip on each side of the wire facing away from the adjacent wires. This "current crowding" effect causes the current to occupy a smaller effective cross-sectional area of the conductor, increasing current density and AC electrical resistance of the conductor. The concentration of current on the side of the conductor gets larger with increasing frequency, so proximity effect causes adjacent wires carrying the same current to have more resistance at higher frequencies. Explanation. A changing magnetic field will influence the distribution of an electric current flowing within an electrical conductor, by electromagnetic induction. When an alternating current (AC) flows through a conductor, it creates an associated alternating magnetic field around it. The alternating magnetic field induces eddy currents in adjacent conductors, altering the overall distribution of current flowing through them. The result is that the current is concentrated in the areas of the conductor farthest away from nearby conductors carrying current in the same direction. The proximity effect can significantly increase the AC resistance of adjacent conductors when compared to their resistance with a DC current. The effect increases with frequency. At higher frequencies, the AC resistance of a conductor can easily exceed ten times its DC resistance. Example: two parallel wires. The cause of proximity effect can be seen from the accompanying drawings of two parallel wires next to each other carrying alternating current (AC). The righthand wire in each drawing has the top part transparent to show the currents inside the metal. Each drawing depicts a point in the alternating current cycle when the current is increasing. Currents in the same direction. In the first drawing the current "(I, red arrows)" in both wires is in the same direction. The current in the lefthand wire creates a circular magnetic field "(B, green lines)" which passes through the other wire. From the right hand rule the field lines pass through the wire in an upward direction. From Faraday's law of induction, when the time-varying magnetic field is increasing, it creates a circular current "(E, red loops)" within the wire around the magnetic field lines in a clockwise direction. These are called eddy currents. On the lefthand side nearest to the other wire "(1)" the eddy current is in the opposite direction to the main current "(big pink arrow)" in the wire, so it subtracts from the main current, reducing it. On the righthand side "(2)" the eddy current is in the same direction as the main current so it adds to it, increasing it. 
The net effect is to redistribute the current in the cross section of the wire into a thin strip on the side facing away from the other wire. The current distribution is shown by the red arrows and color gradient "(3)" on the cross section, with blue areas indicating low current and green, yellow, and red indicating higher current. The same argument shows that the current in the lefthand wire is also concentrated into a strip on the far side away from the other wire. In an alternating current the currents in the wire are increasing for half the time and decreasing half the time. When the current in the wires begins to decrease, the eddy currents reverse direction, which reverses the current redistribution. Currents in opposite directions. In the second drawing, the alternating current in the wires is in opposite directions; in the lefthand wire it is into the page and in the righthand wire it is out of the page. This is the case in AC electrical power cables, which have two wires in which the current direction is always opposite. In this case, since the current is opposite, from the right hand rule the magnetic field "(B)" created by the lefthand wire is directed downward through the righthand wire, instead of upward as in the other drawing. From Faraday's law the circular eddy currents "(E)" are directed in a counterclockwise direction. On the lefthand side nearest to the other wire "(1)" the eddy current is now in the same direction as the main current, so it adds to the main current, increasing it. On the righthand side "(2)" the eddy current is in the opposite direction to the main current, reducing it. In contrast to the previous case, the net effect is to redistribute the current into a thin strip on the side "adjacent" to the other wire. Effects. The additional resistance increases power losses which, in power circuits, can generate undesirable heating. Proximity and skin effect significantly complicate the design of efficient transformers and inductors operating at high frequencies, used for example in switched-mode power supplies. In radio frequency tuned circuits used in radio equipment, proximity and skin effect losses in the inductor reduce the Q factor, broadening the bandwidth. To minimize this, special construction is used in radio frequency inductors. The winding is usually limited to a single layer, and often the turns are spaced apart to separate the conductors. In multilayer coils, the successive layers are wound in a crisscross pattern to avoid having wires lying parallel to one another; these are sometimes referred to as "basket-weave" or "honeycomb" coils. Since the current flows on the surface of the conductor, high frequency coils are sometimes silver-plated, or made of litz wire. Dowell method for determination of losses. This one-dimensional method for transformers assumes the wires have rectangular cross-section, but can be applied approximately to circular wire by treating it as square with the same cross-sectional area. The windings are divided into 'portions', each portion being a group of layers which contains one position of zero MMF. For a transformer with a separate primary and secondary winding, each winding is a portion. For a transformer with interleaved (or sectionalised) windings, the innermost and outermost sections are each one portion, while the other sections are each divided into two portions at the point where zero m.m.f occurs. The total resistance of a portion is given by formula_0 Squared-field-derivative method. 
This can be used for round wire or litz wire transformers or inductors with multiple windings of arbitrary geometry with arbitrary current waveforms in each winding. The diameter of each strand should be less than 2 δ. It also assumes the magnetic field is perpendicular to the axis of the wire, which is the case in most designs. The method can be generalized to multiple windings. References. <templatestyles src="Reflist/styles.css" />
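As a rough illustration of the Dowell expression quoted above, the sketch below evaluates the ratio R_AC/R_DC using the complex factors M, D and α listed with this article. The reading of h as the conductor (layer) height, m as the number of layers in the portion, η as the porosity factor, and all numerical values are assumptions made for the example, not figures from the article.

import cmath

MU_0 = 4e-7 * cmath.pi  # vacuum permeability in H/m

def dowell_ac_factor(freq, h, eta, rho, m):
    """Ratio R_AC / R_DC for one winding portion of m layers, per the Dowell
    expression quoted above: Re(M) + (m**2 - 1) * Re(D) / 3."""
    omega = 2 * cmath.pi * freq
    alpha = cmath.sqrt(1j * omega * MU_0 * eta / rho)  # complex propagation constant
    ah = alpha * h
    M = ah / cmath.tanh(ah)          # alpha*h*coth(alpha*h)
    D = 2 * ah * cmath.tanh(ah / 2)
    return (M + (m**2 - 1) * D / 3).real

# Assumed values: 100 kHz, 0.4 mm foil height, porosity factor 0.9,
# copper resistivity 1.72e-8 ohm*m, a portion with 4 layers.
print(round(dowell_ac_factor(1e5, 0.4e-3, 0.9, 1.72e-8, 4), 2))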
[ { "math_id": 0, "text": "R_\\text{AC} = R_\\text{DC}\\left(\\operatorname{Re}(M) + \\frac{(m^2-1) \\operatorname{Re}(D)}{3}\\right)" }, { "math_id": 1, "text": "M = \\alpha h \\coth (\\alpha h) " }, { "math_id": 2, "text": "D = 2 \\alpha h \\tanh (\\alpha h/2) " }, { "math_id": 3, "text": "\\alpha = \\sqrt{\\frac{j \\omega \\mu_0 \\eta}{\\rho}}" }, { "math_id": 4, "text": "\\omega" }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\eta = N_l \\frac{a}{b}" }, { "math_id": 7, "text": "\\mathbf{D}=\\gamma_1 \\left \\langle\n\\begin{bmatrix}\n \\left | \\hat{\\vec B_1} \\right |^2 & \\hat{\\vec B_1} \\cdot \\hat{\\vec B_2} \\\\\n \\hat{\\vec B_2} \\cdot \\hat{\\vec B_1} & \\left | \\hat{\\vec B_2} \\right |^2\n\\end{bmatrix}\n\\right \\rangle_1 + \\gamma_2 \\left \\langle\n\\begin{bmatrix}\n \\left | \\hat{\\vec B_1} \\right |^2 & \\hat{\\vec B_1} \\cdot \\hat{\\vec B_2} \\\\\n \\hat{\\vec B_2} \\cdot \\hat{\\vec B_1} & \\left | \\hat{\\vec B_2} \\right |^2\n\\end{bmatrix}\n\\right \\rangle_2 " }, { "math_id": 8, "text": "\\hat{\\vec B_j}" }, { "math_id": 9, "text": "\\gamma_j = \\frac{\\pi N_j l_{t,j}d_{c,j}^4}{64 \\rho_c}" }, { "math_id": 10, "text": "N_j" }, { "math_id": 11, "text": "l_{t,j}" }, { "math_id": 12, "text": "d_{c,j}" }, { "math_id": 13, "text": "\\rho_c" }, { "math_id": 14, "text": "\nP = \\overline{\\begin{bmatrix} \\frac{di_1}{dt} & \\frac{di_2}{dt} \\end{bmatrix}\n\\mathbf{D}\n\\begin{bmatrix} \\frac{di_1}{dt} \\\\ \\frac{di_2}{dt} \\end{bmatrix}}\n" }, { "math_id": 15, "text": "I_\\text{rms}^2 \\times R_\\text{DC} " } ]
https://en.wikipedia.org/wiki?curid=12146395
12146473
Proximity effect (electron beam lithography)
The proximity effect in electron beam lithography (EBL) is the phenomenon that the exposure dose distribution, and hence the developed pattern, is wider than the scanned pattern due to the interactions of the primary beam electrons with the resist and substrate. These cause the resist outside the scanned pattern to receive a non-zero dose. Important contributions to weak-resist polymer chain scission (for positive resists) or crosslinking (for negative resists) come from electron forward scattering and backscattering. The forward scattering process is due to electron-electron interactions which deflect the primary electrons by a typically small angle, thus statistically broadening the beam in the resist (and further in the substrate). The majority of the electrons do not stop in the resist but penetrate the substrate. These electrons can still contribute to resist exposure by scattering back into the resist and causing subsequent inelastic or exposing processes. This backscattering process originates e.g. from a collision with a heavy particle (i.e. substrate nucleus) and leads to wide-angle scattering of the light electron from a range of depths (micrometres) in the substrate. The Rutherford backscattering probability increases quickly with substrate nuclear charge. The above effects can be approximated by a simple two-gaussian model where a perfect point-like electron beam is broadened to a superposition of a Gaussian with a width formula_0 of a few nanometers to order tens of nanometers, depending on the acceleration voltage, due to forward scattering, and a Gaussian with a width formula_1 of the order of a few micrometres to order tens due to backscattering, again depending on the acceleration voltage but also on the materials involved: formula_2 formula_3 is of order 1 so the contribution of backscattered electrons to the exposure is of the same order as the contribution of 'direct' forward scattered electrons. formula_4, formula_5 and formula_3 are determined by the resist and substrate materials and the primary beam energy. The two-gaussian model parameters, including the development process, can be determined experimentally by exposing shapes for which the Gaussian integral is easily solved, i.e. donuts, with increasing dose and observing at which dose the center resist clears or does not clear. A thin resist with a low electron density will reduce forward scattering. A light substrate (light nuclei) will reduce backscattering. When electron beam lithography is performed on substrates with 'heavy' films, such as gold coatings, the backscatter effect will (depending on thickness) significantly increase. Increasing beam energy will reduce the forward scattering width, but since the beam penetrates the substrate more deeply, the backscatter width will increase. The primary beam can transfer energy to electrons via elastic collisions with electrons and via inelastic collision processes such as impact ionization. In the latter case, a secondary electron is created and the energy state of the atom changes, which can result in the emission of Auger electrons or X-rays. The range of these secondary electrons is an energy-dependent accumulation of (inelastic) mean free paths; while not always a repeatable number, it is this range (up to 50 nanometers) that ultimately affects the practical resolution of the EBL process. The model described above can be extended to include these effects. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
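The two-Gaussian model above is straightforward to evaluate numerically. In the following Python sketch the values of α, β and η are placeholders chosen only for illustration, since in practice they depend on the resist, the substrate and the acceleration voltage.

import math

def two_gaussian_psf(r, alpha, beta, eta):
    """Exposure point-spread function PSF(r) of the two-Gaussian model above:
    a narrow forward-scattering Gaussian of width alpha plus a wide
    backscattering Gaussian of width beta, weighted by eta."""
    forward = math.exp(-r**2 / alpha**2) / alpha**2
    backscatter = eta * math.exp(-r**2 / beta**2) / beta**2
    return (forward + backscatter) / (math.pi * (1 + eta))

# Placeholder parameters, all lengths in nanometres: alpha = 20 nm, beta = 5000 nm, eta = 0.8.
for r_nm in (0, 50, 500, 5000):
    print(r_nm, two_gaussian_psf(r_nm, alpha=20.0, beta=5000.0, eta=0.8))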
[ { "math_id": 0, "text": "{\\displaystyle \\alpha }" }, { "math_id": 1, "text": "{\\displaystyle \\beta }" }, { "math_id": 2, "text": " PSF(r)=\\frac{1}{\\pi (1+\\eta)} \\left[\\frac{1}{\\alpha^2} e^{-\\frac{r^2}{\\alpha^2}} + \\frac{\\eta}{\\beta^2} e^{-\\frac{r^2}{\\beta^2}}\\right] " }, { "math_id": 3, "text": "\\eta" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=12146473
12146531
Proximity effect (superconductivity)
Phenomena that occur when a superconductor is in contact with a non-superconductor Proximity effect or Holm–Meissner effect is a term used in the field of superconductivity to describe phenomena that occur when a superconductor (S) is placed in contact with a "normal" (N) non-superconductor. Typically the critical temperature formula_0 of the superconductor is suppressed and signs of weak superconductivity are observed in the normal material over mesoscopic distances. The proximity effect has been known since the pioneering work of R. Holm and W. Meissner. They observed zero resistance in SNS pressed contacts, in which two superconducting metals are separated by a thin film of a non-superconducting (i.e. normal) metal. The discovery of the supercurrent in SNS contacts is sometimes mistakenly attributed to Brian Josephson's 1962 work, yet the effect was known long before his publication and was understood as the proximity effect. Origin of the effect. Electrons in the superconducting state of a superconductor are ordered in a very different way than in a normal metal, i.e. they are paired into Cooper pairs. Furthermore, electrons in a material cannot be said to have a definite position because of momentum-position complementarity. In solid state physics one generally chooses a momentum space basis, and all electron states are filled up to the Fermi surface in a metal, or up to the gap edge energy in a superconductor. Because of the nonlocality of the electrons in metals, the properties of those electrons cannot change infinitely quickly. In a superconductor, the electrons are ordered as superconducting Cooper pairs; in a normal metal, the electron order is gapless (single-electron states are filled up to the Fermi surface). If the superconductor and normal metal are brought together, the electron order in the one system cannot change infinitely abruptly into the other order at the border. Instead, the paired state in the superconducting layer is carried over to the normal metal, where the pairing is destroyed by scattering events, causing the Cooper pairs to lose their coherence. For very clean metals, such as copper, the pairing can persist for hundreds of microns. Conversely, the (gapless) electron order present in the normal metal is also carried over to the superconductor in that the superconducting gap is lowered near the interface. The microscopic model describing this behavior in terms of single electron processes is called Andreev reflection. It describes how electrons in one material take on the order of the neighboring layer by taking into account interface transparency and the states (in the other material) from which the electrons can scatter. Overview. As a contact effect, the proximity effect is closely related to thermoelectric phenomena like the Peltier effect or the formation of pn junctions in semiconductors. The proximity effect enhancement of formula_1 is largest when the normal material is a metal with a large diffusivity rather than an insulator (I). Proximity-effect suppression of formula_1 in a spin-singlet superconductor is largest when the normal material is ferromagnetic, as the presence of the internal magnetic field weakens superconductivity (Cooper pair breaking). Research. The study of S/N, S/I and S/S' (S' is a superconductor with a lower critical temperature) bilayers and multilayers has been a particularly active area of superconducting proximity effect research.
The behavior of the compound structure in the direction parallel to the interface differs from that perpendicular to the interface. In type II superconductors exposed to a magnetic field parallel to the interface, vortex defects will preferentially nucleate in the N or I layers and a discontinuity in behavior is observed when an increasing field forces them into the S layers. In type I superconductors, flux will similarly first penetrate N layers. Similar qualitative changes in behavior do not occur when a magnetic field is applied perpendicular to the S/I or S/N interface. In S/N and S/I multilayers at low temperatures, the long penetration depths and coherence lengths of the Cooper pairs will allow the S layers to maintain a mutual, three-dimensional quantum state. As temperature is increased, communication between the S layers is destroyed, resulting in a crossover to two-dimensional behavior. The anisotropic behavior of S/N, S/I and S/S' bilayers and multilayers has served as a basis for understanding the far more complex critical field phenomena observed in the highly anisotropic cuprate high-temperature superconductors. Recently the Holm–Meissner proximity effect was observed in graphene by the Morpurgo research group. The experiments were done on nanometer-scale devices made of single graphene layers with superimposed superconducting electrodes made of 10 nm titanium and 70 nm aluminum films. Aluminum is a superconductor and is responsible for inducing superconductivity into the graphene. The distance between the electrodes was in the range between 100 nm and 500 nm. The proximity effect is manifested by observations of a supercurrent, i.e. a current flowing through the graphene junction with zero voltage across the junction. By using the gate electrodes, the researchers have shown that the proximity effect occurs both when the carriers in the graphene are electrons and when the carriers are holes. The critical current of the devices was above zero even at the Dirac point. Abrikosov vortex and proximity effect. It has been shown that a quantum vortex with a well-defined core can exist in a rather thick normal metal proximized with a superconductor. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "T_{c}" }, { "math_id": 1, "text": "T_c" } ]
https://en.wikipedia.org/wiki?curid=12146531
1214667
Centipede game
In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round, but after an additional switch the potential payoff will be higher. Therefore, although at each round a player has an incentive to take the pot, it would be better for them to wait. Although the traditional centipede game had a limit of 100 rounds (hence the name), any game with this structure but a different number of rounds is called a centipede game. The unique subgame perfect equilibrium (and every Nash equilibrium) of these games results in the first player taking the pot on the first round of the game; however, in empirical tests, relatively few players do so, and as a result, achieve a higher payoff than in the subgame perfect and Nash equilibria. These results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. The Centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which show a standard way of providing a solution to the game. Play. One possible version of a centipede game could be played as follows: <templatestyles src="Template:Blockquote/styles.css" />Consider two players: Alice and Bob. Alice moves first. At the start of the game, Alice has two piles of coins in front of her: one pile contains 4 coins and the other pile contains 1 coin. Each player has two moves available: either "take" the larger pile of coins and give the smaller pile to the other player or "push" both piles across the table to the other player. Each time the piles of coins pass across the table, the quantity of coins in each pile doubles. For example, assume that Alice chooses to "push" the piles on her first move, handing the piles of 1 and 4 coins over to Bob, doubling them to 2 and 8. Bob could now use his first move to either "take" the pile of 8 coins and give 2 coins to Alice, or he can "push" the two piles back across the table again to Alice, again increasing the size of the piles to 4 and 16 coins. The game continues for a fixed number of rounds or until a player decides to end the game by pocketing a pile of coins. The addition of coins is taken to be an externality, as it is not contributed by either player. Formal definition. The centipede game may be written as formula_0 where formula_1 and formula_2. Players formula_3 and formula_4 alternate, starting with player formula_3, and may on each turn play a move from formula_5 with a maximum of formula_6 rounds. The game terminates when formula_7 is played for the first time, otherwise upon formula_6 moves, if formula_7 is never played. Suppose the game ends on round formula_8 with player formula_9 making the final move. Then the outcome of the game is defined as follows: if player formula_10 ended the game by playing formula_7, then formula_10 receives the larger pile formula_11 and formula_12 receives the smaller pile formula_13; if instead the game ended because formula_14 was played on the final round, then formula_10 receives formula_15 and formula_12 receives formula_16. Here, formula_17 denotes the other player. Equilibrium analysis and backward induction. Standard game theoretic tools predict that the first player will defect on the first round, taking the pile of coins for himself.
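The prediction just stated can be checked with a short recursion over the formal definition above. The following Python sketch is illustrative: the payoff rule for the final push and the piles of 4 and 1 coins are read off the definition and the Alice-and-Bob example, while the six-round length and all names are my own choices.

def solve(t, N, m0, m1):
    """Subgame-perfect outcome of the centipede game defined above, seen from round t.
    Returns ((payoff to the player moving at t, payoff to the other player), action)."""
    take = (2**t * m0, 2**t * m1)                  # take: the mover keeps the larger pile
    if t == N - 1:
        push = (2**(t + 1) * m1, 2**(t + 1) * m0)  # final push: piles double, the opponent keeps the larger one
    else:
        nxt, _ = solve(t + 1, N, m0, m1)           # the opponent moves at round t + 1
        push = (nxt[1], nxt[0])                    # swap the payoffs back to the round-t mover's perspective
    return (take, 'take') if take[0] >= push[0] else (push, 'push')

# Piles of 4 and 1 coins as in the playing example, over six rounds.
outcome, first_move = solve(0, 6, 4, 1)
print(first_move, outcome)  # -> take (4, 1): defect immediately, as backward induction predicts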
In the centipede game, a pure strategy consists of a set of actions (one for each choice point in the game, even though some of these choice points may never be reached) and a mixed strategy is a probability distribution over the possible pure strategies. There are several pure strategy Nash equilibria of the centipede game and infinitely many mixed strategy Nash equilibria. However, there is only one subgame perfect equilibrium (a popular refinement to the Nash equilibrium concept). In the unique subgame perfect equilibrium, each player chooses to defect at every opportunity. This, of course, means defection at the first stage. In the Nash equilibria, however, the actions that would be taken after the initial choice opportunities (even though they are never reached since the first player defects immediately) may be cooperative. Defection by the first player is the unique subgame perfect equilibrium and required by any Nash equilibrium, it can be established by backward induction. Suppose two players reach the final round of the game; the second player will do better by defecting and taking a slightly larger share of the pot. Since we suppose the second player will defect, the first player does better by defecting in the second to last round, taking a slightly higher payoff than she would have received by allowing the second player to defect in the last round. But knowing this, the second player ought to defect in the third to last round, taking a slightly higher payoff than he would have received by allowing the first player to defect in the second to last round. This reasoning proceeds backwards through the game tree until one concludes that the best action is for the first player to defect in the first round. The same reasoning can apply to any node in the game tree. For a game that ends after four rounds, this reasoning proceeds as follows. If we were to reach the last round of the game, Player "2" would do better by choosing "d" instead of "r", receiving 4 coins instead of 3. However, given that "2" will choose "d", "1" should choose "D" in the second to last round, receiving 3 instead of 2. Given that "1" would choose "D" in the second to last round, "2" should choose "d" in the third to last round, receiving 2 instead of 1. But given this, Player "1" should choose "D" in the first round, receiving 1 instead of 0. There are a large number of Nash equilibria in a centipede game, but in each, the first player defects on the first round and the second player defects in the next round frequently enough to dissuade the first player from passing. Being in a Nash equilibrium does not require that strategies be rational at every point in the game as in the subgame perfect equilibrium. This means that strategies that are cooperative in the never-reached later rounds of the game could still be in a Nash equilibrium. In the example above, one Nash equilibrium is for both players to defect on each round (even in the later rounds that are never reached). Another Nash equilibrium is for player 1 to defect on the first round, but pass on the third round and for player 2 to defect at any opportunity. Empirical results. Several studies have demonstrated that the Nash equilibrium (and likewise, subgame perfect equilibrium) play is rarely observed. Instead, subjects regularly show partial cooperation, playing "R" (or "r") for several moves before eventually choosing "D" (or "d"). It is also rare for subjects to cooperate through the whole game. 
For examples, see McKelvey and Palfrey (1992) and Nagel and Tang (1998), or Krockow et al. (2016) for a survey. Scholars have investigated the effect of increasing the stakes. As with other games, for instance the ultimatum game, as the stakes increase the play approaches (but does not reach) Nash equilibrium play. Since the empirical studies have produced results that are inconsistent with the traditional equilibrium analysis, several explanations of this behavior have been offered. To explain the experimental data, we either need some altruistic agents or some boundedly rational agents. Preference-based explanation. One reason people may deviate from equilibrium behavior is if some are altruistic. The basic idea is that in each game there is a certain probability of playing against an altruistic agent, and if this probability is high enough, you should defect on the last round rather than the first. If enough people are altruists, sacrificing the payoff of first-round defection is worth the price in order to determine whether or not your opponent is an altruist. McKelvey and Palfrey (1992) create a model with some altruistic agents and some rational agents who will end up playing a mixed strategy (i.e. they play at multiple nodes with some probability). To match the experimental data well, around 5% of the players need to be altruistic in the model. Elmshauser (2022) shows that a model including altruistic agents and uncertainty-averse agents (instead of rational agents) explains the experimental data even better. Some experiments tried to see whether players who pass a lot would also be the most altruistic agents in other games or other life situations (see for instance Pulford et al. or Gamba and Regner (2019), who assessed Social Value Orientation). Players who passed a lot were indeed more altruistic, but the difference was not large. Bounded rationality explanation. Rosenthal (1981) suggested that if one has reason to believe his opponent will deviate from Nash behavior, then it may be advantageous to not defect on the first round. Another possibility involves error. If there is a significant possibility of error in action, perhaps because your opponent has not reasoned completely through the backward induction, it may be advantageous (and rational) to cooperate in the initial rounds. The quantal response equilibrium of McKelvey and Palfrey (1995) models agents who play a Nash equilibrium with errors, and the authors applied it to the centipede game. Another model able to explain behavior in the centipede game is the level-k model, a cognitive hierarchy theory: an L0 player plays randomly, an L1 player best responds to the L0 player, an L2 player best responds to the L1 player, and so on. In many games, scholars observed that most of the players were L2 or L3 players, which is consistent with the centipede game experimental data. Garcia-Pola et al. (2020) concluded from an experiment that most of the players play following either a level-k logic or a quantal response logic. However, Parco, Rapoport and Stein (2002) illustrated that the level of financial incentives can have a profound effect on the outcome in a three-player game: the larger the incentives are for deviation, the greater the propensity for learning behavior in a repeated single-play experimental design to move toward the Nash equilibrium. Palacios-Huerta and Volij (2009) find that expert chess players play differently from college students.
With a rising Elo, the probability of continuing the game declines; all Grandmasters in the experiment stopped at their first chance. They conclude that chess players are familiar with using backward induction reasoning and hence need less learning to reach the equilibrium. However, in an attempt to replicate these findings, Levitt, List, and Sadoff (2010) find strongly contradictory results, with zero of sixteen Grandmasters stopping the game at the first node. Qualitative research by Krockow et al., which employed think-aloud protocols that required players in a Centipede game to vocalise their reasoning during the game, indicated a range of decision biases such as action bias or completion bias, which may drive irrational choices in the game. Significance. Like the prisoner's dilemma, this game presents a conflict between self-interest and mutual benefit. If it could be enforced, both players would prefer that they both cooperate throughout the entire game. However, a player's self-interest or players' distrust can interfere and create a situation where both do worse than if they had blindly cooperated. Although the Prisoner's Dilemma has received substantial attention for this fact, the Centipede Game has received relatively less. Additionally, Binmore (2005) has argued that some real-world situations can be described by the Centipede game. One example he presents is the exchange of goods between parties that distrust each other. Another example Binmore (2005) likens to the Centipede game is the mating behavior of a hermaphroditic sea bass which takes turns exchanging eggs to fertilize. In these cases, we find cooperation to be abundant. Since the payoffs for some amount of cooperation in the Centipede game are so much larger than immediate defection, the "rational" solutions given by backward induction can seem paradoxical. This, coupled with the fact that experimental subjects regularly cooperate in the Centipede game, has prompted debate over the usefulness of the idealizations involved in the backward induction solutions, see Aumann (1995, 1996) and Binmore (1996). References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{G}(N,~m_{0},~m_{1})" }, { "math_id": 1, "text": "N, m_{0}, m_{1}\\in\\mathbb{N}" }, { "math_id": 2, "text": "m_{0}>m_{1}" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "II" }, { "math_id": 5, "text": "\\{\\mathrm{take},\\mathrm{push}\\}" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "\\mathrm{take}" }, { "math_id": 8, "text": "t\\in\\{0,\\ldots,N-1\\}" }, { "math_id": 9, "text": "p\\in\\{I,II\\}" }, { "math_id": 10, "text": "p" }, { "math_id": 11, "text": "2^{t}m_{0}" }, { "math_id": 12, "text": "p^{\\ast}" }, { "math_id": 13, "text": "2^{t}m_{1}" }, { "math_id": 14, "text": "\\mathrm{push}" }, { "math_id": 15, "text": "2^{t+1}m_{1}" }, { "math_id": 16, "text": "2^{t+1}m_{0}" }, { "math_id": 17, "text": "p^{\\ast}\\in\\{I,II\\}" } ]
https://en.wikipedia.org/wiki?curid=1214667
1214697
Wet-bulb globe temperature
Apparent temperature estimating how humans are affected The wet-bulb globe temperature (WBGT) is a measure of environmental heat as it affects humans. Unlike a simple temperature measurement, WBGT accounts for all four major environmental heat factors: air temperature, humidity, radiant heat (from sunlight or sources such as furnaces), and air movement (wind or ventilation). It is used by industrial hygienists, athletes, sporting event organizers and the military to determine appropriate exposure levels to high temperatures. A WBGT meter combines three sensors: a dry-bulb thermometer, a natural (static) wet-bulb thermometer, and a black globe thermometer. For outdoor environments, the meter uses all sensor data inputs, calculating WBGT as: formula_0 where "T"w is the natural wet-bulb temperature, "T"g is the globe thermometer temperature, and "T"d is the dry-bulb temperature (actual air temperature). Indoors the following formula is used: formula_1 If a meter is not available, the WBGT can be calculated from current or historic weather data. A clothing adjustment may be added to the WBGT to determine the "effective WBGT", WBGTeff. Uses. The American Conference of Governmental Industrial Hygienists publishes threshold limit values (TLVs) that have been adopted by many governments for use in the workplace. The process for determining the WBGT is also described in ISO 7243, Hot Environments - Estimation of the Heat Stress on Working Man, Based on the WBGT Index. The American College of Sports Medicine bases its guidelines for the intensity of sport practices on the WBGT. In hot areas, some US military installations display a flag to indicate the heat category based on the WBGT. The military publishes guidelines for water intake and physical activity level for acclimated and unacclimated individuals in different uniforms based on the heat category. The University of Georgia adapted these categories for use in college sports as a guideline for how strenuous practices can be. Related temperature comfort measures. The heat index used by the U.S. National Weather Service and the humidex used by the Meteorological Service of Canada, along with the wind chill used in both countries, are also measures of perceived heat or cold, but they do not account for the effects of radiation. The NWS office in Tulsa, Oklahoma, in conjunction with Oral Roberts University's mathematics department, published an approximation formula for the WBGT that takes into account cloud cover and wind speed; in limited experimentation (four samples), the office claimed the estimate was regularly accurate to within , even with a simplification that reduces the equation from a fourth-degree polynomial to a linear relationship (the authors noted that the linear approximation was not tested for air temperatures under since the WBGT is designed to measure heat stress, which seldom occurs below that threshold). References. <templatestyles src="Reflist/styles.css" />
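The two weighted sums above translate directly into code; a minimal Python sketch with made-up readings (function and variable names are mine):

def wbgt_outdoor(t_wet, t_globe, t_dry):
    """Outdoor WBGT from natural wet-bulb, black-globe and dry-bulb readings (any one temperature unit)."""
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry

def wbgt_indoor(t_wet, t_globe):
    """Indoor WBGT, which drops the separate dry-bulb term."""
    return 0.7 * t_wet + 0.3 * t_globe

# Hypothetical readings in degrees C:
print(wbgt_outdoor(25.0, 35.0, 30.0))  # 0.7*25 + 0.2*35 + 0.1*30 = 27.5
print(wbgt_indoor(25.0, 35.0))         # 0.7*25 + 0.3*35 = 28.0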
[ { "math_id": 0, "text": "\\mathrm{WBGT} = 0.7T_\\mathrm{w} + 0.2T_\\mathrm{g} + 0.1T_\\mathrm{d}" }, { "math_id": 1, "text": "\\mathrm{WBGT} = 0.7T_\\mathrm{w} + 0.3T_\\mathrm{g}" } ]
https://en.wikipedia.org/wiki?curid=1214697
12148336
Control premium
Amount that a buyer is sometimes willing to pay over the current market price A control premium is an amount that a buyer is sometimes willing to pay over the current market price of a publicly traded company in order to acquire a controlling share in that company. If the market perceives that a public company's profit and cash flow are not being maximized, its capital structure is not optimal, or other factors that can be changed are impacting the company's share price, an acquirer may be willing to offer a premium over the price currently established by other market participants. A discount for lack of control, sometimes referred to as a minority discount, reflects the reduction in value from a firm's perceived optimal or intrinsic value when cash flow or other factors prevent optimal value from being reached. Overview of concept. Transactions involving small blocks of shares in public companies occur regularly and serve to establish the market price per share of company stock. Acquiring a controlling number of shares sometimes requires offering a premium over the current market price per share in order to induce existing shareholders to sell. The offer is typically made through a tender offer with specific terms, including the price. Higher control premiums are often associated with classified boards. The amount of control is the acquirer's decision and is based on its belief that the target company's share price is not optimized. An acquirer would not be making a prudent investment decision if the tender offer it makes is higher than the future benefit of the acquisition. Control premium vs. minority discount. The control premium and the minority discount could be considered to be the same dollar amount. Stated as a percentage, this dollar amount would be higher as a percentage of the lower minority marketable value or, conversely, lower as a percentage of the higher control value. formula_0 Size of premium. In general, the maximum value that an acquirer firm would be willing to pay should equal the sum of the target firm's intrinsic value, synergies that the acquiring firm can expect to achieve between the two firms, and the opportunity cost of not acquiring the target firm (i.e. loss to the acquirer if a rival firm acquires the target firm instead). A premium paid, if any, will be specific to the acquirer and the target; actual premiums paid have varied widely. In business practice, control premiums may vary from 20% to 40%. Larger control premiums indicate low protection of minority shareholders. Example. Company XYZ has an EBITDA of $1,500,000 and its shares are currently trading at an EV/EBITDA multiple of 5x. This results in a valuation of XYZ of $7,500,000 (=$1,500,000 * 5) on an EV basis. A potential buyer may believe that EBITDA can be improved to $2,000,000 by eliminating the CEO, who would become redundant after the transaction. Thus, the buyer could potentially value the target at $10,000,000 since the value expected to be achieved by replacing the CEO is the accretive $500,000 (=$2,000,000–$1,500,000) in EBITDA, which in turn translates to a $2,500,000 (=$500,000 * 5 or =$10,000,000–$7,500,000) premium over the pre-transaction value of the target. References. <templatestyles src="Reflist/styles.css" />
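A short Python sketch of the percentage relationship and of the XYZ example above; the function names are mine, and the figures are those of the worked example:

def minority_discount(control_premium):
    """Minority discount implied by a control premium, both as fractions: 1 - 1/(1 + premium)."""
    return 1 - 1 / (1 + control_premium)

def implied_premium(ebitda_now, ebitda_improved, ev_multiple):
    """Premium a buyer could justify from an expected EBITDA improvement, valued on the same EV/EBITDA multiple."""
    return (ebitda_improved - ebitda_now) * ev_multiple

print(round(minority_discount(0.30), 3))         # a 30% control premium implies a discount of about 23.1%
print(implied_premium(1_500_000, 2_000_000, 5))  # 2500000, the premium in the XYZ example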
[ { "math_id": 0, "text": "\\mbox{Minority discount} = \\mbox{1 – } \\left({ {1 \\over \\mbox{1 + Control premium} }}\\right) " } ]
https://en.wikipedia.org/wiki?curid=12148336
12150098
Carbon leakage
Unintended increase in greenhouse gas emissions Carbon leakage is a concept used to quantify an increase in greenhouse gas emissions in one country as a result of an emissions reduction by a second country with stricter climate change mitigation policies. Carbon leakage is one type of spill-over effect. Spill-over effects can be positive or negative; for example, an emission reduction policy might lead to technological developments that aid reductions outside of the policy area. Carbon leakage is defined as "the increase in CO2 emissions outside the countries taking domestic mitigation action divided by the reduction in the emissions of these countries." It is expressed as a percentage, and can be greater or less than 100%. There is no consensus over the magnitude of long-term leakage effects. Carbon leakage may occur for a number of reasons: If the emissions policy of a country raises local costs, then another country with a more relaxed policy may have a trading advantage. If demand for these goods remains the same, production may move offshore to the cheaper country with lower standards, and global emissions will not be reduced. If environmental policies in one country add a premium to certain fuels or commodities, then the demand may decline and their price may fall. Countries that do not place a premium on those items may then take up the demand and use the same supply, negating any benefit. Coal, oil and alternative technologies. The issue of carbon leakage can be interpreted from the perspective of the reliance of society on coal, oil, and alternative (less polluting) technologies, e.g., biomass. This is based on the theory of nonrenewable resources. The potential emissions from coal, oil and gas are limited by the supply of these nonrenewable resources. To a first approximation, the total emissions from oil and gas are fixed, and the total load of carbon in the atmosphere is primarily determined by coal usage. A policy that sets a carbon tax only in developed countries might lead to leakage of emissions to developing countries. However, a negative leakage (i.e., leakage having the effect of reducing emissions) could also occur due to a lowering in demand and price for oil and gas. One of the negative effects of carbon leakage is the undermining of global emissions reduction efforts. When industries relocate to countries with lower emission standards, it can lead to increased greenhouse gas emissions in those countries. Lower oil and gas prices might lead coal-rich countries to use less coal and more oil and gas, thus lowering their emissions. While this is of short-term benefit, it reduces the insurance provided by limiting the consumption of oil and gas. The insurance is against the possibility of delayed arrival of backstop technologies. If the arrival of alternative technologies is delayed, the replacement of coal by oil and gas might have no long-term benefit. If the alternative technology arrives earlier, then the issue of substitution becomes unimportant. In terms of climate policy, the issue of substitution means that long-term leakage needs to be considered, and not just short-term leakage. By taking into account the potential delays in alternative technologies and wider substitution effects, policymakers can develop strategies that minimize leakage and promote sustainable emissions reduction. Estimates of leakage rates for action under the Kyoto Protocol ranged from 5 to 20% as a result of a loss in price competitiveness, but these leakage rates were viewed as being very uncertain.
For energy-intensive industries, the beneficial effects of Annex I actions through technological development were viewed as possibly being substantial. This beneficial effect, however, had not been reliably quantified. On the empirical evidence they assessed, Barker "et al." (2007) concluded that the competitive losses of then-current mitigation actions, e.g., the EU ETS, were not significant. The European Union hands out free EU ETS certificates (EU allowances) to sectors with high risk of carbon leakage, e.g., aluminium. It uses the Carbon Leakage Indicator (CLI) to determine sectors at risk of carbon leakage, with the formula formula_0. Written out in full, formula_1 formula_2, where formula_3 is gross value added. Recent North American emissions schemes such as the Regional Greenhouse Gas Initiative and the Western Climate Initiative are looking at ways of measuring and equalising the price of energy 'imports' that enter their trading region. References. <templatestyles src="Reflist/styles.css" />
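A minimal Python sketch of the CLI calculation quoted above. The variable names and sample figures are illustrative, and a single gross-value-added figure is used for both emission terms, which is a simplification of the EU formula:

def carbon_leakage_indicator(imports, exports, turnover, direct_emissions, indirect_emissions, gva):
    """CLI = trade intensity x emission intensity, following the formula quoted above.
    A single GVA figure is used for both emission terms here (a simplifying assumption)."""
    trade_intensity = (imports + exports) / (turnover + imports)
    emission_intensity = (direct_emissions + indirect_emissions) / gva
    return trade_intensity * emission_intensity

# Made-up sector figures (monetary values in consistent units, emissions in tCO2):
print(carbon_leakage_indicator(imports=40, exports=60, turnover=200,
                               direct_emissions=500, indirect_emissions=300, gva=100))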
[ { "math_id": 0, "text": "CLI=Trade\\ Intensity \\times Emission\\ Intensity\n" }, { "math_id": 1, "text": " CLI = TI \\times \\bigl( Direct\\ Emission\\ Intensity + Indirect\\ Emission\\ Intensity \\bigr)\n" }, { "math_id": 2, "text": "= {(Imports + Exports) \\over(Turnover + Imports)} \n\\times \\Biggl( {Direct\\ Emissions \\over{GVA}_{DE}}\n+{Indirect\\ Emissions\\over{GVA}_{IE} } \\Biggr)" }, { "math_id": 3, "text": "GVA\n" } ]
https://en.wikipedia.org/wiki?curid=12150098
12151000
Sales (accounting)
Type of company operating revenue In bookkeeping, accounting, and financial accounting, net sales are operating revenues earned by a company for selling its products or rendering its services. Also referred to as revenue, they are reported directly on the income statement as "Sales" or "Net sales". In financial ratios that use income statement sales values, "sales" refers to net sales, not gross sales. Sales are the unique transactions that occur in professional selling or during marketing initiatives. Revenue is earned when goods are delivered or services are rendered. The term sales in a marketing, advertising or a general business context often refers to a contract in which a buyer has agreed to purchase some products at a set time in the future. From an accounting standpoint, sales do not occur until the product is delivered. "Outstanding orders" refers to sales orders that have not been filled. A sale is a transfer of property for money or credit. In double-entry bookkeeping, a sale of merchandise is recorded in the general journal as a debit to cash or accounts receivable and a credit to the sales account. The amount recorded is the actual monetary value of the transaction, not the list price of the merchandise. A discount from list price might be noted if it applies to the sale. Fees for services are recorded separately from sales of merchandise, but the bookkeeping transactions for recording "sales" of services are similar to those for recording sales of tangible goods. Gross sales and net sales. formula_0 Gross sales are the sum of all sales during a time period. Net sales are gross sales minus sales returns, sales allowances, and sales discounts. Gross sales do not normally appear on an income statement. The sales figures reported on an income statement are net sales. input vat - output vat sales of portfolio items and capital gains taxes Sales Returns and Allowances and Sales Discounts are contra-revenue accounts. In a survey of nearly 200 senior marketing managers, 70 percent responded that they found the "sales total" metric very useful. Revenue or Sales reported on the income statement are net sales after deducting Sales Returns and Allowances and Sales Discounts. Unique definitions. When the US government reports wholesale sales, this includes excise taxes on certain products.
Net sales = gross sales – (customer discounts, returns, and allowances)
Gross profit = net sales – cost of goods sold
Operating profit = gross profit – total operating expenses
Net profit = operating profit – taxes – interest
Net profit = net sales – cost of goods sold – operating expenses – taxes – interest
References. <templatestyles src="Reflist/styles.css" />
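The definitions above chain together as simple arithmetic; the following Python sketch uses hypothetical figures and names of my choosing:

def net_sales(gross_sales, returns, allowances, discounts):
    """Net sales = gross sales minus returns, allowances and discounts (the contra-revenue accounts)."""
    return gross_sales - (returns + allowances + discounts)

def income_statement(gross_sales, returns, allowances, discounts, cogs, operating_expenses, interest, taxes):
    """Work down from gross sales to net profit using the relationships listed above."""
    sales = net_sales(gross_sales, returns, allowances, discounts)
    gross_profit = sales - cogs
    operating_profit = gross_profit - operating_expenses
    net_profit = operating_profit - taxes - interest
    return {"net sales": sales, "gross profit": gross_profit,
            "operating profit": operating_profit, "net profit": net_profit}

# Hypothetical figures:
print(income_statement(120_000, 3_000, 1_000, 2_000, 70_000, 25_000, 1_500, 4_500))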
[ { "math_id": 0, "text": "\\text{Net sales}=\\text{Gross sales} - \\text{(Customer discounts, returns, allowances)}" } ]
https://en.wikipedia.org/wiki?curid=12151000
12152471
Zhegalkin polynomial
Zhegalkin (also Žegalkin, Gégalkine or Shegalkin) polynomials (), also known as algebraic normal form, are a representation of functions in Boolean algebra. Introduced by the Russian mathematician Ivan Ivanovich Zhegalkin in 1927, they are the polynomial ring over the integers modulo 2. The resulting degeneracies of modular arithmetic result in Zhegalkin polynomials being simpler than ordinary polynomials, requiring neither coefficients nor exponents. Coefficients are redundant because 1 is the only nonzero coefficient. Exponents are redundant because in arithmetic mod 2, "x"2 = "x". Hence a polynomial such as 3"x"2"y"5"z" is congruent to, and can therefore be rewritten as, "xyz". Boolean equivalent. Prior to 1927, Boolean algebra had been considered a calculus of logical values with logical operations of conjunction, disjunction, negation, and so on. Zhegalkin showed that all Boolean operations could be written as ordinary numeric polynomials, representing the "false" and "true" values as 0 and 1, the integers mod 2. Logical conjunction is written as "xy", and logical exclusive-or as arithmetic addition mod 2, (written here as "x"⊕"y" to avoid confusion with the common use of + as a synonym for inclusive-or ∨). The logical complement ¬"x" is then "x"⊕1. Since ∧ and ¬ form a basis for Boolean algebra, all other logical operations are compositions of these basic operations, and so the polynomials of ordinary algebra can represent all Boolean operations, allowing Boolean reasoning to be performed using elementary algebra. For example, the Boolean 2-out-of-3 threshold or median operation is written as the Zhegalkin polynomial "xy"⊕"yz"⊕"zx". Formal properties. Formally a "Zhegalkin monomial" is the product of a finite set of distinct variables (hence square-free), including the empty set whose product is denoted 1. There are 2"n" possible Zhegalkin monomials in "n" variables, since each monomial is fully specified by the presence or absence of each variable. A "Zhegalkin polynomial" is the sum (exclusive-or) of a set of Zhegalkin monomials, with the empty set denoted by 0. A given monomial's presence or absence in a polynomial corresponds to that monomial's coefficient being 1 or 0 respectively. The Zhegalkin monomials, being linearly independent, span a 2"n"-dimensional vector space over the Galois field GF(2) (NB: not GF(2"n"), whose multiplication is quite different). The 22"n" vectors of this space, i.e. the linear combinations of those monomials as unit vectors, constitute the Zhegalkin polynomials. The exact agreement with the number of Boolean operations on "n" variables, which exhaust the "n"-ary operations on {0,1}, furnishes a direct counting argument for completeness of the Zhegalkin polynomials as a Boolean basis. This vector space is not equivalent to the free Boolean algebra on "n" generators because it lacks complementation (bitwise logical negation) as an operation (equivalently, because it lacks the top element as a constant). This is not to say that the space is not closed under complementation or lacks top (the all-ones vector) as an element, but rather that the linear transformations of this and similarly constructed spaces need not preserve complement and top. Those that do preserve them correspond to the Boolean homomorphisms, e.g. there are four linear transformations from the vector space of Zhegalkin polynomials over one variable to that over none, only two of which are Boolean homomorphisms. Method of computation. 
There are various known methods generally used for the computation of the Zhegalkin polynomial: The method of indeterminate coefficients. Using the method of indeterminate coefficients, a linear system consisting of all the tuples of the function and their values is generated. Solving the linear system gives the coefficients of the Zhegalkin polynomial. Example. Given the Boolean function formula_0, express it as a Zhegalkin polynomial. This function can be expressed as a column vector formula_1 This vector should be the output of left-multiplying a vector of undetermined coefficients formula_2 by an 8x8 logical matrix which represents the possible values that all the possible conjunctions of A, B, C can take. These possible values are given in the following truth table: The information in the above truth table can be encoded in the following logical matrix: formula_3 where the 'S' here stands for "Sierpiński", as in Sierpiński triangle, and the subscript 3 gives the exponents of its size: formula_4. It can be proven through mathematical induction and block-matrix multiplication that any such "Sierpiński matrix" formula_5 is its own inverse. Then the linear system is formula_6 which can be solved for formula_7: formula_8 and the Zhegalkin polynomial corresponding to formula_7 is formula_9. Using the canonical disjunctive normal form. Using this method, the canonical disjunctive normal form (a fully expanded disjunctive normal form) is computed first. Then the negations in this expression are replaced by an equivalent expression using the mod 2 sum of the variable and 1. The disjunction signs are changed to addition mod 2, the brackets are opened, and the resulting Boolean expression is simplified. This simplification results in the Zhegalkin polynomial. Using tables. Let formula_10 be the outputs of a truth table for the function "P" of "n" variables, such that the index of the formula_11's corresponds to the binary indexing of the minterms. Define a function ζ recursively by: formula_12 formula_13 Note that formula_14 where formula_15 is the binomial coefficient reduced modulo 2. Then formula_16 is the "i" th coefficient of a Zhegalkin polynomial whose literals in the "i" th monomial are the same as the literals in the "i" th minterm, except that the negative literals are removed (or replaced by 1). The ζ-transformation is its own inverse, so the same kind of table can be used to compute the coefficients formula_10 given the coefficients formula_17. Just let formula_18 In terms of the table in the figure, copy the outputs of the truth table (in the column labeled "P") into the leftmost column of the triangular table. Then successively compute columns from left to right by applying XOR to each pair of vertically adjacent cells in order to fill the cell immediately to the right of the top cell of each pair. When the entire triangular table is filled in then the top row reads out the coefficients of a linear combination which, when simplified (removing the zeroes), yields the Zhegalkin polynomial. To go from a Zhegalkin polynomial to a truth-table, it is possible to fill out the top row of the triangular table with the coefficients of the Zhegalkin polynomial (putting in zeroes for any combinations of positive literals not in the polynomial). Then successively compute rows from top to bottom by applying XOR to each pair of horizontally adjacent cells in order to fill the cell immediately to the bottom of the leftmost cell of each pair. 
When the entire triangular table is filled then the leftmost column of it can be copied to column "P" of the truth table. As an aside, this method of calculation corresponds to the method of operation of the elementary cellular automaton called Rule 102. For example, start such a cellular automaton with eight cells set up with the outputs of the truth table (or the coefficients of the canonical disjunctive normal form) of the Boolean expression: 10101001. Then run the cellular automaton for seven more generations while keeping a record of the state of the leftmost cell. The history of this cell then turns out to be: 11000010, which shows the coefficients of the corresponding Zhegalkin polynomial. The Pascal method. The most economical in terms of the amount of computation and expedient for constructing the Zhegalkin polynomial manually is the Pascal method. We build a table consisting of formula_19 columns and formula_20 rows, where "N" is the number of variables in the function. In the top row of the table we place the vector of function values, that is, the last column of the truth table. Each row of the resulting table is divided into blocks (black lines in the figure). In the first line, the block occupies one cell, in the second line — two, in the third — four, in the fourth — eight, and so on. Each block in a certain line, which we will call "lower block", always corresponds to exactly two blocks in the previous line. We will call them "left upper block" and "right upper block". The construction starts from the second line. The contents of the left upper blocks are transferred without change into the corresponding cells of the lower block (green arrows in the figure). Then, the operation "addition modulo two" is performed bitwise over the right upper and left upper blocks and the result is transferred to the corresponding cells of the right side of the lower block (red arrows in the figure). This operation is performed with all lines from top to bottom and with all blocks in each line. After the construction is completed, the bottom line contains a string of numbers, which are the coefficients of the Zhegalkin polynomial, written in the same sequence as in the triangle method described above. The summation method. According to the truth table, it is easy to calculate the individual coefficients of the Zhegalkin polynomial. To do this, sum up modulo 2 the values of the function in those rows of the truth table where variables that are not in the conjunction (that corresponds to the coefficient being calculated) take zero values. Suppose, for example, that we need to find the coefficient of the "xz" conjunction for the function of three variables formula_21. There is no variable "y" in this conjunction. Find the input sets in which the variable "y" takes a zero value. These are the sets 0, 1, 4, 5 (000, 001, 100, 101). Then the coefficient at conjunction "xz" is formula_22 Since there are no variables with the constant term, formula_23 For a term which includes all variables, the sum includes all values of the function: formula_24 Let us graphically represent the coefficients of the Zhegalkin polynomial as sums modulo 2 of values of functions at certain points. To do this, we construct a square table, where each column represents the value of the function at one of the points, and the row is the coefficient of the Zhegalkin polynomial. 
The point at the intersection of some column and row means that the value of the function at this point is included in the sum for the given coefficient of the polynomial (see figure). We call this table formula_25, where "N" is the number of variables of the function. There is a pattern that allows you to get a table for a function of "N" variables, having a table for a function of formula_26 variables. The new table formula_27 is arranged as a 2 × 2 matrix of formula_25 tables, and the right upper block of the matrix is cleared. Lattice-theoretic interpretation. Consider the columns of a table formula_25 as corresponding to elements of a Boolean lattice of size formula_19. For each column formula_28 express number "M" as a binary number formula_29, then formula_30 if and only if formula_31, where formula_32 denotes bitwise OR. If the rows of table formula_25 are numbered, from top to bottom, with the numbers from 0 to formula_33, then the tabular content of row number "R" is the ideal generated by element formula_34 of the lattice. Note incidentally that the overall pattern of a table formula_25 is that of a logical matrix Sierpiński triangle. Also, the pattern corresponds to an elementary cellular automaton called Rule 60, starting with the leftmost cell set to 1 and all other cells cleared. Using a Karnaugh map. The figure shows a function of three variables, "P"("A", "B", "C") represented as a Karnaugh map, which the reader may consider as an example of how to convert such maps into Zhegalkin polynomials; the general procedure is given in the following steps: Möbius transformation. The Möbius inversion formula relates the coefficients of a Boolean sum-of-minterms expression and a Zhegalkin polynomial. This is the partial order version of the Möbius formula, not the number theoretic. The Möbius inversion formula for partial orders is: formula_35 where formula_36, |"x"| being the Hamming distance of "x" from 0. Since formula_37 in the Zhegalkin algebra, the Möbius function collapses to being the constant 1. The set of divisors of a given number "x" is also the order ideal generated by that number: formula_38. Since summation is modulo 2, the formula can be restated as formula_39 Example. As an example, consider the three-variable case. The following table shows the divisibility relation: Then formula_40 The above system of equations can be solved for "f", and the result can be summarized as being obtainable by exchanging "g" and "f" throughout the above system. The table below shows the binary numbers along with their associated Zhegalkin monomials and Boolean minterms: The Zhegalkin monomials are naturally ordered by divisibility, whereas the Boolean minterms do not so naturally order themselves; each one represents an exclusive eighth of the three-variable Venn diagram. The ordering of the monomials transfers to the bit strings as follows: given formula_41 and formula_42, a pair of bit triplets, then formula_43. The correspondence between a three-variable Boolean sum-of-minterms and a Zhegalkin polynomial is then: formula_44 The system of equations above may be summarized as a logical matrix equation: formula_45 which N. J. Wildberger calls a Boole–Möbius transformation. Below is shown the “XOR spreadsheet” form of the transformation, going in the direction of "g" to "f": Related work. 
In 1927, the same year as Zhegalkin's paper, the American mathematician Eric Temple Bell published a sophisticated arithmetization of Boolean algebra based on Richard Dedekind's ideal theory and general modular arithmetic (as opposed to arithmetic mod 2). The much simpler arithmetic character of Zhegalkin polynomials was first noticed in the West by the American mathematician Marshall Stone in 1936 (the observation was made independently, communication between Soviet and Western mathematicians being very limited in that era). While writing up his celebrated Stone duality theorem, Stone observed that the supposedly loose analogy between Boolean algebras and rings could in fact be formulated as an exact equivalence holding for both finite and infinite algebras, which led him to substantially reorganize his paper over the next few years. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f(A,B,C) = \\bar A \\bar B \\bar C + \\bar A B \\bar C + A \\bar B \\bar C + A B C" }, { "math_id": 1, "text": "\\vec f = \\begin{pmatrix} 1 \\\\ 0 \\\\ 1 \\\\ 0 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 1 \\end{pmatrix} . " }, { "math_id": 2, "text": " \\vec c = \\begin{pmatrix} c_0 \\\\ c_1 \\\\ c_2 \\\\ c_3 \\\\ c_4 \\\\ c_5 \\\\ c_6 \\\\ c_7 \\end{pmatrix}" }, { "math_id": 3, "text": " S_3 = \\begin{pmatrix} 1&0&0&0&0&0&0&0 \\\\\n1&1&0&0&0&0&0&0 \\\\\n1&0&1&0&0&0&0&0 \\\\\n1&1&1&1&0&0&0&0 \\\\\n1&0&0&0&1&0&0&0 \\\\\n1&1&0&0&1&1&0&0 \\\\\n1&0&1&0&1&0&1&0 \\\\\n1&1&1&1&1&1&1&1\n\\end{pmatrix}" }, { "math_id": 4, "text": "2^3 \\times 2^3" }, { "math_id": 5, "text": "S_n" }, { "math_id": 6, "text": " S_3 \\vec c = \\vec f" }, { "math_id": 7, "text": "\\vec c" }, { "math_id": 8, "text": " \\vec c = S_3^{-1} \\vec f = S_3 \\vec f\n= \\begin{pmatrix} 1&0&0&0&0&0&0&0 \\\\ 1&1&0&0&0&0&0&0 \\\\ 1&0&1&0&0&0&0&0 \\\\ 1&1&1&1&0&0&0&0 \\\\ 1&0&0&0&1&0&0&0 \\\\ 1&1&0&0&1&1&0&0 \\\\ 1&0&1&0&1&0&1&0 \\\\ 1&1&1&1&1&1&1&1 \\end{pmatrix} \\begin{pmatrix} 1 \\\\ 0 \\\\ 1 \\\\ 0 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 1 \\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 1 \\\\ 1 \\oplus 1 \\\\ 1 \\oplus 1 \\\\ 1 \\oplus 1 \\\\ 1 \\oplus 1 \\\\ 1 \\oplus 1 \\oplus 1 \\\\ 1 \\oplus 1 \\oplus 1 \\oplus 1 \\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 1 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 1 \\\\ 0 \\end{pmatrix}," }, { "math_id": 9, "text": "1 \\oplus C \\oplus AB" }, { "math_id": 10, "text": "c_0, \\dots , c_{2^n-1}" }, { "math_id": 11, "text": "c_i" }, { "math_id": 12, "text": " \\zeta(c_i) := c_i" }, { "math_id": 13, "text": " \\zeta(c_0, \\dots , c_k) := \\zeta(c_0, \\dots , c_{k - 1}) \\oplus \\zeta(c_1, \\dots , c_k). " }, { "math_id": 14, "text": " \\zeta(c_0, \\dots , c_m) = \\bigoplus_{k = 0}^m {m \\choose k}_2 c_k " }, { "math_id": 15, "text": "{m \\choose k}_2" }, { "math_id": 16, "text": " g_i = \\zeta(c_0, \\dots , c_i) " }, { "math_id": 17, "text": "g_0, \\dots , g_{2^n-1}" }, { "math_id": 18, "text": " c_i = \\zeta(g_0, \\dots , g_i). " }, { "math_id": 19, "text": "2^N" }, { "math_id": 20, "text": "N + 1" }, { "math_id": 21, "text": "f(x, y, z)" }, { "math_id": 22, "text": "a_5 = f_0 \\oplus f_1 \\oplus f_4 \\oplus f_5 = f(0,0,0) \\oplus f(0,0,1) \\oplus f(1,0,0) \\oplus f(1,0,1) " }, { "math_id": 23, "text": "a_0 = f_0." 
}, { "math_id": 24, "text": "a_{N - 1} = f_0 \\oplus f_1 \\oplus f_2 \\oplus \\dots \\oplus f_{N-2} \\oplus f_{N-1} " }, { "math_id": 25, "text": "T_N" }, { "math_id": 26, "text": "N-1" }, { "math_id": 27, "text": "T_N + 1" }, { "math_id": 28, "text": "f_M" }, { "math_id": 29, "text": "M_2" }, { "math_id": 30, "text": "f_M \\le f_K" }, { "math_id": 31, "text": "M_2 \\vee K_2 = K_2" }, { "math_id": 32, "text": "\\vee" }, { "math_id": 33, "text": "2^N - 1" }, { "math_id": 34, "text": "f_R" }, { "math_id": 35, "text": " g(x) = \\sum_{y:y\\le x} f(y) \\leftrightarrow f(x) = \\sum_{y:y\\le x} g(y) \\mu(y,x)," }, { "math_id": 36, "text": "\\mu(y,x) = (-1)^{|x| - |y|}" }, { "math_id": 37, "text": "-1 \\equiv 1" }, { "math_id": 38, "text": "\\langle x \\rangle" }, { "math_id": 39, "text": " g(x) = \\bigoplus_{y:y\\in \\langle x\\rangle} f(y) \\leftrightarrow f(x) = \\bigoplus_{y:y\\in \\langle x\\rangle} g(y) " }, { "math_id": 40, "text": "\\begin{align}\n g(000) &= f(000) \\\\[1ex]\n g(001) &= f(000) \\oplus f(001) \\\\[1ex]\n g(010) &= f(000) \\oplus f(010) \\\\[1ex]\n g(011) &= f(000) \\oplus f(001) \\oplus f(010) \\oplus f(011) \\\\[1ex]\n g(100) &= f(000) \\oplus f(100) \\\\[1ex]\n g(101) &= f(000) \\oplus f(001) \\oplus f(100) \\oplus(101) \\\\[1ex]\n g(110) &= f(000) \\oplus f(010) \\oplus f(100) \\oplus f(110) \\\\[1ex]\n g(111) &= f(000) \\oplus f(001) \\oplus f(010) \\oplus f(011) \\oplus f(100) \\oplus f(101) \\oplus f(110) \\oplus f(111)\n\\end{align}" }, { "math_id": 41, "text": "a_1 a_2 a_3" }, { "math_id": 42, "text": "b_1 b_2 b_3" }, { "math_id": 43, "text": "a_1 a_2 a_3 \\le b_1 b_2 b_3 \\leftrightarrow a_1 \\le b_1 \\wedge a_2 \\le b_2 \\wedge a_3 \\le b_3" }, { "math_id": 44, "text": "\\begin{align}\n&f(000) \\bar A \\bar B \\bar C \\vee f(001) \\bar A \\bar B C \\vee f(010) \\bar A B \\bar C \\vee f(011) \\bar A B C \\vee f(100) A \\bar B \\bar C \\vee f(101) A \\bar B C \\vee f(110) A B \\bar C \\vee f(111) A B C \\\\[1ex]\n&\\qquad \\equiv g(000) \\oplus g(001) C \\oplus g(010) B \\oplus g(011) BC \\oplus g(100) A \\oplus g(101) AC \\oplus g(110) AB \\oplus g(111) ABC.\n\\end{align}" }, { "math_id": 45, "text": " \\begin{pmatrix} g(000) \\\\ g(001) \\\\ g(010) \\\\ g(011) \\\\ g(100) \\\\ g(101) \\\\ g(110) \\\\ g(111) \\end{pmatrix} =\n\\begin{pmatrix}\n1 && 0 && 0 && 0 && 0 && 0 && 0 && 0 \\\\\n1 && 1 && 0 && 0 && 0 && 0 && 0 && 0 \\\\\n1 && 0 && 1 && 0 && 0 && 0 && 0 && 0 \\\\\n1 && 1 && 1 && 1 && 0 && 0 && 0 && 0 \\\\\n1 && 0 && 0 && 0 && 1 && 0 && 0 && 0 \\\\\n1 && 1 && 0 && 0 && 1 && 1 && 0 && 0 \\\\\n1 && 0 && 1 && 0 && 1 && 0 && 1 && 0 \\\\\n1 && 1 && 1 && 1 && 1 && 1 && 1 && 1 \\end{pmatrix} \\begin{pmatrix} f(000) \\\\ f(001) \\\\ f(010) \\\\ f(011) \\\\ f(100) \\\\ f(101) \\\\ f(110) \\\\ f(111) \\end{pmatrix} " } ]
https://en.wikipedia.org/wiki?curid=12152471
12154411
List of problems in loop theory and quasigroup theory
In mathematics, especially abstract algebra, loop theory and quasigroup theory are active research areas with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Many of the problems posed here first appeared in the "Loops (Prague)" conferences and the "Mile High (Denver)" conferences. Open problems (Moufang loops). Abelian by cyclic groups resulting in Moufang loops. Let "L" be a Moufang loop with normal abelian subgroup (associative subloop) "M" of odd order such that "L"/"M" is a cyclic group of order bigger than 3. (i) Is "L" a group? (ii) If the orders of "M" and "L"/"M" are relatively prime, is L a group? Embedding CMLs of period 3 into alternative algebras. Conjecture: Any finite commutative Moufang loop of period 3 can be embedded into a commutative alternative algebra. Frattini subloop for Moufang loops. Conjecture: Let "L" be a finite Moufang loop and Φ("L") the intersection of all maximal subloops of "L". Then Φ("L") is a normal nilpotent subloop of "L". Minimal presentations for loops M(G,2). For a group formula_0, define formula_1 on formula_0 x formula_2 by formula_3, formula_4, formula_5, formula_6. Find a minimal presentation for the Moufang loop formula_1 with respect to a presentation for formula_0. Moufang loops of order "p"2"q"3 and "pq"4. Let "p" and "q" be distinct odd primes. If "q" is not congruent to 1 modulo "p", are all Moufang loops of order "p"2"q"3 groups? What about "pq"4? (Phillips' problem) Odd order Moufang loop with trivial nucleus. Is there a Moufang loop of odd order with trivial nucleus? Presentations for finite simple Moufang loops. Find presentations for all nonassociative finite simple Moufang loops in the variety of Moufang loops. The restricted Burnside problem for Moufang loops. Conjecture: Let "M" be a finite Moufang loop of exponent "n" with "m" generators. Then there exists a function "f"("n","m") such that |"M"| &lt; "f"("n","m"). The Sanov and M. Hall theorems for Moufang loops. Conjecture: Let "L" be a finitely generated Moufang loop of exponent 4 or 6. Then "L" is finite. Torsion in free Moufang loops. Let MF"n" be the free Moufang loop with "n" generators. Conjecture: MF3 is torsion free but MF"n" with "n" &gt; 4 is not. Open problems (Bol loops). Nilpotency degree of the left multiplication group of a left Bol loop. For a left Bol loop "Q", find some relation between the nilpotency degree of the left multiplication group of "Q" and the structure of "Q". Are two Bol loops with similar multiplication tables isomorphic? Let formula_7, formula_8 be two quasigroups defined on the same underlying set formula_9. The distance formula_10 is the number of pairs formula_11 in formula_12 such that formula_13. Call a class of finite quasigroups "quadratic" if there is a positive real number formula_14 such that any two quasigroups formula_7, formula_8 of order formula_15 from the class satisfying formula_16 are isomorphic. Are Moufang loops quadratic? Are Bol loops quadratic? Campbell–Hausdorff series for analytic Bol loops. Determine the Campbell–Hausdorff series for analytic Bol loops. Universally flexible loop that is not middle Bol. A loop is "universally flexible" if every one of its loop isotopes is flexible, that is, satisfies ("xy")"x" = "x"("yx"). A loop is "middle Bol" if every one of its loop isotopes has the antiautomorphic inverse property, that is, satisfies ("xy")−1 = "y"−1"x"−1. Is there a finite, universally flexible loop that is not middle Bol? 
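The construction "M"("G",2) defined in the problem above is concrete enough to experiment with; the sketch below (the choice "G" = "S"3 and the particular Moufang identity checked are arbitrary choices made only for the example) builds "M"("S"3,2) from the four multiplication rules quoted above and verifies by brute force that it satisfies a Moufang identity while failing associativity:

from itertools import permutations, product

# The group G = S3, with permutations stored as tuples (g[i] is the image of i).
S3 = list(permutations(range(3)))
def comp(g, h):                      # composition: (g h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))
def inv(g):                          # inverse permutation
    out = [0, 0, 0]
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

def mul(a, b):                       # the multiplication on M(G,2) quoted above
    (g, s), (h, t) = a, b
    if (s, t) == (0, 0):
        return (comp(g, h), 0)
    if (s, t) == (0, 1):
        return (comp(h, g), 1)
    if (s, t) == (1, 0):
        return (comp(g, inv(h)), 1)
    return (comp(inv(h), g), 0)

M = [(g, s) for g in S3 for s in (0, 1)]    # the 12 elements of M(S3,2)

moufang = all(mul(mul(mul(x, y), x), z) == mul(x, mul(y, mul(x, z)))
              for x, y, z in product(M, repeat=3))
associative = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                  for x, y, z in product(M, repeat=3))
print(moufang, associative)                  # True False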
Finite simple Bol loop with nontrivial conjugacy classes. Is there a finite simple nonassociative Bol loop with nontrivial conjugacy classes? Open problems (Nilpotency and solvability). Niemenmaa's conjecture and related problems. Let "Q" be a loop whose inner mapping group is nilpotent. Is "Q" nilpotent? Is "Q" solvable? Loops with abelian inner mapping group. Let "Q" be a loop with abelian inner mapping group. Is "Q" nilpotent? If so, is there a bound on the nilpotency class of "Q"? In particular, can the nilpotency class of "Q" be higher than 3? Number of nilpotent loops up to isomorphism. Determine the number of nilpotent loops of order 24 up to isomorphism. A finite nilpotent loop without a finite basis for its laws. Construct a finite nilpotent loop with no finite basis for its laws. Open problems (quasigroups). Existence of infinite simple paramedial quasigroups. Are there infinite simple paramedial quasigroups? Minimal isotopically universal varieties of quasigroups. A variety "V" of quasigroups is "isotopically universal" if every quasigroup is isotopic to a member of "V". Is the variety of loops a minimal isotopically universal variety? Does every isotopically universal variety contain the variety of loops or its parastrophes? Small quasigroups with quasigroup core. Does there exist a quasigroup "Q" of order "q" = 14, 18, 26 or 42 such that the operation * defined on "Q" by "x" * "y" = "y" − "xy" is a quasigroup operation? Uniform construction of Latin squares? Construct a latin square "L" of order "n" as follows: Let "G" = "K""n","n" be the complete bipartite graph with distinct weights on its "n"2 edges. Let "M"1 be the cheapest matching in "G", "M"2 the cheapest matching in "G" with "M"1 removed, and so on. Each matching "M""i" determines a permutation "p""i" of 1, ..., "n". Let "L" be obtained from "G" by placing the permutation "p""i" into row "i" of "L". Does this procedure result in a uniform distribution on the space of Latin squares of order "n"? Open problems (miscellaneous). Bound on the size of multiplication groups. For a loop "Q", let Mlt(Q) denote the multiplication group of "Q", that is, the group generated by all left and right translations. Is |Mlt("Q")| &lt; "f"(|"Q"|) for some variety of loops and for some polynomial "f"? Does every finite alternative loop have 2-sided inverses? Does every finite alternative loop, that is, every loop satisfying "x"("xy") = ("xx")"y" and "x"("yy") = ("xy")"y", have 2-sided inverses? Finite simple nonassociative automorphic loop. Find a nonassociative finite simple automorphic loop, if such a loop exists. Moufang theorem in non-Moufang loops. We say that a variety "V" of loops satisfies the Moufang theorem if for every loop "Q" in "V" the following implication holds: for every "x", "y", "z" in "Q", if "x"("yz") = ("xy")"z" then the subloop generated by "x", "y", "z" is a group. Is every variety that satisfies Moufang theorem contained in the variety of Moufang loops? Universality of Osborn loops. A loop is "Osborn" if it satisfies the identity "x"(("yz")"x") = ("x""λ"\"y")("zx"). Is every Osborn loop universal, that is, is every isotope of an Osborn loop Osborn? If not, is there a nice identity characterizing universal Osborn loops? Solved problems. The following problems were posed as open at various conferences and have since been solved. Buchsteiner loop that is not conjugacy closed. Is there a Buchsteiner loop that is not conjugacy closed? Is there a finite simple Buchsteiner loop that is not conjugacy closed? 
Classification of Moufang loops of order 64. Classify nonassociative Moufang loops of order 64. Conjugacy closed loop with nonisomorphic one-sided multiplication groups. Construct a conjugacy closed loop whose left multiplication group is not isomorphic to its right multiplication group. Existence of a finite simple Bol loop. Is there a finite simple Bol loop that is not Moufang? Left Bol loop with trivial right nucleus. Is there a finite non-Moufang left Bol loop with trivial right nucleus? Lagrange property for Moufang loops. Does every finite Moufang loop have the strong Lagrange property? Moufang loops with non-normal commutant. Is there a Moufang loop whose commutant is not normal? Quasivariety of cores of Bol loops. Is the class of cores of Bol loops a quasivariety? Parity of the number of quasigroups up to isomorphism. Let I(n) be the number of isomorphism classes of quasigroups of order n. Is I(n) odd for every n? Classification of finite simple paramedial quasigroups. Classify the finite simple paramedial quasigroups. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "M(G,2)" }, { "math_id": 2, "text": "C_2" }, { "math_id": 3, "text": "(g,0)(h,0)=(gh,0)" }, { "math_id": 4, "text": "(g,0)(h,1)=(hg,1)" }, { "math_id": 5, "text": "(g,1)(h,0)=(gh^{-1},1)" }, { "math_id": 6, "text": "(g,1)(h,1)=(h^{-1}g,0)" }, { "math_id": 7, "text": "(Q,*)" }, { "math_id": 8, "text": "(Q,+)" }, { "math_id": 9, "text": "Q" }, { "math_id": 10, "text": "d(*,+)" }, { "math_id": 11, "text": "(a,b)" }, { "math_id": 12, "text": "Q\\times Q" }, { "math_id": 13, "text": "a*b\\ne a+b " }, { "math_id": 14, "text": "\\alpha" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "d(*,+) < \\alpha\\,n^2" }, { "math_id": 17, "text": "\\alpha=1/9" }, { "math_id": 18, "text": "\\alpha=1/4" }, { "math_id": 19, "text": "M(q)" }, { "math_id": 20, "text": "S_3" } ]
https://en.wikipedia.org/wiki?curid=12154411
12155770
Green measure
In mathematics — specifically, in stochastic analysis — the Green measure is a measure associated to an Itō diffusion. There is an associated Green formula representing suitably smooth functions in terms of the Green measure and first exit times of the diffusion. The concepts are named after the British mathematician George Green and are generalizations of the classical Green's function and Green formula to the stochastic case using Dynkin's formula. Notation. Let "X" be an R"n"-valued Itō diffusion satisfying an Itō stochastic differential equation of the form formula_0 Let P"x" denote the law of "X" given the initial condition "X"0 = "x", and let E"x" denote expectation with respect to P"x". Let "L""X" be the infinitesimal generator of "X", i.e. formula_1 Let "D" ⊆ R"n" be an open, bounded domain; let "τ""D" be the first exit time of "X" from "D": formula_2 The Green measure. Intuitively, the Green measure of a Borel set "H" (with respect to a point "x" and domain "D") is the expected length of time that "X", having started at "x", stays in "H" before it leaves the domain "D". That is, the Green measure of "X" with respect to "D" at "x", denoted "G"("x", ⋅), is defined for Borel sets "H" ⊆ R"n" by formula_3 or for bounded, continuous functions "f" : "D" → R by formula_4 The name "Green measure" comes from the fact that if "X" is Brownian motion, then formula_5 where "G"("x", "y") is Green's function for the operator "L""X" (which, in the case of Brownian motion, is Δ, where Δ is the Laplace operator) on the domain "D". The Green formula. Suppose that E"x"["τ""D"] &lt; +∞ for all "x" ∈ "D", and let "f" : R"n" → R be of smoothness class "C"2 with compact support. Then formula_6 In particular, for "C"2 functions "f" with support compactly embedded in "D", formula_7 The proof of Green's formula is an easy application of Dynkin's formula and the definition of the Green measure: formula_8 formula_9 formula_10
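As a numerical illustration, the Green formula can be checked by Monte Carlo simulation for planar Brownian motion (so that "L""X" is half the Laplacian) on the unit disk "D"; the starting point, step size and number of paths below are arbitrary choices made for the example. The test function "f"("x", "y") = "x"2 + "y"2 agrees on a neighbourhood of the closed disk with a compactly supported "C"2 function and has "L""X" "f" = 2 there, so the formula predicts that "f" at the starting point equals the expected value of "f" at the exit point minus the expected value of 2"τ""D". The simple Euler scheme below overshoots the boundary slightly, so the agreement is only approximate:

import numpy as np

rng = np.random.default_rng(0)
x0 = np.array([0.3, 0.4])                 # starting point inside the unit disk
dt, n_paths = 1e-3, 2000
sqrt_dt = np.sqrt(dt)

exit_values, integrals = [], []
for _ in range(n_paths):
    x = x0.copy()
    integral = 0.0
    while x @ x < 1.0:                    # before the first exit time tau_D
        integral += 2.0 * dt              # accumulates int_0^tau (L_X f)(X_s) ds
        x = x + sqrt_dt * rng.standard_normal(2)
    exit_values.append(x @ x)             # f evaluated at the (approximate) exit point
    integrals.append(integral)

estimate = np.mean(exit_values) - np.mean(integrals)
print(estimate, x0 @ x0)                  # both numbers should be roughly 0.25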
[ { "math_id": 0, "text": "\\mathrm{d} X_{t} = b(X_{t}) \\, \\mathrm{d} t + \\sigma (X_{t}) \\, \\mathrm{d} B_{t}." }, { "math_id": 1, "text": "L_{X} = \\sum_{i} b_{i} \\frac{\\partial}{\\partial x_{i}} + \\frac1{2} \\sum_{i, j} \\big( \\sigma \\sigma^{\\top} \\big)_{i, j} \\frac{\\partial^{2}}{\\partial x_{i} \\, \\partial x_{j}}." }, { "math_id": 2, "text": "\\tau_{D} := \\inf \\{ t \\geq 0 | X_{t} \\not \\in D \\}." }, { "math_id": 3, "text": "G(x, H) = \\mathbf{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} \\chi_{H} (X_{s}) \\, \\mathrm{d} s \\right]," }, { "math_id": 4, "text": "\\int_{D} f(y) \\, G(x, \\mathrm{d} y) = \\mathbf{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} f(X_{s}) \\, \\mathrm{d} s \\right]" }, { "math_id": 5, "text": "G(x, H) = \\int_{H} G(x, y) \\, \\mathrm{d} y," }, { "math_id": 6, "text": "f(x) = \\mathbf{E}^{x} \\big[ f \\big( X_{\\tau_{D}} \\big) \\big] - \\int_{D} L_{X} f (y) \\, G(x, \\mathrm{d} y)." }, { "math_id": 7, "text": "f(x) = - \\int_{D} L_{X} f (y) \\, G(x, \\mathrm{d} y)." }, { "math_id": 8, "text": "\\mathbf{E}^{x} \\big[ f \\big( X_{\\tau_{D}} \\big) \\big]" }, { "math_id": 9, "text": "= f(x) + \\mathbf{E}^{x} \\left[ \\int_{0}^{\\tau_{D}} L_{X} f (X_{s}) \\, \\mathrm{d} s \\right]" }, { "math_id": 10, "text": "= f(x) + \\int_{D} L_{X} f (y) \\, G(x, \\mathrm{d} y)." } ]
https://en.wikipedia.org/wiki?curid=12155770
12155912
Discriminative model
Mathematical model used for classification or regression Discriminative models, also referred to as conditional models, are a class of models frequently used for classification. They are typically used to solve binary classification problems, i.e. to assign labels, such as pass/fail, win/lose, alive/dead or healthy/sick, to existing datapoints. Types of discriminative models include logistic regression (LR), conditional random fields (CRFs) and decision trees, among many others. Generative model approaches, which use a joint probability distribution instead, include naive Bayes classifiers, Gaussian mixture models, variational autoencoders, generative adversarial networks and others. Definition. Unlike generative modelling, which studies the joint probability formula_0, discriminative modeling studies the conditional probability formula_1, i.e. it maps the given unobserved variable (target) formula_2 to a class label formula_3 depending on the observed variables (training samples). For example, in object recognition, formula_2 is likely to be a vector of raw pixels (or features extracted from the raw pixels of the image). Within a probabilistic framework, this is done by modeling the conditional probability distribution formula_1, which can be used for predicting formula_3 from formula_2. Note that there is still a distinction between the conditional model and the discriminative model, though more often they are simply categorised as discriminative models. Pure discriminative model vs. conditional model. A "conditional model" models the conditional probability distribution, while the traditional discriminative model aims to optimize the mapping of the input around the most similar trained samples. Typical discriminative modelling approaches. The following approach is based on the assumption that we are given the training data set formula_4, where formula_5 is the corresponding output for the input formula_6. Linear classifier. We intend to use the function formula_7 to simulate the behavior observed in the training data set by the linear classifier method. Using the joint feature vector formula_8, the decision function is defined as: formula_9 According to Memisevic's interpretation, formula_10, which is also formula_11, computes a score which measures the compatibility of the input formula_2 with the potential output formula_3. Then the formula_12 determines the class with the highest score. Logistic regression (LR). Since the 0-1 loss function is commonly used in decision theory, the conditional probability distribution formula_13, where formula_14 is a parameter vector fitted to the training data, can be written as follows for the logistic regression model: formula_15, with formula_16 The equation above represents logistic regression. Notice that a major distinction between models is their way of introducing the posterior probability. Here the posterior probability is inferred from the parametric model, and the parameters are found by maximizing the following log-likelihood: formula_17 This objective can also be expressed through the log-loss equation below: formula_18 Since the log-loss is differentiable, a gradient-based method can be used to optimize the model. A global optimum is guaranteed because the objective function is convex. The gradient of the log-likelihood is represented by: formula_19 where formula_20 is the expectation of formula_21. The above method provides efficient computation when the number of classes is relatively small. Contrast with generative model. Contrast in approaches. 
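To make the discriminative side of the comparison concrete, the sketch below trains the log-linear model of the previous section by gradient ascent on the conditional log-likelihood, following the gradient expression given above; the joint feature map, the synthetic two-class data, the learning rate and the number of epochs are assumptions made only for the illustration:

import numpy as np

rng = np.random.default_rng(1)

def phi(x, y, n_classes):
    # Joint feature vector: the observation x is placed in the block of
    # coordinates belonging to class y, and all other blocks are zero.
    out = np.zeros(n_classes * x.size)
    out[y * x.size:(y + 1) * x.size] = x
    return out

def scores(w, x, n_classes):
    return np.array([w @ phi(x, y, n_classes) for y in range(n_classes)])

def train(X, Y, n_classes, lr=0.5, epochs=200):
    w = np.zeros(n_classes * X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for x, y in zip(X, Y):
            s = scores(w, x, n_classes)
            p = np.exp(s - s.max())
            p /= p.sum()                              # p(y | x; w)
            grad += phi(x, y, n_classes)              # observed feature vector
            grad -= sum(p[k] * phi(x, k, n_classes)   # its expectation under p
                        for k in range(n_classes))
        w += lr * grad / len(X)                       # ascend the log-likelihood
    return w

# Synthetic two-class data: two Gaussian blobs, plus a constant bias feature.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
Y = np.array([0] * 50 + [1] * 50)
w = train(X, Y, 2)
pred = np.array([np.argmax(scores(w, x, 2)) for x in X])
print("training accuracy:", float(np.mean(pred == Y)))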
Let's say we are given the formula_22 class labels (classification) and formula_23 feature variables, formula_24, as the training samples. A generative model takes the joint probability formula_0, where formula_2 is the input and formula_3 is the label, and predicts the most possible known label formula_25 for the unknown variable formula_26 using Bayes' theorem. Discriminative models, as opposed to generative models, do not allow one to generate samples from the joint distribution of observed and target variables. However, for tasks such as classification and regression that do not require the joint distribution, discriminative models can yield superior performance (in part because they have fewer variables to compute). On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks. In addition, most discriminative models are inherently supervised and cannot easily support unsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model. Discriminative models and generative models also differ in introducing the posterior possibility. To maintain the least expected loss, the minimization of result's misclassification should be acquired. In the discriminative model, the posterior probabilities, formula_27, is inferred from a parametric model, where the parameters come from the training data. Points of estimation of the parameters are obtained from the maximization of likelihood or distribution computation over the parameters. On the other hand, considering that the generative models focus on the joint probability, the class posterior possibility formula_28 is considered in Bayes' theorem, which is formula_29. Advantages and disadvantages in application. In the repeated experiments, logistic regression and naive Bayes are applied here for different models on binary classification task, discriminative learning results in lower asymptotic errors, while generative one results in higher asymptotic errors faster. However, in Ulusoy and Bishop's joint work, "Comparison of Generative and Discriminative Techniques for Object Detection and Classification", they state that the above statement is true only when the model is the appropriate one for data (i.e.the data distribution is correctly modeled by the generative model). Advantages. Significant advantages of using discriminative modeling are: Compared with the advantages of using generative modeling: Optimizations in applications. Since both advantages and disadvantages present on the two way of modeling, combining both approaches will be a good modeling in practice. For example, in Marras' article "A Joint Discriminative Generative Model for Deformable Model Construction and Classification", he and his coauthors apply the combination of two modelings on face classification of the models, and receive a higher accuracy than the traditional approach. Similarly, Kelm also proposed the combination of two modelings for pixel classification in his article "Combining Generative and Discriminative Methods for Pixel Classification with Multi-Conditional Learning". During the process of extracting the discriminative features prior to the clustering, Principal component analysis (PCA), though commonly used, is not a necessarily discriminative approach. In contrast, LDA is a discriminative one. Linear discriminant analysis (LDA), provides an efficient way of eliminating the disadvantage we list above. 
The discriminative model needs a combination of multiple subtasks to be carried out before classification, and LDA provides an appropriate solution to this problem by reducing the dimensionality of the data. Types. Examples of discriminative models include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P(x,y)" }, { "math_id": 1, "text": "P(y|x)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "D=\\{(x_i;y_i)|i\\leq N\\in \\mathbb{Z}\\}" }, { "math_id": 5, "text": "y_i" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "f(x)" }, { "math_id": 8, "text": "\\phi(x,y)" }, { "math_id": 9, "text": "f(x;w)=\\arg \\max_y w^T \\phi(x,y)" }, { "math_id": 10, "text": "w^T \\phi(x,y)" }, { "math_id": 11, "text": "c(x,y;w)" }, { "math_id": 12, "text": "\\arg \\max" }, { "math_id": 13, "text": "P(y|x;w)" }, { "math_id": 14, "text": "w" }, { "math_id": 15, "text": "P(y|x;w)= \\frac{1}{Z(x;w)} \\exp(w^T\\phi(x,y))\n" }, { "math_id": 16, "text": "Z(x;w)= \\textstyle \\sum_{y} \\displaystyle\\exp(w^T\\phi(x,y))" }, { "math_id": 17, "text": "L(w)=\\textstyle \\sum_{i} \\displaystyle \\log p(y^i|x^i;w)" }, { "math_id": 18, "text": "l^{\\log} (x^i, y^i,c(x^i;w)) = -\\log p(y^i|x^i;w) = \\log Z(x^i;w)-w^T\\phi(x^i,y^i)" }, { "math_id": 19, "text": "\\frac{\\partial L(w)}{\\partial w} = \\textstyle \\sum_{i} \\displaystyle \\phi(x^i,y^i) - E_{p(y|x^i;w)} \\phi(x^i,y)" }, { "math_id": 20, "text": "E_{p(y|x^i;w)}" }, { "math_id": 21, "text": "p(y|x^i;w)" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "n" }, { "math_id": 24, "text": "Y:\\{y_1, y_2,\\ldots,y_m\\}, X:\\{x_1,x_2,\\ldots,x_n \\}" }, { "math_id": 25, "text": "\\widetilde{y}\\in Y" }, { "math_id": 26, "text": "\\widetilde{x}" }, { "math_id": 27, "text": "P(y|x) " }, { "math_id": 28, "text": "P(k)" }, { "math_id": 29, "text": "P(y|x) = \\frac{p(x|y)p(y)}{\\textstyle \\sum_{i}p(x|i)p(i) \\displaystyle}=\\frac{p(x|y)p(y)}{p(x)}" } ]
https://en.wikipedia.org/wiki?curid=12155912
1215732
Radiant intensity
Intensity of electromagnetic radiation In radiometry, radiant intensity is the radiant flux emitted, reflected, transmitted or received, per unit solid angle, and spectral intensity is the radiant intensity per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. These are "directional" quantities. The SI unit of radiant intensity is the watt per steradian (W/sr), while that of spectral intensity in frequency is the watt per steradian per hertz (W·sr−1·Hz−1) and that of spectral intensity in wavelength is the watt per steradian per metre (W·sr−1·m−1)—commonly the watt per steradian per nanometre (W·sr−1·nm−1). Radiant intensity is distinct from irradiance and radiant exitance, which are often called "intensity" in branches of physics other than radiometry. In radio-frequency engineering, radiant intensity is sometimes called radiation intensity. Mathematical definitions. Radiant intensity. Radiant intensity, denoted "I"e,Ω ("e" for "energetic", to avoid confusion with photometric quantities, and "Ω" to indicate this is a "directional" quantity), is defined as formula_0 where In general, "I"e,Ω is a function of viewing angle "θ" and potentially azimuth angle. For the special case of a Lambertian surface, "I"e,Ω follows the Lambert's cosine law "I"e,Ω = "I"0 cos "θ". When calculating the radiant intensity emitted by a source, "Ω" refers to the solid angle into which the light is emitted. When calculating radiance received by a detector, "Ω" refers to the solid angle subtended by the source as viewed from that detector. Spectral intensity. Spectral intensity in frequency, denoted "I"e,Ω,ν, is defined as formula_1 where "ν" is the frequency. Spectral intensity in wavelength, denoted "I"e,Ω,λ, is defined as formula_2 where "λ" is the wavelength. Radio-frequency engineering. Radiant intensity is used to characterize the emission of radiation by an antenna: formula_3 where Unlike power density, radiant intensity does not depend on distance: because radiant intensity is defined as the power through a solid angle, the decreasing power density over distance due to the inverse-square law is offset by the increase in area with distance. SI radiometry units. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
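A small worked example with arbitrary numbers shows why radiant intensity does not depend on distance: an isotropic source radiating a total power "P" has intensity "P"/4π in every direction, and although the power density at distance "r" falls off as 1/"r"2, multiplying it by "r"2 recovers the same value at every distance. A minimal sketch:

import numpy as np

P = 10.0                                  # total radiated power in watts (arbitrary)
I = P / (4 * np.pi)                       # radiant intensity of an isotropic source, W/sr
for r in (1.0, 2.0, 10.0):                # distances in metres
    E = P / (4 * np.pi * r ** 2)          # power density (irradiance) at distance r, W/m^2
    print(r, round(E, 4), round(E * r ** 2, 4))   # E * r^2 equals I at every distance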
[ { "math_id": 0, "text": "I_{\\mathrm{e},\\Omega} = \\frac{\\partial \\Phi_\\mathrm{e}}{\\partial \\Omega}," }, { "math_id": 1, "text": "I_{\\mathrm{e},\\Omega,\\nu} = \\frac{\\partial I_{\\mathrm{e},\\Omega}}{\\partial \\nu}," }, { "math_id": 2, "text": "I_{\\mathrm{e},\\Omega,\\lambda} = \\frac{\\partial I_{\\mathrm{e},\\Omega}}{\\partial \\lambda}," }, { "math_id": 3, "text": "I_{\\mathrm{e},\\Omega} = E_\\mathrm{e}(r) \\, r^2," } ]
https://en.wikipedia.org/wiki?curid=1215732
1215764
't Hooft–Polyakov monopole
Yang–Mills–Higgs magnetic monopole In theoretical physics, the 't Hooft–Polyakov monopole is a topological soliton similar to the Dirac monopole but without the Dirac string. It arises in the case of a Yang–Mills theory with a gauge group formula_0, coupled to a Higgs field which spontaneously breaks it down to a smaller group formula_1 via the Higgs mechanism. It was first found independently by Gerard 't Hooft and Alexander Polyakov. Unlike the Dirac monopole, the 't Hooft–Polyakov monopole is a smooth solution with a finite total energy. The solution is localized around formula_2. Very far from the origin, the gauge group formula_0 is broken to formula_1, and the 't Hooft–Polyakov monopole reduces to the Dirac monopole. However, at the origin itself, the formula_0 gauge symmetry is unbroken and the solution is non-singular also near the origin. The Higgs field formula_3, is proportional to formula_4, where the adjoint indices are identified with the three-dimensional spatial indices. The gauge field at infinity is such that the Higgs field's dependence on the angular directions is pure gauge. The precise configuration for the Higgs field and the gauge field near the origin is such that it satisfies the full Yang–Mills–Higgs equations of motion. Mathematical details. Suppose the vacuum is the vacuum manifold formula_5. Then, for finite energies, as we move along each direction towards spatial infinity, the state along the path approaches a point on the vacuum manifold formula_5. Otherwise, we would not have a finite energy. In topologically trivial 3 + 1 dimensions, this means spatial infinity is homotopically equivalent to the topological sphere formula_6. So, the superselection sectors are classified by the second homotopy group of formula_5, formula_7. In the special case of a Yang–Mills–Higgs theory, the vacuum manifold is isomorphic to the quotient space formula_8 and the relevant homotopy group is formula_9. This does not actually require the existence of a scalar Higgs field. Most symmetry breaking mechanisms (e.g. technicolor) would also give rise to a 't Hooft–Polyakov monopole. It is easy to generalize to the case of formula_10 dimensions. We have formula_11. Monopole problem. The "monopole problem" refers to the cosmological implications of grand unification theories (GUT). Since monopoles are generically produced in GUT during the cooling of the universe, and since they are expected to be quite massive, their existence threatens to overclose it. This is considered a "problem" within the standard Big Bang theory. Cosmic inflation remedies the situation by diluting any primordial abundance of magnetic monopoles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "r=0" }, { "math_id": 3, "text": "H_i (i=1,2,3)" }, { "math_id": 4, "text": "x_i f(|x|)" }, { "math_id": 5, "text": "\\Sigma" }, { "math_id": 6, "text": "S^2" }, { "math_id": 7, "text": "\\pi_2(\\Sigma)" }, { "math_id": 8, "text": "G/H" }, { "math_id": 9, "text": "\\pi_2(G/H)" }, { "math_id": 10, "text": "d+1" }, { "math_id": 11, "text": "\\pi_{d-1}(\\Sigma)" } ]
https://en.wikipedia.org/wiki?curid=1215764
12158034
Edge (geometry)
Line segment joining two adjacent vertices in a polygon or polytope Three edges AB, BC, and CA, each between two vertices of a triangle. In geometry, an edge is a particular type of line segment joining two vertices in a polygon, polyhedron, or higher-dimensional polytope. In a polygon, an edge is a line segment on the boundary, and is often called a polygon side. In a polyhedron or more generally a polytope, an edge is a line segment where two faces (or polyhedron sides) meet. A segment joining two vertices while passing through the interior or exterior is not an edge but instead is called a diagonal. Relation to edges in graphs. In graph theory, an edge is an abstract object connecting two graph vertices, unlike polygon and polyhedron edges which have a concrete geometric representation as a line segment. However, any polyhedron can be represented by its skeleton or edge-skeleton, a graph whose vertices are the geometric vertices of the polyhedron and whose edges correspond to the geometric edges. Conversely, the graphs that are skeletons of three-dimensional polyhedra can be characterized by Steinitz's theorem as being exactly the 3-vertex-connected planar graphs. Number of edges in a polyhedron. Any convex polyhedron's surface has Euler characteristic formula_0 where "V" is the number of vertices, "E" is the number of edges, and "F" is the number of faces. This equation is known as Euler's polyhedron formula. Thus the number of edges is 2 less than the sum of the numbers of vertices and faces. For example, a cube has 8 vertices and 6 faces, and hence 12 edges. Incidences with other faces. In a polygon, two edges meet at each vertex; more generally, by Balinski's theorem, at least "d" edges meet at every vertex of a "d"-dimensional convex polytope. Similarly, in a polyhedron, exactly two two-dimensional faces meet at every edge, while in higher dimensional polytopes three or more two-dimensional faces meet at every edge. Alternative terminology. In the theory of high-dimensional convex polytopes, a "facet" or "side" of a "d"-dimensional polytope is one of its ("d" − 1)-dimensional features, a "ridge" is a ("d" − 2)-dimensional feature, and a "peak" is a ("d" − 3)-dimensional feature. Thus, the edges of a polygon are its facets, the edges of a 3-dimensional convex polyhedron are its ridges, and the edges of a 4-dimensional polytope are its peaks. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
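The edge count "E" = "V" + "F" − 2 implied by Euler's formula is easy to tabulate; the ("V", "F") values below are the standard data for the five Platonic solids, used here only as an illustration:

# (V, F) pairs for the five Platonic solids; E follows from Euler's formula.
platonic = {
    "tetrahedron": (4, 4),
    "cube": (8, 6),
    "octahedron": (6, 8),
    "dodecahedron": (20, 12),
    "icosahedron": (12, 20),
}
for name, (v, f) in platonic.items():
    e = v + f - 2                          # V - E + F = 2 rearranged
    print(f"{name}: V={v}, F={f}, E={e}")  # the cube line gives E=12, as in the text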
[ { "math_id": 0, "text": "V - E + F = 2," } ]
https://en.wikipedia.org/wiki?curid=12158034
1215833
Centered heptagonal number
Centered figurate number that represents a heptagon with a dot in the center A centered heptagonal number is a centered figurate number that represents a heptagon with a dot in the center and all other dots surrounding the center dot in successive heptagonal layers. The centered heptagonal number for "n" is given by the formula formula_0. The first few centered heptagonal numbers are 1, 8, 22, 43, 71, 106, 148, 197, 253, 316, 386, 463, 547, 638, 736, 841, 953 Centered heptagonal prime. A centered heptagonal prime is a centered heptagonal number that is prime. The first few centered heptagonal primes are 43, 71, 197, 463, 547, 953, 1471, 1933, 2647, 2843, 3697, ... The centered heptagonal twin prime numbers are 43, 71, 197, 463, 1933, 5741, 8233, 9283, 11173, 14561, 34651, ... References. &lt;templatestyles src="Reflist/styles.css" /&gt;
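A minimal sketch reproducing the formula and both lists above (the trial-division primality test is adequate only for these small values):

def centered_heptagonal(n):
    # (7n^2 - 7n + 2) / 2 for n = 1, 2, 3, ...
    return (7 * n * n - 7 * n + 2) // 2

def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

values = [centered_heptagonal(n) for n in range(1, 18)]
print(values)                                  # 1, 8, 22, 43, 71, 106, 148, ...
print([v for v in values if is_prime(v)])      # 43, 71, 197, 463, 547, 953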
[ { "math_id": 0, "text": "{7n^2 - 7n + 2}\\over2" } ]
https://en.wikipedia.org/wiki?curid=1215833
1215851
Centered octagonal number
Centered figurate number that represents an octagon with a dot in the center A centered octagonal number is a centered figurate number that represents an octagon with a dot in the center and all other dots surrounding the center dot in successive octagonal layers. The centered octagonal numbers are the same as the odd square numbers. Thus, the "n"th odd square number and "t"th centered octagonal number are given by the formula formula_0 The first few centered octagonal numbers are 1, 9, 25, 49, 81, 121, 169, 225, 289, 361, 441, 529, 625, 729, 841, 961, 1089, 1225 Calculating Ramanujan's tau function on a centered octagonal number yields an odd number, whereas for any other number the function yields an even number. formula_1 is the number of 2 × 2 matrices with elements from 0 to "n" whose determinant is twice their permanent. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O_n=(2n-1)^2 = 4n^2-4n+1 | (2t+1)^2=4t^2+4t+1." }, { "math_id": 1, "text": "O_n" } ]
https://en.wikipedia.org/wiki?curid=1215851
1216013
Centered nonagonal number
Centered figurate number that represents a nonagon with a dot in the center A centered nonagonal number (or centered enneagonal number) is a centered figurate number that represents a nonagon with a dot in the center and all other dots surrounding the center dot in successive nonagonal layers. The centered nonagonal number for "n" layers is given by the formula formula_0 Multiplying the ("n" - 1)th triangular number by 9 and then adding 1 yields the "n"th centered nonagonal number, but centered nonagonal numbers have an even simpler relation to triangular numbers: every third triangular number (the 1st, 4th, 7th, etc.) is also a centered nonagonal number. Thus, the first few centered nonagonal numbers are 1, 10, 28, 55, 91, 136, 190, 253, 325, 406, 496, 595, 703, 820, 946. The list above includes the perfect numbers 28 and 496. All even perfect numbers are triangular numbers whose index is an odd Mersenne prime. Since every Mersenne prime greater than 3 is congruent to 1 modulo 3, it follows that every even perfect number greater than 6 is a centered nonagonal number. In 1850, Sir Frederick Pollock conjectured that every natural number is the sum of at most eleven centered nonagonal numbers. Pollock's conjecture was confirmed as true in 2023. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
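A minimal sketch checking the statements above: the formula reproduces the listed values, every third triangular number coincides with a centered nonagonal number, and the even perfect numbers 28 and 496 appear in the sequence (the ranges are arbitrary choices):

def triangular(k):
    return k * (k + 1) // 2

def centered_nonagonal(n):
    return (3 * n - 2) * (3 * n - 1) // 2

# Every third triangular number (the 1st, 4th, 7th, ...) is centered nonagonal.
assert all(triangular(3 * n - 2) == centered_nonagonal(n) for n in range(1, 100))

values = [centered_nonagonal(n) for n in range(1, 16)]
print(values)                        # 1, 10, 28, 55, 91, 136, 190, 253, 325, 406, 496, ...
print(28 in values, 496 in values)   # the even perfect numbers 28 and 496 appear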
[ { "math_id": 0, "text": "Nc(n) = \\frac{(3n-2)(3n-1)}{2}." } ]
https://en.wikipedia.org/wiki?curid=1216013
1216087
Centered decagonal number
Centered figurate number that represents a decagon with a dot in the center A centered decagonal number is a centered figurate number that represents a decagon with a dot in the center and all other dots surrounding the center dot in successive decagonal layers. The centered decagonal number for "n" is given by the formula formula_0 Thus, the first few centered decagonal numbers are 1, 11, 31, 61, 101, 151, 211, 281, 361, 451, 551, 661, 781, 911, 1051, ... (sequence in the OEIS) Like any other centered "k"-gonal number, the "n"th centered decagonal number can be reckoned by multiplying the ("n" − 1)th triangular number by "k", 10 in this case, then adding 1. As a consequence of performing the calculation in base 10, the centered decagonal numbers can be obtained by simply adding a 1 to the right of each triangular number. Therefore, all centered decagonal numbers are odd and in base 10 always end in 1. Another consequence of this relation to triangular numbers is the simple recurrence relation for centered decagonal numbers: formula_1 where formula_2 Generating Function. The generating function of the centered decagonal number is formula_3 Continued fraction forms. formula_4 has the continued fraction expansion [5n-3;{2,2n-2,2,10n-6}].
[ { "math_id": 0, "text": "5n^2-5n+1 \\, " }, { "math_id": 1, "text": "CD_{n} = CD_{n-1}+10n ," }, { "math_id": 2, "text": "CD_0 = 1 ." }, { "math_id": 3, "text": "\\frac{x*(1+8x+x^2)}{(1-x)^3}" }, { "math_id": 4, "text": "\\sqrt{5CD_{n}}" } ]
https://en.wikipedia.org/wiki?curid=1216087
12162212
EcosimPro
Simulation software EcosimPro is a simulation tool developed by Empresarios Agrupados A.I.E for modelling simple and complex physical processes that can be expressed in terms of Differential algebraic equations or Ordinary differential equations and Discrete event simulation. The application runs on the various Microsoft Windows platforms and uses its own graphic environment for model design. The modelling of physical components is based on the EcosimPro language (EL) which is very similar to other conventional Object-oriented programming languages but is powerful enough to model continuous and discrete processes. This tool employs a set of libraries containing various types of components (mechanical, electrical, pneumatic, hydraulic, etc.) that can be reused to model any type of system. It is used within ESA for propulsion systems analysis and is the recommended ESA analysis tool for ECLS systems. Origins. The EcosimPro Tool Project began in 1989 with funds from the European Space Agency (ESA) and with the goal of simulating environmental control and life support systems for crewed spacecraft, such as the Hermes shuttle. The multidisciplinary nature of this modelling tool led to its use in many other disciplines, including fluid mechanics, chemical processing, control, energy, propulsion and flight dynamics. These complex applications have demonstrated that EcosimPro is very robust and ready for use in many other fields. The modelling language. Code examples. Differential equation To familiarize yourself with the use of EcosimPro, first create a simple component to solve a differential equation. Although EcosimPro is designed to simulate complex systems, it can also be used independently of a physical system as if it were a pure equation solver. The example in this section illustrates this type of use. It solves the following differential equation to introduce a delay to variable "x": formula_0 which is equivalent to formula_1 where "x" and "y" have a time dependence that will be defined in the experiment. "Tau" is datum provided given by the user; we will use a value of 0.6 seconds. This equation introduces a delay in the "x" variable with respect to "y" with value "tau". To simulate this equation we will create an EcosimPro component with the equation in it. The component to be simulated in EL is like thus: Pendulum One example of applied calculus could be the movement of a perfect pendulum (no friction taken into account). We would have the following data: the force of gravity ‘g’; the length of the pendulum ‘L’; and the pendulum's mass ‘M’. As variables to be calculated we would have: the Cartesian position at each moment in time of the pendulum ‘x’ and ‘y’ and the tension on the wire of the pendulum ‘T’. The equations that define the model would be: - Projecting the length of the cable on the Cartesian axes and applying Pythagoras’ theorem we get: formula_2 By decomposing force in Cartesians we get formula_3 and formula_4 To obtain the differential equations we can convert: formula_5 and formula_6 "(note: formula_7 is the first derivative of the position and equals the speed. formula_8 is the second derivative of the position and equals the acceleration)" This example can be found in the DEFAULT_LIB library as “pendulum.el”: The last two equations respectively express the accelerations, "x’’" and "y’’", on the X and Y axes Applications. EcosimPro has been used in many fields and disciplines. The following paragraphs show several applications References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{dy}{dt} = (x - y ) / tau" }, { "math_id": 1, "text": "y' = (x - y ) / tau" }, { "math_id": 2, "text": "x^2 + y^2 = L^2" }, { "math_id": 3, "text": "F_x = -T \\frac{x}{L}" }, { "math_id": 4, "text": "F_y = -T \\frac{y}{L}-M\\;g" }, { "math_id": 5, "text": "F_x = M\\;a_x = M\\;\\ddot{x}" }, { "math_id": 6, "text": "F_y = M\\;a_y = M\\;\\ddot{y}" }, { "math_id": 7, "text": "\\dot{x}" }, { "math_id": 8, "text": "\\ddot{x}" } ]
https://en.wikipedia.org/wiki?curid=12162212
1216721
Wiener filter
Signal processing algorithm In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming known stationary signal and noise spectra, and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process. Description. The goal of the Wiener filter is to compute a statistical estimate of an unknown signal using a related signal as an input and filtering that known signal to produce the estimate as an output. For example, the known signal might consist of an unknown signal of interest that has been corrupted by additive noise. The Wiener filter can be used to filter out the noise from the corrupted signal to provide an estimate of the underlying signal of interest. The Wiener filter is based on a statistical approach, and a more statistical account of the theory is given in the minimum mean square error (MMSE) estimator article. Typical deterministic filters are designed for a desired frequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible. Wiener filters are characterized by the following: This filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution. Wiener filter solutions. Let formula_0 be an unknown signal which must be estimated from a measurement signal formula_1. Where alpha is a tunable parameter. formula_2 is known as prediction, formula_3 is known as filtering, and formula_4 is known as smoothing (see Wiener filtering chapter of for more details). The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where only input data is used (i.e. the result or output is not fed back into the filter as in the IIR case). The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect; Norman Levinson gave the FIR solution in an appendix of Wiener's book. formula_5 Noncausal solution. where formula_6 are spectral densities. Provided that formula_7 is optimal, then the minimum mean-square error equation reduces to formula_8 and the solution formula_7 is the inverse two-sided Laplace transform of formula_9. formula_10 Causal solution. where This general formula is complicated and deserves a more detailed explanation. To write down the solution formula_18 in a specific case, one should follow these steps: Finite impulse response Wiener filter for discrete series. The causal finite impulse response (FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals. It populates the input matrix X with estimates of the auto-correlation of the input signal (T) and populates the output vector Y with estimates of the cross-correlation between the output and input signals (V). 
In order to derive the coefficients of the Wiener filter, consider the signal "w"["n"] being fed to a Wiener filter of order (number of past taps) "N" and with coefficients formula_23. The output of the filter is denoted "x"["n"] which is given by the expression formula_24 The residual error is denoted "e"["n"] and is defined as "e"["n"] = "x"["n"] − "s"["n"] (see the corresponding block diagram). The Wiener filter is designed so as to minimize the mean square error (MMSE criteria) which can be stated concisely as follows: formula_25 where formula_26 denotes the expectation operator. In the general case, the coefficients formula_27 may be complex and may be derived for the case where "w"["n"] and "s"["n"] are complex as well. With a complex signal, the matrix to be solved is a Hermitian Toeplitz matrix, rather than symmetric Toeplitz matrix. For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as: formula_28 To find the vector formula_29 which minimizes the expression above, calculate its derivative with respect to each formula_30 formula_31 Assuming that "w"["n"] and "s"["n"] are each stationary and jointly stationary, the sequences formula_32 and formula_33 known respectively as the autocorrelation of "w"["n"] and the cross-correlation between "w"["n"] and "s"["n"] can be defined as follows: formula_34 The derivative of the MSE may therefore be rewritten as: formula_35 Note that for real formula_36, the autocorrelation is symmetric:formula_37Letting the derivative be equal to zero results in: formula_38 which can be rewritten (using the above symmetric property) in matrix form formula_39 These equations are known as the Wiener–Hopf equations. The matrix T appearing in the equation is a symmetric Toeplitz matrix. Under suitable conditions on formula_40, these matrices are known to be positive definite and therefore non-singular yielding a unique solution to the determination of the Wiener filter coefficient vector, formula_41. Furthermore, there exists an efficient algorithm to solve such Wiener–Hopf equations known as the Levinson-Durbin algorithm so an explicit inversion of T is not required. In some articles, the cross correlation function is defined in the opposite way:formula_42Then, the formula_43 matrix will contain formula_44; this is just a difference in notation. Whichever notation is used, note that for real formula_45:formula_46 Relationship to the least squares filter. The realization of the causal Wiener filter looks a lot like the solution to the least squares estimate, except in the signal processing domain. The least squares solution, for input matrix formula_47 and output vector formula_48 is formula_49 The FIR Wiener filter is related to the least mean squares filter, but minimizing the error criterion of the latter does not rely on cross-correlations or auto-correlations. Its solution converges to the Wiener filter solution. Complex signals. For complex signals, the derivation of the complex Wiener filter is performed by minimizing formula_50 =formula_51. This involves computing partial derivatives with respect to both the real and imaginary parts of formula_27, and requiring them both to be zero. The resulting Wiener-Hopf equations are: formula_52 which can be rewritten in matrix form: formula_53 Note here that:formula_54 The Wiener coefficient vector is then computed as:formula_55 Applications. 
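The derivation above translates into a small numerical experiment that also previews the denoising application discussed below. In the sketch, sample estimates of the autocorrelation and cross-correlation are plugged into the Wiener–Hopf equations and the resulting FIR filter reduces the mean square error of a noisy observation; the sinusoidal signal, noise level, filter order and the use of the known clean signal to estimate "R""ws" are assumptions made for the example (in practice the cross-correlation must come from a model or from training data):

import numpy as np

rng = np.random.default_rng(0)
N = 20                                        # filter order (number of past taps)
n = 20000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))   # desired signal s[n]
w = s + 0.8 * rng.standard_normal(n)          # observed signal w[n] = s[n] + noise

def corr(a, b, m):
    # sample estimate of E{ a[k] b[k+m] } for lag m >= 0
    return float(np.mean(a[:n - m] * b[m:]))

R_w  = [corr(w, w, m) for m in range(N + 1)]  # autocorrelation of w
R_ws = [corr(w, s, m) for m in range(N + 1)]  # cross-correlation between w and s

# Wiener-Hopf equations: sum_j R_w[j - i] a_j = R_ws[i], written as T a = v
# with T the symmetric Toeplitz matrix T[i][j] = R_w[|i - j|].
T = np.array([[R_w[abs(i - j)] for j in range(N + 1)] for i in range(N + 1)])
a = np.linalg.solve(T, np.array(R_ws))

x = np.convolve(w, a)[:n]                     # x[n] = sum_i a_i w[n - i]
print("MSE of noisy input:  ", float(np.mean((w - s) ** 2)))
print("MSE of Wiener output:", float(np.mean((x - s) ** 2)))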
The Wiener filter has a variety of applications in signal processing, image processing, control systems, and digital communications. These applications generally fall into one of four main categories. For example, the Wiener filter can be used in image processing to remove noise from a picture: applying the Mathematica function codice_0 to the first image on the right produces the filtered image below it. It is also commonly used to denoise audio signals, especially speech, as a preprocessor before speech recognition. History. The filter was proposed by Norbert Wiener during the 1940s and published in 1949. The discrete-time equivalent of Wiener's work was derived independently by Andrey Kolmogorov and published in 1941. Hence the theory is often called the "Wiener–Kolmogorov" filtering theory ("cf." Kriging). The Wiener filter was the first statistically designed filter to be proposed and subsequently gave rise to many others, including the Kalman filter. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
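For the image-denoising application mentioned in the Applications section above, an analogous adaptive Wiener filter is available in SciPy. The short Python sketch below is a hedged illustration rather than a reproduction of the Mathematica example; the synthetic test image and the 5×5 window size are arbitrary choices made for this demonstration.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(1)

# Synthetic "image": a smooth pattern corrupted by additive Gaussian noise.
xx, yy = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
clean = np.sin(4 * np.pi * xx) * np.cos(4 * np.pi * yy)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Adaptive Wiener filtering over a local 5x5 window.
denoised = wiener(noisy, mysize=5)

print("MSE noisy   :", np.mean((noisy - clean) ** 2))     # ~0.09
print("MSE denoised:", np.mean((denoised - clean) ** 2))  # noticeably smaller
```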
[ { "math_id": 0, "text": "s(t+ \\alpha )" }, { "math_id": 1, "text": "x(t)" }, { "math_id": 2, "text": "\\alpha > 0" }, { "math_id": 3, "text": "\\alpha = 0 " }, { "math_id": 4, "text": "\\alpha < 0" }, { "math_id": 5, "text": "G(s) = \\frac{S_{x,s}(s)}{S_x(s)}e^{\\alpha s}," }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": " g(t)" }, { "math_id": 8, "text": "E(e^2) = R_s(0) - \\int_{-\\infty}^{\\infty} g(\\tau)R_{x,s}(\\tau + \\alpha)\\,d\\tau," }, { "math_id": 9, "text": "G(s)" }, { "math_id": 10, "text": "G(s) = \\frac{H(s)}{S_x^{+}(s)}," }, { "math_id": 11, "text": " H(s)" }, { "math_id": 12, "text": " \\frac{S_{x,s}(s)}{S_x^{-}(s)}e^{\\alpha s}" }, { "math_id": 13, "text": " S_x^{+}(s)" }, { "math_id": 14, "text": " S_x(s)" }, { "math_id": 15, "text": " t \\ge 0" }, { "math_id": 16, "text": " S_x^{-}(s)" }, { "math_id": 17, "text": " t < 0" }, { "math_id": 18, "text": " G(s)" }, { "math_id": 19, "text": "S_x(s) = S_x^{+}(s) S_x^{-}(s)" }, { "math_id": 20, "text": " S^{+}" }, { "math_id": 21, "text": " S^{-}" }, { "math_id": 22, "text": " S_{x,s}(s)e^{\\alpha s}" }, { "math_id": 23, "text": "\\{a_0, \\cdots, a_N\\}" }, { "math_id": 24, "text": "x[n] = \\sum_{i=0}^N a_i w[n-i] ." }, { "math_id": 25, "text": "a_i = \\arg \\min E \\left [e^2[n] \\right ]," }, { "math_id": 26, "text": "E[\\cdot]" }, { "math_id": 27, "text": "a_i" }, { "math_id": 28, "text": "\\begin{align}\nE \\left [e^2[n] \\right ] &= E \\left [ (x[n]-s[n])^2 \\right ]\\\\\n&= E \\left [ x^2[n] \\right ] + E \\left [s^2[n] \\right ] - 2E[x[n]s[n]]\\\\\n&= E \\left [ \\left ( \\sum_{i=0}^N a_i w[n-i] \\right)^2\\right ] + E \\left [s^2[n] \\right ] - 2E\\left [\\sum_{i=0}^N a_i w[n-i]s[n] \\right ]\n\\end{align}" }, { "math_id": 29, "text": " [a_0,\\, \\ldots,\\, a_N]" }, { "math_id": 30, "text": " a_i" }, { "math_id": 31, "text": "\\begin{align}\n\\frac{\\partial}{\\partial a_i} E \\left [e^2[n] \\right ] &= \\frac{\\partial}{\\partial a_i} \\left \\{ E \\left [ \\left ( \\sum_{j=0}^N a_j w[n-j] \\right)^2\\right ] + E \\left [s^2[n] \\right ] - 2E\\left [\\sum_{j=0}^N a_j w[n-j]s[n] \\right ]\\right \\} \\\\\n&= 2E\\left [ \\left ( \\sum_{j=0}^N a_j w[n-j] \\right ) w[n-i] \\right ] - 2E [w[n-i]s[n]] \\\\\n&= 2 \\left ( \\sum_{j=0}^N E [w[n-j]w[n-i] ] a_j \\right ) - 2E [ w[n-i]s[n]]\n\\end{align}" }, { "math_id": 32, "text": " R_w[m]" }, { "math_id": 33, "text": "R_{ws}[m]" }, { "math_id": 34, "text": "\\begin{align}\nR_w[m] &= E\\{w[n]w[n+m]\\} \\\\\nR_{ws}[m] &= E\\{w[n]s[n+m]\\}\n\\end{align}" }, { "math_id": 35, "text": "\\frac{\\partial}{\\partial a_i} E \\left [e^2[n] \\right ]= 2 \\left ( \\sum_{j=0}^{N} R_w[j-i] a_j \\right ) - 2 R_{ws}[i] \\qquad i = 0,\\cdots, N." }, { "math_id": 36, "text": "w[n]" }, { "math_id": 37, "text": " R_w[j-i] = R_w[i-j]" }, { "math_id": 38, "text": "\\sum_{j=0}^N R_w[j-i] a_j = R_{ws}[i] \\qquad i = 0,\\cdots, N." 
}, { "math_id": 39, "text": "\\underbrace{\\begin{bmatrix}\nR_w[0] & R_w[1] & \\cdots & R_w[N] \\\\\nR_w[1] & R_w[0] & \\cdots & R_w[N-1] \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nR_w[N] & R_w[N-1] & \\cdots & R_w[0]\n\\end{bmatrix}}_{\\mathbf{T}} \\underbrace{\\begin{bmatrix} a_0 \\\\ a_1 \\\\ \\vdots \\\\ a_N \\end{bmatrix}}_{\\mathbf{a}} = \\underbrace{\\begin{bmatrix} R_{ws}[0] \\\\R_{ws}[1] \\\\ \\vdots \\\\ R_{ws}[N] \\end{bmatrix}}_{\\mathbf{v}} " }, { "math_id": 40, "text": "R" }, { "math_id": 41, "text": "\\mathbf{a} = \\mathbf{T}^{-1}\\mathbf{v}" }, { "math_id": 42, "text": "R_{sw}[m] = E\\{w[n]s[n+m]\\}" }, { "math_id": 43, "text": "\\mathbf{v}" }, { "math_id": 44, "text": "R_{sw}[0] \\ldots R_{sw}[N]" }, { "math_id": 45, "text": "w[n], s[n]" }, { "math_id": 46, "text": "R_{sw}[k] = R_{ws}[-k]" }, { "math_id": 47, "text": "\\mathbf{X}" }, { "math_id": 48, "text": "\\mathbf{y}" }, { "math_id": 49, "text": "\\boldsymbol{\\hat\\beta} = (\\mathbf{X} ^\\mathbf{T}\\mathbf{X})^{-1}\\mathbf{X}^{\\mathbf{T}}\\boldsymbol y ." }, { "math_id": 50, "text": "E \\left [|e[n]|^2 \\right ]" }, { "math_id": 51, "text": "E \\left [e[n]e^*[n] \\right ]" }, { "math_id": 52, "text": "\\sum_{j=0}^N R_w[j-i] a_j^* = R_{ws}[i] \\qquad i = 0,\\cdots, N." }, { "math_id": 53, "text": "\\underbrace{\\begin{bmatrix}\nR_w[0] & R_w^*[1] & \\cdots & R_w^*[N-1] & R_w^*[N] \\\\\nR_w[1] & R_w[0] & \\cdots& R_w^*[N-2] & R_w^*[N-1] \\\\\n\\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\nR_w[N-1] & R_w[N-2] & \\cdots & R_w[0] & R_w^*[1] \\\\\nR_w[N] & R_w[N-1] & \\cdots & R_w[1] & R_w[0]\n\\end{bmatrix}}_{\\mathbf{T}} \\underbrace{\\begin{bmatrix} a_0^* \\\\ a_1^* \\\\ \\vdots \\\\a_{N-1}^* \\\\ a_N^* \\end{bmatrix}}_{\\mathbf{a^*}} = \\underbrace{\\begin{bmatrix} R_{ws}[0] \\\\R_{ws}[1] \\\\ \\vdots\\\\ R_{ws}[N-1] \\\\ R_{ws}[N] \\end{bmatrix}}_{\\mathbf{v}} " }, { "math_id": 54, "text": "\\begin{align}\nR_w[-k] &= R_w^*[k] \\\\\nR_{sw}[k] &= R_{ws}^*[-k]\n\\end{align}" }, { "math_id": 55, "text": "\\mathbf{a} = {(\\mathbf{T}^{-1}\\mathbf{v})}^*" } ]
https://en.wikipedia.org/wiki?curid=1216721
1216914
Discrete-time Fourier transform
Fourier analysis technique applied to sequences In mathematics, the discrete-time Fourier transform (DTFT) is a form of Fourier analysis that is applicable to a sequence of discrete values. The DTFT is often used to analyze samples of a continuous function. The term "discrete-time" refers to the fact that the transform operates on discrete data, often samples whose interval has units of time. From uniformly spaced samples it produces a function of frequency that is a periodic summation of the continuous Fourier transform of the original continuous function. Under certain theoretical conditions, described by the sampling theorem, the original continuous function can be recovered perfectly from the DTFT and thus from the original discrete samples. The DTFT itself is a continuous function of frequency, but discrete samples of it can be readily calculated via the discrete Fourier transform (DFT) (see ), which is by far the most common method of modern Fourier analysis. Both transforms are invertible. The inverse DTFT is the original sampled data sequence. The inverse DFT is a periodic summation of the original sequence. The fast Fourier transform (FFT) is an algorithm for computing one cycle of the DFT, and its inverse produces one cycle of the inverse DFT. Introduction. Relation to Fourier Transform. We begin with a common definition of the Fourier transform integral: formula_0 This reduces to a summation (see ) when formula_1 is replaced by a discrete sequence of its samples, formula_2 for integer values of formula_3  formula_4 is also replaced by formula_5 leaving: formula_6 which is a Fourier series in frequency, with periodicity formula_7 The subscript formula_8 distinguishes it from formula_9 and from the angular frequency form of the DTFT. I.e., when the frequency variable, formula_10 has normalized units of "radians/sample", the periodicity is formula_11 and the Fourier series is: The utility of the DTFT is rooted in the Poisson summation formula, which tells us that the periodic function represented by the Fourier series is a periodic summation of the Fourier transform: Poisson summation The integer formula_12 has units of "cycles/sample", and formula_8 is the sample-rate, formula_13 ("samples/sec").  So formula_14 comprises exact copies of formula_9 that are shifted by multiples of formula_13 hertz and combined by addition. For sufficiently large formula_13 the formula_15 term can be observed in the region formula_16 with little or no distortion (aliasing) from the other terms.  Fig.1 depicts an example where formula_8 is not large enough to prevent aliasing. We also note that formula_17 is the Fourier transform of formula_18 Therefore, an alternative definition of DTFT is: The modulated Dirac comb function is a mathematical abstraction sometimes referred to as "impulse sampling". Inverse transform. An operation that recovers the discrete data sequence from the DTFT function is called an "inverse DTFT". For instance, the inverse continuous Fourier transform of both sides of Eq.3 produces the sequence in the form of a modulated Dirac comb function: formula_19 However, noting that formula_14 is periodic, all the necessary information is contained within any interval of length formula_7  In both Eq.1 and Eq.2, the summations over formula_20 are a Fourier series, with coefficients formula_21  The standard formulas for the Fourier coefficients are also the inverse transforms: Periodic data. 
When the input data sequence formula_22 is formula_23-periodic, Eq.2 can be computationally reduced to a discrete Fourier transform (DFT), because: The DFT of one cycle of the formula_22 sequence is: formula_26 And formula_22 can be expressed in terms of the inverse transform: formula_27 The inverse DFT is sometimes referred to as a Discrete Fourier series (DFS). formula_28      Due to the formula_23-periodicity of both functions of formula_29 this can be simplified to: formula_30 which satisfies the inverse transform requirement: formula_31 Sampling the DTFT. When the DTFT is continuous, a common practice is to compute an arbitrary number of samples formula_32 of one cycle of the periodic function formula_33:  formula_34 where formula_35 is a periodic summation: formula_36     (see Discrete Fourier series) The formula_35 sequence is the inverse DFT. Thus, our sampling of the DTFT causes the inverse transform to become periodic. The array of formula_37 values is known as a "periodogram", and the parameter formula_23 is called NFFT in the Matlab function of the same name. In order to evaluate one cycle of formula_35 numerically, we require a finite-length formula_22 sequence. For instance, a long sequence might be truncated by a window function of length formula_38 resulting in three cases worthy of special mention. For notational simplicity, consider the formula_22 values below to represent the values modified by the window function. Case: Frequency decimation. formula_39 for some integer formula_40 (typically 6 or 8) A cycle of formula_35 reduces to a summation of formula_40 segments of length formula_41  The DFT then goes by various names, such as: Recall that decimation of sampled data in one domain (time or frequency) produces overlap (sometimes known as aliasing) in the other, and vice versa. Compared to an formula_38-length DFT, the formula_35 summation/overlap causes decimation in frequency, leaving only DTFT samples least affected by spectral leakage. That is usually a priority when implementing an FFT filter-bank (channelizer). With a conventional window function of length formula_42 scalloping loss would be unacceptable. So multi-block windows are created using FIR filter design tools.  Their frequency profile is flat at the highest point and falls off quickly at the midpoint between the remaining DTFT samples. The larger the value of parameter formula_43 the better the potential performance. Case: formula_44 When a symmetric, formula_38-length window function (formula_45) is truncated by 1 coefficient it is called "periodic" or "DFT-even". That is a common practice, but the truncation affects the DTFT (spectral leakage) by a small amount. It is at least of academic interest to characterize that effect.  An formula_23-length DFT of the truncated window produces frequency samples at intervals of formula_46 instead of formula_47  The samples are real-valued,  but their values do not exactly match the DTFT of the symmetric window. The periodic summation, formula_48 along with an formula_23-length DFT, can also be used to sample the DTFT at intervals of formula_49  Those samples are also real-valued and do exactly match the DTFT (example: ). To use the full symmetric window for spectral analysis at the formula_50 spacing, one would combine the formula_51 and formula_52 data samples (by addition, because the symmetrical window weights them equally) and then apply the truncated symmetric window and the formula_23-length DFT. Case: Frequency interpolation. 
formula_53 In this case, the DFT simplifies to a more familiar form: formula_54 In order to take advantage of a fast Fourier transform algorithm for computing the DFT, the summation is usually performed over all formula_23 terms, even though formula_55 of them are zeros. Therefore, the case formula_56 is often referred to as zero-padding. Spectral leakage, which increases as formula_38 decreases, is detrimental to certain important performance metrics, such as resolution of multiple frequency components and the amount of noise measured by each DTFT sample. But those things don't always matter, for instance when the formula_22 sequence is a noiseless sinusoid (or a constant), shaped by a window function. Then it is a common practice to use "zero-padding" to graphically display and compare the detailed leakage patterns of window functions. To illustrate that for a rectangular window, consider the sequence: formula_57 and formula_58 Figures 2 and 3 are plots of the magnitude of two different sized DFTs, as indicated in their labels. In both cases, the dominant component is at the signal frequency: formula_59. Also visible in Fig 2 is the spectral leakage pattern of the formula_60 rectangular window. The illusion in Fig 3 is a result of sampling the DTFT at just its zero-crossings. Rather than the DTFT of a finite-length sequence, it gives the impression of an infinitely long sinusoidal sequence. Contributing factors to the illusion are the use of a rectangular window, and the choice of a frequency (1/8 = 8/64) with exactly 8 (an integer) cycles per 64 samples. A Hann window would produce a similar result, except the peak would be widened to 3 samples (see DFT-even Hann window). Convolution. The convolution theorem for sequences is: formula_61 An important special case is the circular convolution of sequences x and y defined by formula_62 where formula_35 is a periodic summation. The discrete-frequency nature of formula_63 means that the product with the continuous function formula_64 is also discrete, which results in considerable simplification of the inverse transform: formula_65 For x and y sequences whose non-zero duration is less than or equal to N, a final simplification is: formula_66 The significance of this result is explained at Circular convolution and Fast convolution algorithms. Symmetry properties. When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: formula_67 From this, various relationships are apparent, for example: Relationship to the Z-transform. formula_76 is a Fourier series that can also be expressed in terms of the bilateral Z-transform.  I.e.: formula_77 where the formula_78 notation distinguishes the Z-transform from the Fourier transform. Therefore, we can also express a portion of the Z-transform in terms of the Fourier transform: formula_79 Note that when parameter T changes, the terms of formula_76 remain a constant separation formula_80 apart, and their width scales up or down. The terms of "X"1/"T"("f") remain a constant width and their separation 1/"T" scales up or down. Table of discrete-time Fourier transforms. Some common transform pairs are shown in the table below. The following notation applies: Properties. 
This table shows some mathematical operations in the time domain and the corresponding effects in the frequency domain. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Page citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
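The sampling and zero-padding behaviour discussed in the sections above is easy to verify numerically. The Python sketch below is a minimal illustration (the direct-evaluation helper "dtft" is written here only for the comparison, and the sequence mirrors the article's example of a length-64 complex sinusoid at 1/8 cycle/sample): an N-point DFT of the zero-padded sequence reproduces samples of the DTFT at the frequencies k/N, and with N = L = 64 the samples fall on the zero crossings that create the illusion described for Fig 3.

```python
import numpy as np

L = 64
n = np.arange(L)
x = np.exp(1j * 2 * np.pi * (1 / 8) * n)     # the article's length-64 test sinusoid

def dtft(seq, freqs):
    """Direct evaluation of X(f) = sum_n seq[n] exp(-i 2 pi f n), f in cycles/sample."""
    k = np.arange(len(seq))
    return np.array([np.sum(seq * np.exp(-2j * np.pi * f * k)) for f in np.atleast_1d(freqs)])

# Zero-padding: an N-point DFT (N > L) samples the DTFT at f = k/N.
N = 256
X_dft = np.fft.fft(x, n=N)                   # implicit zero-padding to length N
X_dtft = dtft(x, np.arange(N) / N)
print("max |DFT - DTFT samples|:", np.max(np.abs(X_dft - X_dtft)))   # ~0 up to roundoff

# With N = L = 64, the samples land on the DTFT's zero crossings plus the peak,
# producing the "clean" spectrum discussed for Fig 3.
X_64 = np.fft.fft(x)
print("bins of the 64-point DFT above 1e-6:", np.flatnonzero(np.abs(X_64) > 1e-6))  # -> [8]
```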
[ { "math_id": 0, "text": "X(f) \\triangleq \\int_{-\\infty}^\\infty x(t)\\cdot e^{-i 2\\pi ft} dt." }, { "math_id": 1, "text": "x(t)" }, { "math_id": 2, "text": "x(nT)," }, { "math_id": 3, "text": "n." }, { "math_id": 4, "text": "dt" }, { "math_id": 5, "text": "T," }, { "math_id": 6, "text": "X_{1/T}(f) \\triangleq \\sum_{n=-\\infty}^{\\infty} \\underbrace{T\\cdot x(nT)}_{x[n]}\\ e^{-i 2\\pi f T n}," }, { "math_id": 7, "text": "1/T." }, { "math_id": 8, "text": "1/T" }, { "math_id": 9, "text": "X(f)" }, { "math_id": 10, "text": "\\omega," }, { "math_id": 11, "text": "2\\pi," }, { "math_id": 12, "text": "k" }, { "math_id": 13, "text": "f_s" }, { "math_id": 14, "text": "X_{1/T}(f)" }, { "math_id": 15, "text": "k=0" }, { "math_id": 16, "text": "[-f_s/2, f_s/2]" }, { "math_id": 17, "text": "e^{-i2\\pi fTn}" }, { "math_id": 18, "text": "\\delta(t-nT)." }, { "math_id": 19, "text": "\\sum_{n=-\\infty}^{\\infty} x[n]\\cdot \\delta(t-n T) = \\mathcal{F}^{-1}\\left \\{X_{1/T}(f)\\right\\} \\ \\triangleq \\int_{-\\infty}^\\infty X_{1/T}(f)\\cdot e^{i 2 \\pi f t} df." }, { "math_id": 20, "text": "n" }, { "math_id": 21, "text": "x[n]." }, { "math_id": 22, "text": "x[n]" }, { "math_id": 23, "text": "N" }, { "math_id": 24, "text": "1/(NT)," }, { "math_id": 25, "text": "(1/T)/(1/(NT)) = N." }, { "math_id": 26, "text": "X[k] \\triangleq \\underbrace{\\sum_{N} x[n]\\cdot e^{-i 2 \\pi \\frac{k}{N}n}}_{\\text{any n-sequence of length N}}, \\quad k \\in \\mathbf{Z}." }, { "math_id": 27, "text": "x[n] = \\frac{1}{N} \\underbrace{\\sum_{N} X[k]\\cdot e^{i 2 \\pi \\frac{k}{N}n}}_{\\text{any k-sequence of length N}}, \\quad n \\in \\mathbf{Z}." }, { "math_id": 28, "text": "\n\\begin{align}\nX_{1/T}(f) &\\triangleq \\sum_{n=-\\infty}^{\\infty} x[n]\\cdot e^{-i 2\\pi f nT}\\\\\n&= \\sum_{n=-\\infty}^{\\infty} \\left[\\frac{1}{N} \\sum_{k=0}^{N-1} X[k]\\cdot e^{i 2 \\pi \\frac{k}{N}n}\\right] \\cdot e^{-i 2\\pi f n T}\\\\\n&= \\frac{1}{N} \\sum_{k=0}^{N-1} X[k] \\underbrace{\\left[\\sum_{n=-\\infty}^{\\infty} e^{i 2 \\pi \\frac{k}{N}n} \\cdot e^{-i 2\\pi f n T}\\right]}_{\\operatorname{DTFT}\\left(e^{i 2 \\pi \\frac{k}{N}n}\\right)}\\\\\n&= \\frac{1}{N} \\sum_{k=0}^{N-1} X[k] \\cdot \\frac{1}{T}\\sum_{M=-\\infty}^\\infty \\delta \\left(f - \\tfrac{k}{NT} - \\tfrac{M}{T} \\right)\n\\end{align}\n" }, { "math_id": 29, "text": "k," }, { "math_id": 30, "text": "X_{1/T}(f) = \\frac{1}{NT} \\sum_{k=-\\infty}^{\\infty} X[k] \\cdot \\delta\\left(f-\\frac{k}{NT}\\right)," }, { "math_id": 31, "text": "\\begin{align}\nx[n] &= T \\int_{0}^{\\frac{1}{T}} X_{1/T}(f)\\cdot e^{i 2 \\pi f nT} df\\\\\n&=\\frac{1}{N} \\sum_{k=-\\infty}^\\infty X[k] \\underbrace{\\int_{0}^{\\frac{1}{T}} \\delta \\left(f-\\tfrac{k}{NT}\\right) e^{i 2 \\pi f nT} df}_{\\text{zero for } k\\ \\notin\\ [0,N-1]}\\\\\n&=\\frac{1}{N} \\sum_{k=0}^{N-1} X[k] \\int_{0}^{\\frac{1}{T}} \n\\delta \\left(f-\\tfrac{k}{NT}\\right) e^{i 2 \\pi f nT} df\\\\\n&=\\frac{1}{N} \\sum_{k=0}^{N-1} X[k]\\cdot e^{i 2 \\pi \\tfrac{k}{NT} nT}\\\\\n&=\\frac{1}{N} \\sum_{k=0}^{N-1} X[k]\\cdot e^{i 2 \\pi \\tfrac{k}{N} n}\n\\end{align}\n" }, { "math_id": 32, "text": "(N)" }, { "math_id": 33, "text": "X_{1/T}" }, { "math_id": 34, "text": "\n\\begin{align}\n\\underbrace{X_{1/T}\\left(\\frac{k}{NT}\\right)}_{X_k} &= \\sum_{n=-\\infty}^\\infty x[n]\\cdot e^{-i 2\\pi \\frac{k}{N}n} \\quad \\quad k = 0, \\dots, N-1 \\\\\n&= \\underbrace{\\sum_{N} x_{_N}[n]\\cdot e^{-i 2\\pi \\frac{k}{N}n},}_{\\text{DFT}}\\quad \\scriptstyle{\\text{(sum over any }n\\text{-sequence of length }N)}\n\\end{align} 
\n" }, { "math_id": 35, "text": "x_{_N}" }, { "math_id": 36, "text": "x_{_N}[n]\\ \\triangleq\\ \\sum_{m=-\\infty}^{\\infty} x[n-mN]." }, { "math_id": 37, "text": "|X_k|^2" }, { "math_id": 38, "text": "L" }, { "math_id": 39, "text": "L=N\\cdot I," }, { "math_id": 40, "text": "I" }, { "math_id": 41, "text": "N." }, { "math_id": 42, "text": "L," }, { "math_id": 43, "text": "I," }, { "math_id": 44, "text": "L=N+1" }, { "math_id": 45, "text": "x" }, { "math_id": 46, "text": "1/N," }, { "math_id": 47, "text": "1/L." }, { "math_id": 48, "text": "x_{_N}," }, { "math_id": 49, "text": "1/N." }, { "math_id": 50, "text": "1/N" }, { "math_id": 51, "text": "n=0" }, { "math_id": 52, "text": "n=N" }, { "math_id": 53, "text": "L \\le N" }, { "math_id": 54, "text": "X_k = \\sum_{n=0}^{N-1} x[n]\\cdot e^{-i 2\\pi \\frac{k}{N}n}." }, { "math_id": 55, "text": "N-L" }, { "math_id": 56, "text": "L < N" }, { "math_id": 57, "text": "x[n] = e^{i 2\\pi \\frac{1}{8} n},\\quad " }, { "math_id": 58, "text": "L=64." }, { "math_id": 59, "text": "f = 1/8 = 0.125" }, { "math_id": 60, "text": "L=64" }, { "math_id": 61, "text": "x * y\\ =\\ \\scriptstyle{\\rm DTFT}^{-1} \\displaystyle \\left[\\scriptstyle{\\rm DTFT} \\displaystyle \\{x\\}\\cdot \\scriptstyle{\\rm DTFT} \\displaystyle \\{y\\}\\right]." }, { "math_id": 62, "text": "x_{_N}*y," }, { "math_id": 63, "text": "\\scriptstyle{\\rm DTFT} \\displaystyle \\{x_{_N}\\}" }, { "math_id": 64, "text": "\\scriptstyle{\\rm DTFT} \\displaystyle \\{y\\}" }, { "math_id": 65, "text": "x_{_N} * y\\ =\\ \\scriptstyle{\\rm DTFT}^{-1} \\displaystyle \\left[\\scriptstyle{\\rm DTFT} \\displaystyle \\{x_{_N}\\}\\cdot \\scriptstyle{\\rm DTFT} \\displaystyle \\{y\\}\\right]\\ =\\ \\scriptstyle{\\rm DFT}^{-1} \\displaystyle \\left[\\scriptstyle{\\rm DFT} \\displaystyle \\{x_{_N}\\}\\cdot \\scriptstyle{\\rm DFT} \\displaystyle \\{y_{_N}\\}\\right]." }, { "math_id": 66, "text": "x_{_N} * y\\ =\\ \\scriptstyle{\\rm DFT}^{-1} \\displaystyle \\left[\\scriptstyle{\\rm DFT} \\displaystyle \\{x\\}\\cdot \\scriptstyle{\\rm DFT} \\displaystyle \\{y\\}\\right]." }, { "math_id": 67, "text": "\n\\begin{align}\n\\mathsf{Time\\ domain} \\quad &\\ x \\quad &= \\quad & x_{_{RE}} \\quad &+ \\quad & x_{_{RO}} \\quad &+ \\quad i\\ & x_{_{IE}} \\quad &+ \\quad &\\underbrace{i\\ x_{_{IO}}} \\\\\n&\\Bigg\\Updownarrow\\mathcal{F} & &\\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F}\\\\\n\\mathsf{Frequency\\ domain} \\quad &X \\quad &= \\quad & X_{RE} \\quad &+ \\quad &\\overbrace{i\\ X_{IO}} \\quad &+ \\quad i\\ & X_{IE} \\quad &+ \\quad & X_{RO}\n\\end{align}\n" }, { "math_id": 68, "text": "(x_{_{RE}}+x_{_{RO}})" }, { "math_id": 69, "text": "X_{RE}+i\\ X_{IO}." }, { "math_id": 70, "text": "(i\\ x_{_{IE}}+i\\ x_{_{IO}})" }, { "math_id": 71, "text": "X_{RO}+i\\ X_{IE}," }, { "math_id": 72, "text": "(x_{_{RE}}+i\\ x_{_{IO}})" }, { "math_id": 73, "text": "X_{RE}+X_{RO}," }, { "math_id": 74, "text": "(x_{_{RO}}+i\\ x_{_{IE}})" }, { "math_id": 75, "text": "i\\ X_{IE}+i\\ X_{IO}," }, { "math_id": 76, "text": "X_{2\\pi}(\\omega)" }, { "math_id": 77, "text": "X_{2\\pi}(\\omega) = \\left. 
X_z(z) \\, \\right|_{z = e^{i \\omega}} = X_z(e^{i \\omega})," }, { "math_id": 78, "text": "X_z" }, { "math_id": 79, "text": "\n\\begin{align}\nX_z(e^{i \\omega}) &= \\ X_{1/T}\\left(\\tfrac{\\omega}{2\\pi T}\\right)\n \\ = \\ \\sum_{k=-\\infty}^{\\infty} X\\left(\\tfrac{\\omega}{2\\pi T} - k/T\\right)\\\\\n&= \\sum_{k=-\\infty}^{\\infty} X\\left(\\tfrac{\\omega - 2\\pi k}{2\\pi T} \\right).\n\\end{align}\n" }, { "math_id": 80, "text": "2 \\pi" }, { "math_id": 81, "text": "\\omega=2 \\pi f T" }, { "math_id": 82, "text": "f" }, { "math_id": 83, "text": "T" }, { "math_id": 84, "text": "\\omega" }, { "math_id": 85, "text": "-\\infty < \\omega < \\infty " }, { "math_id": 86, "text": "X_o(\\omega)" }, { "math_id": 87, "text": "-\\pi < \\omega \\le \\pi" }, { "math_id": 88, "text": "X_{2\\pi}(\\omega)\\ \\triangleq \\sum_{k=-\\infty}^{\\infty} X_o(\\omega - 2\\pi k)." }, { "math_id": 89, "text": "\\delta ( \\omega )" }, { "math_id": 90, "text": "\\operatorname{sinc} (t)" }, { "math_id": 91, "text": "\\operatorname{rect}\\left[{n \\over L}\\right] \\triangleq \\begin{cases}\n1 & |n|\\leq L/2 \\\\\n0 & |n| > L/2\n\\end{cases}" }, { "math_id": 92, "text": "\\operatorname{tri} (t)" }, { "math_id": 93, "text": "u[n]" }, { "math_id": 94, "text": "\\delta[n]" }, { "math_id": 95, "text": "\\delta_{n,0}" }, { "math_id": 96, "text": "*\\!" }, { "math_id": 97, "text": "x[n]^{*}" } ]
https://en.wikipedia.org/wiki?curid=1216914
12169694
Geometric measure theory
Study of geometric properties of sets through measure theory In mathematics, geometric measure theory (GMT) is the study of geometric properties of sets (typically in Euclidean space) through measure theory. It allows mathematicians to extend tools from differential geometry to a much larger class of surfaces that are not necessarily smooth. History. Geometric measure theory was born out of the desire to solve Plateau's problem (named after Joseph Plateau), which asks whether for every smooth closed curve in formula_0 there exists a surface of least area among all surfaces whose boundary equals the given curve. Such surfaces mimic soap films. The problem had remained open since it was posed in 1760 by Lagrange. It was solved independently in the 1930s by Jesse Douglas and Tibor Radó under certain topological restrictions. In 1960 Herbert Federer and Wendell Fleming used the theory of currents to solve the orientable Plateau's problem analytically without topological restrictions, thus sparking geometric measure theory. Later, Jean Taylor, building on work of Fred Almgren, proved Plateau's laws for the kinds of singularities that can occur in these more general soap films and clusters of soap bubbles. Important notions. The following objects are central in geometric measure theory: The following theorems and concepts are also central: Examples. The Brunn–Minkowski inequality for the "n"-dimensional volumes of convex bodies "K" and "L", formula_1 can be proved on a single page and quickly yields the classical isoperimetric inequality. The Brunn–Minkowski inequality also leads to Anderson's theorem in statistics. The proof of the Brunn–Minkowski inequality predates modern measure theory; the development of measure theory and Lebesgue integration allowed connections to be made between geometry and analysis, to the extent that in an integral form of the Brunn–Minkowski inequality known as the Prékopa–Leindler inequality the geometry seems almost entirely absent.
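As a quick numerical illustration of the Brunn–Minkowski inequality quoted above, the sketch below checks it for axis-aligned boxes, for which the combination (1 − λ)K + λL is again a box with blended side lengths; the dimension and side lengths are arbitrary test values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # dimension
a = rng.uniform(0.1, 3.0, size=n)       # side lengths of box K
b = rng.uniform(0.1, 3.0, size=n)       # side lengths of box L

def vol(sides):
    """n-dimensional volume of an axis-aligned box with the given side lengths."""
    return float(np.prod(sides))

for lam in np.linspace(0.0, 1.0, 11):
    lhs = vol((1 - lam) * a + lam * b) ** (1 / n)                 # vol((1-l)K + lL)^(1/n)
    rhs = (1 - lam) * vol(a) ** (1 / n) + lam * vol(b) ** (1 / n)
    assert lhs >= rhs - 1e-12, (lam, lhs, rhs)
print("Brunn-Minkowski inequality holds for every tested value of lambda.")
```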
[ { "math_id": 0, "text": "\\mathbb{R}^3" }, { "math_id": 1, "text": "\\mathrm{vol} \\big( (1 - \\lambda) K + \\lambda L \\big)^{1/n} \\geq (1 - \\lambda) \\mathrm{vol} (K)^{1/n} + \\lambda \\, \\mathrm{vol} (L)^{1/n}," } ]
https://en.wikipedia.org/wiki?curid=12169694
1217120
Algebra of random variables
Mathematical technique The algebra of random variables in statistics, provides rules for the symbolic manipulation of random variables, while avoiding delving too deeply into the mathematically sophisticated ideas of probability theory. Its symbolism allows the treatment of sums, products, ratios and general functions of random variables, as well as dealing with operations such as finding the probability distributions and the expectations (or expected values), variances and covariances of such combinations. In principle, the elementary algebra of random variables is equivalent to that of conventional non-random (or deterministic) variables. However, the changes occurring on the probability distribution of a random variable obtained after performing algebraic operations are not straightforward. Therefore, the behavior of the different operators of the probability distribution, such as expected values, variances, covariances, and moments, may be different from that observed for the random variable using symbolic algebra. It is possible to identify some key rules for each of those operators, resulting in different types of algebra for random variables, apart from the elementary symbolic algebra: Expectation algebra, Variance algebra, Covariance algebra, Moment algebra, etc. Elementary symbolic algebra of random variables. Considering two random variables formula_0 and formula_1, the following algebraic operations are possible: In all cases, the variable formula_7 resulting from each operation is also a random variable. All commutative and associative properties of conventional algebraic operations are also valid for random variables. If any of the random variables is replaced by a deterministic variable or by a constant value, all the previous properties remain valid. Expectation algebra for random variables. The expected value formula_8 of the random variable formula_7 resulting from an algebraic operation between two random variables can be calculated using the following set of rules: If any of the random variables is replaced by a deterministic variable or by a constant value (formula_16), the previous properties remain valid considering that formula_17 and, therefore, formula_18. If formula_7 is defined as a general non-linear algebraic function formula_19 of a random variable formula_0, then: formula_20 Some examples of this property include: The exact value of the expectation of the non-linear function will depend on the particular probability distribution of the random variable formula_0. Variance algebra for random variables. The variance formula_25 of the random variable formula_7 resulting from an algebraic operation between random variables can be calculated using the following set of rules: where formula_36 represents the covariance operator between random variables formula_0 and formula_1. The variance of a random variable can also be expressed directly in terms of the covariance or in terms of the expected value: formula_37 If any of the random variables is replaced by a deterministic variable or by a constant value (formula_16), the previous properties remain valid considering that formula_17 and formula_18, formula_38 and formula_39. 
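The expectation and variance rules stated above are straightforward to check by simulation. The following Python sketch is a minimal Monte Carlo illustration (the distributions, the sample size, and the variable names are arbitrary choices for the example) for two independent random variables, so that the covariance terms vanish; agreement is only up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(2.0, 1.5, n)       # X and Y independent, so Cov[X, Y] = 0
Y = rng.exponential(3.0, n)

# Expectation algebra (agreement is up to Monte Carlo error)
print(np.mean(X + Y), np.mean(X) + np.mean(Y))   # E[X + Y] = E[X] + E[Y]
print(np.mean(X * Y), np.mean(X) * np.mean(Y))   # E[XY] = E[X]E[Y] for independent X, Y
print(np.mean(X**2),  np.mean(X)**2)             # E[X^2] differs from E[X]^2 in general

# Variance algebra for independent X, Y
print(np.var(X + Y), np.var(X) + np.var(Y))      # Var[X + Y] = Var[X] + Var[Y]
print(np.var(X * Y),
      np.var(X) * np.var(Y)
      + np.var(X) * np.mean(Y)**2
      + np.var(Y) * np.mean(X)**2)               # product rule for independent X, Y
```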
Special cases are the addition and multiplication of a random variable with a deterministic variable or a constant, where: If formula_7 is defined as a general non-linear algebraic function formula_19 of a random variable formula_0, then: formula_42 The exact value of the variance of the non-linear function will depend on the particular probability distribution of the random variable formula_0. Covariance algebra for random variables. The covariance (formula_43) between the random variable formula_7 resulting from an algebraic operation and the random variable formula_0 can be calculated using the following set of rules: The covariance of a random variable can also be expressed directly in terms of the expected value: formula_56 If any of the random variables is replaced by a deterministic variable or by a constant value (formula_16), the previous properties remain valid considering that formula_57, formula_58 and formula_59. If formula_7 is defined as a general non-linear algebraic function formula_19 of a random variable formula_0, then: formula_60 The exact value of the covariance of the non-linear function will depend on the particular probability distribution of the random variable formula_0. Approximations by Taylor series expansions of moments. If the moments of a certain random variable formula_0 are known (or can be determined by integration if the probability density function is known), then it is possible to approximate the expected value of any general non-linear function formula_61 as a Taylor series expansion of the moments, as follows: formula_62, where formula_63 is the mean value of formula_0. formula_64, where formula_65 is the "n"-th moment of formula_0 about its mean. Note that by their definition, formula_66 and formula_67. The first-order term always vanishes but was kept to obtain a closed-form expression. Then, formula_68, where the Taylor expansion is truncated after the formula_69-th moment. Particularly for functions of normal random variables, it is possible to obtain a Taylor expansion in terms of the standard normal distribution: formula_70, where formula_71 is a normal random variable, and formula_72 is the standard normal distribution. Thus, formula_73, where the moments of the standard normal distribution are given by: formula_74 Similarly for normal random variables, it is also possible to approximate the variance of the non-linear function as a Taylor series expansion as: formula_75, where formula_76, and formula_77 (a brief numerical illustration of the second-order version of this approximation is given at the end of the article). Algebra of complex random variables. In the algebraic axiomatization of probability theory, the primary concept is not that of probability of an event, but rather that of a random variable. Probability distributions are determined by assigning an expectation to each random variable. The measurable space and the probability measure arise from the random variables and expectations by means of well-known representation theorems of analysis. One of the important features of the algebraic approach is that apparently infinite-dimensional probability distributions are not harder to formalize than finite-dimensional ones. Random variables are assumed to have the following properties: complex constants are possible realizations of a random variable; the sum and the product of two random variables are again random variables; addition and multiplication of random variables are both commutative; and there is a notion of conjugation of random variables, satisfying ("XY")* = "Y"*"X"* and "X"** = "X" for all random variables "X", "Y" and coinciding with complex conjugation if "X" is a constant. This means that random variables form complex commutative *-algebras. If "X" = "X"* then the random variable "X" is called "real". An expectation "E" on an algebra "A" of random variables is a normalized, positive linear functional. 
What this means is that "E"["k"] = "k" where "k" is a constant; "E"["X" + "Y"] = "E"["X"] + "E"["Y"] for all random variables "X" and "Y"; and "E"["kX"] = "kE"["X"] if "k" is a constant. One may generalize this setup, allowing the algebra to be noncommutative. This leads to other areas of noncommutative probability such as quantum probability, random matrix theory, and free probability.
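Returning to the Taylor-series approximation of moments described earlier: the second-order truncation for a normal random variable reads E[f(X)] ≈ f(μ) + ½ f″(μ) σ², since the first-order term vanishes. The sketch below compares this against a Monte Carlo estimate and a known closed form; the choice f(x) = e^x is arbitrary, made only because the exact value E[e^X] = e^(μ+σ²/2) is available for the comparison.

```python
import numpy as np

mu, sigma = 0.5, 0.3
f = np.exp       # test function f(x) = e^x (chosen only for this example)
d2f = np.exp     # its second derivative, f''(x) = e^x

# Second-order Taylor approximation of E[f(X)] for X ~ N(mu, sigma^2):
# E[f(X)] ~ f(mu) + 0.5 * f''(mu) * sigma**2  (the first-order term vanishes).
taylor = f(mu) + 0.5 * d2f(mu) * sigma**2

# Monte Carlo estimate and the exact lognormal mean for comparison.
rng = np.random.default_rng(0)
monte_carlo = np.mean(f(rng.normal(mu, sigma, 1_000_000)))
exact = np.exp(mu + sigma**2 / 2)

print(f"second-order Taylor: {taylor:.5f}")        # ~1.7229
print(f"Monte Carlo        : {monte_carlo:.5f}")
print(f"exact              : {exact:.5f}")         # ~1.7246
```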
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "Z=X+Y=Y+X" }, { "math_id": 3, "text": "Z=X-Y=-Y+X" }, { "math_id": 4, "text": "Z=XY=YX" }, { "math_id": 5, "text": "Z=X / Y=X \\cdot (1/Y)=(1/Y) \\cdot X" }, { "math_id": 6, "text": "Z=X^Y=e^{Y\\ln(X)}" }, { "math_id": 7, "text": "Z" }, { "math_id": 8, "text": "E" }, { "math_id": 9, "text": "E[Z]=E[X+Y]=E[X]+E[Y]=E[Y]+E[X]" }, { "math_id": 10, "text": "E[Z]=E[X-Y]=E[X]-E[Y]=-E[Y]+E[X]" }, { "math_id": 11, "text": "E[Z]=E[XY]=E[YX]" }, { "math_id": 12, "text": "E[XY]=E[X] \\cdot E[Y]=E[Y] \\cdot E[X]" }, { "math_id": 13, "text": "E[Z]=E[X/Y]=E[X \\cdot (1/Y)]=E[(1/Y) \\cdot X]" }, { "math_id": 14, "text": "E[X/Y]=E[X] \\cdot E[1/Y]=E[1/Y] \\cdot E[X]" }, { "math_id": 15, "text": "E[Z]=E[X^Y]=E[e^{Y\\ln(X)}]" }, { "math_id": 16, "text": "k" }, { "math_id": 17, "text": "P[X = k] = 1" }, { "math_id": 18, "text": "E[X]=k" }, { "math_id": 19, "text": "f" }, { "math_id": 20, "text": "E[Z]=E[f(X)] \\neq f(E[X])" }, { "math_id": 21, "text": "E[X^2] \\neq E[X]^2" }, { "math_id": 22, "text": "E[1/X] \\neq 1/E[X]" }, { "math_id": 23, "text": "E[e^X] \\neq e^{E[X]}" }, { "math_id": 24, "text": "E[\\ln(X)] \\neq \\ln(E[X])" }, { "math_id": 25, "text": "\\mathrm{Var}" }, { "math_id": 26, "text": "\\mathrm{Var}[Z]=\\mathrm{Var}[X+Y]=\\mathrm{Var}[X]+2\\mathrm{Cov}[X,Y]+\\mathrm{Var}[Y]" }, { "math_id": 27, "text": "\\mathrm{Var}[X+Y]=\\mathrm{Var}[X]+\\mathrm{Var}[Y]" }, { "math_id": 28, "text": "\\mathrm{Var}[Z]=\\mathrm{Var}[X-Y]=\\mathrm{Var}[X]-2\\mathrm{Cov}[X,Y]+\\mathrm{Var}[Y]" }, { "math_id": 29, "text": "\\mathrm{Var}[X-Y]=\\mathrm{Var}[X]+\\mathrm{Var}[Y]" }, { "math_id": 30, "text": "\\mathrm{Var}[X+Y]=\\mathrm{Var}[X-Y]=\\mathrm{Var}[Y-X]=\\mathrm{Var}[-X-Y]" }, { "math_id": 31, "text": "\\mathrm{Var}[Z]=\\mathrm{Var}[XY]=\\mathrm{Var}[YX]" }, { "math_id": 32, "text": "\\mathrm{Var}[XY]=E[X^2]\\cdot E[Y^2]-(E[X]\\cdot E[Y])^2 =\\mathrm{Var}[X] \\cdot \\mathrm{Var}[Y]+\\mathrm{Var}[X] \\cdot (E[Y])^2+\\mathrm{Var}[Y] \\cdot (E[X])^2" }, { "math_id": 33, "text": "\\mathrm{Var}[Z]=\\mathrm{Var}[X/Y]=\\mathrm{Var}[X \\cdot (1/Y)]=\\mathrm{Var}[(1/Y) \\cdot X]" }, { "math_id": 34, "text": "\\mathrm{Var}[X/Y]=E[X^2]\\cdot E[1/Y^2]-(E[X]\\cdot E[1/Y])^2 =\\mathrm{Var}[X] \\cdot \\mathrm{Var}[1/Y]+\\mathrm{Var}[X] \\cdot (E[1/Y])^2+\\mathrm{Var}[1/Y] \\cdot (E[X])^2" }, { "math_id": 35, "text": "\\mathrm{Var}[Z]=\\mathrm{Var}[X^Y]=\\mathrm{Var}[e^{Y\\ln(X)}]" }, { "math_id": 36, "text": "\\mathrm{Cov}[X,Y]=\\mathrm{Cov}[Y,X]" }, { "math_id": 37, "text": "\\mathrm{Var}[X] = \\mathrm{Cov}(X,X) = E[X^2] - E[X]^2" }, { "math_id": 38, "text": "\\mathrm{Var}[X]=0" }, { "math_id": 39, "text": "\\mathrm{Cov}[Y,k]=0" }, { "math_id": 40, "text": "\\mathrm{Var}[k+Y]=\\mathrm{Var}[Y]" }, { "math_id": 41, "text": "\\mathrm{Var}[kY]=k^2 \\mathrm{Var}[Y]" }, { "math_id": 42, "text": "\\mathrm{Var}[Z]=\\mathrm{Var}[f(X)] \\neq f(\\mathrm{Var}[X])" }, { "math_id": 43, "text": "\\mathrm{Cov}" }, { "math_id": 44, "text": "\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[X+Y,X]=\\mathrm{Var}[X]+\\mathrm{Cov}[X,Y]" }, { "math_id": 45, "text": "\\mathrm{Cov}[X+Y,X]=\\mathrm{Var}[X]" }, { "math_id": 46, "text": "\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[X-Y,X]=\\mathrm{Var}[X]-\\mathrm{Cov}[X,Y]" }, { "math_id": 47, "text": "\\mathrm{Cov}[X-Y,X]=\\mathrm{Var}[X]" }, { "math_id": 48, "text": "\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[XY,X]=E[X^2Y]-E[XY]E[X]" }, { "math_id": 49, "text": "\\mathrm{Cov}[XY,X]=\\mathrm{Var}[X] \\cdot E[Y]" }, { "math_id": 50, "text": 
"\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[X/Y,X]=E[X^2/Y]-E[X/Y]E[X]" }, { "math_id": 51, "text": "\\mathrm{Cov}[X/Y,X]=\\mathrm{Var}[X] \\cdot E[1/Y]" }, { "math_id": 52, "text": "\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[Y/X,X]=E[Y]-E[Y/X]E[X]" }, { "math_id": 53, "text": "\\mathrm{Cov}[Y/X,X]=E[Y] \\cdot (1-E[X] \\cdot E[1/X])" }, { "math_id": 54, "text": "\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[X^Y,X]=E[X^{Y+1}]-E[X^Y]E[X]" }, { "math_id": 55, "text": "\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[Y^X,X]=E[XY^X]-E[Y^X]E[X]" }, { "math_id": 56, "text": "\\mathrm{Cov}(X,Y) = E[XY] - E[X]E[Y]" }, { "math_id": 57, "text": "E[k]=k" }, { "math_id": 58, "text": "\\mathrm{Var}[k]=0" }, { "math_id": 59, "text": "\\mathrm{Cov}[X,k]=0" }, { "math_id": 60, "text": "\\mathrm{Cov}[Z,X]=\\mathrm{Cov}[f(X),X]=E[Xf(X)]-E[f(X)]E[X]" }, { "math_id": 61, "text": "f(X)" }, { "math_id": 62, "text": "f(X)= \\displaystyle \\sum_{n=0}^\\infty \\displaystyle \\frac{1}{n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}(X-\\mu)^n" }, { "math_id": 63, "text": "\\mu=E[X]" }, { "math_id": 64, "text": "E[f(X)]=E\\biggl(\\textstyle \\sum_{n=0}^\\infty \\displaystyle {1 \\over n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}(X-\\mu)^n\\biggr)=\\displaystyle \\sum_{n=0}^\\infty \\displaystyle {1 \\over n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}E[(X-\\mu)^n]=\\textstyle \\sum_{n=0}^\\infty \\displaystyle \\frac{1}{n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}\\mu_n(X)" }, { "math_id": 65, "text": "\\mu_n(X)=E[(X-\\mu)^n]" }, { "math_id": 66, "text": "\\mu_0(X)=1" }, { "math_id": 67, "text": "\\mu_1(X)=0" }, { "math_id": 68, "text": "E[f(X)]\\approx \\textstyle \\sum_{n=0}^{n_{max}} \\displaystyle {1 \\over n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}\\mu_n(X) " }, { "math_id": 69, "text": "n_{max} " }, { "math_id": 70, "text": "f(X)= \\textstyle \\sum_{n=0}^\\infty \\displaystyle {\\sigma^n \\over n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}\\mu_n(Z)" }, { "math_id": 71, "text": "X\\sim N(\\mu,\\sigma ^2)" }, { "math_id": 72, "text": "Z\\sim N(0,1)" }, { "math_id": 73, "text": "E[f(X)]\\approx \\textstyle \\sum_{n=0}^{n_{max}} \\displaystyle {\\sigma ^n \\over n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}\\mu_n(Z) " }, { "math_id": 74, "text": "\\mu_n(Z)= \\begin{cases} \\prod_{i=1}^{n/2}(2i-1), & \\text{if }n\\text{ is even} \\\\ 0, & \\text{if }n\\text{ is odd} \\end{cases}" }, { "math_id": 75, "text": "Var[f(X)]\\approx \\textstyle \\sum_{n=1}^{n_{max}} \\displaystyle \\biggl({\\sigma^n \\over n!}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}\\biggr)^2Var[Z^n]+\\textstyle \\sum_{n=1}^{n_{max}} \\displaystyle \\textstyle \\sum_{m \\neq n} \\displaystyle {\\sigma^{n+m} \\over {n!m!}}\\biggl({d^nf \\over dX^n}\\biggr)_{X=\\mu}\\biggl({d^mf \\over dX^m}\\biggr)_{X=\\mu}Cov[Z^n,Z^m]" }, { "math_id": 76, "text": "Var[Z^n]= \\begin{cases} \\prod_{i=1}^{n}(2i-1) -\\prod_{i=1}^{n/2}(2i-1)^2, & \\text{if }n\\text{ is even} \\\\ \\prod_{i=1}^{n}(2i-1), & \\text{if }n\\text{ is odd} \\end{cases}" }, { "math_id": 77, "text": "Cov[Z^n,Z^m]= \\begin{cases} \\prod_{i=1}^{(n+m)/2}(2i-1) -\\prod_{i=1}^{n/2}(2i-1)\\prod_{j=1}^{m/2}(2j-1), & \\text{if }n\\text{ and }m \\text{ are even} \\\\ \\prod_{i=1}^{(n+m)/2}(2i-1), & \\text{if }n\\text{ and }m\\text{ are odd} \\\\ 0, & \\text{otherwise} \\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=1217120
1217124
CERN Axion Solar Telescope
Experiment in astroparticle physics, sited at CERN in Switzerland The CERN Axion Solar Telescope (CAST) is an experiment in astroparticle physics to search for axions originating from the Sun. The experiment, sited at CERN in Switzerland, was commissioned in 1999 and came online in 2002 with the first data-taking run starting in May 2003. The successful detection of solar axions would constitute a major discovery in particle physics, and would also open up a brand new window on the astrophysics of the solar core. CAST is currently the most sensitive axion helioscope. Theory and operation. If axions exist, they may be produced in the Sun's core when X-rays scatter off electrons and protons in the presence of strong electric fields. The experimental setup is built around a 9.26 m long decommissioned test magnet for the LHC capable of producing a field of up to . This strong magnetic field is expected to convert solar axions back into X-rays for subsequent detection by X-ray detectors. The telescope observes the Sun for about 1.5 hours at sunrise and another 1.5 hours at sunset each day. The remaining 21 hours, with the instrument pointing away from the Sun, are spent measuring background axion levels. CAST began operation in 2003 searching for axions up to . In 2005, Helium-4 was added to the magnet, extending sensitivity to masses up to 0.39 eV, then Helium-3 was used during 2008–2011 for masses up to 1.15 eV. CAST then ran with vacuum again searching for axions below 0.02 eV. As of 2014, CAST has not turned up definitive evidence for solar axions. It has considerably narrowed down the range of parameters where these elusive particles may exist. CAST has set significant limits on axion coupling to electrons and photons. A 2017 paper using data from the 2013–2015 run reported a new best limit on axion-photon coupling of 0.66×10−10 GeV−1. Built upon the experience of CAST, a much larger, new-generation axion helioscope, the International Axion Observatory (IAXO), has been proposed and is now under preparation. Detectors. CAST searches for solar axions using a helioscope, which is a 9.26 m superconducting LHC prototype dipole magnet. The superconducting magnet is kept at 1.8 kelvin using superfluid helium. There are two magnet bores of 43 mm diameter and 9.26 m length, with X-ray detectors placed at both ends. These detectors are sensitive to photons from inverse Primakoff conversion of solar axions. The two X-ray telescopes of CAST measure both signal and background simultaneously with the same detector, which reduces the systematic uncertainties. From 2003 to 2013, the following three detectors, all based on the inverse Primakoff effect, were attached to the ends of the dipole magnet to detect the photons converted from solar axions. After 2013, several new detectors such as RADES, GridPix, and KWISP were installed, with modified goals and newly enhanced technologies. Conventional time projection chamber detectors (TPC). The TPC is a gas-filled drift-chamber type of detector, designed to detect the low-intensity X-ray signals at CAST. The interactions in this detector take place in a very large gaseous chamber and produce ionization electrons. These electrons travel towards the multiwire proportional chamber (MWPC), where the signal is then amplified through the avalanche process. MICROMEsh GAseous Structure detectors (MICROMEGAS). This detector operated during the period of 2002 to 2004. 
It is a gaseous detector and was primarily employed to detect X-rays in the energy range of 1–10 keV. The detector itself was made of low-radioactivity materials. The choice of materials was mainly based on reducing the background noise, and Micromegas achieved a significantly low background level even without any shielding. X-ray telescope with a charge-coupled device (CCD). This detector has a pn-CCD chip located at the focal plane of the X-ray telescope. The X-ray telescope is based on the popular Wolter-I mirror optics concept. This technique is widely used in almost all X-ray astronomy telescopes. Its mirror is made up of 27 gold-coated nickel shells. These parabolic and hyperbolic shells are confocally arranged to optimize the resolution. The largest shell is 163 mm in diameter, while the smallest is 76 mm. The overall mirror system has a focal length of 1.6 m. This detector achieved a remarkably good signal-to-noise ratio by focusing the X-rays produced inside the magnetic field region onto a small area of about a few formula_0. GridPix detector. In 2016, the GridPix detector was installed to detect the soft X-rays (energy range of 200 eV to 10 keV) generated by solar chameleons through the Primakoff effect. During the search period of 2014 to 2015, the detected signal-to-noise ratio was below the required levels. InGrid Based X-ray detector. The sole aim of this detector is to enhance the sensitivity of CAST at energy thresholds around the 1 keV range. This is an improved, more sensitive detector set up in 2014 behind the X-ray telescope for the search for solar chameleons, which have low threshold energies. The InGrid detector, with its granular Timepix pad readout and its low photon-detection energy threshold of 0.1 keV, hunts for solar chameleons in this range. Relic Axion Dark Matter Exploratory Setup (RADES). RADES started searching for axion-like dark matter in 2018, and the first results from this detector were published in early 2021. Although no significant axion signal was detected above the noise background during the 2018 to 2021 period, RADES became the first detector to search for axions above formula_1. The CAST helioscope (which looks at the Sun) was made into a haloscope (which looks at the galactic halo) in late 2017. The RADES detector attached to this haloscope has a 1 m long alternating-iris stainless-steel cavity able to search for dark matter axions around formula_2. Further improvements to the detector system, such as superconducting cavities and ferromagnetic tuning, are being investigated. KWISP detector. KWISP at CAST is designed to detect the coupling of solar chameleons with matter particles. It uses a very sensitive optomechanical force sensor, capable of detecting a displacement in a thin membrane caused by the mechanical effects of solar chameleon interactions. CAST-CAPP. This detector has a delicate tuning mechanism, made of two parallel sapphire plates and actuated by a piezoelectric motor. The maximum tuning corresponds to axion masses between 21 and 23 μeV. The CAST-CAPP detector is also sensitive to tidal or cosmological streams of dark matter axions and to the theorized axion mini-clusters. A newer and improved version is being developed at CAPP in South Korea. Results. The CAST experiment began with the goal of devising new methods and implementing novel technologies for the detection of solar axions. 
Owing to the interdisciplinary and interrelated nature of axion studies, dark matter, dark energy, and axion-like exotic particles, the newer collaborations at CAST have broadened their research across the wider field of astroparticle physics. Results from these different domains are described below. Constraints on axions. During the initial years, axion detection was the primary goal of CAST. Although the CAST experiment has not yet observed axions directly, it has constrained the search parameters. The mass and the coupling constant of an axion are the primary aspects of its detectability. Over almost 20 years of operation, CAST has added very significant details and limits to the properties of solar axions and axion-like particles. In the initial run period, the first three CAST detectors put an upper limit of formula_3 on formula_4 (the parameter for axion-photon coupling) with a 95% confidence limit (CL) for axion masses formula_5. For the axion mass range between formula_6 and formula_7, RADES constrained the axion-photon coupling constant formula_8 to within about 5% error. The most recent results, in 2017, set an upper limit on formula_4 of formula_9 (with 95% CL) for all axions with masses below 0.02 eV. CAST has thus improved the previous astrophysical limits and has probed numerous relevant axion models of sub-electron-volt mass. Search for dark matter. CAST was able to constrain the axion-photon coupling constant from very low masses up to the hot dark matter sector, and the current search range overlaps with the present cosmic hot dark matter bound of axion mass formula_10. The new detectors at CAST are also looking for proposed dark matter candidates such as solar chameleons and paraphotons, as well as relic axions from the Big Bang and inflation. In late 2017, the CAST helioscope, which originally searched for solar axions and ALPs, was converted into a haloscope to hunt for the dark matter wind in the Milky Way's galactic halo as it crosses the Earth. This idea of a streaming dark wind is thought to affect and cause the random, anisotropic orientation of solar flares, for which the CAST haloscope will serve as a testbed. Search for dark energy. In the dark energy domain, CAST is currently looking for signatures of the chameleon, which is hypothesized to be a particle produced when dark energy interacts with photons. This area is still in its beginning stages, wherein possible ways for dark energy particles to couple with normal matter are being theorized. Using the GridPix detector, the upper bound on the chameleon-photon coupling constant formula_11 was determined to be formula_12 for the chameleon-matter coupling constant formula_13 in the range of 1 to formula_14. The KWISP detector obtained an upper limit of formula_15 pN on the force acting on its membrane due to chameleons, which corresponds to a specific exclusion zone in the formula_11–formula_13 plane and complements the results obtained by GridPix. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "mm^2" }, { "math_id": 1, "text": "30 \\mu eV" }, { "math_id": 2, "text": "34 \\mu eV" }, { "math_id": 3, "text": "\\mathrm{8.8 \\times 10^{-11} GeV^{-1}} " }, { "math_id": 4, "text": "g_{a\\gamma}" }, { "math_id": 5, "text": "\\mathrm{m_{a}\\lesssim 0.02 eV} " }, { "math_id": 6, "text": " \\mathrm{34.6771 \\mu eV}" }, { "math_id": 7, "text": " \\mathrm{34.6738 \\mu eV}" }, { "math_id": 8, "text": "\\mathrm{g_{a\\gamma} \\gtrsim 4 \\times 10^{-13} GeV^{-1}}" }, { "math_id": 9, "text": "\\mathrm{< 0.66 \\times 10^{-10} GeV^{-1}}" }, { "math_id": 10, "text": "m_a \\lesssim 0.9 eV" }, { "math_id": 11, "text": "\\beta_{\\gamma}" }, { "math_id": 12, "text": "5.74 \\times 10^{10} " }, { "math_id": 13, "text": "\\beta_m" }, { "math_id": 14, "text": "10^{6}" }, { "math_id": 15, "text": "44\\pm18 " } ]
https://en.wikipedia.org/wiki?curid=1217124
1217160
Prompt criticality
Sustained nuclear fission achieved solely by prompt neutron emission In nuclear engineering, prompt criticality describes a nuclear fission event in which criticality (the threshold for an exponentially growing nuclear fission chain reaction) is achieved with prompt neutrons alone and does not rely on delayed neutrons. As a result, prompt supercriticality causes a much more rapid growth in the rate of energy release than other forms of criticality. Nuclear weapons are based on prompt criticality, while nuclear reactors rely on delayed neutrons or external neutrons to achieve criticality. Criticality. An assembly is critical if each fission event causes, on average, exactly one additional such event in a continual chain. Such a chain is a self-sustaining fission chain reaction. When a uranium-235 (U-235) atom undergoes nuclear fission, it typically releases between one and seven neutrons (with an average of 2.4). In this situation, an assembly is critical if every released neutron has a 1/2.4 = 0.42 = 42 % probability of causing another fission event as opposed to either being absorbed by a non-fission capture event or escaping from the fissile core. The average number of neutrons that cause new fission events is called the effective neutron multiplication factor, usually denoted by the symbols "k-effective", "k-eff" or "k". When "k-effective" is equal to 1, the assembly is called critical, if "k-effective" is less than 1 the assembly is said to be subcritical, and if "k-effective" is greater than 1 the assembly is called supercritical. Critical versus prompt-critical. In a supercritical assembly, the number of fissions per unit time, "N", along with the power production, increases exponentially with time. How fast it grows depends on the average time it takes, "T", for the neutrons released in a fission event to cause another fission. The growth rate of the reaction is given by: formula_0 Most of the neutrons released by a fission event are the ones released in the fission itself. These are called prompt neutrons, and strike other nuclei and cause additional fissions within nanoseconds (an average time interval used by scientists in the Manhattan Project was one shake, or 10 ns). A small additional source of neutrons is the fission products. Some of the nuclei resulting from the fission are radioactive isotopes with short half-lives, and nuclear reactions among them release additional neutrons after a long delay of up to several minutes after the initial fission event. These neutrons, which on average account for less than one percent of the total neutrons released by fission, are called delayed neutrons. The relatively slow timescale on which delayed neutrons appear is an important aspect for the design of nuclear reactors, as it allows the reactor power level to be controlled via the gradual, mechanical movement of control rods. Typically, control rods contain neutron poisons (substances, for example boron or hafnium, that easily capture neutrons without producing any additional ones) as a means of altering "k-effective". With the exception of experimental pulsed reactors, nuclear reactors are designed to operate in a delayed-critical mode and are provided with safety systems to prevent them from ever achieving prompt criticality. In a delayed-critical assembly, the delayed neutrons are needed to make "k-effective" greater than one. 
Thus the time between successive generations of the reaction, "T", is dominated by the time it takes for the delayed neutrons to be released, of the order of seconds or minutes. Therefore, the reaction will increase slowly, with a long time constant. This is slow enough to allow the reaction to be controlled with electromechanical control systems such as control rods, and accordingly all nuclear reactors are designed to operate in the delayed-criticality regime. In contrast, a critical assembly is said to be prompt-critical if it is critical ("k = 1") without any contribution from delayed neutrons and prompt-supercritical if it is supercritical (the fission rate growing exponentially, "k &gt; 1") without any contribution from delayed neutrons. In this case the time between successive generations of the reaction, "T", is limited only by the fission rate from the prompt neutrons, and the increase in the reaction will be extremely rapid, causing a rapid release of energy within a few milliseconds. Prompt-critical assemblies are created by design in nuclear weapons and some specially designed research experiments. The difference between a prompt neutron and a delayed neutron has to do with the source from which the neutron has been released into the reactor. The neutrons, once released, have no difference except the energy or speed that have been imparted to them. A nuclear weapon relies heavily on prompt-supercriticality (to produce a high peak power in a fraction of a second), whereas nuclear power reactors use delayed-criticality to produce controllable power levels for months or years. Nuclear reactors. In order to start up a controllable fission reaction, the assembly must be delayed-critical. In other words, "k" must be greater than 1 (supercritical) without crossing the prompt-critical threshold. In nuclear reactors this is possible due to delayed neutrons. Because it takes some time before these neutrons are emitted following a fission event, it is possible to control the nuclear reaction using control rods. A steady-state (constant power) reactor is operated so that it is critical due to the delayed neutrons, but would not be so without their contribution. During a gradual and deliberate increase in reactor power level, the reactor is delayed-supercritical. The exponential increase of reactor activity is slow enough to make it possible to control the criticality factor, "k", by inserting or withdrawing rods of neutron absorbing material. Using careful control rod movements, it is thus possible to achieve a supercritical reactor core without reaching an unsafe prompt-critical state. Once a reactor plant is operating at its target or design power level, it can be operated to maintain its critical condition for long periods of time. Prompt critical accidents. Nuclear reactors can be susceptible to prompt-criticality accidents if a large increase in reactivity (or "k-effective") occurs, e.g., following failure of their control and safety systems. The rapid uncontrollable increase in reactor power in prompt-critical conditions is likely to irreparably damage the reactor and in extreme cases, may breach the containment of the reactor. Nuclear reactors' safety systems are designed to prevent prompt criticality and, for defense in depth, reactor structures also provide multiple layers of containment as a precaution against any accidental releases of radioactive fission products. 
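To make the timescale argument above concrete, the short sketch below compares the growth of the fission rate for prompt versus delayed effective generation times, assuming the simple exponential model N(t) = N(0)·exp((k − 1)·t/T); this explicit form, along with the round numbers used, is an assumption made for illustration and is not taken from any specific incident data.

```python
import math

def growth_factor(k, T, t):
    """Fission-rate growth N(t)/N(0) = exp((k - 1) * t / T) -- assumed exponential model."""
    return math.exp((k - 1) * t / T)

k = 1.01           # 1% excess reactivity (illustrative round number)
T_prompt = 1e-8    # ~10 ns between prompt-neutron generations
T_delayed = 0.1    # effective generation time once delayed neutrons set the pace

# Over one second, a delayed-critical assembly grows only modestly...
print("delayed, over 1 s:", growth_factor(k, T_delayed, 1.0))        # ~1.11

# ...whereas a prompt-supercritical assembly e-folds every T/(k-1) seconds.
print("prompt e-folding time:", T_prompt / (k - 1), "s")             # 1e-6 s
print("prompt doublings in 1 ms:", (k - 1) * 1e-3 / (T_prompt * math.log(2)))  # ~1400
```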
With the exception of research and experimental reactors, only a small number of reactor accidents are thought to have achieved prompt criticality, for example Chernobyl #4, the U.S. Army's SL-1, and Soviet submarine K-431. In all these examples the uncontrolled surge in power was sufficient to cause an explosion that destroyed each reactor and released radioactive fission products into the atmosphere. At Chernobyl in 1986, a poorly understood positive scram effect resulted in an overheated reactor core. This led to the rupturing of the fuel elements and water pipes, vaporization of water, a steam explosion, and a graphite fire. Estimated power levels prior to the incident suggest that it operated in excess of 30 GW, ten times its 3 GW maximum thermal output. The reactor chamber's 2000-ton lid was lifted by the steam explosion. Since the reactor was not designed with a containment building capable of containing this catastrophic explosion, the accident released large amounts of radioactive material into the environment. In the other two incidents, the reactor plants failed due to errors during a maintenance shutdown that was caused by the rapid and uncontrolled removal of at least one control rod. The SL-1 was a prototype reactor intended for use by the US Army in remote polar locations. At the SL-1 plant in 1961, the reactor was brought from shutdown to prompt critical state by manually extracting the central control rod too far. As the water in the core quickly converted to steam and expanded (in just a few milliseconds), the reactor vessel jumped , leaving impressions in the ceiling above. All three men performing the maintenance procedure died from injuries. 1,100 curies of fission products were released as parts of the core were expelled. It took 2 years to investigate the accident and clean up the site. The excess prompt reactivity of the SL-1 core was calculated in a 1962 report: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The delayed neutron fraction of the SL-1 is 0.70%... Conclusive evidence revealed that the SL-1 excursion was caused by the partial withdrawal of the central control rod. The reactivity associated with the 20-inch withdrawal of this one rod has been estimated to be 2.4% δk/k, which was sufficient to induce prompt criticality and place the reactor on a 4 millisecond period. In the "K-431" reactor accident, 10 were killed during a refueling operation. The "K-431" explosion destroyed the adjacent machinery rooms and ruptured the submarine's hull. In these two catastrophes, the reactor plants went from complete shutdown to extremely high power levels in a fraction of a second, damaging the reactor plants beyond repair. List of accidental prompt critical excursions. A number of research reactors and tests have purposely examined the operation of a prompt critical reactor plant. CRAC, KEWB, SPERT-I, Godiva device, and BORAX experiments contributed to this research. Many accidents have also occurred, however, primarily during research and processing of nuclear fuel. SL-1 is the notable exception. The following list of prompt critical power excursions is adapted from a report submitted in 2000 by a team of American and Russian nuclear scientists who studied criticality accidents, published by the Los Alamos Scientific Laboratory, the location of many of the excursions. A typical power excursion is about 1 x 1017 fissions. Nuclear weapons. In the design of nuclear weapons, in contrast, achieving prompt criticality is essential. 
Indeed, one of the design problems to overcome in constructing a bomb is to compress the fissile materials enough to achieve prompt criticality before the chain reaction has a chance to produce enough energy to cause the core to expand too much. A good bomb design must therefore win the race to a dense, prompt critical core before a less-powerful chain reaction disassembles the core without allowing a significant amount of fuel to fission (known as a fizzle). This generally means that nuclear bombs need special attention paid to the way the core is assembled, such as the implosion method invented by Richard C. Tolman, Robert Serber, and other scientists at the University of California, Berkeley in 1942. References and links. &lt;templatestyles src="Reflist/styles.css" /&gt; * "Nuclear Energy: Principles", Physics Department, Faculty of Science, Mansoura University, Mansoura, Egypt; apparently excerpted from notes from the University of Washington Department of Mechanical Engineering; themselves apparently summarized from Bodansky, D. (1996), "Nuclear Energy: Principles, Practices, and Prospects", AIP
[ { "math_id": 0, "text": "N(t) = N_0 k^\\frac{t}{T} \\," } ]
https://en.wikipedia.org/wiki?curid=1217160
12173319
Killed process
Stochastic process that is forced to assume an undefined or "killed" state at some time In probability theory — specifically, in stochastic analysis — a killed process is a stochastic process that is forced to assume an undefined or "killed" state at some (possibly random) time. Definition. Let "X" : "T" × Ω → "S" be a stochastic process defined for "times" "t" in some ordered index set "T", on a probability space (Ω, Σ, P), and taking values in a measurable space "S". Let "ζ" : Ω → "T" be a random time, referred to as the killing time. Then the killed process "Y" associated to "X" is defined by formula_0 and "Y""t" is left undefined for "t" ≥ "ζ". Alternatively, one may set "Y""t" = "c" for "t" ≥ "ζ", where "c" is a "coffin state" not in "S". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
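A small simulation can make the definition concrete. The sketch below is a hypothetical example, with the random walk, the interval, and the coffin-state convention chosen purely for illustration; it generates a path of a process killed the first time it leaves an interval, using Python's None to stand in for the coffin state "c".
import random

COFFIN = None  # stands in for the coffin state c, which lies outside the state space S

def killed_random_walk(x0, n_steps, lower, upper, seed=0):
    """Simulate X_t as a simple random walk started at x0; the killing time zeta is
    the first time the walk leaves (lower, upper), and the killed process Y_t is
    sent to the coffin state for all t >= zeta."""
    rng = random.Random(seed)
    x = x0
    killed = False
    path = []
    for _ in range(n_steps):
        x += rng.choice([-1.0, 1.0])
        if not (lower < x < upper):
            killed = True                     # zeta has occurred at or before this step
        path.append(COFFIN if killed else x)  # Y_t = X_t only for t < zeta
    return path

print(killed_random_walk(x0=0.0, n_steps=20, lower=-3.0, upper=3.0))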
[ { "math_id": 0, "text": "Y_{t} = X_{t} \\mbox{ for } t < \\zeta," } ]
https://en.wikipedia.org/wiki?curid=12173319
1217358
Misuse of statistics
Use of statistical arguments to assert falsehoods Statistics, when used in a misleading fashion, can trick the casual observer into believing something other than what the data shows. That is, a misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy. The consequences of such misinterpretations can be quite severe. For example, in medical science, correcting a falsehood may take decades and cost lives. Misuses can be easy to fall into. Professional scientists, mathematicians and even professional statisticians, can be fooled by even some simple methods, even if they are careful to check everything. Scientists have been known to fool themselves with statistics due to lack of knowledge of probability theory and lack of standardization of their tests. Definition, limitations and context. One usable definition is: "Misuse of Statistics: Using numbers in such a manner that – either by intent or through ignorance or carelessness – the conclusions are unjustified or incorrect." The "numbers" include misleading graphics discussed in other sources. The term is not commonly encountered in statistics texts and there is no single authoritative definition. It is a generalization of lying with statistics which was richly described by examples from statisticians 60 years ago. The definition confronts some problems (some are addressed by the source): "How to Lie with Statistics" acknowledges that statistics can "legitimately" take many forms. Whether the statistics show that a product is "light and economical" or "flimsy and cheap" can be debated whatever the numbers. Some object to the substitution of statistical correctness for moral leadership (for example) as an objective. Assigning blame for misuses is often difficult because scientists, pollsters, statisticians and reporters are often employees or consultants. An insidious misuse of statistics is completed by the listener, observer, audience, or juror. The supplier provides the "statistics" as numbers or graphics (or before/after photographs), allowing the consumer to draw conclusions that may be unjustified or incorrect. The poor state of public statistical literacy and the non-statistical nature of human intuition make it possible to mislead without explicitly producing faulty conclusion. The definition is weak on the responsibility of the consumer of statistics. A historian listed over 100 fallacies in a dozen categories including those of generalization and those of causation. A few of the fallacies are explicitly or potentially statistical including sampling, statistical nonsense, statistical probability, false extrapolation, false interpolation and insidious generalization. All of the technical/mathematical problems of applied probability would fit in the single listed fallacy of statistical probability. Many of the fallacies could be coupled to statistical analysis, allowing the possibility of a false conclusion flowing from a statistically sound analysis. An example use of statistics is in the analysis of medical research. The process includes experimental planning, the conduct of the experiment, data analysis, drawing the logical conclusions and presentation/reporting. The report is summarized by the popular press and by advertisers. Misuses of statistics can result from problems at any step in the process. 
The statistical standards ideally imposed on the scientific report are much different than those imposed on the popular press and advertisers; however, cases exist of advertising disguised as science. The definition of the misuse of statistics is weak on the required completeness of statistical reporting. The opinion is expressed that newspapers must provide at least the source for the statistics reported. Simple causes. Many misuses of statistics occur because Types of misuse. Discarding unfavorable observations. To promote a neutral (useless) product, a company must find or conduct, for example, 40 studies with a confidence level of 95%. If the product is useless, this would produce one study showing the product was beneficial, one study showing it was harmful, and thirty-eight inconclusive studies (38 is 95% of 40). This tactic becomes more effective when there are more studies available. Organizations that do not publish every study they carry out, such as tobacco companies denying a link between smoking and cancer, anti-smoking advocacy groups and media outlets trying to prove a link between smoking and various ailments, or miracle pill vendors, are likely to use this tactic. Ronald Fisher considered this issue in his famous lady tasting tea example experiment (from his 1935 book, "The Design of Experiments"). Regarding repeated experiments, he said, "It would be illegitimate and would rob our calculation of its basis if unsuccessful results were not all brought into the account." Another term related to this concept is cherry picking. Ignoring important features. Multivariable datasets have two or more features/dimensions. If too few of these features are chosen for analysis (for example, if just one feature is chosen and simple linear regression is performed instead of multiple linear regression), the results can be misleading. This leaves the analyst vulnerable to any of various , or in some (not all) cases false causality as below. Loaded questions. The answers to surveys can often be manipulated by wording the question in such a way as to induce a prevalence towards a certain answer from the respondent. For example, in polling support for a war, the questions: will likely result in data skewed in different directions, although they are both polling about the support for the war. A better way of wording the question could be "Do you support the current US military action abroad?" A still more nearly neutral way to put that question is "What is your view about the current US military action abroad?" The point should be that the person being asked has no way of guessing from the wording what the questioner might want to hear. Another way to do this is to precede the question by information that supports the "desired" answer. For example, more people will likely answer "yes" to the question "Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?" than to the question "Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?" The proper formulation of questions can be very subtle. The responses to two questions can vary dramatically depending on the order in which they are asked. "A survey that asked about 'ownership of stock' found that most Texas ranchers owned stock, though probably not the kind traded on the New York Stock Exchange." Overgeneralization. 
Overgeneralization is a fallacy occurring when a statistic about a particular population is asserted to hold among members of a group for which the original population is not a representative sample. For example, suppose 100% of apples are observed to be red in summer. The assertion "All apples are red" would be an instance of overgeneralization because the original statistic was true only of a specific subset of apples (those in summer), which is not expected to be representative of the population of apples as a whole. A real-world example of the overgeneralization fallacy can be observed as an artifact of modern polling techniques, which prohibit calling cell phones for over-the-phone political polls. As young people are more likely than other demographic groups to lack a conventional "landline" phone, a telephone poll that exclusively surveys responders of calls landline phones, may cause the poll results to undersample the views of young people, if no other measures are taken to account for this skewing of the sampling. Thus, a poll examining the voting preferences of young people using this technique may not be a perfectly accurate representation of young peoples' true voting preferences as a whole without overgeneralizing, because the sample used excludes young people that carry only cell phones, who may or may not have voting preferences that differ from the rest of the population. Overgeneralization often occurs when information is passed through nontechnical sources, in particular mass media. Biased samples. Scientists have learned at great cost that gathering good experimental data for statistical analysis is difficult. Example: The placebo effect (mind over body) is very powerful. 100% of subjects developed a rash when exposed to an inert substance that was falsely called poison ivy while few developed a rash to a "harmless" object that really was poison ivy. Researchers combat this effect by double-blind randomized comparative experiments. Statisticians typically worry more about the validity of the data than the analysis. This is reflected in a field of study within statistics known as the design of experiments. Pollsters have learned at great cost that gathering good survey data for statistical analysis is difficult. The selective effect of cellular telephones on data collection (discussed in the Overgeneralization section) is one potential example; If young people with traditional telephones are not representative, the sample can be biased. Sample surveys have many pitfalls and require great care in execution. One effort required almost 3000 telephone calls to get 1000 answers. The simple random sample of the population "isn't simple and may not be random." Misreporting or misunderstanding of estimated error. If a research team wants to know how 300 million people feel about a certain topic, it would be impractical to ask all of them. However, if the team picks a random sample of about 1000 people, they can be fairly certain that the results given by this group are representative of what the larger group would have said if they had all been asked. This confidence can actually be quantified by the central limit theorem and other mathematical results. Confidence is expressed as a probability of the true result (for the larger group) being within a certain range of the estimate (the figure for the smaller group). This is the "plus or minus" figure often quoted for statistical surveys. 
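As a rough sketch of where this "plus or minus" figure comes from, the margin of error for a proportion from a simple random sample can be computed with the normal approximation; the Python snippet below assumes the worst case of a 50% proportion and idealized random sampling, so it is an illustration rather than a full survey-design calculation.
from statistics import NormalDist

def margin_of_error(n, confidence=0.95, p=0.5):
    """Half-width of the confidence interval for a sample proportion p,
    simple random sample of size n, normal approximation."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * (p * (1 - p) / n) ** 0.5

print(round(margin_of_error(1000, 0.95), 3))  # ~0.031: about +/- 3% for 1000 respondents
print(round(margin_of_error(1000, 0.99), 3))  # ~0.041: wider at higher confidence
print(round(margin_of_error(100, 0.95), 3))   # ~0.098: a 100-person subgroup is much less precise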
The probability part of the confidence level is usually not mentioned; if so, it is assumed to be a standard number like 95%. The two numbers are related. If a survey has an estimated error of ±5% at 95% confidence, it also has an estimated error of ±6.6% at 99% confidence. ±formula_0% at 95% confidence is always ±formula_1% at 99% confidence for a normally distributed population. The smaller the estimated error, the larger the required sample, at a given confidence level; for example, at 95.4% confidence: People may assume, because the confidence figure is omitted, that there is a 100% certainty that the true result is within the estimated error. This is not mathematically correct. Many people may not realize that the randomness of the sample is very important. In practice, many opinion polls are conducted by phone, which distorts the sample in several ways, including exclusion of people who do not have phones, favoring the inclusion of people who have more than one phone, favoring the inclusion of people who are willing to participate in a phone survey over those who refuse, etc. Non-random sampling makes the estimated error unreliable. On the other hand, people may consider that statistics are inherently unreliable because not everybody is called, or because they themselves are never polled. People may think that it is impossible to get data on the opinion of dozens of millions of people by just polling a few thousands. This is also inaccurate. A poll with perfect unbiased sampling and truthful answers has a mathematically determined margin of error, which only depends on the number of people polled. However, often only one margin of error is reported for a survey. When results are reported for population subgroups, a larger margin of error will apply, but this may not be made clear. For example, a survey of 1000 people may contain 100 people from a certain ethnic or economic group. The results focusing on that group will be much less reliable than results for the full population. If the margin of error for the full sample was 4%, say, then the margin of error for such a subgroup could be around 13%. There are also many other measurement problems in population surveys. The problems mentioned above apply to all statistical experiments, not just population surveys. False causality. When a statistical test shows a correlation between A and B, there are usually six possibilities: The sixth possibility can be quantified by statistical tests that can calculate the probability that the correlation observed would be as large as it is just by chance if, in fact, there is no relationship between the variables. However, even if that possibility has a small probability, there are still the five others. If the number of people buying ice cream at the beach is statistically related to the number of people who drown at the beach, then nobody would claim ice cream causes drowning because it's obvious that it isn't so. (In this case, both drowning and ice cream buying are clearly related by a third factor: the number of people at the beach). This fallacy can be used, for example, to prove that exposure to a chemical causes cancer. Replace "number of people buying ice cream" with "number of people exposed to chemical X", and "number of people who drown" with "number of people who get cancer", and many people will believe you. In such a situation, there may be a statistical correlation even if there is no real effect. 
For example, if there is a perception that a chemical site is "dangerous" (even if it really isn't) property values in the area will decrease, which will entice more low-income families to move to that area. If low-income families are more likely to get cancer than high-income families (due to a poorer diet, for example, or less access to medical care) then rates of cancer will go up, even though the chemical itself is not dangerous. It is believed that this is exactly what happened with some of the early studies showing a link between EMF (electromagnetic fields) from power lines and cancer. In well-designed studies, the effect of false causality can be eliminated by assigning some people into a "treatment group" and some people into a "control group" at random, and giving the treatment group the treatment and not giving the control group the treatment. In the above example, a researcher might expose one group of people to chemical X and leave a second group unexposed. If the first group had higher cancer rates, the researcher knows that there is no third factor that affected whether a person was exposed because he controlled who was exposed or not, and he assigned people to the exposed and non-exposed groups at random. However, in many applications, actually doing an experiment in this way is either prohibitively expensive, infeasible, unethical, illegal, or downright impossible. For example, it is highly unlikely that an IRB would accept an experiment that involved intentionally exposing people to a dangerous substance in order to test its toxicity. The obvious ethical implications of such types of experiments limit researchers' ability to empirically test causation. Proof of the null hypothesis. In a statistical test, the null hypothesis (formula_2) is considered valid until enough data proves it wrong. Then formula_2 is rejected and the alternative hypothesis (formula_3) is considered to be proven as correct. By chance this can happen, although formula_2 is true, with a probability denoted formula_4 (the significance level). This can be compared to the judicial process, where the accused is considered innocent (formula_2) until proven guilty (formula_3) beyond reasonable doubt (formula_4). But if data does not give us enough proof to reject that formula_2, this does not automatically prove that formula_2 is correct. If, for example, a tobacco producer wishes to demonstrate that its products are safe, it can easily conduct a test with a small sample of smokers versus a small sample of non-smokers. It is unlikely that any of them will develop lung cancer (and even if they do, the difference between the groups has to be very big in order to reject formula_2). Therefore, it is likely—even when smoking is dangerous—that our test will not reject formula_2. If formula_2 is accepted, it does not automatically follow that smoking is proven harmless. The test has insufficient power to reject formula_2, so the test is useless and the value of the "proof" of formula_2 is also null. This can—using the judicial analogue above—be compared with the truly guilty defendant who is released just because the proof is not enough for a guilty verdict. This does not prove the defendant's innocence, but only that there is not proof enough for a guilty verdict. "...the null hypothesis is never proved or established, but it is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis." 
(Fisher in "The Design of Experiments") Many reasons for confusion exist including the use of double negative logic and terminology resulting from the merger of Fisher's "significance testing" (where the null hypothesis is never accepted) with "hypothesis testing" (where some hypothesis is always accepted). Confusing statistical significance with practical significance. Statistical significance is a measure of probability; practical significance is a measure of effect. A baldness cure is statistically significant if a sparse peach-fuzz usually covers the previously naked scalp. The cure is practically significant when a hat is no longer required in cold weather and the barber asks how much to take off the top. The bald want a cure that is both statistically and practically significant; It will probably work and if it does, it will have a big hairy effect. Scientific publication often requires only statistical significance. This has led to complaints (for the last 50 years) that statistical significance testing is a misuse of statistics. Data dredging. Data dredging is an abuse of data mining. In data dredging, large compilations of data are examined in order to find a correlation, without any pre-defined choice of a hypothesis to be tested. Since the required confidence interval to establish a relationship between two parameters is usually chosen to be 95% (meaning that there is a 95% chance that the relationship observed is not due to random chance), there is thus a 5% chance of finding a correlation between any two sets of completely random variables. Given that data dredging efforts typically examine large datasets with many variables, and hence even larger numbers of pairs of variables, spurious but apparently statistically significant results are almost certain to be found by any such study. Note that data dredging is a valid way of "finding" a possible hypothesis but that hypothesis "must" then be tested with data not used in the original dredging. The misuse comes in when that hypothesis is stated as fact without further validation. "You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. The remedy is clear. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last." Data manipulation. Informally called "fudging the data," this practice includes selective reporting (see also publication bias) and even simply making up false data. Examples of selective reporting abound. The easiest and most common examples involve choosing a group of results that follow a pattern consistent with the preferred hypothesis while ignoring other results or "data runs" that contradict the hypothesis. Scientists, in general, question the validity of study results that cannot be reproduced by other investigators. However, some scientists refuse to publish their data and methods. Data manipulation is a serious issue/consideration in the most honest of statistical analyses. Outliers, missing data and non-normality can all adversely affect the validity of statistical analysis. It is appropriate to study the data and repair real problems before analysis begins. "[I]n any scatter diagram there will be some points more or less detached from the main part of the cloud: these points should be rejected only for cause." Other fallacies. Pseudoreplication is a technical error associated with analysis of variance. 
Complexity hides the fact that statistical analysis is being attempted on a single sample (N=1). For this degenerate case the variance cannot be calculated (division by zero). An (N=1) will always give the researcher the highest statistical correlation between intent bias and actual findings. The gambler's fallacy assumes that an event for which a future likelihood can be measured had the same likelihood of happening once it has already occurred. Thus, if someone had already tossed 9 coins and each has come up heads, people tend to assume that the likelihood of a tenth toss also being heads is 1023 to 1 against (which it was before the first coin was tossed) when in fact the chance of the tenth head is 50% (assuming the coin is unbiased). The prosecutor's fallacy assumes that the probability of an apparently criminal event being random chance is equal to the chance that the suspect is innocent. A prominent example in the UK is the wrongful conviction of Sally Clark for killing her two sons who appeared to have died of Sudden Infant Death Syndrome (SIDS). In his expert testimony, now discredited Professor Sir Roy Meadow claimed that due to the rarity of SIDS, the probability of Clark being innocent was 1 in 73 million. This was later questioned by the Royal Statistical Society; assuming Meadows figure was accurate, one has to weigh up all the possible explanations against each other to make a conclusion on which most likely caused the unexplained death of the two children. Available data suggest that the odds would be in favour of double SIDS compared to double homicide by a factor of nine. The 1 in 73 million figure was also misleading as it was reached by finding the probability of a baby from an affluent, non-smoking family dying from SIDS and squaring it: this erroneously treats each death as statistically independent, assuming that there is no factor, such as genetics, that would make it more likely for two siblings to die from SIDS. This is also an example of the ecological fallacy as it assumes the probability of SIDS in Clark's family was the same as the average of all affluent, non-smoking families; social class is a highly complex and multifaceted concept, with numerous other variables such as education, line of work, and many more. Assuming that an individual will have the same attributes as the rest of a given group fails to account for the effects of other variables which in turn can be misleading. The conviction of Sally Clark was eventually overturned and Meadow was struck from the medical register. The ludic fallacy. Probabilities are based on simple models that ignore real (if remote) possibilities. Poker players do not consider that an opponent may draw a gun rather than a card. The insured (and governments) assume that insurers will remain solvent, but see AIG and systemic risk. Other types of misuse. Other misuses include comparing apples and oranges, using the wrong average, regression toward the mean, and the umbrella phrase garbage in, garbage out. Some statistics are simply irrelevant to an issue. Certain advertising phrasing such as "[m]ore than 99 in 100," may be misinterpreted as 100%. Anscombe's quartet is a made-up dataset that exemplifies the shortcomings of simple descriptive statistics (and the value of data plotting before numerical analysis). References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "1.32x" }, { "math_id": 2, "text": "H_0" }, { "math_id": 3, "text": "H_A" }, { "math_id": 4, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=1217358
12177566
Eric Ghysels
Belgian economist (born 1956) Eric Ghysels (born 1956 in Brussels) is a Belgian economist with interest in finance and time series econometrics, and in particular the fields of financial econometrics and financial technology. He is the Edward M. Bernstein Distinguished Professor of Economics at the University of North Carolina and a Professor of Finance at the Kenan-Flagler Business School. He is also the Faculty Research Director of the Rethinc.Labs at the Frank Hawkins Kenan Institute of Private Enterprise. Early life and education. Ghysels was born in Brussels, Belgium, as the son of Pierre Ghysels (a civil servant) and Anna Janssens (a homemaker). He completed his undergraduate studies in economics (Supra Cum Laude) at the Vrije Universiteit Brussel in 1979. He obtained a Fulbright Fellowship from the Belgian American Educational Foundation in 1980 and started graduate studies at Northwestern University that year, finishing his PhD at the Kellogg Graduate School of Management of Northwestern University in 1984. In 2019 he was awarded an honorary doctorate (Doctor Honoris Causa) by HEC University of Liège. Career. After graduation from the Kellogg School of Management at Northwestern University he took a faculty position at the Université de Montréal in the Department of Economics. In 1996 he became a Professor of Economics at Penn State University and joined the University of North Carolina at Chapel Hill in 2000. He is currently the Edward M. Bernstein Distinguished Professor of Economics at UNC Chapel Hill and a Professor of Finance and the Kenan-Flagler Business School. Since 2018 he is the Faculty Research Director, Rethinc.Labs, at the Kenan Institute for Private Enterprise at UNC Chapel Hill. Since 2020 he is also affiliated with the Department of Electrical and Computer Engineering at the North Carolina State University. Ghysels is a fellow of the American Statistical Association and co-founded with Robert Engle the Society for Financial Econometrics (SoFiE). He was editor of the Journal of Business and Economic Statistics (with Alastair R. Hall, 2001–2004) editor of the Journal of Financial Econometrics (2012–2015). He is currently co-editor of the Journal of Applied Econometrics. In 2008–2009 Ghysels was resident scholar at the Federal Reserve Bank of New York, in 2011 Duisenberg Fellow at the European Central Bank, both at the height of the Great Recession, and has since been a regular visitor of several other central banks around the world. He has also been visiting professor at Bocconi University (Tommaso Padoa-Schioppa Visiting Professor, 2017), the Stevanovich Center at the University of Chicago (2015), Cambridge University (INET Visiting Professor, 2014), New York University Stern School of Business (2007), among others, and holds a courtesy appointment at Louvain Finance, Université catholique de Louvain. Books. In 2001, he published a monograph on "The Econometric Analysis of Seasonal Time Series" together with Denise R. Osborn. In 2018, he published a textbook entitled "Applied Economic Forecasting using Time Series Methods" together with Massimiliano Marcellino. Honors and awards. His honors and awards include: Research. Ghysels' most recent research focuses on Mixed data sampling (MIDAS) regression models and filtering methods with applications in finance and other fields. 
He has also worked on diverse topics such as seasonality in economic times series, machine learning and AI applications in finance, quantum computing applications in finance, among many other topics. Mixed data sampling or MIDAS regressions are econometric regression models can be viewed in some cases as substitutes for the Kalman filter when applied in the context of mixed frequency data. There is now a substantial literature on MIDAS regressions and their applications, including Ghysels, Santa-Clara and Valkanov (2006), Ghysels, Sinko and Valkanov, Andreou, Ghysels and Kourtellos (2010) and Andreou, Ghysels and Kourtellos (2013). A MIDAS regression is a direct forecasting tool which can relate future low-frequency data with current and lagged high-frequency indicators, and yield different forecasting models for each forecast horizon. It can flexibly deal with data sampled at different frequencies and provide a direct forecast of the low-frequency variable. It incorporates each individual high-frequency data in the regression, which solves the problems of losing potentially useful information and including mis-specification. A simple regression example has the independent variable appearing at a higher frequency than the dependent variable: formula_0 where "y" is the dependent variable, "x" is the regressor, "m" denotes the frequency – for instance if "y" is yearly formula_1 is quarterly – formula_2 is the disturbance and formula_3 is a lag distribution, for instance the Beta function or the Almon Lag. The regression models can be viewed in some cases as substitutes for the Kalman filter when applied in the context of mixed frequency data. Bai, Ghysels and Wright (2013) examine the relationship between MIDAS regressions and Kalman filter state space models applied to mixed frequency data. In general, the latter involves a system of equations, whereas, in contrast, MIDAS regressions involve a (reduced form) single equation. As a consequence, MIDAS regressions might be less efficient, but also less prone to specification errors. In cases where the MIDAS regression is only an approximation, the approximation errors tend to be small. The MIDAS can also be used for machine learning time series and panel data nowcasting. The machine learning MIDAS regressions involve Legendre polynomials. High-dimensional mixed frequency time series regressions involve certain data structures that once taken into account should improve the performance of unrestricted estimators in small samples. These structures are represented by groups covering lagged dependent variables and groups of lags for a single (high-frequency) covariate. To that end, the machine learning MIDAS approach exploits the sparse-group LASSO (sg-LASSO) regularization that accommodates conveniently such structures. The attractive feature of the sg-LASSO estimator is that it allows us to combine effectively the approximately sparse and dense signals. Several software packages feature MIDAS regressions and related econometric methods. These include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
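As a toy illustration of the single-equation MIDAS regression above (not taken from any of the software packages mentioned, and with the data-generating process, lag length, and exponential Almon weighting all assumed for the example), the following Python sketch builds the weighted high-frequency regressor and estimates the low-frequency equation by least squares with the lag-polynomial parameters held fixed; in practice those parameters are estimated jointly with the slope coefficients by nonlinear least squares.
import numpy as np

rng = np.random.default_rng(0)
n_low, m, n_lags = 200, 4, 12   # 200 low-frequency periods, quarterly regressor (m = 4), 12 lags

def exp_almon_weights(theta1, theta2, n_lags):
    """Normalized exponential Almon lag weights, one common MIDAS weighting scheme."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

# Simulate a high-frequency regressor and a low-frequency outcome driven by its weighted lags.
x_high = rng.standard_normal(n_low * m + n_lags)
w_true = exp_almon_weights(0.2, -0.05, n_lags)
X_lags = np.array([x_high[t * m : t * m + n_lags][::-1] for t in range(n_low)])
y = 1.0 + 2.0 * (X_lags @ w_true) + 0.3 * rng.standard_normal(n_low)

# With theta held fixed, y_t = beta0 + beta1 * sum_j w_j x_{t,j} + e_t is linear in beta.
w = exp_almon_weights(0.2, -0.05, n_lags)
Z = np.column_stack([np.ones(n_low), X_lags @ w])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(beta)  # close to the assumed (1.0, 2.0)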
[ { "math_id": 0, "text": "y_t = \\beta_0 + \\beta_1 B(L^{1/m};\\theta)x_t^{(m)} + \\varepsilon_t^{(m)}," }, { "math_id": 1, "text": "x_t^{(4)}" }, { "math_id": 2, "text": "\\varepsilon" }, { "math_id": 3, "text": "B(L^{1/m};\\theta)" } ]
https://en.wikipedia.org/wiki?curid=12177566
12180274
Steffensen's method
Newton-like root-finding algorithm that does not use derivatives In numerical analysis, Steffensen's method is an iterative method for root-finding named after Johan Frederik Steffensen which is similar to Newton's method, but with certain situational advantages. In particular, Steffensen's method achieves similar quadratic convergence, but without using derivatives, as required for Newton's method. Simple description. The simplest form of the formula for Steffensen's method occurs when it is used to find a zero of a real function formula_0 that is, to find the real value formula_1 that satisfies formula_2 Near the solution formula_3 the function formula_4 is supposed to approximately satisfy formula_5 this condition makes formula_4 adequate as a correction-function for formula_6 for finding its "own" solution, although it is not required to work efficiently. For some functions, Steffensen's method can work even if this condition is not met, but in such a case, the starting value formula_7 must be "very" close to the actual solution formula_3 and convergence to the solution may be slow. Adjustments of the method's step size, mentioned later, can improve convergence in some of these cases. Given an adequate starting value formula_8 a sequence of values formula_9 can be generated using the formula below. When it works, each value in the sequence is much closer to the solution formula_10 than the prior value. The value formula_11 from the current step generates the value formula_12 for the next step, via this formula: formula_13 for formula_14 where the slope function formula_15 is a composite of the original function formula_4 given by the following formula: formula_16 or perhaps more clearly, formula_17 where formula_18 is a step-size between the last iteration point, formula_19 and an auxiliary point located at formula_20 Technically, the function formula_21 is called the first-order divided difference of formula_4 between those two points ( it is either a "forward"-type or "backward"-type divided difference, depending on the sign of formula_22). Practically, it is the averaged value of the slope formula_23 of the function formula_4 between the last sequence point formula_24 and the auxiliary point at formula_25 with step size (and direction) formula_26 Because the value of formula_21 is an approximation for formula_27 its value can optionally be checked to see if it meets the condition formula_28 which is required of formula_27 to guarantee convergence of Steffensen's algorithm. Although slight non-conformance may not necessarily be dire, any large departure from the condition warns that Steffensen's method is liable to fail, and temporary use of some fallback algorithm is warranted (e.g. the more robust Illinois algorithm, or plain regula falsi). It is only for the purpose of finding formula_22 for this auxiliary point that the value of the function formula_4 must be an adequate correction to get closer to its own solution, and for that reason fulfill the requirement that formula_29 For all other parts of the calculation, Steffensen's method only requires the function formula_4 to be continuous and to actually have a nearby solution. Several modest modifications of the step formula_22 in the formula for the slope formula_21 exist, such as multiplying it by or , to accommodate functions formula_4 that do not quite meet the requirement. Advantages and drawbacks. 
The main advantage of Steffensen's method is that it has quadratic convergence like Newton's method – that is, both methods find roots to an equation formula_30 just as 'quickly'. In this case "quickly" means that for both methods, the number of correct digits in the answer doubles with each step. But the formula for Newton's method requires evaluation of the function's derivative formula_31 as well as the function formula_32 while Steffensen's method only requires formula_30 itself. This is important when the derivative is not easily or efficiently available. The price for the quick convergence is the double function evaluation: Both formula_33 and formula_34 must be calculated, which might be time-consuming if formula_30 is a complicated function. For comparison, the secant method needs only one function evaluation per step. The secant method increases the number of correct digits by "only" a factor of roughly 1.6 per step, but one can do twice as many steps of the secant method within a given time. Since the secant method can carry out twice as many steps in the same time as Steffensen's method, in practical use the secant method actually converges faster than Steffensen's method, when both algorithms succeed: The secant method achieves a factor of about (1.6)² ≈ 2.6 times as many digits for every two steps (two function evaluations), compared to Steffensen's factor of 2 for every one step (two function evaluations). Similar to most other iterative root-finding algorithms, the crucial weakness in Steffensen's method is choosing an 'adequate' starting value formula_35 If the value of formula_36 is not 'close enough' to the actual solution formula_37 the method may fail and the sequence of values formula_38 may either erratically flip-flop between two extremes, or diverge to infinity, or both. Derivation using Aitken's delta-squared process. The version of Steffensen's method implemented in the MATLAB code shown below can be found using the Aitken's delta-squared process for accelerating convergence of a sequence. To compare the following formulae to the formulae in the section above, notice that formula_39 This method assumes starting with a linearly convergent sequence and increases the rate of convergence of that sequence. If the signs of formula_40 agree and formula_41 is 'sufficiently close' to the desired limit of the sequence formula_42 we can assume the following: formula_43 then formula_44 so formula_45 and hence formula_46 Solving for the desired limit of the sequence formula_47 gives: formula_48 formula_49 formula_50 which results in the more rapidly convergent sequence: formula_51 Code example. In Matlab. Here is the source for an implementation of Steffensen's Method in MATLAB.
function Steffensen(f, p0, tol)
% This function takes as inputs: a fixed point iteration function, f,
% and initial guess to the fixed point, p0, and a tolerance, tol.
% The fixed point iteration function is assumed to be input as an
% inline function.
% This function will calculate and return the fixed point, p,
% that makes the expression f(x) = p true to within the desired
% tolerance, tol.
format compact % This shortens the output.
format long    % This prints more decimal places.
for i = 1:1000 % get ready to do a large, but finite, number of iterations.
               % This is so that if the method fails to converge, we won't
               % be stuck in an infinite loop.
    p1 = f(p0); % calculate the next two guesses for the fixed point.
    p2 = f(p1);
    p = p0-(p1-p0)^2/(p2-2*p1+p0) % use Aitken's delta squared method to
                                  % find a better approximation to p0.
    if abs(p - p0) < tol % test to see if we are within tolerance.
        break            % if we are, stop the iterations, we have our answer.
    end
    p0 = p;              % update p0 for the next iteration.
end
if abs(p - p0) > tol % If we fail to meet the tolerance, we output a
                     % message of failure.
    'failed to converge in 1000 iterations.'
end
In Python. Here is the source for an implementation of Steffensen's Method in Python.
from typing import Callable, Iterator

Func = Callable[[float], float]


def g(f: Func, x: float, fx: float) -> Func:
    """First-order divided difference function.

    Arguments:
        f: Function input to g
        x: Point at which to evaluate g
        fx: Function f evaluated at x
    """
    return lambda x: f(x + fx) / fx - 1


def steff(f: Func, x: float) -> Iterator[float]:
    """Steffensen algorithm for finding roots.

    This generator yields the x_{n+1} value first; each time the generator
    is advanced it yields the next iterate x_{n+2}, x_{n+3}, and so on.

    Arguments:
        f: Function whose root we are searching for
        x: Starting value x_0 for the iteration
    """
    while True:
        fx = f(x)
        gx = g(f, x, fx)(x)
        if gx == 0:
            break
        else:
            x = x - fx / gx  # Steffensen update x_{n+1} = x_n - f(x_n)/g(x_n)
            yield x          # Yield value
Generalization to Banach space. Steffensen's method can also be used to find an input formula_52 for a different kind of function formula_53 that produces output the same as its input: formula_54 for the special value formula_55 Solutions like formula_10 are called "fixed points". Many of these functions can be used to find their own solutions by repeatedly recycling the result back as input, but the rate of convergence can be slow, or the function can fail to converge at all, depending on the individual function. Steffensen's method accelerates this convergence, to make it quadratic. For orientation, the root function formula_4 and the fixed-point functions are simply related by formula_56 where formula_57 is some scalar constant small enough in magnitude to make formula_53 stable under iteration, but large enough for the non-linearity of the function formula_4 to be appreciable. All issues of a more general Banach space vs. basic real numbers being momentarily ignored for the sake of the comparison. This method for finding fixed points of a real-valued function has been generalised for functions formula_58 that map a Banach space formula_59 onto itself or even more generally formula_60 that map from one Banach space formula_59 into another Banach space formula_61 The generalized method assumes that a family of bounded linear operators formula_62 associated with formula_63 and formula_64 can be devised that (locally) satisfies this condition: formula_65 (⸎) "If division is possible" in the Banach space, the linear operator formula_66 can be obtained from formula_67 which may provide some insight: Expressed in this way, the linear operator formula_66 can be more easily seen to be an elaborate version of the divided difference formula_21 discussed in the first section, above. The quotient form is shown here for orientation only; it is "not" required "per se".
Note also that division within the Banach space is not necessary for the elaborated Steffensen's method to be viable; the only requirement is that the operator formula_66 satisfy the equation marked with the coronis, (⸎). For the basic real number function formula_68 given in the first section, the function simply takes in and puts out real numbers. There, the function formula_21 is a "divided difference". In the generalized form here, the operator formula_66 is the analogue of a divided difference for use in the Banach space. The operator formula_66 is roughly equivalent to a matrix whose entries are all functions of vector arguments formula_63 and formula_69 Steffensen's method is then very similar to Newton's method, except that it uses the divided difference formula_70 instead of the derivative formula_71 Note that for arguments formula_72 close to some fixed point formula_73 fixed point functions formula_53 and their linear operators formula_66 meeting the condition marked (⸎), formula_74 where formula_75 is the identity operator. In the case that division is possible in the Banach space, the generalized iteration formula is given by formula_76 for formula_77 In the more general case in which division may not be possible, the iteration formula requires finding a solution formula_12 close to formula_78 for which formula_79 Equivalently one may seek the solution formula_12 to the somewhat reduced form formula_80 with all the values inside square brackets being independent of formula_81 The bracketed terms all only depend on formula_82 Be aware, however, that the second form may not be as numerically stable as the first: Because the first form involves finding a value for a (hopefully) small difference it may be numerically more likely to avoid excessively large or erratic changes to the iterated value formula_83 If the linear operator formula_84 satisfies formula_85 for some positive real constant formula_86 then the method converges quadratically to a fixed point of formula_53 if the initial approximation formula_7 is 'sufficiently close' to the desired solution formula_73 that satisfies formula_87 Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\ f\\, ;" }, { "math_id": 1, "text": "x_\\star" }, { "math_id": 2, "text": "\\ f(x_\\star) = 0 ~." }, { "math_id": 3, "text": "\\ x_\\star\\, ," }, { "math_id": 4, "text": "\\ f\\ " }, { "math_id": 5, "text": "\\ -1 < f'(x_\\star) < 0\\, ;" }, { "math_id": 6, "text": "~ x ~" }, { "math_id": 7, "text": "\\ x_0\\ " }, { "math_id": 8, "text": "\\ x_0\\, ," }, { "math_id": 9, "text": "\\ x_0,\\ x_1,\\ x_2, \\dots,\\ x_n,\\ \\dots\\ " }, { "math_id": 10, "text": "\\ x_\\star\\ " }, { "math_id": 11, "text": "\\ x_n\\ " }, { "math_id": 12, "text": "\\ x_{n+1}\\ " }, { "math_id": 13, "text": "\\ x_{n+1} = x_n - \\frac{\\, f(x_n) \\,}{g(x_n)}\\ " }, { "math_id": 14, "text": "\\ n = 0, 1, 2, 3, ...\\, ;" }, { "math_id": 15, "text": "\\ g(x)\\ " }, { "math_id": 16, "text": "\\ g(x) = \\frac{\\, f\\bigl( x + f(x) \\bigr) \\,}{f(x)} - 1\\ " }, { "math_id": 17, "text": "\\ g(x) ~=~ \\frac{\\ f(x + h) - f(x) \\ }{h}\\ ~~\\approx~~ \\frac{\\ \\operatorname{d}f( x )\\ }{ \\operatorname{d}x } ~\\equiv~ f'( x )\\ ," }, { "math_id": 18, "text": "\\ h = f(x)\\ " }, { "math_id": 19, "text": "\\ x\\ ," }, { "math_id": 20, "text": "\\ x + h ~." }, { "math_id": 21, "text": "\\ g\\ " }, { "math_id": 22, "text": "\\ h\\ " }, { "math_id": 23, "text": "\\ f'\\ " }, { "math_id": 24, "text": "\\ \\left( x, y \\right) = \\bigl( x_n,\\ f\\left( x_n \\right) \\bigr)\\ " }, { "math_id": 25, "text": "\\ \\bigl( x, y \\bigr) = \\bigl(\\ x_n + h,\\ f\\left( x_n + h \\right)\\ \\bigr)\\ ," }, { "math_id": 26, "text": "\\ h = f(x_n) ~." }, { "math_id": 27, "text": "\\ f'\\ ," }, { "math_id": 28, "text": "\\ -1 < g < 0\\ " }, { "math_id": 29, "text": "\\ -1 < f'(x_\\star) < 0 ~." }, { "math_id": 30, "text": "~ f ~" }, { "math_id": 31, "text": "~ f' ~" }, { "math_id": 32, "text": "~ f ~," }, { "math_id": 33, "text": "~ f(x_n) ~" }, { "math_id": 34, "text": "~ f(x_n + h) ~" }, { "math_id": 35, "text": "~ x_0 ~." }, { "math_id": 36, "text": "~ x_0 ~" }, { "math_id": 37, "text": "~ x_\\star ~," }, { "math_id": 38, "text": "~ x_0, \\, x_1, \\, x_2, \\, x_3, \\, \\dots ~" }, { "math_id": 39, "text": "x_n = p \\, - \\, p_n ~." }, { "math_id": 40, "text": "~ p_n, \\, p_{n+1}, \\, p_{n+2} ~" }, { "math_id": 41, "text": "~ p_n ~" }, { "math_id": 42, "text": "~ p ~," }, { "math_id": 43, "text": "\\frac{\\, p_{n+1} - p \\,}{\\, p_n - p \\,} ~ \\approx ~ \\frac{\\, p_{n+2} - p \\,}{\\, p_{n+1} - p \\,} " }, { "math_id": 44, "text": " ( p_{n+1} - p)^2 ~ \\approx ~ (\\, p_{n+2} - p \\,)\\,(\\, p_n - p \\, ) ~" }, { "math_id": 45, "text": " p_{n+1}^2 - 2 \\, p_{n+1} \\, p + p^2 ~ \\approx ~ p_{n+2} \\; p_n - (\\, p_n + p_{n+2} \\,) \\, p + p^2 " }, { "math_id": 46, "text": "(\\, p_{n+2} - 2 \\, p_{n+1} + p_n \\,) \\, p ~ \\approx ~ p_{n+2} \\, p_n - p_{n+1}^2 ~." 
}, { "math_id": 47, "text": "~ p ~" }, { "math_id": 48, "text": " p ~ \\approx ~ \\frac{\\, p_{n+2} \\, p_n - p_{n+1}^2 \\,}{\\, p_{n+2} - 2 \\, p_{n+1} + p_n \\,} ~=~ \\frac{\\, p_{n}^2 + p_{n} \\, p_{n+2} + 2 \\, p_{n} \\, p_{n+1} - 2 \\, p_{n} \\, p_{n+1} - p_{n}^2 - p_{n+1}^2 \\,}{\\, p_{n+2} - 2 \\, p_{n+1} + p_n \\,}" }, { "math_id": 49, "text": " =~ \\frac{\\, (\\, p_{n}^2 + p_{n} \\, p_{n+2} - 2 \\, p_{n} \\, p_{n+1} \\,) - (\\, p_{n}^2 - 2 \\, p_{n} \\, p_{n+1} + p_{n+1}^2 \\, ) \\, }{\\, p_{n+2} - 2 \\, p_{n+1} + p_n \\,}" }, { "math_id": 50, "text": " =~ p_n - \\frac{\\,(\\, p_{n+1} - p_n )^2 \\,}{\\, p_{n+2} - 2 \\, p_{n+1} + p_n \\,} ~," }, { "math_id": 51, "text": " p ~ \\approx ~ p_{n+3} ~=~ p_n - \\frac{ \\, ( \\, p_{n+1} - p_n \\, )^2 \\, }{ \\, p_{n+2} - 2 \\, p_{n+1} + p_n \\,} ~." }, { "math_id": 52, "text": "\\ x = x_\\star\\ " }, { "math_id": 53, "text": "\\ F\\ " }, { "math_id": 54, "text": "\\ x_\\star = F(x_\\star)\\ " }, { "math_id": 55, "text": "\\ x_\\star ~." }, { "math_id": 56, "text": "\\ F(x) = x + \\varepsilon\\ f(x)\\ ," }, { "math_id": 57, "text": "\\ \\varepsilon\\ " }, { "math_id": 58, "text": "\\ F : X \\to X\\ " }, { "math_id": 59, "text": "\\ X\\ " }, { "math_id": 60, "text": "\\ F : X \\to Y\\ " }, { "math_id": 61, "text": "\\ Y ~." }, { "math_id": 62, "text": "\\ \\{\\; G(u,v): u, v \\in X \\;\\}\\ " }, { "math_id": 63, "text": "\\ u\\ " }, { "math_id": 64, "text": "\\ v\\ " }, { "math_id": 65, "text": "\\ F\\left( u \\right) - F\\left( v \\right) = G\\left( u, v \\right) \\, \\bigl[\\, u - v \\,\\bigr]\\ \\qquad\\qquad\\qquad\\qquad " }, { "math_id": 66, "text": "\\ G\\ " }, { "math_id": 67, "text": "\\ G\\left( u, v \\right) = \\bigl[\\ F\\left( u \\right)- F\\left( v \\right)\\ \\bigr] \\bigl[\\ u - v\\ \\bigr]^{-1}\\ ," }, { "math_id": 68, "text": "\\ f\\ ," }, { "math_id": 69, "text": "\\ v ~." }, { "math_id": 70, "text": "\\ G \\bigl( \\, F\\left( x \\right), \\, x \\,\\bigr)\\ " }, { "math_id": 71, "text": "\\ F'(x) ~." }, { "math_id": 72, "text": "\\ x\\ " }, { "math_id": 73, "text": "\\ x_\\star\\ ," }, { "math_id": 74, "text": "\\ F'(x) \\approx G \\bigl( \\, F\\left( x \\right), \\, x \\,\\bigr) \\approx I\\ " }, { "math_id": 75, "text": "\\ I\\ " }, { "math_id": 76, "text": " x_{n+1} = x_n + \\Bigl[ I - G\\bigl( F\\left( x_n \\right), x_n \\bigr) \\Bigr]^{-1}\\Bigl[ F\\left( x_n \\right) - x_n \\Bigr]\\ ," }, { "math_id": 77, "text": "\\ n = 1,\\,2,\\,3,\\,... ~." }, { "math_id": 78, "text": "\\ x_{n}\\ " }, { "math_id": 79, "text": " \\Bigl[ I - G\\bigl( F\\left( x_n \\right), x_n \\bigr) \\Bigr] \\Bigl[ x_{n+1} - x_n \\Bigr] = F\\left( x_n \\right) - x_n ~." }, { "math_id": 80, "text": " \\Bigl[ I - G\\bigl( F\\left( x_n \\right), x_n \\bigr) \\Bigr] x_{n+1} = \\Bigl[ F\\left( x_n \\right) - G\\bigl( F\\left( x_n \\right), x_n \\bigr) \\ x_n \\Bigr]\\ ," }, { "math_id": 81, "text": "\\ x_{n+1}\\ :" }, { "math_id": 82, "text": "\\ x_{n} ~." }, { "math_id": 83, "text": "\\ x_n ~." }, { "math_id": 84, "text": "~ G ~" }, { "math_id": 85, "text": " \\Bigl\\| G \\left( u, v \\right) - G \\left( x, y \\right) \\Bigr\\| \\le k \\biggl( \\Bigl\\|u - x \\Bigr\\| + \\Bigr\\| v - y \\Bigr\\| \\biggr) ~" }, { "math_id": 86, "text": "\\ k ~," }, { "math_id": 87, "text": "\\ x_\\star = F(x_\\star) ~." } ]
https://en.wikipedia.org/wiki?curid=12180274
12181377
Positively invariant set
In mathematical analysis, a positively (or positive) invariant set is a set with the following properties: Suppose formula_0 is a dynamical system, formula_1 is a trajectory, and formula_2 is the initial point. Let formula_3 where formula_4 is a real-valued function. The set formula_5 is said to be positively invariant if formula_6 implies that formula_7 In other words, once a trajectory of the system enters formula_5, it will never leave it again.
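A small numerical sketch can make the definition concrete; the system, the level-set function, and the crude integrator below are all assumptions chosen purely for illustration. For the harmonic oscillator ẋ₁ = x₂, ẋ₂ = −x₁, the unit circle {x : φ(x) = 0} with φ(x) = x₁² + x₂² − 1 is positively invariant: a trajectory started on it never leaves it.
def f(x):
    """Harmonic oscillator vector field: rotates points about the origin."""
    return [x[1], -x[0]]

def phi(x):
    """Level-set function; the set O = {x : phi(x) = 0} is the unit circle."""
    return x[0] ** 2 + x[1] ** 2 - 1.0

def rk4_step(x, h):
    """One classical Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6 for i in range(2)]

x = [1.0, 0.0]  # initial point x0 lies in O, since phi(x0) = 0
for _ in range(1000):
    x = rk4_step(x, 0.01)
print(phi(x))   # remains numerically close to 0: the trajectory stays in O for t >= 0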
[ { "math_id": 0, "text": "\\dot{x}=f(x)" }, { "math_id": 1, "text": " x(t,x_0) " }, { "math_id": 2, "text": " x_0 " }, { "math_id": 3, "text": " \\mathcal{O} := \\left \\lbrace x \\in \\mathbb{R}^n\\mid \\varphi (x) = 0 \\right \\rbrace" }, { "math_id": 4, "text": "\\varphi" }, { "math_id": 5, "text": "\\mathcal{O}" }, { "math_id": 6, "text": "x_0 \\in \\mathcal{O}" }, { "math_id": 7, "text": "x(t,x_0) \\in \\mathcal{O} \\ \\forall \\ t \\ge 0 " } ]
https://en.wikipedia.org/wiki?curid=12181377
12181958
Eikonal approximation
In theoretical physics, the eikonal approximation (Greek εἰκών for likeness, icon or image) is an approximative method useful in wave scattering equations, which occur in optics, seismology, quantum mechanics, quantum electrodynamics, and partial wave expansion. Informal description. The main advantage that the eikonal approximation offers is that the equations reduce to a differential equation in a single variable. This reduction into a single variable is the result of the straight-line approximation, or the eikonal approximation, which allows us to choose the straight line as a special direction. Relation to the WKB approximation. The early steps involved in the eikonal approximation in quantum mechanics are very closely related to the WKB approximation for one-dimensional waves. The WKB method, like the eikonal approximation, reduces the equations to a differential equation in a single variable. But the difficulty with the WKB approximation is that this variable is described by the trajectory of the particle, which, in general, is complicated. Formal description. Making use of the WKB approximation, we can write the wave function of the scattered system in terms of the action "S": formula_0 Inserting this wave function Ψ into the Schrödinger equation without the presence of a magnetic field, we obtain formula_1 formula_2 formula_3 We write "S" as a power series in "ħ": formula_4 At zeroth order: formula_5 If we consider the one-dimensional case, then formula_6. We obtain a differential equation with the boundary condition formula_7 for formula_8, formula_9: formula_10 which integrates to formula_11
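A brief numerical sketch of the zeroth-order result can be given by evaluating the eikonal phase formula_11 for a sample potential along the straight-line direction z; the Gaussian barrier, the choice of units with ħ = m = 1, and the parameter values below are assumptions made only for illustration.
import numpy as np

hbar, m, k = 1.0, 1.0, 5.0   # units with hbar = m = 1 assumed; k is the incident wavenumber

def V(z):
    """Sample potential: a weak Gaussian barrier (assumed for illustration)."""
    return 0.5 * np.exp(-z**2)

z = np.linspace(-10.0, 10.0, 2001)
# Cumulative trapezoidal integral of V from the left edge (standing in for -infinity) up to each z:
int_V = np.concatenate(([0.0], np.cumsum(0.5 * (V(z[1:]) + V(z[:-1])) * np.diff(z))))

# Zeroth-order eikonal phase S0(z)/hbar = k z - (m / (hbar^2 k)) * integral of V:
phase = k * z - (m / (hbar**2 * k)) * int_V

# Far past the barrier the phase differs from the free phase k z by a constant shift:
print(phase[-1] - k * z[-1])   # about -0.177 here: -(m/(hbar^2 k)) times the total area under V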
[ { "math_id": 0, "text": "\\Psi=e^{iS/{\\hbar}} " }, { "math_id": 1, "text": " -\\frac{{\\hbar}^2}{2m} {\\nabla}^2 \\Psi= (E-V) \\Psi" }, { "math_id": 2, "text": " -\\frac{{\\hbar}^2}{2m} {\\nabla}^2 {e^{iS/{\\hbar}}}=(E-V) e^{iS/{\\hbar}}" }, { "math_id": 3, "text": "\\frac{1}{2m} {(\\nabla S)}^2 - \\frac{i\\hbar}{2m}{\\nabla}^2 S= E-V" }, { "math_id": 4, "text": "S= S_0 + \\frac {\\hbar}{i} S_1 + ..." }, { "math_id": 5, "text": " \\frac{1}{2m} {(\\nabla S_0)}^2 = E-V" }, { "math_id": 6, "text": "{\\nabla}^2 \\rightarrow {\\partial_z}^2" }, { "math_id": 7, "text": "\\frac{S(z=z_0)}{\\hbar}= k z_0" }, { "math_id": 8, "text": "V \\rightarrow 0" }, { "math_id": 9, "text": "z \\rightarrow -\\infty" }, { "math_id": 10, "text": "\\frac{d}{dz}\\frac{S_0}{\\hbar}= \\sqrt{k^2 - 2mV/{\\hbar}^2}" }, { "math_id": 11, "text": "\\frac{S_0(z)}{\\hbar}= kz - \\frac{m}{{\\hbar}^2 k} \\int_{-\\infty}^{z}{V dz'} " } ]
https://en.wikipedia.org/wiki?curid=12181958