8063161
Quasi-Frobenius Lie algebra
In mathematics, a quasi-Frobenius Lie algebra formula_0 over a field formula_1 is a Lie algebra formula_2 equipped with a nondegenerate skew-symmetric bilinear form formula_3, which is a Lie algebra 2-cocycle of formula_4 with values in formula_1. In other words, formula_5 for all formula_6, formula_7, formula_8 in formula_4. If formula_9 is a coboundary, which means that there exists a linear form formula_10 such that formula_11 then formula_0 is called a Frobenius Lie algebra. Equivalence with pre-Lie algebras with nondegenerate invariant skew-symmetric bilinear form. If formula_0 is a quasi-Frobenius Lie algebra, one can define on formula_4 another bilinear product formula_12 by the formula formula_13. Then one has formula_14 and formula_15 is a pre-Lie algebra.
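The defining identities above are easy to check numerically in a small example. The sketch below is an assumed illustration, not taken from the text: it uses the two-dimensional non-abelian Lie algebra with [e1, e2] = e2 and the coboundary form built from f(e1) = 0, f(e2) = 1 (so it is in fact a Frobenius Lie algebra), verifies the 2-cocycle condition, computes the product ◁ from its defining relation, and confirms that the bracket is recovered as X ◁ Y − Y ◁ X.

```python
# Minimal numerical sketch (assumed example): the 2-dimensional Lie algebra with
# [e1, e2] = e2 and the coboundary 2-cocycle beta(X, Y) = f([X, Y]), f = (0, 1).
import numpy as np

# Structure constants: bracket[i, j] holds the coordinates of [e_i, e_j]
bracket = np.zeros((2, 2, 2))
bracket[0, 1] = [0.0, 1.0]    # [e1, e2] = e2
bracket[1, 0] = [0.0, -1.0]   # [e2, e1] = -e2

f = np.array([0.0, 1.0])      # linear form with f(e1) = 0, f(e2) = 1

def lie(x, y):
    """Bracket of coordinate vectors x and y."""
    return np.einsum('i,j,ijk->k', x, y, bracket)

def beta(x, y):
    """Coboundary 2-cocycle beta(x, y) = f([x, y])."""
    return f @ lie(x, y)

e = np.eye(2)
B = np.array([[beta(e[i], e[j]) for j in range(2)] for i in range(2)])
assert abs(np.linalg.det(B)) > 1e-12          # beta is nondegenerate

# 2-cocycle condition on all basis triples
for X in e:
    for Y in e:
        for Z in e:
            assert abs(beta(lie(X, Y), Z) + beta(lie(Z, X), Y)
                       + beta(lie(Y, Z), X)) < 1e-12

def triangle(z, y):
    """z ◁ y, defined by beta([X, Y], Z) = beta(Z ◁ Y, X) for every X."""
    # beta(w, e_i) = (B^T w)_i, so solve B^T w = rhs with rhs_i = beta([e_i, y], z)
    rhs = np.array([beta(lie(e[i], y), z) for i in range(2)])
    return np.linalg.solve(B.T, rhs)

# The bracket is recovered from the pre-Lie product: [X, Y] = X ◁ Y - Y ◁ X
for X in e:
    for Y in e:
        assert np.allclose(lie(X, Y), triangle(X, Y) - triangle(Y, X))
print("all identities verified")
```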
[ { "math_id": 0, "text": "(\\mathfrak{g},[\\,\\,\\,,\\,\\,\\,],\\beta )" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "(\\mathfrak{g},[\\,\\,\\,,\\,\\,\\,] )" }, { "math_id": 3, "text": "\\beta : \\mathfrak{g}\\times\\mathfrak{g}\\to k" }, { "math_id": 4, "text": "\\mathfrak{g}" }, { "math_id": 5, "text": " \\beta \\left(\\left[X,Y\\right],Z\\right)+\\beta \\left(\\left[Z,X\\right],Y\\right)+\\beta \\left(\\left[Y,Z\\right],X\\right)=0 " }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "Y" }, { "math_id": 8, "text": "Z" }, { "math_id": 9, "text": "\\beta" }, { "math_id": 10, "text": "f : \\mathfrak{g}\\to k" }, { "math_id": 11, "text": "\\beta(X,Y)=f(\\left[X,Y\\right])," }, { "math_id": 12, "text": "\\triangleleft" }, { "math_id": 13, "text": " \\beta \\left(\\left[X,Y\\right],Z\\right)=\\beta \\left(Z \\triangleleft Y,X \\right) " }, { "math_id": 14, "text": "\\left[X,Y\\right]=X \\triangleleft Y-Y \\triangleleft X" }, { "math_id": 15, "text": "(\\mathfrak{g}, \\triangleleft)" } ]
https://en.wikipedia.org/wiki?curid=8063161
8065677
Distance measure
Definitions for distance between two objects or events in the universe Distance measures are used in physical cosmology to give a natural notion of the distance between two objects or events in the universe. They are often used to tie some "observable" quantity (such as the luminosity of a distant quasar, the redshift of a distant galaxy, or the angular size of the acoustic peaks in the cosmic microwave background (CMB) power spectrum) to another quantity that is not "directly" observable, but is more convenient for calculations (such as the comoving coordinates of the quasar, galaxy, etc.). The distance measures discussed here all reduce to the common notion of Euclidean distance at low redshift. In accord with our present understanding of cosmology, these measures are calculated within the context of general relativity, where the Friedmann–Lemaître–Robertson–Walker solution is used to describe the universe. Overview. There are a few different definitions of "distance" in cosmology which are all asymptotic to one another for small redshifts. The expressions for these distances are most practical when written as functions of redshift formula_0, since redshift is always the observable. They can also be written as functions of scale factor formula_1 In the remainder of this article, the peculiar velocity is assumed to be negligible unless specified otherwise. We first give formulas for several distance measures, and then describe them in more detail further down. Defining the "Hubble distance" as formula_2 where formula_3 is the speed of light, formula_4 is the Hubble parameter today, and h is the dimensionless Hubble constant, all the distances are asymptotic to formula_5 for small z. According to the Friedmann equations, we also define a dimensionless Hubble "parameter": formula_6 Here, formula_7 and formula_8 are normalized values of the present radiation energy density, matter density, and "dark energy density", respectively (the latter representing the cosmological constant), and formula_9 determines the curvature. The Hubble parameter at a given redshift is then formula_10. The formula for comoving distance, which serves as the basis for most of the other formulas, involves an integral. Although for some limited choices of parameters (see below) the comoving distance integral has a closed analytic form, in general—and specifically for the parameters of our universe—we can only find a solution numerically. Cosmologists commonly use the following measures for distances from the observer to an object at redshift formula_0 along the line of sight (LOS): the comoving distance formula_11, the transverse comoving distance formula_12, the angular diameter distance formula_13, the luminosity distance formula_14, and the light-travel distance formula_15. Alternative terminology. Peebles calls the transverse comoving distance the "angular size distance", which is not to be mistaken for the angular diameter distance. Occasionally, the symbols formula_16 or formula_17 are used to denote both the comoving and the angular diameter distance. Sometimes, the light-travel distance is also called the "lookback distance" and/or "lookback time". Details. Peculiar velocity. In real observations, the movement of the Earth with respect to the Hubble flow has an effect on the observed redshift. There are actually two notions of redshift. One is the redshift that would be observed if both the Earth and the object were not moving with respect to the "comoving" surroundings (the Hubble flow), defined by the cosmic microwave background. The other is the actual redshift measured, which depends both on the peculiar velocity of the object observed and on the peculiar velocity of the observer.
Since the Solar System is moving at around 370 km/s in a direction between Leo and Crater, this decreases formula_18 for distant objects in that direction by a factor of about 1.0012 and increases it by the same factor for distant objects in the opposite direction. (The speed of the motion of the Earth around the Sun is only 30 km/s.) Comoving distance. The comoving distance formula_19 between fundamental observers, i.e. observers that are both moving with the Hubble flow, does not change with time, as comoving distance accounts for the expansion of the universe. Comoving distance is obtained by integrating the proper distances of nearby fundamental observers along the line of sight (LOS), whereas the proper distance is what a measurement at constant cosmic time would yield. In standard cosmology, comoving distance and proper distance are two closely related distance measures used by cosmologists to measure distances between objects; the comoving distance is the proper distance at the present time. The comoving distance (with a small correction for our own motion) is the distance that would be obtained from parallax, because the parallax in degrees equals the ratio of an astronomical unit to the circumference of a circle at the present time going through the Sun and centred on the distant object, multiplied by 360°. However, objects beyond a megaparsec have parallax too small to be measured (the Gaia space telescope measures the parallax of the brightest stars with a precision of 7 microarcseconds), so the parallax of galaxies outside our Local Group is too small to be measured. There is a closed-form expression for the integral in the definition of the comoving distance if formula_20 or, by substituting the scale factor formula_21 for formula_22, if formula_23. Our universe now seems to be closely represented by formula_24 In this case, we have: formula_25 where formula_26 The comoving distance should be calculated using the value of z that would pertain if neither the object nor we had a peculiar velocity. Together with the scale factor it gives the proper distance of the object when the light we see now was emitted by it and set off on its journey to us: formula_27 Proper distance. Proper distance roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. "Comoving distance" factors out the expansion of the universe, which gives a distance that does not change in time due to the expansion of space (though this may change due to other, local factors, such as the motion of a galaxy within a cluster); the comoving distance is the proper distance at the present time. Transverse comoving distance. Two comoving objects at constant redshift formula_0 that are separated by an angle formula_28 on the sky are said to have the distance formula_29, where the transverse comoving distance formula_30 is defined appropriately. Angular diameter distance. An object of size formula_31 at redshift formula_0 that appears to have angular size formula_28 has the angular diameter distance of formula_32. This is commonly used to observe so-called standard rulers, for example in the context of baryon acoustic oscillations. When accounting for the Earth's peculiar velocity, the redshift that would pertain in that case should be used but formula_33 should be corrected for the motion of the Solar System by a factor between 0.99867 and 1.00133, depending on the direction.
(If one starts to move with velocity v towards an object, at any distance, the angular diameter of that object decreases by a factor of formula_34) Luminosity distance. If the intrinsic luminosity formula_35 of a distant object is known, we can calculate its luminosity distance by measuring the flux formula_36 and determining formula_37, which turns out to be equivalent to the expression above for formula_38. This quantity is important for measurements of standard candles like type Ia supernovae, which were first used to discover the acceleration of the expansion of the universe. When accounting for the Earth's peculiar velocity, the redshift that would pertain in that case should be used for formula_39 but the factor formula_40 should use the measured redshift, and another correction should be made for the peculiar velocity of the object by multiplying by formula_41 where now v is the component of the object's peculiar velocity away from us. In this way, the luminosity distance will be equal to the angular diameter distance multiplied by formula_42 where z is the measured redshift, in accordance with Etherington's reciprocity theorem (see below). Light-travel distance. This distance formula_43 is the time that it took light to reach the observer from the object multiplied by the speed of light. For instance, the radius of the observable universe in this distance measure becomes the age of the universe multiplied by the speed of light (1 light year/year), which turns out to be approximately 13.8 billion light years. There is a closed-form solution of the light-travel distance if formula_44 involving the inverse hyperbolic functions formula_45 or formula_46 (or involving inverse trigonometric functions if the cosmological constant has the other sign). If formula_47 then there is a closed-form solution for formula_48 but not for formula_49 Note that the comoving distance is recovered from the transverse comoving distance by taking the limit formula_50, such that the two distance measures are equivalent in a flat universe. There are websites for calculating light-travel distance from redshift. The age of the universe then becomes formula_51, and the time elapsed since redshift formula_0 until now is: formula_52 Etherington's distance duality. Etherington's distance-duality equation is the relationship between the luminosity distance of standard candles and the angular-diameter distance. It is expressed as follows: formula_53 References.
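Since the comoving-distance integral generally has to be evaluated numerically, a short sketch may help. The parameter values below (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7, a flat universe) are illustrative assumptions, not values taken from the text.

```python
# Numerical sketch of the distance measures above for a flat
# Omega_r = Omega_k = 0 universe with assumed parameter values.
import numpy as np
from scipy.integrate import quad

c = 299792.458          # speed of light, km/s
H0 = 70.0               # Hubble constant, km/s/Mpc (assumed)
Om, OL = 0.3, 0.7       # matter and dark-energy densities (assumed), Or = Ok = 0
d_H = c / H0            # Hubble distance in Mpc

def E(z):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for a flat universe."""
    return np.sqrt(Om * (1 + z)**3 + OL)

def comoving_distance(z):
    """d_C(z) = d_H * integral_0^z dz'/E(z'), evaluated numerically."""
    return d_H * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def light_travel_distance(z):
    """d_T(z) = d_H * integral_0^z dz'/((1+z') E(z'))."""
    return d_H * quad(lambda zp: 1.0 / ((1 + zp) * E(zp)), 0.0, z)[0]

z = 1.0
d_C = comoving_distance(z)          # flat universe: d_M = d_C
d_A = d_C / (1 + z)                 # angular diameter distance
d_L = (1 + z) * d_C                 # luminosity distance
d_T = light_travel_distance(z)
print(f"z={z}: d_C={d_C:.0f} Mpc, d_A={d_A:.0f} Mpc, "
      f"d_L={d_L:.0f} Mpc, d_T={d_T:.0f} Mpc")
# Etherington duality: d_L = (1+z)^2 d_A holds by construction here.
```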
[ { "math_id": 0, "text": "z" }, { "math_id": 1, "text": "a=1/(1+z)." }, { "math_id": 2, "text": "d_H = \\frac{c}{H_0}\\approx 3000 h^{-1} \\text{Mpc}\\approx 9.26 \\cdot 10^{25} h^{-1} \\text{m}" }, { "math_id": 3, "text": "c" }, { "math_id": 4, "text": "H_0" }, { "math_id": 5, "text": "z\\cdot d_H" }, { "math_id": 6, "text": "E(z) = \\frac{H(z)}{H_0}=\\sqrt{\\Omega_r(1+z)^4+\\Omega_m(1+z)^3+\\Omega_k(1+z)^2+\\Omega_\\Lambda}" }, { "math_id": 7, "text": "\\Omega_r, \\Omega_m," }, { "math_id": 8, "text": "\\Omega_\\Lambda" }, { "math_id": 9, "text": "\\Omega_k = 1-\\Omega_r-\\Omega_m-\\Omega_\\Lambda" }, { "math_id": 10, "text": "H(z) = H_0E(z)" }, { "math_id": 11, "text": "d_C(z) = d_H \\int_0^z \\frac{dz'}{E(z')}" }, { "math_id": 12, "text": " d_M(z) = \\begin{cases}\n \\frac{d_H}{\\sqrt{\\Omega_k}} \\sinh\\left(\\frac{\\sqrt{\\Omega_k}d_C(z)}{d_H}\\right) & \\Omega_k>0\\\\\n d_C(z) & \\Omega_k=0\\\\\n \\frac{d_H}{\\sqrt{|\\Omega_k|}} \\sin\\left(\\frac{\\sqrt{|\\Omega_k|}d_C(z)}{d_H}\\right) & \\Omega_k<0\n\\end{cases}" }, { "math_id": 13, "text": " d_A(z) = \\frac{d_M(z)}{1+z}" }, { "math_id": 14, "text": "d_L(z)=(1+z) d_M(z)" }, { "math_id": 15, "text": "d_T(z) = d_H \\int_0^z \\frac{d z'}{(1+z')E(z')} " }, { "math_id": 16, "text": "\\chi" }, { "math_id": 17, "text": "r" }, { "math_id": 18, "text": "1+z" }, { "math_id": 19, "text": "d_C" }, { "math_id": 20, "text": "\\Omega_r=\\Omega_m=0" }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "1/(1+z)" }, { "math_id": 23, "text": "\\Omega_\\Lambda=0" }, { "math_id": 24, "text": "\\Omega_r=\\Omega_k=0." }, { "math_id": 25, "text": "d_C(z) = d_H \\Omega_m^{-1/3}\\Omega_\\Lambda^{-1/6}[f((1+z)(\\Omega_m/\\Omega_\\Lambda)^{1/3})-f((\\Omega_m/\\Omega_\\Lambda)^{1/3})]" }, { "math_id": 26, "text": "f(x)\\equiv\\int_0^x \\frac{dx}{\\sqrt{x^3+1}}" }, { "math_id": 27, "text": "d = a d_C" }, { "math_id": 28, "text": "\\delta\\theta" }, { "math_id": 29, "text": "\\delta\\theta d_M(z)" }, { "math_id": 30, "text": "d_M" }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": "d_A(z)=x/\\delta\\theta" }, { "math_id": 33, "text": "d_A" }, { "math_id": 34, "text": "\\sqrt{(1+v/c) / (1-v/c)}." }, { "math_id": 35, "text": "L" }, { "math_id": 36, "text": "S" }, { "math_id": 37, "text": "d_L(z) = \\sqrt{L/4\\pi S}" }, { "math_id": 38, "text": "d_L(z)" }, { "math_id": 39, "text": "d_M," }, { "math_id": 40, "text": "(1+z)" }, { "math_id": 41, "text": "\\sqrt{(1+v/c) / (1-v/c)}," }, { "math_id": 42, "text": "(1+z)^2," }, { "math_id": 43, "text": "d_T" }, { "math_id": 44, "text": "\\Omega_r = \\Omega_m = 0" }, { "math_id": 45, "text": "\\text{arcosh}" }, { "math_id": 46, "text": "\\text{arsinh}" }, { "math_id": 47, "text": "\\Omega_r = \\Omega_\\Lambda = 0" }, { "math_id": 48, "text": "d_T(z)" }, { "math_id": 49, "text": "z(d_T)." }, { "math_id": 50, "text": "\\Omega_k \\to 0" }, { "math_id": 51, "text": "\\lim_{z\\to\\infty} d_T(z)/c" }, { "math_id": 52, "text": " t(z) = d_T(z)/c." }, { "math_id": 53, "text": "d_L = (1+z)^2 d_A " } ]
https://en.wikipedia.org/wiki?curid=8065677
8067200
Spectral flatness
Spectral flatness or tonality coefficient, also known as Wiener entropy, is a measure used in digital signal processing to characterize an audio spectrum. Spectral flatness is typically measured in decibels, and provides a way to quantify how much a sound resembles a pure tone, as opposed to being noise-like. Interpretation. The meaning of "tonal" in this context is in the sense of the number of peaks or resonant structure in a power spectrum, as opposed to the flat spectrum of white noise. A high spectral flatness (approaching 1.0 for white noise) indicates that the spectrum has a similar amount of power in all spectral bands — this would sound similar to white noise, and the graph of the spectrum would appear relatively flat and smooth. A low spectral flatness (approaching 0.0 for a pure tone) indicates that the spectral power is concentrated in a relatively small number of bands — this would typically sound like a mixture of sine waves, and the spectrum would appear "spiky". Dubnov has shown that spectral flatness is equivalent to the information-theoretic concept of mutual information known as "dual total correlation". Formulation. The spectral flatness is calculated by dividing the geometric mean of the power spectrum by the arithmetic mean of the power spectrum, i.e.: formula_0 where "x(n)" represents the magnitude of bin number "n". Note that even a single empty bin yields a flatness of 0, so this measure is most useful when bins are generally not empty. The ratio produced by this calculation is often converted to a decibel scale for reporting, with a maximum of 0 dB and a minimum of −∞ dB. The spectral flatness can also be measured within a specified sub-band, rather than across the whole band. Applications. This measurement is one of the many audio descriptors used in the MPEG-7 standard, in which it is labelled "AudioSpectralFlatness". In birdsong research, it has been used as one of the features measured on birdsong audio, when testing similarity between two excerpts. Spectral flatness has also been used in the analysis of electroencephalography (EEG) diagnostics and research, and psychoacoustics in humans. References.
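A minimal sketch of the calculation, comparing white noise with a pure tone. The sampling rate, tone frequency and segment-averaging choices are arbitrary illustrative assumptions (the periodograms are averaged over segments so that the white-noise flatness comes out close to 1).

```python
# Spectral flatness as geometric mean / arithmetic mean of a power spectrum,
# for white noise versus a pure tone (assumed illustrative parameters).
import numpy as np

def spectral_flatness(power_spectrum):
    """Geometric mean / arithmetic mean of the power-spectrum bins."""
    x = np.asarray(power_spectrum, dtype=float)
    if np.any(x == 0):                 # a single empty bin forces flatness to 0
        return 0.0
    return np.exp(np.mean(np.log(x))) / np.mean(x)

def averaged_power_spectrum(signal, seg_len=512):
    """Average of per-segment periodograms (a crude Welch-style estimate)."""
    segs = signal[:len(signal) // seg_len * seg_len].reshape(-1, seg_len)
    return np.mean(np.abs(np.fft.rfft(segs, axis=1))**2, axis=0)

rng = np.random.default_rng(0)
fs, n = 8000.0, 1 << 16
t = np.arange(n) / fs

noise = rng.standard_normal(n)                 # white noise -> flatness near 1
tone = np.sin(2 * np.pi * 437.5 * t)           # pure tone   -> flatness near 0

for name, x in [("white noise", noise), ("437.5 Hz tone", tone)]:
    sf = spectral_flatness(averaged_power_spectrum(x))
    db = 10 * np.log10(sf) if sf > 0 else -np.inf
    print(f"{name}: flatness = {sf:.3g} ({db:.1f} dB)")
```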
[ { "math_id": 0, "text": " \n\\mathrm{Flatness} = \\frac{\\sqrt[N]{\\prod_{n=0}^{N-1}x(n)}}{\\frac{\\sum_{n=0}^{N-1}x(n)}{N}} = \\frac{\\exp\\left(\\frac{1}{N}\\sum_{n=0}^{N-1} \\ln x(n)\\right)}{\\frac{1}{N} \\sum_{n=0}^{N-1}x(n)}\n" } ]
https://en.wikipedia.org/wiki?curid=8067200
8075001
K-set (geometry)
Points separated from others by a line In discrete geometry, a formula_0-set of a finite point set formula_1 in the Euclidean plane is a subset of formula_0 elements of formula_1 that can be strictly separated from the remaining points by a line. More generally, in Euclidean space of higher dimensions, a formula_0-set of a finite point set is a subset of formula_0 elements that can be separated from the remaining points by a hyperplane. In particular, when formula_2 (where formula_3 is the size of formula_1), the line or hyperplane that separates a formula_0-set from the rest of formula_1 is a halving line or halving plane. The formula_0-sets of a set of points in the plane are related by projective duality to the formula_0-levels in an arrangement of lines. The formula_0-level in an arrangement of formula_3 lines in the plane is the curve consisting of the points that lie on one of the lines and have exactly formula_0 lines below them. Discrete and computational geometers have also studied levels in arrangements of more general kinds of curves and surfaces. Combinatorial bounds. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: What is the largest possible number of halving lines for a set of formula_3 points in the plane? It is of importance in the analysis of geometric algorithms to bound the number of formula_0-sets of a planar point set, or equivalently the number of formula_0-levels of a planar line arrangement, a problem first studied by Lovász and Erdős et al. The best known upper bound for this problem is formula_4, as was shown by Tamal Dey using the crossing number inequality of Ajtai, Chvátal, Newborn, and Szemerédi. However, the best known lower bound is far from Dey's upper bound: it is formula_5 for some constant formula_6, as shown by Tóth. In three dimensions, the best upper bound known is formula_7, and the best lower bound known is formula_8. For points in three dimensions that are in convex position, that is, are the vertices of some convex polytope, the number of formula_0-sets is formula_9, which follows from arguments used for bounding the complexity of formula_0th order Voronoi diagrams. For the case when formula_2 (halving lines), the maximum number of combinatorially distinct lines through two points of formula_1 that bisect the remaining points when formula_10 is &lt;templatestyles src="Block indent/styles.css"/&gt; Bounds have also been proven on the number of formula_11-sets, where a formula_11-set is a formula_12-set for some formula_13. In two dimensions, the maximum number of formula_11-sets is exactly formula_14, while in formula_15 dimensions the bound is formula_16. Construction algorithms. Edelsbrunner and Welzl first studied the problem of constructing all formula_0-sets of an input point set, or dually of constructing the formula_0-level of an arrangement. The formula_0-level version of their algorithm can be viewed as a plane sweep algorithm that constructs the level in left-to-right order. Viewed in terms of formula_0-sets of point sets, their algorithm maintains a dynamic convex hull for the points on each side of a separating line, repeatedly finds a bitangent of these two hulls, and moves each of the two points of tangency to the opposite hull. Chan surveys subsequent results on this problem, and shows that it can be solved in time proportional to Dey's formula_4 bound on the complexity of the formula_0-level. 
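For small inputs, the k-sets and halving lines defined above can simply be enumerated by brute force, which may help make the definitions concrete. The point set below is an arbitrary example in general position, and the O(n^3) enumeration is only a sketch of the definitions, not of the efficient algorithms just discussed.

```python
# Brute-force enumeration of k-sets and halving lines for a small planar
# point set in general position (assumed illustrative coordinates).
from itertools import combinations

def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def k_sets(points, k):
    """Return the set of k-element subsets strictly separable by a line."""
    found = set()
    for p, q in combinations(points, 2):
        left = frozenset(r for r in points if r not in (p, q) and cross(p, q, r) > 0)
        # Perturbing the line through p and q slightly puts p and q on either
        # side; each of the four assignments gives a candidate separable set.
        for extra in ([], [p], [q], [p, q]):
            side = left | frozenset(extra)
            if len(side) == k:
                found.add(side)
            if len(points) - len(side) == k:
                found.add(frozenset(points) - side)
    return found

points = [(0, 0), (4, 1), (1, 3), (5, 4), (2, 5), (6, 8)]   # general position
n = len(points)
for k in range(1, n):
    print(f"k={k}: {len(k_sets(points, k))} k-sets")

# Halving lines: lines through two points with (n-2)/2 points on each side.
halving = sum(
    1 for p, q in combinations(points, 2)
    if sum(cross(p, q, r) > 0 for r in points if r not in (p, q)) == (n - 2) // 2
)
print("halving lines:", halving)
```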
Agarwal and Matoušek describe algorithms for efficiently constructing an approximate level; that is, a curve that passes between the formula_17-level and the formula_18-level for some small approximation parameter formula_19. They show that such an approximation can be found, consisting of a number of line segments that depends only on formula_20 and not on formula_3 or formula_0. Matroid generalizations. The planar formula_0-level problem can be generalized to one of parametric optimization in a matroid: one is given a matroid in which each element is weighted by a linear function of a parameter formula_21, and must find the minimum weight basis of the matroid for each possible value of formula_21. If one graphs the weight functions as lines in a plane, the formula_0-level of the arrangement of these lines graphs as a function of formula_21 the weight of the largest element in an optimal basis in a uniform matroid, and Dey showed that his formula_4 bound on the complexity of the formula_0-level could be generalized to count the number of distinct optimal bases of any matroid with formula_3 elements and rank formula_0. For instance, the same formula_4 upper bound holds for counting the number of different minimum spanning trees formed in a graph with formula_3 edges and formula_0 vertices, when the edges have weights that vary linearly with a parameter formula_21. This parametric minimum spanning tree problem has been studied by various authors and can be used to solve other bicriterion spanning tree optimization problems. However, the best known lower bound for the parametric minimum spanning tree problem is formula_22, a weaker bound than that for the formula_0-set problem. For more general matroids, Dey's formula_4 upper bound has a matching lower bound. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "k=n/2" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "O(nk^{1/3})" }, { "math_id": 5, "text": "\\Omega(nc^{\\sqrt{\\log k}})" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "O(nk^{3/2})" }, { "math_id": 8, "text": "\\Omega(nkc^{\\sqrt{\\log k}})" }, { "math_id": 9, "text": "\\Theta\\bigl((n-k)k\\bigr)" }, { "math_id": 10, "text": "k=1,2,\\dots" }, { "math_id": 11, "text": "\\le k" }, { "math_id": 12, "text": "j" }, { "math_id": 13, "text": "j\\le k" }, { "math_id": 14, "text": "nk" }, { "math_id": 15, "text": "d" }, { "math_id": 16, "text": "O(n^{\\lfloor d/2\\rfloor}k^{\\lceil d/2\\rceil})" }, { "math_id": 17, "text": "(k-\\delta)" }, { "math_id": 18, "text": "(k+\\delta)" }, { "math_id": 19, "text": "\\delta" }, { "math_id": 20, "text": "n/\\delta" }, { "math_id": 21, "text": "\\lambda" }, { "math_id": 22, "text": "\\Omega(n\\log k)" } ]
https://en.wikipedia.org/wiki?curid=8075001
8075308
Kasner metric
Solution of Einstein field equations The Kasner metric (developed by and named for the American mathematician Edward Kasner in 1921) is an exact solution to Albert Einstein's theory of general relativity. It describes an anisotropic universe without matter (i.e., it is a vacuum solution). It can be written in any spacetime dimension formula_0 and has strong connections with the study of gravitational chaos. Metric and conditions. The metric in formula_0 spacetime dimensions is formula_1, and contains formula_2 constants formula_3, called the "Kasner exponents." The metric describes a spacetime whose equal-time slices are spatially flat; however, space is expanding or contracting at different rates in different directions, depending on the values of the formula_3. Test particles in this metric whose comoving coordinate differs by formula_4 are separated by a physical distance formula_5. The Kasner metric is an exact solution to Einstein's equations in vacuum when the Kasner exponents satisfy the following "Kasner conditions," formula_6 formula_7 The first condition defines a plane, the "Kasner plane," and the second describes a sphere, the "Kasner sphere." The solutions (choices of formula_3) satisfying the two conditions therefore lie on the sphere where the two intersect (sometimes confusingly also called the Kasner sphere). In formula_8 spacetime dimensions, the space of solutions therefore lies on a formula_9 dimensional sphere formula_10. Features. There are several noticeable and unusual features of the Kasner solution: The volume of the spatial slices always goes like formula_11. This is because their volume is proportional to formula_12, and formula_13 where we have used the first Kasner condition. Therefore formula_14 can describe either a Big Bang or a Big Crunch, depending on the sense of formula_15. Isotropic expansion or contraction of space is not allowed. If the spatial slices were expanding isotropically, then all of the Kasner exponents would have to be equal, formula_16; but then the second Kasner condition could not be satisfied, since formula_17 The Friedmann–Lemaître–Robertson–Walker metric employed in cosmology, by contrast, is able to expand or contract isotropically because of the presence of matter. Finally, the special case in which one exponent satisfies formula_18 and the rest vanish describes flat Minkowski spacetime: the coordinate transformation formula_19, formula_20 brings it to the standard Minkowski form. Notes.
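In D = 4 the two Kasner conditions can be solved explicitly for two of the exponents in terms of the third, which the short sketch below does numerically; the sampled values of p1 are arbitrary illustrative choices.

```python
# Kasner conditions in D = 4: given p1, the remaining exponents are the roots
# of t^2 - (1 - p1) t - p1 (1 - p1) = 0, since p2 + p3 = 1 - p1 and
# p2^2 + p3^2 = 1 - p1^2.
import numpy as np

def kasner_exponents_d4(p1):
    """Return (p1, p2, p3) with sum p_j = 1 and sum p_j^2 = 1, given p1."""
    s = 1.0 - p1
    disc = s * (1.0 + 3.0 * p1)          # discriminant; >= 0 for -1/3 <= p1 <= 1
    if disc < 0:
        raise ValueError("no real Kasner exponents for this p1")
    root = np.sqrt(disc)
    return p1, (s + root) / 2.0, (s - root) / 2.0

for p1 in (-1.0 / 3.0, 0.0, 0.2, 2.0 / 3.0, 1.0):
    p = np.array(kasner_exponents_d4(p1))
    assert np.isclose(p.sum(), 1.0) and np.isclose((p**2).sum(), 1.0)
    print("p =", np.round(p, 4))
# p1 = 1 recovers (1, 0, 0), the flat-spacetime case noted above; generic
# values give one negative and two positive exponents.
```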
[ { "math_id": 0, "text": "D>3" }, { "math_id": 1, "text": "\\text{d}s^2 = -\\text{d}t^2 + \\sum_{j=1}^{D-1} t^{2p_j} [\\text{d}x^j]^2" }, { "math_id": 2, "text": "D-1" }, { "math_id": 3, "text": "p_j" }, { "math_id": 4, "text": "\\Delta x^j" }, { "math_id": 5, "text": "t^{p_j}\\Delta x^j" }, { "math_id": 6, "text": "\\sum_{j=1}^{D-1} p_j = 1," }, { "math_id": 7, "text": "\\sum_{j=1}^{D-1} p_j^2 = 1." }, { "math_id": 8, "text": "D" }, { "math_id": 9, "text": "D-3" }, { "math_id": 10, "text": "S^{D-3}" }, { "math_id": 11, "text": "O(t)" }, { "math_id": 12, "text": "\\sqrt{-g}" }, { "math_id": 13, "text": "\\sqrt{-g} = t^{p_1 + p_2 + \\cdots + p_{D-1}} = t" }, { "math_id": 14, "text": "t\\to 0" }, { "math_id": 15, "text": "t" }, { "math_id": 16, "text": "p_j = 1/(D-1)" }, { "math_id": 17, "text": "\\sum_{j=1}^{D-1} p_j^2 = \\frac{1}{D-1} \\ne 1." }, { "math_id": 18, "text": "p_j=1" }, { "math_id": 19, "text": "t' = t \\cosh x_j" }, { "math_id": 20, "text": "x_j' = t \\sinh x_j" } ]
https://en.wikipedia.org/wiki?curid=8075308
8077953
Riemann–von Mangoldt formula
In mathematics, the Riemann–von Mangoldt formula, named for Bernhard Riemann and Hans Carl Friedrich von Mangoldt, describes the distribution of the zeros of the Riemann zeta function. The formula states that the number "N"("T") of zeros of the zeta function with imaginary part greater than 0 and less than or equal to "T" satisfies formula_0 The formula was stated by Riemann in his notable paper "On the Number of Primes Less Than a Given Magnitude" (1859) and was finally proved by von Mangoldt in 1905. Backlund gives an explicit form of the error for all "T" > 2: formula_1 Under the Lindelöf and Riemann hypotheses the error term can be improved to formula_2 and formula_3 respectively. Similarly, for any primitive Dirichlet character "χ" modulo "q", we have formula_4 where "N(T,χ)" denotes the number of zeros of "L(s,χ)" with imaginary part between "-T" and "T". Notes.
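The main term of the formula is easy to compare against an explicit zero count; the sketch below does so with mpmath, using the illustrative cutoff T = 100.

```python
# Compare the main term of the Riemann-von Mangoldt formula with an actual
# zero count obtained from mpmath's zetazero().  T = 100 is an arbitrary
# illustrative cutoff.
import mpmath as mp

def main_term(T):
    """(T/2pi) log(T/2pi) - T/2pi, the leading terms of N(T)."""
    x = T / (2 * mp.pi)
    return x * mp.log(x) - x

def zero_count(T):
    """Count nontrivial zeros with 0 < Im(rho) <= T by listing them directly."""
    n = 0
    while mp.im(mp.zetazero(n + 1)) <= T:
        n += 1
    return n

T = 100
print("N(T)      =", zero_count(T))
print("main term =", mp.nstr(main_term(T), 6))
# The difference is absorbed by the O(log T) error term; Backlund's bound
# above gives an explicit version of it.
```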
[ { "math_id": 0, "text": "N(T)=\\frac{T}{2\\pi}\\log{\\frac{T}{2\\pi}}-\\frac{T}{2\\pi}+O(\\log{T})." }, { "math_id": 1, "text": "\\left\\vert{ N(T) - \\left({\\frac{T}{2\\pi}\\log{\\frac{T}{2\\pi}}-\\frac{T}{2\\pi} } - \\frac{7}{8}\\right)}\\right\\vert < 0.137 \\log T + 0.443 \\log\\log T + 4.350 \\ . " }, { "math_id": 2, "text": "o(\\log{T})" }, { "math_id": 3, "text": "O(\\log{T}/\\log{\\log{T}})" }, { "math_id": 4, "text": "N(T,\\chi)=\\frac{T}{\\pi}\\log{\\frac{qT}{2\\pi e}}+O(\\log{qT})," } ]
https://en.wikipedia.org/wiki?curid=8077953
8081933
Fundamental plane (elliptical galaxies)
The fundamental plane is a set of bivariate correlations connecting some of the properties of normal elliptical galaxies. Some correlations have been empirically shown. The fundamental plane is usually expressed as a relationship between the effective radius, average surface brightness and central velocity dispersion of normal elliptical galaxies. Any one of the three parameters may be estimated from the other two, as together they describe a plane that falls within their more general three-dimensional space. Properties correlated also include: color, density (of luminosity, mass, or phase space), luminosity, mass, metallicity, and, to a lesser degree, the shape of their radial surface brightness profiles. Motivation. Many characteristics of a galaxy are correlated. For example, as one would expect, a galaxy with a higher luminosity has a larger effective radius. These correlations are useful when a characteristic that can be determined without prior knowledge of the galaxy's distance (such as central velocity dispersion – the Doppler width of spectral lines in the central parts of the galaxy) can be correlated with a property, such as luminosity, that can be determined only for galaxies of a known distance. With this correlation, one can determine the distance to galaxies, a difficult task in astronomy. Correlations. The following correlations have been empirically shown for elliptical galaxies: larger galaxies have fainter effective surface brightnesses, formula_0, where formula_1 is the effective radius and formula_2 is the mean surface brightness within formula_1; consequently, defining the luminosity formula_3, one finds formula_4, i.e. formula_5, so more luminous galaxies have fainter effective surface brightnesses; and the Faber–Jackson relation between luminosity and central velocity dispersion, formula_6. Usefulness. The usefulness of this three-dimensional space formula_7 is studied by plotting formula_8 against formula_9, where formula_10 is the mean surface brightness formula_2 expressed in magnitudes. The equation of the regression line through this plot is: formula_11 or formula_12. Thus by measuring observable quantities such as surface brightness and velocity dispersion (both independent of the observer's distance to the source) one can estimate the effective radius (measured in kpc) of the galaxy. As one now knows the linear size of the effective radius and can measure the angular size, it is easy to determine the distance of the galaxy from the observer through the small-angle approximation. Variations. An early use of the fundamental plane is the formula_13 correlation, given by: formula_14 determined by Dressler et al. (1987). Here formula_15 is the diameter within which the mean surface brightness is formula_16. This relationship has a scatter of 15% between galaxies, as it represents a slightly oblique projection of the Fundamental Plane. Fundamental Plane correlations provide insights into the formative and evolutionary processes of elliptical galaxies. Whereas the tilt of the Fundamental Plane relative to the naive expectations from the Virial Theorem is reasonably well understood, the outstanding puzzle is its small thickness. Interpretation. The observed empirical correlations reveal information on the formation of elliptical galaxies. In particular, consider the following assumptions: that the central velocity dispersion formula_17, radius formula_18 and mass formula_19 are related through the virial theorem, formula_20, so that formula_21; that the luminosity formula_22 is set by the surface brightness formula_23 and radius through formula_24; and that the mass-to-light ratio formula_25 is the same for all galaxies (homology). These relations imply that formula_26, therefore formula_27 and so formula_28. However, there are observed deviations from homology, i.e. formula_29 with formula_30 in the optical band. This implies that formula_31 so formula_32 so that formula_33. This is consistent with the observed relation (a short symbolic check of these exponents is given below). Two limiting cases for the assembly of galaxies are as follows. The observed relation formula_0 lies between these limits. Notes. Diffuse dwarf ellipticals do not lie on the fundamental plane as shown by Kormendy (1987).
Gudehus (1991) found that galaxies brighter than formula_39 lie on one plane, and those fainter than this value, formula_40, lie on another plane. The two planes are inclined by about 11 degrees. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
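Returning to the interpretation above, the exponents in the fundamental-plane relation follow from a purely symbolic manipulation of the virial and luminosity scalings; the short sympy sketch below reproduces them, treating all proportionality constants as dropped.

```python
# Symbolic sketch of the interpretation above: combine M ~ sigma^2 R (virial
# theorem), L ~ I R^2 and M/L ~ L^alpha, then solve for R.  Proportionality
# constants are dropped throughout, so everything is done in logarithms.
import sympy as sp

alpha = sp.symbols('alpha', nonnegative=True)
logR, logSigma, logI = sp.symbols('logR logSigma logI')

# M ~ L^(1+alpha) with L ~ I R^2, equated to the virial scaling sigma^2 R:
#   (1 + alpha) (logI + 2 logR) = 2 logSigma + logR
eq = sp.Eq((1 + alpha) * (logI + 2 * logR), 2 * logSigma + logR)
logR_sol = sp.solve(eq, logR)[0]

print(sp.simplify(logR_sol))                               # general exponents
print(sp.expand(logR_sol.subs(alpha, 0)))                  # homology: R ~ sigma^2 / I
print(sp.expand(logR_sol.subs(alpha, sp.Rational(1, 5))))  # alpha = 0.2
```

With alpha = 0.2 the exponents come out as 10/7 (about 1.43) for the velocity dispersion and -6/7 (about -0.86) for the surface brightness, matching the relation quoted in the interpretation section.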
[ { "math_id": 0, "text": "R_e \\propto \\langle I \\rangle_e^{-0.83\\pm0.08}" }, { "math_id": 1, "text": "R_e" }, { "math_id": 2, "text": "\\langle I \\rangle_e" }, { "math_id": 3, "text": "L_e = \\pi \\langle I \\rangle_e R_e^2" }, { "math_id": 4, "text": "\nL_e \\propto \\langle I \\rangle_e \\langle I \\rangle_e^{-1.66}\n" }, { "math_id": 5, "text": "\n\\langle I \\rangle_e \\sim L^{-3/2}\n" }, { "math_id": 6, "text": "L_e \\sim \\sigma_o^4" }, { "math_id": 7, "text": " \\left( \\log R_e, \\langle I \\rangle_e, \\log \\sigma_o \\right) " }, { "math_id": 8, "text": "\\log \\, R_e" }, { "math_id": 9, "text": "\\log \\sigma_o + 0.26 \\, \\mu_B" }, { "math_id": 10, "text": "\\mu_B" }, { "math_id": 11, "text": "\n\\log R_e = 1.4 \\,\\log \\sigma_o + 0.36 \\mu_B + {\\rm const.}\n" }, { "math_id": 12, "text": "\nR_e \\propto \\sigma_o^{1.4} \\langle I \\rangle_e^{-0.9}\n" }, { "math_id": 13, "text": "D_n - \\sigma_o" }, { "math_id": 14, "text": "\n\\frac{D_n}{\\text{kpc}} = 2.05 \\, \\left(\\frac{\\sigma_o}{100 \\, \\text{km}/\\text{s}}\\right)^{1.33} \n" }, { "math_id": 15, "text": "D_n" }, { "math_id": 16, "text": "20.75 \\mu_B" }, { "math_id": 17, "text": "\\sigma" }, { "math_id": 18, "text": "R" }, { "math_id": 19, "text": "M" }, { "math_id": 20, "text": "\\sigma^2 \\sim GM/R" }, { "math_id": 21, "text": "M \\sim \\sigma^2 R " }, { "math_id": 22, "text": "L" }, { "math_id": 23, "text": "I" }, { "math_id": 24, "text": "L \\propto I R^2" }, { "math_id": 25, "text": "M/L" }, { "math_id": 26, "text": "M \\propto L \\propto I R^2 \\propto \\sigma^2 R" }, { "math_id": 27, "text": "\\sigma^2 \\propto IR" }, { "math_id": 28, "text": "R \\propto \\sigma^2 I^{-1}" }, { "math_id": 29, "text": "M/L\\propto L^{\\alpha}" }, { "math_id": 30, "text": "\\alpha=0.2" }, { "math_id": 31, "text": "M \\propto L^{1+\\alpha} \\propto I^{1+\\alpha} R^{2+2\\alpha} \\propto \\sigma^2 R" }, { "math_id": 32, "text": "R \\propto \\sigma^{2/(1+2\\alpha)} I^{-(1+\\alpha)/(1+2\\alpha)}" }, { "math_id": 33, "text": "R \\propto \\sigma^{1.42} I^{-0.86}" }, { "math_id": 34, "text": "\\sigma^2 = " }, { "math_id": 35, "text": "R \\propto I^{-1}" }, { "math_id": 36, "text": "\\sigma\\propto (GM/R)^{1/2}" }, { "math_id": 37, "text": "M\\propto L \\propto IR^2" }, { "math_id": 38, "text": "R\\propto I^{-0.5}" }, { "math_id": 39, "text": "M_V=-23.04" }, { "math_id": 40, "text": "M '" } ]
https://en.wikipedia.org/wiki?curid=8081933
8082019
Galaxy effective radius
Radius which encloses 50% of the total light of a galaxy Galaxy effective radius or half-light radius (formula_0) is the radius within which half of the total light of a galaxy is emitted. This assumes the galaxy has either intrinsic spherical symmetry or is at least circularly symmetric as viewed in the plane of the sky. Alternatively, a half-light contour, or isophote, may be used for spherically and circularly asymmetric objects. formula_0 is an important length scale in the formula_1 term in de Vaucouleurs' law, which characterizes a specific rate at which surface brightness decreases as a function of radius: formula_2 where formula_3 is the surface brightness at formula_4. At formula_5, formula_6 Thus, the central surface brightness is approximately formula_7. References.
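A quick numerical check of the two statements above (the central surface brightness and the meaning of the half-light radius); the outer integration limit and the normalization I_e = 1 are arbitrary choices.

```python
# Check that the de Vaucouleurs profile gives I(0) ~ 2000 I_e and that R_e
# encloses about half of the total light (7.67 is a rounded constant, so the
# enclosed fraction is only approximately 0.5).
import numpy as np
from scipy.integrate import quad

I_e, R_e = 1.0, 1.0

def surface_brightness(R):
    """de Vaucouleurs R^(1/4) profile."""
    return I_e * np.exp(-7.67 * ((R / R_e) ** 0.25 - 1.0))

print("I(0)/I_e =", surface_brightness(0.0) / I_e)         # ~ exp(7.67) ~ 2140

def enclosed_light(r):
    """Luminosity inside radius r, integrating over circular annuli."""
    return quad(lambda R: 2.0 * np.pi * R * surface_brightness(R), 0.0, r)[0]

total = enclosed_light(50.0 * R_e)                          # effectively all of the light
print("fraction inside R_e =", enclosed_light(R_e) / total)
```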
[ { "math_id": 0, "text": "R_e" }, { "math_id": 1, "text": "\\sqrt[4] R" }, { "math_id": 2, "text": "\nI(R) = I_e \\cdot e^{-7.67 \\left( \\sqrt[4]{ R/ {R_e}} - 1 \\right)}\n" }, { "math_id": 3, "text": "I_e" }, { "math_id": 4, "text": "R = R_e" }, { "math_id": 5, "text": "R = 0" }, { "math_id": 6, "text": "\nI(R=0) = I_e \\cdot e^{7.67} \\approx 2000 \\cdot I_e\n" }, { "math_id": 7, "text": "2000 \\cdot I_e" } ]
https://en.wikipedia.org/wiki?curid=8082019
80825
Free fall
Motion of a body subject only to gravity In classical mechanics, free fall is any motion of a body where gravity is the only force acting upon it. In the context of general relativity, where gravitation is reduced to a space-time curvature, a body in free fall has no force acting on it. An object in the technical sense of the term "free fall" may not necessarily be falling down in the usual sense of the term. An object moving upwards might not normally be considered to be falling, but if it is subject to only the force of gravity, it is said to be in free fall. The Moon is thus in free fall around the Earth, though its orbital speed keeps it in a very far orbit from the Earth's surface. In a roughly uniform gravitational field gravity acts on each part of a body approximately equally. When there are no other forces, such as the normal force exerted between a body (e.g. an astronaut in orbit) and its surrounding objects, it will result in the sensation of weightlessness, a condition that also occurs when the gravitational field is weak (such as when far away from any source of gravity). The term "free fall" is often used more loosely than in the strict sense defined above. Thus, falling through an atmosphere without a deployed parachute, or lifting device, is also often referred to as "free fall". The aerodynamic drag forces in such situations prevent them from producing full weightlessness, and thus a skydiver's "free fall" after reaching terminal velocity produces the sensation of the body's weight being supported on a cushion of air. History. In the Western world prior to the 16th century, it was generally assumed that the speed of a falling body would be proportional to its weight—that is, a 10 kg object was expected to fall ten times faster than an otherwise identical 1 kg object through the same medium. The ancient Greek philosopher Aristotle (384–322 BC) discussed falling objects in "Physics" (Book VII), one of the oldest books on mechanics (see Aristotelian physics). However, in the 6th century, John Philoponus challenged this argument and said that, by observation, two balls of very different weights will fall at nearly the same speed. In 12th-century Iraq, Abu'l-Barakāt al-Baghdādī gave an explanation for the gravitational acceleration of falling bodies. According to Shlomo Pines, al-Baghdādī's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]." Galileo Galilei. According to a tale that may be apocryphal, in 1589–1592 Galileo dropped two objects of unequal mass from the Leaning Tower of Pisa. Given the speed at which such a fall would occur, it is doubtful that Galileo could have extracted much information from this experiment. Most of his observations of falling bodies were really of bodies rolling down ramps. This slowed things down to the point where he was able to measure the time intervals with water clocks and his own pulse (stopwatches having not yet been invented). He repeated this "a full hundred times" until he had achieved "an accuracy such that the deviation between two observations never exceeded one-tenth of a pulse beat." In 1589–1592, Galileo wrote "De Motu Antiquiora", an unpublished manuscript on the motion of falling bodies. Examples. 
Examples of objects in free fall include: Technically, an object is in free fall even when moving upwards or instantaneously at rest at the top of its motion. If gravity is the only influence acting, then the acceleration is always downward and has the same magnitude for all bodies, commonly denoted formula_0. Since all objects fall at the same rate in the absence of other forces, objects and people will experience weightlessness in these situations. Examples of objects not in free-fall: The example of a falling skydiver who has not yet deployed a parachute is not considered free fall from a physics perspective, since they experience a drag force that equals their weight once they have achieved terminal velocity (see below). Near the surface of the Earth, an object in free fall in a vacuum will accelerate at approximately 9.8 m/s2, independent of its mass. With air resistance acting on an object that has been dropped, the object will eventually reach a terminal velocity, which is around 53 m/s (190 km/h or 118 mph) for a human skydiver. The terminal velocity depends on many factors including mass, drag coefficient, and relative surface area and will only be achieved if the fall is from sufficient altitude. A typical skydiver in a spread-eagle position will reach terminal velocity after about 12 seconds, during which time they will have fallen around 450 m (1,500 ft). Free fall was demonstrated on the Moon by astronaut David Scott on August 2, 1971. He simultaneously released a hammer and a feather from the same height above the Moon's surface. The hammer and the feather both fell at the same rate and hit the surface at the same time. This demonstrated Galileo's discovery that, in the absence of air resistance, all objects experience the same acceleration due to gravity. On the Moon, however, the gravitational acceleration is approximately 1.63 m/s2, or only about 1⁄6 that on Earth. Free fall in Newtonian mechanics. Uniform gravitational field without air resistance. This is the "textbook" case of the vertical motion of an object falling a small distance close to the surface of a planet. It is a good approximation in air as long as the force of gravity on the object is much greater than the force of air resistance, or equivalently the object's velocity is always much less than the terminal velocity (see below). formula_1 formula_2 where formula_3 is the initial vertical component of the velocity (m/s). formula_4 is the vertical component of the velocity at formula_5(m/s). formula_6 is the initial altitude (m). formula_7 is the altitude at formula_5(m). formula_5 is time elapsed (s). formula_8 is the acceleration due to gravity (9.81 m/s2 near the surface of the earth). If the initial velocity is zero, then the distance fallen from the initial position will grow as the square of the elapsed time. Moreover, because the odd numbers sum to the perfect squares, the distance fallen in successive time intervals grows as the odd numbers. This description of the behavior of falling bodies was given by Galileo. Uniform gravitational field with air resistance. 
This case, which applies to skydivers, parachutists or any body of mass, formula_9, and cross-sectional area, formula_10, with Reynolds number well above the critical Reynolds number, so that the air resistance is proportional to the square of the fall velocity, formula_11, has an equation of motion formula_12 where formula_13 is the air density and formula_14 is the drag coefficient, assumed to be constant although in general it will depend on the Reynolds number. Assuming an object falling from rest and no change in air density with altitude, the solution is: formula_15 where the terminal speed is given by formula_16 The object's speed versus time can be integrated over time to find the vertical position as a function of time: formula_17 Using the figure of 56 m/s for the terminal velocity of a human, one finds that after 10 seconds he will have fallen 348 metres and attained 94% of terminal velocity, and after 12 seconds he will have fallen 455 metres and will have attained 97% of terminal velocity (these figures are reproduced numerically in the sketch below). However, when the air density cannot be assumed to be constant, such as for objects falling from high altitude, the equation of motion becomes much more difficult to solve analytically and a numerical simulation of the motion is usually necessary. The figure shows the forces acting on meteoroids falling through the Earth's upper atmosphere. HALO jumps, including Joe Kittinger's and Felix Baumgartner's record jumps, also belong in this category. Inverse-square law gravitational field. It can be said that two objects in space orbiting each other in the absence of other forces are in free fall around each other, e.g. that the Moon or an artificial satellite "falls around" the Earth, or a planet "falls around" the Sun. Assuming spherical objects means that the equation of motion is governed by Newton's law of universal gravitation, with solutions to the gravitational two-body problem being elliptic orbits obeying Kepler's laws of planetary motion. This connection between falling objects close to the Earth and orbiting objects is best illustrated by the thought experiment, Newton's cannonball. The motion of two objects moving radially towards each other with no angular momentum can be considered a special case of an elliptical orbit of eccentricity "e" = 1 (radial elliptic trajectory). This allows one to compute the free-fall time for two point objects on a radial path. The solution of this equation of motion yields time as a function of separation: formula_18 where formula_19 is the time after the start of the fall, formula_20 is the distance between the centers of the bodies, formula_21 is the initial value of formula_20, and formula_22 is the standard gravitational parameter. Substituting formula_23 we get the free-fall time formula_24 The separation can be expressed explicitly as a function of time formula_25 where formula_26 is the quantile function of the Beta distribution, also known as the inverse function of the regularized incomplete beta function formula_27. This solution can also be represented exactly by the analytic power series formula_28 Evaluating this yields: formula_29 where formula_30 Free fall in general relativity. In general relativity, an object in free fall is subject to no force and is an inertial body moving along a geodesic. Far away from any sources of space-time curvature, where spacetime is flat, the Newtonian theory of free fall agrees with general relativity. 
Otherwise the two disagree; e.g., only general relativity can account for the precession of orbits, the orbital decay or inspiral of compact binaries due to gravitational waves, and the relativity of direction (geodetic precession and frame dragging). The experimental observation that all objects in free fall accelerate at the same rate, as noted by Galileo and then embodied in Newton's theory as the equality of gravitational and inertial masses, and later confirmed to high accuracy by modern forms of the Eötvös experiment, is the basis of the equivalence principle, from which basis Einstein's theory of general relativity initially took off. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
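Returning to the closed-form drag solution and the radial free-fall time above, the short sketch below reproduces the quoted skydiver figures (about 348 m and 94% of terminal speed after 10 s, about 455 m and 97% after 12 s, for a 56 m/s terminal speed) and evaluates the free-fall time for an illustrative Earth-Moon separation; g, G and the Earth mass are standard values assumed here.

```python
# Numerical check of the drag solution v(t) = v_inf tanh(g t / v_inf) and
# y0 - y = (v_inf^2 / g) ln cosh(g t / v_inf), plus the radial free-fall time
# t_ff = pi sqrt(y0^3 / (8 mu)) for an assumed Earth-Moon separation.
import numpy as np

g = 9.81          # m/s^2
v_inf = 56.0      # m/s, terminal speed of a typical skydiver (figure from the text)

def speed(t):
    return v_inf * np.tanh(g * t / v_inf)

def distance_fallen(t):
    return (v_inf**2 / g) * np.log(np.cosh(g * t / v_inf))

for t in (10.0, 12.0):
    print(f"t = {t:4.1f} s: fallen {distance_fallen(t):6.1f} m, "
          f"{100 * speed(t) / v_inf:4.1f}% of terminal speed")

# Radial free fall of two point masses released from rest at separation y0
# (illustrative numbers: standard G and Earth mass, roughly the Moon's distance).
G, M_earth = 6.674e-11, 5.972e24          # SI units
y0 = 384_400e3                            # m
t_ff = np.pi * np.sqrt(y0**3 / (8 * G * M_earth))
print(f"radial free-fall time from {y0/1e3:.0f} km: {t_ff/86400:.1f} days")
```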
[ { "math_id": 0, "text": "g" }, { "math_id": 1, "text": "v(t)=v_{0}-gt\\," }, { "math_id": 2, "text": "y(t)=v_{0}t+y_{0}-\\frac{1}{2}gt^2" }, { "math_id": 3, "text": "v_{0}\\," }, { "math_id": 4, "text": "v(t)\\," }, { "math_id": 5, "text": "t\\," }, { "math_id": 6, "text": "y_{0}\\," }, { "math_id": 7, "text": "y(t)\\," }, { "math_id": 8, "text": "g\\," }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "v" }, { "math_id": 12, "text": "m\\frac{\\mathrm{d}v}{\\mathrm{d}t}=mg - \\frac{1}{2} \\rho C_{\\mathrm{D}} A v^2 \\, ," }, { "math_id": 13, "text": "\\rho" }, { "math_id": 14, "text": "C_{\\mathrm{D}}" }, { "math_id": 15, "text": "v(t) = v_{\\infty}\\tanh\\left(\\frac{gt}{v_{\\infty}}\\right)," }, { "math_id": 16, "text": "v_{\\infty}=\\sqrt{\\frac{2mg}{\\rho C_D A}} \\, ." }, { "math_id": 17, "text": "y = y_0 - \\frac{v_{\\infty}^2}{g} \\ln \\cosh\\left(\\frac{gt}{v_\\infty}\\right)." }, { "math_id": 18, "text": "t(y)= \\sqrt{ \\frac{ {y_0}^3 }{2\\mu} } \\left(\\sqrt{\\frac{y}{y_0}\\left(1-\\frac{y}{y_0}\\right)} + \\arccos{\\sqrt{\\frac{y}{y_0}}}\n \\right)," }, { "math_id": 19, "text": "t" }, { "math_id": 20, "text": "y" }, { "math_id": 21, "text": "y_0" }, { "math_id": 22, "text": "\\mu = G(m_1 + m_2)" }, { "math_id": 23, "text": " y = 0" }, { "math_id": 24, "text": "t_{\\text{ff}}=\\pi\\sqrt{y^3_0/(8\\mu)}~." }, { "math_id": 25, "text": "y(t)=y_0~Q\\left(1-\\frac{t}{t_{\\text{ff}}};\\frac{3}{2},\\frac{1}{2}\\right) ~," }, { "math_id": 26, "text": "Q(x;\\alpha,\\beta)" }, { "math_id": 27, "text": "I_x(\\alpha,\\beta)" }, { "math_id": 28, "text": " y( t ) = \\sum_{n=1}^{ \\infty }\n\\left[\n \\lim_{ r \\to 0 } \\left(\n {\\frac{ x^{ n }}{ n! }}\n \\frac{\\mathrm{d}^{\\,n-1}}{\\mathrm{ d } r ^{\\,n-1}} \\left[\n r^n \\left( \\frac{ 7 }{ 2 } ( \\arcsin( \\sqrt{ r } ) - \\sqrt{ r - r^2 } ) \n \\right)^{ - \\frac{2}{3} n }\n \\right] \\right)\n \\right].\n" }, { "math_id": 29, "text": "y(t)=y_0 \\left( x - \\frac{1}{5} x^2 - \\frac{3}{175}x^3 \n - \\frac{23}{7875}x^4 - \\frac{1894}{3031875}x^5 - \\frac{3293}{21896875}x^6 - \\frac{2418092}{62077640625}x^7 - \\cdots \\right) \\ , \n" }, { "math_id": 30, "text": " x = \\left[\\frac{3}{2} \\left( \\frac{\\pi}{2}- t \\sqrt{ \\frac{2\\mu}{ {y_0}^3 } } \\right) \\right]^{2/3}. " } ]
https://en.wikipedia.org/wiki?curid=80825
80842
Analytic continuation
Extension of the domain of an analytic function (mathematics) In complex analysis, a branch of mathematics, analytic continuation is a technique to extend the domain of definition of a given analytic function. Analytic continuation often succeeds in defining further values of a function, for example in a new region where the infinite series representation which initially defined the function becomes divergent. The step-wise continuation technique may, however, come up against difficulties. These may have an essentially topological nature, leading to inconsistencies (defining more than one value). They may alternatively have to do with the presence of singularities. The case of several complex variables is rather different, since singularities then need not be isolated points, and its investigation was a major reason for the development of sheaf cohomology. Initial discussion. Suppose "f" is an analytic function defined on a non-empty open subset "U" of the complex plane formula_0. If "V" is a larger open subset of formula_0, containing "U", and "F" is an analytic function defined on "V" such that formula_1 then "F" is called an analytic continuation of "f". In other words, the restriction of "F" to "U" is the function "f" we started with. Analytic continuations are unique in the following sense: if "V" is the connected domain of two analytic functions "F"1 and "F"2 such that "U" is contained in "V" and for all "z" in "U" formula_2 then formula_3 on all of "V". This is because "F"1 − "F"2 is an analytic function which vanishes on the open, connected domain "U" of "f" and hence must vanish on its entire domain. This follows directly from the identity theorem for holomorphic functions. Applications. A common way to define functions in complex analysis proceeds by first specifying the function on a small domain only, and then extending it by analytic continuation. In practice, this continuation is often done by first establishing some functional equation on the small domain and then using this equation to extend the domain. Examples are the Riemann zeta function and the gamma function. The concept of a universal cover was first developed to define a natural domain for the analytic continuation of an analytic function. The idea of finding the maximal analytic continuation of a function in turn led to the development of the idea of Riemann surfaces. Analytic continuation is used in Riemannian manifolds, solutions of Einstein's equations. For example, the analytic continuation of Schwarzschild coordinates into Kruskal–Szekeres coordinates. Worked example. Begin with a particular analytic function formula_4. In this case, it is given by a power series centered at formula_5: formula_6 By the Cauchy–Hadamard theorem, its radius of convergence is 1. That is, formula_4 is defined and analytic on the open set formula_7 which has boundary formula_8. Indeed, the series diverges at formula_9. Pretend we don't know that formula_10, and focus on recentering the power series at a different point formula_11: formula_12 We'll calculate the formula_13's and determine whether this new power series converges in an open set formula_14 which is not contained in formula_15. If so, we will have analytically continued formula_4 to the region formula_16 which is strictly larger than formula_15. The distance from formula_17 to formula_18 is formula_19. Take formula_20; let formula_21 be the disk of radius formula_22 around formula_17; and let formula_23 be its boundary. Then formula_24. 
Using Cauchy's differentiation formula to calculate the new coefficients, one has formula_25 The last summation results from the kth derivative of the geometric series, which gives the formula formula_26 Then, formula_27 which has radius of convergence formula_28 around formula_29. If we choose formula_11 with formula_30, then formula_14 is not a subset of formula_15 and is actually larger in area than formula_15. The plot shows the result for formula_31 We can continue the process: select formula_32, recenter the power series at formula_33, and determine where the new power series converges. If the region contains points not in formula_16, then we will have analytically continued formula_4 even further. This particular formula_4 can be analytically continued to the whole punctured complex plane formula_34 In this particular case the obtained values of formula_35 are the same when the successive centers have a positive imaginary part or a negative imaginary part. This is not always the case; in particular this is not the case for the complex logarithm, the antiderivative of the above function. Formal definition of a germ. The power series defined below is generalized by the idea of a "germ". The general theory of analytic continuation and its generalizations is known as sheaf theory. Let formula_36 be a power series converging in the disk "D"_"r"("z"_0), "r" > 0, defined by formula_37. Note that without loss of generality, here and below, we will always assume that a maximal such "r" was chosen, even if that "r" is ∞. Also note that it would be equivalent to begin with an analytic function defined on some small open set. We say that the vector formula_38 is a "germ" of "f". The "base" "g"_0 of "g" is "z"_0, the "stem" of "g" is (α_0, α_1, α_2, ...) and the "top" "g"_1 of "g" is α_0. The top of "g" is the value of "f" at "z"_0. Any vector "g" = ("z"_0, α_0, α_1, ...) is a germ if it represents a power series of an analytic function around "z"_0 with some radius of convergence "r" > 0. Therefore, we can safely speak of the set of germs formula_39. The topology of the set of germs. Let "g" and "h" be germs. If formula_40 where "r" is the radius of convergence of "g" and if the power series defined by "g" and "h" specify identical functions on the intersection of the two domains, then we say that "h" is generated by (or compatible with) "g", and we write "g" ≥ "h". This compatibility condition is neither transitive, symmetric nor antisymmetric. If we extend the relation by transitivity, we obtain a symmetric relation, which is therefore also an equivalence relation on germs (but not an ordering). This extension by transitivity is one definition of analytic continuation. The equivalence relation will be denoted formula_41. We can define a topology on formula_39. Let "r" > 0, and let formula_42 The sets "U"_"r"("g"), for all "r" > 0 and formula_43 define a basis of open sets for the topology on formula_39. A connected component of formula_39 (i.e., an equivalence class) is called a "sheaf". We also note that the map defined by formula_44 where "r" is the radius of convergence of "g", is a chart. The set of such charts forms an atlas for formula_39, hence formula_39 is a Riemann surface. formula_39 is sometimes called the "universal analytic function". Examples of analytic continuation. formula_45 is a power series corresponding to the natural logarithm near "z" = 1. 
This power series can be turned into a germ formula_46 This germ has a radius of convergence of 1, and so there is a sheaf "S" corresponding to it. This is the sheaf of the logarithm function. The uniqueness theorem for analytic functions also extends to sheaves of analytic functions: if the sheaf of an analytic function contains the zero germ (i.e., the sheaf is uniformly zero in some neighborhood) then the entire sheaf is zero. Armed with this result, we can see that if we take any germ "g" of the sheaf "S" of the logarithm function, as described above, and turn it into a power series "f"("z") then this function will have the property that exp("f"("z")) = "z". If we had decided to use a version of the inverse function theorem for analytic functions, we could construct a wide variety of inverses for the exponential map, but we would discover that they are all represented by some germ in "S". In that sense, "S" is the "one true inverse" of the exponential map. In older literature, sheaves of analytic functions were called "multi-valued functions". See sheaf for the general concept. Natural boundary. Suppose that a power series has radius of convergence "r" and defines an analytic function "f" inside that disc. Consider points on the circle of convergence. A point for which there is a neighbourhood on which "f" has an analytic extension is "regular", otherwise "singular". The circle is a natural boundary if all its points are singular. More generally, we may apply the definition to any open connected domain on which "f" is analytic, and classify the points of the boundary of the domain as regular or singular: the domain boundary is then a natural boundary if all points are singular, in which case the domain is a "domain of holomorphy". Example I: A function with a natural boundary at zero (the prime zeta function). For formula_47 we define the so-called prime zeta function, formula_48, to be formula_49 This function is analogous to the summatory form of the Riemann zeta function when formula_47 in so much as it is the same summatory function as formula_50, except with indices restricted only to the prime numbers instead of taking the sum over all positive natural numbers. The prime zeta function has an analytic continuation to all complex "s" such that formula_51, a fact which follows from the expression of formula_48 by the logarithms of the Riemann zeta function as formula_52 Since formula_50 has a simple, non-removable pole at formula_53, it can then be seen that formula_48 has a simple pole at formula_54. Since the set of points formula_55 has accumulation point 0 (the limit of the sequence as formula_56), we can see that zero forms a natural boundary for formula_48. This implies that formula_48 has no analytic continuation for "s" left of (or at) zero, i.e., there is no continuation possible for formula_48 when formula_57. As a remark, this fact can be problematic if we are performing a complex contour integral over an interval whose real parts are symmetric about zero, say formula_58 for some formula_59, where the integrand is a function with denominator that depends on formula_48 in an essential way. Example II: A typical lacunary series (natural boundary as subsets of the unit circle). For integers formula_60, we define the lacunary series of order "c" by the power series expansion formula_61 Clearly, since formula_62 there is a functional equation for formula_63 for any "z" satisfying formula_64 given by formula_65. 
It is also not difficult to see that for any integer formula_66, we have another functional equation for formula_63 given by formula_67 For any positive natural numbers "c", the lacunary series function diverges at formula_68. We consider the question of analytic continuation of formula_63 to other complex "z" such that formula_69 As we shall see, for any formula_70, the function formula_63 diverges at the formula_71-th roots of unity. Hence, since the set formed by all such roots is dense on the boundary of the unit circle, there is no analytic continuation of formula_63 to complex "z" whose modulus exceeds one. The proof of this fact is generalized from a standard argument for the case where formula_72 Namely, for integers formula_70, let formula_73 where formula_74 denotes the open unit disk in the complex plane and formula_75, i.e., there are formula_76 distinct complex numbers "z" that lie on or inside the unit circle such that formula_77. Now the key part of the proof is to use the functional equation for formula_63 when formula_64 to show that formula_78 Thus for any arc on the boundary of the unit circle, there are an infinite number of points "z" within this arc such that formula_79. This condition is equivalent to saying that the circle formula_80 forms a natural boundary for the function formula_63 for any fixed choice of formula_81 Hence, there is no analytic continuation for these functions beyond the interior of the unit circle. Monodromy theorem. The monodromy theorem gives a sufficient condition for the existence of a "direct analytic continuation" (i.e., an extension of an analytic function to an analytic function on a bigger set). Suppose formula_82 is an open set and "f" an analytic function on "D". If "G" is a simply connected domain containing "D", such that "f" has an analytic continuation along every path in "G", starting from some fixed point "a" in "D", then "f" has a direct analytic continuation to "G". In the above language this means that if "G" is a simply connected domain, and "S" is a sheaf whose set of base points contains "G", then there exists an analytic function "f" on "G" whose germs belong to "S". Hadamard's gap theorem. For a power series formula_83 with formula_84 the circle of convergence is a natural boundary. Such a power series is called lacunary. This theorem has been substantially generalized by Eugen Fabry (see Fabry's gap theorem) and George Pólya. Pólya's theorem. Let formula_36 be a power series, then there exist "ε""k" ∈ {−1, 1} such that formula_85 has the convergence disc of "f" around "z"0 as a natural boundary. The proof of this theorem makes use of Hadamard's gap theorem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
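The recentering computation described earlier lends itself to a quick numerical check. The following Python sketch (not part of the article; the test point and number of terms are arbitrary choices) evaluates the recentered series for f(z) = 1/z about a = (3 + i)/2 at a point outside the original disk |z - 1| < 1 and compares it with the directly computed value.

```python
# Recentered Taylor coefficients for f(z) = 1/z about a: a_k = (-1)^k * a**(-k-1).
# The recentered series converges on |z - a| < |a|, which reaches points outside
# the original disk |z - 1| < 1.

def recentered_partial_sum(z, a, n_terms=80):
    """Partial sum of sum_k (-1)**k * a**(-k-1) * (z - a)**k."""
    return sum((-1) ** k * a ** (-k - 1) * (z - a) ** k for k in range(n_terms))

a = (3 + 1j) / 2      # the center used in the plot mentioned above
z = 2.2 + 0.9j        # |z - 1| = 1.5, so the original series centered at 1 diverges here

print(abs(z - 1), abs(z - a), abs(a))   # 1.5, ~0.81, ~1.58: z lies in the new disk only
print(recentered_partial_sum(z, a))     # approximately 1/z
print(1 / z)                            # direct value, for comparison
```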
[ { "math_id": 0, "text": "\\Complex" }, { "math_id": 1, "text": "F(z) = f(z) \\qquad \\forall z \\in U, " }, { "math_id": 2, "text": "F_1(z) = F_2(z) = f(z)," }, { "math_id": 3, "text": "F_1 = F_2" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "z=1" }, { "math_id": 6, "text": "f(z) = \\sum_{k=0}^\\infty (-1)^k (z-1)^k." }, { "math_id": 7, "text": "U = \\{|z-1|<1\\}" }, { "math_id": 8, "text": "\\partial U = \\{|z-1|=1\\}" }, { "math_id": 9, "text": "z=0 \\in \\partial U" }, { "math_id": 10, "text": "f(z)=1/z" }, { "math_id": 11, "text": "a \\in U" }, { "math_id": 12, "text": "f(z) = \\sum_{k=0}^\\infty a_k (z-a)^k." }, { "math_id": 13, "text": "a_k" }, { "math_id": 14, "text": "V" }, { "math_id": 15, "text": "U" }, { "math_id": 16, "text": "U \\cup V" }, { "math_id": 17, "text": "a" }, { "math_id": 18, "text": "\\partial U" }, { "math_id": 19, "text": "\\rho = 1 - |a-1| > 0" }, { "math_id": 20, "text": "0 < r < \\rho" }, { "math_id": 21, "text": "D" }, { "math_id": 22, "text": "r" }, { "math_id": 23, "text": "\\partial D" }, { "math_id": 24, "text": "D \\cup \\partial D \\subset U" }, { "math_id": 25, "text": "\\begin{align}\na_k &= \\frac{f^{(k)}(a)}{k!} \\\\\n&=\\frac{1}{2\\pi i} \\int_{\\partial D} \\frac{f(\\zeta) d \\zeta}{(\\zeta -a)^{k+1}} \\\\\n&=\\frac{1}{2\\pi i} \\int_{\\partial D} \\frac{\\sum_{n=0}^\\infty (-1)^n (\\zeta-1)^n d \\zeta}{(\\zeta -a)^{k+1}} \\\\\n&=\\frac{1}{2\\pi i} \\sum_{n=0}^\\infty (-1)^n \\int_{\\partial D} \\frac{(\\zeta-1)^n d\\zeta}{(\\zeta -a)^{k+1}} \\\\\n&=\\frac{1}{2\\pi i} \\sum_{n=0}^\\infty (-1)^n \\int_0^{2\\pi} \\frac{(a+re^{i \\theta}-1)^n rie^{i \\theta}d\\theta}{(re^{i \\theta})^{k+1}} \\\\\n&=\\frac{1}{2\\pi} \\sum_{n=0}^\\infty (-1)^n \\int_0^{2\\pi} \\frac{(a-1+re^{i \\theta})^n d\\theta}{(re^{i \\theta})^{k}}\n\\\\\n&=\\frac{1}{2\\pi} \\sum_{n=0}^\\infty (-1)^n \\int_0^{2\\pi} \\frac{\\sum_{m=0}^n \\binom{n}{m} (a-1)^{n-m} (re^{i \\theta})^m d\\theta}{(re^{i \\theta})^{k}} \n\\\\\n&=\\frac{1}{2\\pi} \\sum_{n=0}^\\infty (-1)^n \\sum_{m=0}^n \\binom{n}{m} (a-1)^{n-m} r^{m-k} \\int_0^{2\\pi} e^{i (m-k)\\theta} d\\theta \n\\\\\n&=\\frac{1}{2\\pi} \\sum_{n=k}^\\infty (-1)^n \\binom{n}{k} (a-1)^{n-k}\\int_0^{2\\pi} d\\theta \\\\\n&=\\sum_{n=k}^\\infty (-1)^n \\binom{n}{k} (a-1)^{n-k} \n\\\\\n&=(-1)^k \\sum_{m=0}^\\infty \\binom{m+k}{k} (1-a)^m \\\\\n&=(-1)^k a^{-k-1}\n\\end{align}." }, { "math_id": 26, "text": "\\frac 1{(1-x)^{k+1}} = \\sum_{m=0}^\\infty \\binom{m+k}k x^m." }, { "math_id": 27, "text": " \\begin{align}\nf(z) &= \\sum_{k=0}^\\infty a_k (z-a)^k \\\\\n&= \\sum_{k=0}^\\infty (-1)^k a^{-k-1} (z-a)^k \\\\\n&= \\frac{1}{a} \\sum_{k=0}^\\infty \\left ( 1 - \\frac{z}{a} \\right )^k \\\\\n&= \\frac{1}{a} \\frac{1}{1 - \\left(1 - \\frac{z}{a}\\right)} \\\\\n&= \\frac{1}{z} \\\\\n&= \\frac{1}{(z + a) - a}\n\\end{align}" }, { "math_id": 28, "text": "|a|" }, { "math_id": 29, "text": "0" }, { "math_id": 30, "text": "|a|>1" }, { "math_id": 31, "text": "a = \\tfrac{1}{2}(3+i)." }, { "math_id": 32, "text": "b \\in U \\cup V" }, { "math_id": 33, "text": "b" }, { "math_id": 34, "text": "\\Complex \\setminus \\{0\\}." 
}, { "math_id": 35, "text": "f(-1)" }, { "math_id": 36, "text": "f(z)=\\sum_{k=0}^\\infty \\alpha_k (z-z_0)^k" }, { "math_id": 37, "text": "D_r(z_0) = \\{z \\in \\Complex : |z - z_0| < r\\}" }, { "math_id": 38, "text": "g = (z_0, \\alpha_0, \\alpha_1, \\alpha_2, \\ldots) " }, { "math_id": 39, "text": "\\mathcal G" }, { "math_id": 40, "text": "|h_0-g_0|<r" }, { "math_id": 41, "text": "\\cong" }, { "math_id": 42, "text": "U_r(g) = \\{h \\in \\mathcal G : g \\ge h, |g_0 - h_0| < r\\}." }, { "math_id": 43, "text": "g\\in\\mathcal G" }, { "math_id": 44, "text": "\\phi_g(h) = h_0 : U_r(g) \\to \\Complex," }, { "math_id": 45, "text": "L(z) = \\sum_{k=1}^\\infin \\frac{(-1)^{k+1}}{k}(z-1)^k" }, { "math_id": 46, "text": " g=\\left(1,0,1,-\\frac 1 2, \\frac 1 3 , - \\frac 1 4 , \\frac 1 5 , - \\frac 1 6 , \\ldots\\right) " }, { "math_id": 47, "text": "\\Re(s) > 1" }, { "math_id": 48, "text": "P(s)" }, { "math_id": 49, "text": "P(s) := \\sum_{p\\ \\text{ prime}} p^{-s}." }, { "math_id": 50, "text": "\\zeta(s)" }, { "math_id": 51, "text": "0 < \\Re(s) < 1" }, { "math_id": 52, "text": "P(s) = \\sum_{n \\geq 1} \\mu(n)\\frac{\\log\\zeta(ns)}{n}." }, { "math_id": 53, "text": "s := 1" }, { "math_id": 54, "text": "s := \\tfrac{1}{k}, \\forall k \\in \\Z^{+}" }, { "math_id": 55, "text": "\\operatorname{Sing}_P := \\left\\{k^{-1} : k \\in \\Z^+\\right\\} = \\left \\{1, \\frac{1}{2}, \\frac{1}{3}, \\frac{1}{4},\\ldots \\right \\}" }, { "math_id": 56, "text": "k\\mapsto\\infty" }, { "math_id": 57, "text": "0 \\geq \\Re(s)" }, { "math_id": 58, "text": "I_F \\subseteq \\Complex \\ \\text{such that}\\ \\Re(s) \\in (-C, C), \\forall s \\in I_F" }, { "math_id": 59, "text": "C > 0" }, { "math_id": 60, "text": "c \\geq 2" }, { "math_id": 61, "text": "\\mathcal{L}_c(z) := \\sum_{n \\geq 1} z^{c^n}, |z| < 1." }, { "math_id": 62, "text": "c^{n+1} = c \\cdot c^{n}" }, { "math_id": 63, "text": "\\mathcal{L}_c(z)" }, { "math_id": 64, "text": "|z| < 1" }, { "math_id": 65, "text": "\\mathcal{L}_c(z) = z^{c} + \\mathcal{L}_c(z^c)" }, { "math_id": 66, "text": "m \\geq 1" }, { "math_id": 67, "text": "\\mathcal{L}_c(z) = \\sum_{i=0}^{m-1} z^{c^{i}} + \\mathcal{L}_c(z^{c^m}), \\forall |z| < 1." }, { "math_id": 68, "text": "z = 1" }, { "math_id": 69, "text": "|z| > 1." }, { "math_id": 70, "text": "n \\geq 1" }, { "math_id": 71, "text": "c^{n}" }, { "math_id": 72, "text": "c := 2." }, { "math_id": 73, "text": "\\mathcal{R}_{c,n} := \\left \\{z \\in \\mathbb{D} \\cup \\partial{\\mathbb{D}}: z^{c^n} = 1 \\right \\}," }, { "math_id": 74, "text": "\\mathbb{D}" }, { "math_id": 75, "text": "|\\mathcal{R}_{c,n} | = c^n" }, { "math_id": 76, "text": "c^n" }, { "math_id": 77, "text": "z^{c^n} = 1" }, { "math_id": 78, "text": "\\forall z \\in \\mathcal{R}_{c,n}, \\qquad \\mathcal{L}_c(z) = \\sum_{i=0}^{c^n-1} z^{c^i} + \\mathcal{L}_c(z^{c^n}) = \\sum_{i=0}^{c^n-1} z^{c^i} + \\mathcal{L}_c(1) = +\\infty." }, { "math_id": 79, "text": "\\mathcal{L}_c(z) = \\infty" }, { "math_id": 80, "text": "C_1 := \\{z: |z| = 1\\}" }, { "math_id": 81, "text": "c \\in \\Z \\quad c > 1." }, { "math_id": 82, "text": "D\\subset \\Complex" }, { "math_id": 83, "text": "f(z)=\\sum_{k=0}^\\infty a_k z^{n_k}" }, { "math_id": 84, "text": "\\liminf_{k\\to\\infty}\\frac{n_{k+1}}{n_k} > 1" }, { "math_id": 85, "text": "f(z)=\\sum_{k=0}^\\infty \\varepsilon_k\\alpha_k (z-z_0)^k" } ]
https://en.wikipedia.org/wiki?curid=80842
80857
Meromorphic function
Class of mathematical function In the mathematical field of complex analysis, a meromorphic function on an open subset "D" of the complex plane is a function that is holomorphic on all of "D" "except" for a set of isolated points, which are poles of the function. The term comes from the Greek "meros" (μέρος), meaning "part". Every meromorphic function on "D" can be expressed as the ratio between two holomorphic functions (with the denominator not constant 0) defined on "D": any pole must coincide with a zero of the denominator. Heuristic description. Intuitively, a meromorphic function is a ratio of two well-behaved (holomorphic) functions. Such a function will still be well-behaved, except possibly at the points where the denominator of the fraction is zero. If the denominator has a zero at "z" and the numerator does not, then the value of the function will approach infinity; if both parts have a zero at "z", then one must compare the multiplicity of these zeros. From an algebraic point of view, if the function's domain is connected, then the set of meromorphic functions is the field of fractions of the integral domain of the set of holomorphic functions. This is analogous to the relationship between the rational numbers and the integers. Prior, alternate use. Both the field of study wherein the term is used and the precise meaning of the term changed in the 20th century. In the 1930s, in group theory, a "meromorphic function" (or "meromorph") was a function from a group "G" into itself that preserved the product on the group. The image of this function was called an "automorphism" of "G". Similarly, a "homomorphic function" (or "homomorph") was a function between groups that preserved the product, while a "homomorphism" was the image of a homomorph. This form of the term is now obsolete, and the related term "meromorph" is no longer used in group theory. The term "endomorphism" is now used for the function itself, with no special name given to the image of the function. A meromorphic function is not necessarily an endomorphism, since the complex points at its poles are not in its domain, but may be in its range. Properties. Since poles are isolated, there are at most countably many for a meromorphic function. The set of poles can be infinite, as exemplified by the function formula_0 By using analytic continuation to eliminate removable singularities, meromorphic functions can be added, subtracted, multiplied, and the quotient formula_1 can be formed unless formula_2 on a connected component of "D". Thus, if "D" is connected, the meromorphic functions form a field, in fact a field extension of the complex numbers. Higher dimensions. In several complex variables, a meromorphic function is defined to be locally a quotient of two holomorphic functions. For example, formula_3 is a meromorphic function on the two-dimensional complex affine space. Here it is no longer true that every meromorphic function can be regarded as a holomorphic function with values in the Riemann sphere: There is a set of "indeterminacy" of codimension two (in the given example this set consists of the origin formula_4). Unlike in dimension one, in higher dimensions there do exist compact complex manifolds on which there are no non-constant meromorphic functions, for example, most complex tori. On Riemann surfaces. On a Riemann surface, every point admits an open neighborhood which is biholomorphic to an open subset of the complex plane. 
Thereby the notion of a meromorphic function can be defined for every Riemann surface. When "D" is the entire Riemann sphere, the field of meromorphic functions is simply the field of rational functions in one variable over the complex field, since one can prove that any meromorphic function on the sphere is rational. (This is a special case of the so-called GAGA principle.) For every Riemann surface, a meromorphic function is the same as a holomorphic function that maps to the Riemann sphere and which is not the constant function equal to ∞. The poles correspond to those complex numbers which are mapped to ∞. On a non-compact Riemann surface, every meromorphic function can be realized as a quotient of two (globally defined) holomorphic functions. In contrast, on a compact Riemann surface, every holomorphic function is constant, while there always exist non-constant meromorphic functions. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
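A short symbolic check can make the "ratio of holomorphic functions" picture concrete. The following sympy sketch (illustrative only; it uses the csc z example from the text and the quotient exp(z)/z) confirms that the poles sit at the zeros of the denominators and computes the corresponding residues.

```python
import sympy as sp

z = sp.symbols('z')

csc = 1 / sp.sin(z)                    # meromorphic on C with poles at the zeros of sin z
print(sp.series(csc, z, 0, 4))         # 1/z + z/6 + 7*z**3/360 + O(z**4): a simple pole at 0
print(sp.residue(csc, z, 0))           # 1
print(sp.residue(csc, z, sp.pi))       # -1, the next pole along the real axis

print(sp.residue(sp.exp(z) / z, z, 0)) # 1: the quotient exp(z)/z has a simple pole at 0
```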
[ { "math_id": 0, "text": "f(z) = \\csc z = \\frac{1}{\\sin z}." }, { "math_id": 1, "text": "f/g" }, { "math_id": 2, "text": "g(z) = 0" }, { "math_id": 3, "text": "f(z_1, z_2) = z_1 / z_2" }, { "math_id": 4, "text": "(0, 0)" }, { "math_id": 5, "text": " f(z) = \\frac{z^3 - 2z + 10}{z^5 + 3z - 1}, " }, { "math_id": 6, "text": " f(z) = \\frac{e^z}{z} \\quad\\text{and}\\quad f(z) = \\frac{\\sin{z}}{(z-1)^2} " }, { "math_id": 7, "text": " f(z) = e^\\frac{1}{z} " }, { "math_id": 8, "text": "\\mathbb{C} \\setminus \\{0\\}" }, { "math_id": 9, "text": " f(z) = \\ln(z) " }, { "math_id": 10, "text": " f(z) = \\csc\\frac{1}{z} = \\frac1{\\sin\\left(\\frac{1}{z}\\right)} " }, { "math_id": 11, "text": "z = 0" }, { "math_id": 12, "text": " f(z) = \\sin \\frac 1 z " } ]
https://en.wikipedia.org/wiki?curid=80857
80862
Essential singularity
Location around which a function displays irregular behavior In complex analysis, an essential singularity of a function is a "severe" singularity near which the function exhibits striking behavior. The category "essential singularity" is a "left-over" or default group of isolated singularities that are especially unmanageable: by definition they fit into neither of the other two categories of singularity that may be dealt with in some manner – removable singularities and poles. In practice, some authors also include non-isolated singularities; such singularities do not have a residue. Formal description. Consider an open subset formula_0 of the complex plane formula_1. Let formula_2 be an element of formula_0, and formula_3 a holomorphic function. The point formula_2 is called an "essential singularity" of the function formula_4 if the singularity is neither a pole nor a removable singularity. For example, the function formula_5 has an essential singularity at formula_6. Alternative descriptions. Let formula_2 be a complex number, and assume that formula_7 is not defined at formula_2 but is analytic in some region formula_0 of the complex plane, and that every open neighbourhood of formula_2 has non-empty intersection with formula_0. If both formula_8 and formula_9 exist, then formula_2 is a "removable singularity" of both formula_4 and formula_10. If formula_8 exists but formula_9 does not exist (in fact formula_11), then formula_2 is a "zero" of formula_4 and a "pole" of formula_10. Similarly, if formula_8 does not exist (in fact formula_12) but formula_9 exists, then formula_2 is a "pole" of formula_4 and a "zero" of formula_10. If neither formula_8 nor formula_9 exists, then formula_2 is an essential singularity of both formula_4 and formula_10. Another way to characterize an essential singularity is that the Laurent series of formula_4 at the point formula_2 has infinitely many negative degree terms (i.e., the principal part of the Laurent series is an infinite sum). A related definition is that if there is a point formula_2 for which no derivative of formula_13 converges to a limit as formula_14 tends to formula_2, then formula_2 is an essential singularity of formula_4. On a Riemann sphere with a point at infinity, formula_15, the function formula_16 has an essential singularity at that point if and only if formula_17 has an essential singularity at 0: i.e. neither formula_18 nor formula_19 exists. The Riemann zeta function on the Riemann sphere has only one essential singularity, at formula_15. Indeed, every meromorphic function other than a rational function has a unique essential singularity at formula_15. The behavior of holomorphic functions near their essential singularities is described by the Casorati–Weierstrass theorem and by the considerably stronger Picard's great theorem. The latter says that in every neighborhood of an essential singularity formula_2, the function formula_4 takes on "every" complex value, except possibly one, infinitely many times. (The exception is necessary; for example, the function formula_20 never takes on the value 0.) References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
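Picard's great theorem, mentioned above for exp(1/z), can be made concrete numerically. The sketch below (illustrative only; the target value w is an arbitrary nonzero choice) exhibits points arbitrarily close to the essential singularity at 0 where exp(1/z) takes the prescribed value w.

```python
import cmath

w = 2.0 - 3.0j                      # any nonzero target value

for k in (1, 10, 100, 1000):
    z_k = 1 / (cmath.log(w) + 2j * cmath.pi * k)   # exp(1/z_k) = w by construction
    print(abs(z_k), cmath.exp(1 / z_k))            # |z_k| -> 0 while the value stays w
```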
[ { "math_id": 0, "text": "U" }, { "math_id": 1, "text": "\\mathbb{C}" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "f\\colon U\\setminus\\{a\\}\\to \\mathbb{C}" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "f(z)=e^{1/z}" }, { "math_id": 6, "text": "z=0" }, { "math_id": 7, "text": "f(z)" }, { "math_id": 8, "text": "\\lim_{z \\to a}f(z)" }, { "math_id": 9, "text": "\\lim_{z \\to a}\\frac{1}{f(z)}" }, { "math_id": 10, "text": "\\frac{1}{f}" }, { "math_id": 11, "text": "\\lim_{z\\to a}|1/f(z)|=\\infty" }, { "math_id": 12, "text": "\\lim_{z\\to a}|f(z)|=\\infty" }, { "math_id": 13, "text": "f(z)(z-a)^n" }, { "math_id": 14, "text": "z" }, { "math_id": 15, "text": "\\infty_\\mathbb{C}" }, { "math_id": 16, "text": "{f(z)}" }, { "math_id": 17, "text": "{f(1/z)}" }, { "math_id": 18, "text": "\\lim_{z \\to 0}{f(1/z)}" }, { "math_id": 19, "text": "\\lim_{z \\to 0}\\frac{1}{f(1/z)}" }, { "math_id": 20, "text": "\\exp(1/z)" } ]
https://en.wikipedia.org/wiki?curid=80862
808716
Prouhet–Thue–Morse constant
In mathematics, the Prouhet–Thue–Morse constant, named for Eugène Prouhet, Axel Thue, and Marston Morse, is the number—denoted by τ—whose binary expansion 0.01101001100101101001011001101001... is given by the Prouhet–Thue–Morse sequence. That is, formula_0 where "tn" is the "n"th element of the Prouhet–Thue–Morse sequence. Other representations. The Prouhet–Thue–Morse constant can also be expressed, without using "tn", as an infinite product, formula_1 This formula is obtained by substituting "x" = 1/2 into the generating series for "tn", formula_2 The continued fraction expansion of the constant is [0; 2, 2, 2, 1, 4, 3, 5, 2, 1, 4, 2, 1, 5, 44, 1, 4, 1, 2, 4, 1, …] (sequence in the OEIS) Yann Bugeaud and Martine Queffélec showed that infinitely many partial quotients of this continued fraction are 4 or 5, and infinitely many partial quotients are greater than or equal to 50. Transcendence. The Prouhet–Thue–Morse constant was shown to be transcendental by Kurt Mahler in 1929. He also showed that the number formula_3 is transcendental for any algebraic number α, where 0 &lt; |"α"| &lt; 1. Yann Bugeaud proved that the Prouhet–Thue–Morse constant has an irrationality measure of 2. Appearances. The Prouhet–Thue–Morse constant appears in probability. If a language "L" over {0, 1} is chosen at random, by flipping a fair coin to decide whether each word "w" is in "L", the probability that it contains at least one word for each possible length is formula_4
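The two expressions above are easy to cross-check numerically. The following Python sketch (illustrative only; the truncation lengths are arbitrary) computes τ from the Thue–Morse digits and from the infinite product, and also the probability 2 − 4τ quoted in the Appearances section.

```python
from math import prod

def t(n):
    """n-th Thue-Morse digit: parity of the number of 1s in the binary expansion of n."""
    return bin(n).count('1') % 2

tau_sum = sum(t(n) / 2 ** (n + 1) for n in range(200))
tau_prod = (2 - prod(1 - 2.0 ** -(2 ** n) for n in range(7))) / 4

print(tau_sum)           # 0.4124540336401...
print(tau_prod)          # agrees with the digit sum to double precision
print(2 - 4 * tau_sum)   # ~0.35018386544..., the probability from the Appearances section
```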
[ { "math_id": 0, "text": " \\tau = \\sum_{n=0}^{\\infty} \\frac{t_n}{2^{n+1}} = 0.412454033640 \\ldots " }, { "math_id": 1, "text": " \\tau = \\frac{1}{4}\\left[2-\\prod_{n=0}^{\\infty}\\left(1-\\frac{1}{2^{2^n}}\\right)\\right] " }, { "math_id": 2, "text": " F(x) = \\sum_{n=0}^{\\infty} (-1)^{t_n} x^n = \\prod_{n=0}^{\\infty} ( 1 - x^{2^n} ) " }, { "math_id": 3, "text": "\\sum_{i=0}^{\\infty} t_n \\, \\alpha^n" }, { "math_id": 4, "text": " p = \\prod_{n=0}^{\\infty}\\left(1-\\frac{1}{2^{2^n}}\\right) = \\sum_{n=0}^{\\infty} \\frac{(-1)^{t_n}}{2^{n+1}} = 2 - 4 \\tau = 0.35018386544\\ldots" } ]
https://en.wikipedia.org/wiki?curid=808716
8087219
65,537
Natural number 65537 is the integer after 65536 and before 65538. In mathematics. 65537 is the largest known prime number of the form formula_0 (formula_1). Therefore, a regular polygon with 65537 sides is constructible with compass and unmarked straightedge. Johann Gustav Hermes gave the first explicit construction of this polygon. In number theory, primes of this form are known as Fermat primes, named after the mathematician Pierre de Fermat. The only known prime Fermat numbers are formula_2 formula_3 formula_4 formula_5 formula_6 In 1732, Leonhard Euler found that the next Fermat number is composite: formula_7 In 1880, Fortuné Landry showed that formula_8 65537 is also the 17th Jacobsthal–Lucas number, and currently the largest known integer "n" for which the number formula_9 is a probable prime. Applications. 65537 is commonly used as a public exponent in the RSA cryptosystem. Because it is the Fermat number F"n" = 22"n" + 1 with "n" = 4, the common shorthand is "F4" or "F4". This value was used in RSA mainly for historical reasons; early raw RSA implementations (without proper padding) were vulnerable to very small exponents, while use of high exponents was computationally expensive with no advantage to security (assuming proper padding). 65537 is also used as the modulus in some Lehmer random number generators, such as the one used by ZX Spectrum, which ensures that any seed value will be coprime to it (vital to ensure the maximum period) while also allowing efficient reduction by the modulus using a bit shift and subtract. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
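A few of the statements above can be verified directly in Python. The sketch below (illustrative only; sympy.isprime is used merely as a convenient primality test, and the RSA parameters are toy values far too small for real use) checks the Fermat primes, Euler's factorisation of F5, the two-bit binary pattern that makes e = 65537 cheap to exponentiate, and a round trip with that exponent.

```python
from sympy import isprime

fermat = [2 ** (2 ** n) + 1 for n in range(6)]
print(fermat)                             # [3, 5, 17, 257, 65537, 4294967297]
print([isprime(F) for F in fermat])       # [True, True, True, True, True, False]
print(4294967297 == 641 * 6700417)        # True: Euler's factorisation of F_5

print(bin(65537), bin(65537).count('1'))  # '0b10000000000000001', 2: 16 squarings + 1 multiply

# toy RSA round trip with e = 65537
p, q = 61, 53
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
m = 42
print(pow(pow(m, e, n), d, n) == m)       # True: decryption recovers the message
```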
[ { "math_id": 0, "text": "2^{2^{n}} +1" }, { "math_id": 1, "text": "n = 4" }, { "math_id": 2, "text": "2^{2^{0}} + 1 = 2^{1} + 1 = 3," }, { "math_id": 3, "text": "2^{2^{1}} + 1= 2^{2} +1 = 5," }, { "math_id": 4, "text": "2^{2^{2}} + 1 = 2^{4} +1 = 17," }, { "math_id": 5, "text": "2^{2^{3}} + 1= 2^{8} + 1= 257," }, { "math_id": 6, "text": "2^{2^{4}} + 1 = 2^{16} + 1 = 65537." }, { "math_id": 7, "text": "2^{2^{5}} + 1 = 2^{32} + 1 = 4294967297 = 641 \\times 6700417" }, { "math_id": 8, "text": "2^{2^{6}} + 1 = 2^{64} + 1 = 274177 \\times 67280421310721" }, { "math_id": 9, "text": "10^{n} + 27" } ]
https://en.wikipedia.org/wiki?curid=8087219
8088700
Circular points at infinity
In projective geometry, the circular points at infinity (also called cyclic points or isotropic points) are two special points at infinity in the complex projective plane that are contained in the complexification of every real circle. Coordinates. A point of the complex projective plane may be described in terms of homogeneous coordinates, being a triple of complex numbers ("x" : "y" : "z"), where two triples describe the same point of the plane when the coordinates of one triple are the same as those of the other aside from being multiplied by the same nonzero factor. In this system, the points at infinity may be chosen as those whose "z"-coordinate is zero. The two circular points at infinity are two of these, usually taken to be those with homogeneous coordinates (1 : i : 0) and (1 : −i : 0). Trilinear coordinates. Let "A". "B". "C" be the measures of the vertex angles of the reference triangle ABC. Then the trilinear coordinates of the circular points at infinity in the plane of the reference triangle are as given below: formula_0 or, equivalently, formula_1 or, again equivalently, formula_2 where formula_3. Complexified circles. A real circle, defined by its center point ("x"0,"y"0) and radius "r" (all three of which are real numbers) may be described as the set of real solutions to the equation formula_4 Converting this into a homogeneous equation and taking the set of all complex-number solutions gives the complexification of the circle. The two circular points have their name because they lie on the complexification of every real circle. More generally, both points satisfy the homogeneous equations of the type formula_5 The case where the coefficients are all real gives the equation of a general circle (of the real projective plane). In general, an algebraic curve that passes through these two points is called circular. Additional properties. The circular points at infinity are the points at infinity of the isotropic lines. They are invariant under translations and rotations of the plane. The concept of angle can be defined using the circular points, natural logarithm and cross-ratio: The angle between two lines is a certain multiple of the logarithm of the cross-ratio of the pencil formed by the two lines and the lines joining their intersection to the circular points. Sommerville configures two lines on the origin as formula_6 Denoting the circular points as ω and ω′, he obtains the cross ratio formula_7 so that formula_8 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
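The defining property of the two circular points can be checked symbolically. The following sympy sketch (illustrative only) substitutes (1 : i : 0) and (1 : −i : 0) into the general homogeneous circle equation with arbitrary coefficients and confirms that the equation is satisfied identically.

```python
import sympy as sp

A, B1, B2, C = sp.symbols('A B1 B2 C')
x, y, z = sp.symbols('x y z')

circle = A * (x**2 + y**2) + 2 * B1 * x * z + 2 * B2 * y * z - C * z**2

for point in [(1, sp.I, 0), (1, -sp.I, 0)]:
    value = circle.subs(dict(zip((x, y, z), point)))
    print(sp.simplify(value))   # 0 in both cases, for arbitrary A, B1, B2, C
```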
[ { "math_id": 0, "text": "-1 : \\cos C - i\\sin C : \\cos B + i\\sin B,\\qquad -1 : \\cos C + i\\sin C : \\cos B - i\\sin B" }, { "math_id": 1, "text": "\\cos C + i\\sin C : -1 :\\cos A - i\\sin A, \\qquad\\cos C - i\\sin C : -1 :\\cos A + i\\sin A" }, { "math_id": 2, "text": "\\cos B + i\\sin B : \\cos A - i\\sin A : -1, \\qquad \\cos B-i\\sin B : \\cos A+i\\sin A: -1," }, { "math_id": 3, "text": "i=\\sqrt{-1}" }, { "math_id": 4, "text": "(x-x_0)^2+(y-y_0)^2=r^2." }, { "math_id": 5, "text": "Ax^2 + Ay^2 + 2B_1xz + 2B_2yz - Cz^2 = 0. " }, { "math_id": 6, "text": "u : y = x \\tan \\theta, \\quad u' : y = x \\tan \\theta '." }, { "math_id": 7, "text": "(u u' , \\omega \\omega ') = \\frac{\\tan \\theta - i}{\\tan \\theta + i} \\div \\frac{\\tan \\theta ' - i}{\\tan \\theta ' + i} ," }, { "math_id": 8, "text": "\\phi = \\theta ' - \\theta = \\tfrac{i}{2} \\log (u u', \\omega \\omega ') ." } ]
https://en.wikipedia.org/wiki?curid=8088700
8088814
Isotropic line
In the geometry of quadratic forms, an isotropic line or null line is a line for which the quadratic form applied to the displacement vector between any pair of its points is zero. An isotropic line occurs only with an isotropic quadratic form, and never with a definite quadratic form. Using complex geometry, Edmond Laguerre first suggested the existence of two isotropic lines through the point ("α", "β") that depend on the imaginary unit "i": First system: formula_0 Second system: formula_1 Laguerre then interpreted these lines as geodesics: An essential property of isotropic lines, and which can be used to define them, is the following: the distance between any two points of an isotropic line "situated at a finite distance in the plane" is zero. In other terms, these lines satisfy the differential equation d"s"2 = 0. On an arbitrary surface one can study curves that satisfy this differential equation; these curves are the geodesic lines of the surface, and we also call them "isotropic lines". In the complex projective plane, points are represented by homogeneous coordinates formula_2 and lines by homogeneous coordinates formula_3. An isotropic line in the complex projective plane satisfies the equation: formula_4 In terms of the affine subspace "x"3 = 1, an isotropic line through the origin is formula_5 In projective geometry, the isotropic lines are the ones passing through the circular points at infinity. In the real orthogonal geometry of Emil Artin, isotropic lines occur in pairs: A non-singular plane which contains an isotropic vector shall be called a hyperbolic plane. It can always be spanned by a pair "N, M" of vectors which satisfy formula_6 We shall call any such ordered pair "N, M" a hyperbolic pair. If "V" is a non-singular plane with orthogonal geometry and "N" ≠ 0 is an isotropic vector of "V", then there exists precisely one "M" in "V" such that "N, M" is a hyperbolic pair. The vectors "x N" and "y M" are then the only isotropic vectors of "V". Relativity. Isotropic lines have been used in cosmological writing to carry light. For example, in a mathematical encyclopedia, light consists of photons: "The worldline of a zero rest mass (such as a non-quantum model of a photon and other elementary particles of mass zero) is an isotropic line." For isotropic lines through the origin, a particular point is a null vector, and the collection of all such isotropic lines forms the light cone at the origin. Élie Cartan expanded the concept of isotropic lines to multivectors in his book on spinors in three dimensions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
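Laguerre's description can be checked with a one-line computation. The sympy sketch below (illustrative only) parametrises two points of the first-system line y − β = i(x − α) and confirms that the quadratic form x² + y² applied to their displacement vanishes identically.

```python
import sympy as sp

alpha, beta, s, t = sp.symbols('alpha beta s t')

p1 = (alpha + s, beta + sp.I * s)   # two points of the line y - beta = i*(x - alpha)
p2 = (alpha + t, beta + sp.I * t)

dx, dy = p1[0] - p2[0], p1[1] - p2[1]
print(sp.expand(dx**2 + dy**2))     # 0: zero "squared distance" for every s and t
```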
[ { "math_id": 0, "text": "(y - \\beta) = (x - \\alpha) i," }, { "math_id": 1, "text": "(y - \\beta) = -i (x - \\alpha) ." }, { "math_id": 2, "text": "(x_1, x_2, x_3)" }, { "math_id": 3, "text": "(a_1, a_2, a_3)" }, { "math_id": 4, "text": "a_3(x_2 \\pm i x_1) = (a_2 \\pm i a_1) x_2 ." }, { "math_id": 5, "text": "x_2 = \\pm i x_1 ." }, { "math_id": 6, "text": "N^2 \\ =\\ M^2\\ =\\ 0, \\quad NM\\ =\\ 1\\ ." } ]
https://en.wikipedia.org/wiki?curid=8088814
8088912
Imaginary line (mathematics)
Straight line that only contains one real point In complex geometry, an imaginary line is a straight line that only contains one real point. It can be proven that this point is the intersection point with the conjugated line. It is a special case of an imaginary curve. An imaginary line is found in the complex projective plane P2(C) where points are represented by three homogeneous coordinates formula_0 Boyd Patterson described the lines in this plane: The locus of points whose coordinates satisfy a homogeneous linear equation with complex coefficients formula_1 is a straight line and the line is "real" or "imaginary" according as the coefficients of its equation are or are not proportional to three real numbers. Felix Klein described imaginary geometrical structures: "We will characterize a geometric structure as imaginary if its coordinates are not all real.: According to Hatton: The locus of the double points (imaginary) of the overlapping involutions in which an overlapping involution pencil (real) is cut by real transversals is a pair of imaginary straight lines. Hatton continues, Hence it follows that an imaginary straight line is determined by an imaginary point, which is a double point of an involution, and a real point, the vertex of the involution pencil. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
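A concrete instance may help. The sketch below (illustrative only; the coefficient choice (1, i, 0) is an arbitrary example) shows a line with complex coefficients that contains exactly one real point, and that this point also lies on the conjugate line, as stated above.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

line = x1 + sp.I * x2        # a1*x1 + a2*x2 + a3*x3 with (a1, a2, a3) = (1, i, 0)
conj_line = x1 - sp.I * x2   # the conjugated line

# a real point must satisfy both the real and imaginary parts of the equation
print(sp.solve([sp.re(line), sp.im(line)], [x1, x2]))   # {x1: 0, x2: 0}: the point (0 : 0 : 1)
print(conj_line.subs({x1: 0, x2: 0}))                   # 0: that real point lies on the conjugate line
```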
[ { "math_id": 0, "text": "(x_1,\\ x_2,\\ x_3),\\quad x_i \\isin C ." }, { "math_id": 1, "text": " a_1\\ x_1 +\\ a_2\\ x_2 \\ + a_3\\ x_3 \\ =\\ 0" } ]
https://en.wikipedia.org/wiki?curid=8088912
8088939
Real point
In geometry, a real point is a point in the complex projective plane with homogeneous coordinates ("x","y","z") for which there exists a nonzero complex number "λ" such that "λx", "λy", and "λz" are all real numbers. This definition can be widened to a complex projective space of arbitrary finite dimension as follows: formula_0 are the homogeneous coordinates of a real point if there exists a nonzero complex number "λ" such that the coordinates of formula_1 are all real. A point which is not real is called an imaginary point. Context. Geometries that are specializations of real projective geometry, such as Euclidean geometry, elliptic geometry or conformal geometry may be complexified, thus embedding the points of the geometry in a complex projective space, but retaining the identity of the original real space as special. Lines, planes etc. are expanded to the lines, etc. of the complex projective space. As with the inclusion of points at infinity and complexification of real polynomials, this allows some theorems to be stated more simply without exceptions and for a more regular algebraic analysis of the geometry. Viewed in terms of homogeneous coordinates, a real vector space of homogeneous coordinates of the original geometry is complexified. A point of the original geometric space is defined by an equivalence class of homogeneous vectors of the form "λu", where "λ" is an nonzero complex value and "u" is a real vector. A point of this form (and hence belongs to the original real space) is called a "real point", whereas a point that has been added through the complexification and thus does not have this form is called an "imaginary point". Real subspace. A subspace of a projective space is "real" if it is spanned by real points. Every imaginary point belongs to exactly one real line, the line through the point and its complex conjugate. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
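The definition can be illustrated with a small computation. In the sketch below (illustrative only; the coordinate triples are arbitrary examples) one triple of complex coordinates is rescaled to a real triple, so it describes a real point, while no rescaling can do the same for (1 : i : 0).

```python
import sympy as sp

u = sp.Matrix([1 + sp.I, 2 + 2 * sp.I, 3 + 3 * sp.I])
lam = 1 / (1 + sp.I)              # a nonzero complex factor
print(sp.simplify(lam * u).T)     # Matrix([[1, 2, 3]]): all real, so this is a real point

# For (1 : i : 0) no such factor exists: making the first coordinate real forces
# lambda to be real, but then lambda*i is purely imaginary (or zero), so the
# coordinates can never all be real at once; (1 : i : 0) is an imaginary point.
```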
[ { "math_id": 0, "text": " (u_1, u_2, \\ldots, u_n)" }, { "math_id": 1, "text": " (\\lambda u_1, \\lambda u_2, \\ldots, \\lambda u_n)" } ]
https://en.wikipedia.org/wiki?curid=8088939
809059
Quantity adjustment
Concept in economics relating changes in supply to changes in price and vice versa In economics, quantity adjustment is the process by which a market surplus leads to a cut-back in the quantity supplied or a market shortage causes an increase in supplied quantity. It is one possible result of supply and demand disequilibrium in a market. Quantity adjustment is complementary to pricing. In the textbook story, favored by the followers of Léon Walras, if the quantity demanded does not equal the quantity supplied in a market, "price adjustment" is the rule: if there is a market "surplus" or glut (excess supply), prices fall, ending the glut, while a "shortage" (excess demand) causes price to rise. A simple model for price adjustment is the "Evans price adjustment model", which proposes the differential equation: formula_0 This says that the rate of change of the price (P) is proportional to the difference between the quantity demanded (QD) and the quantity supplied (QS). However, instead of price adjustment — or, more likely, "simultaneously" with price adjustment — quantities may adjust: a market surplus leads to a cut-back in the quantity supplied, while a shortage causes a cut-back in the quantity demanded. The "short side" of the market dominates, with limited quantity demanded constraining supply in the first case and limited quantity supplied constraining demand in the second. Economist Alfred Marshall saw market adjustment in quantity-adjustment terms in the short run. During a given "market day", the amount of goods on the market was "given" -- but it adjusts in the short run, a longer period: if the "supply price" (the price suppliers were willing to accept) was below the "demand price" (what purchasers were willing to pay), the quantity in the market would rise. If the supply price exceeded the demand price, on the other hand, the quantity on the market would fall. Marshallian quantity adjustment is described as follows: formula_1 This says that the rate of change of the quantity supplied is proportional to the difference between the demand price (DP) and the supply price (SP). Quantity adjustment contrasts with the tradition of Léon Walras and general equilibrium. For Walras, (ideal) markets operated "as if" there were an Auctioneer who called out prices and asked for quantities supplied and demanded. Prices were then varied (in a process called "tatonnement" or groping) until the market "cleared", with each quantity demanded equal to the corresponding quantity supplied. In this pure theory, no actual trading was allowed until the market-clearing price was determined. In the Walrasian system, only price adjustment operated to equate the quantity supplied with the quantity demanded.
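The Evans price-adjustment equation above is easy to simulate. The Python sketch below (illustrative only; the linear demand and supply curves and all parameter values are arbitrary choices, not from the article) integrates dP/dt = k(QD − QS) with Euler steps and shows the price converging to the market-clearing level.

```python
def simulate(p0=1.0, k=0.5, a=10.0, b=1.0, c=2.0, d=1.0, dt=0.01, steps=2000):
    """Euler integration of dP/dt = k*(QD - QS) with QD = a - b*P and QS = c + d*P."""
    p = p0
    for _ in range(steps):
        qd = a - b * p
        qs = c + d * p
        p += dt * k * (qd - qs)
    return p

print(simulate())                   # approaches the market-clearing price
print((10.0 - 2.0) / (1.0 + 1.0))   # equilibrium (a - c)/(b + d) = 4.0, for comparison
```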
[ { "math_id": 0, "text": " \\frac{dP}{dt} = k (QD-QS)," }, { "math_id": 1, "text": " \\frac{dQS}{dt} = k (DP-SP)," } ]
https://en.wikipedia.org/wiki?curid=809059
8092026
Excitation temperature
Concept in statistical mechanics In statistical mechanics, the excitation temperature ("T"ex) is defined for a population of particles via the Boltzmann factor. It satisfies formula_0 where "n"u and "n"l are the number densities of particles in the upper and lower states, "g"u and "g"l are the statistical weights of those states, Δ"E" is the energy difference between the two states, and "k" is the Boltzmann constant. Thus the excitation temperature is the temperature at which we would expect to find a system with this ratio of level populations. However, it has no actual physical meaning except when in local thermodynamic equilibrium. The excitation temperature can even be negative for a system with inverted levels (such as a maser). In observations of the 21 cm line of hydrogen, the apparent value of the excitation temperature is often called the "spin temperature". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
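In practice the defining relation is simply inverted for "T"ex. The sketch below (illustrative only; the population numbers, weights and energy gap are arbitrary values) solves the Boltzmann factor for the excitation temperature and also shows how an inverted population yields a negative value.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def excitation_temperature(n_u, n_l, g_u, g_l, delta_E):
    """T_ex such that n_u/n_l = (g_u/g_l) * exp(-delta_E / (k_B * T_ex))."""
    return -delta_E / (k_B * math.log((n_u * g_l) / (n_l * g_u)))

# a normal (non-inverted) population gives a positive excitation temperature
print(excitation_temperature(n_u=1.0, n_l=3.0, g_u=3, g_l=1, delta_E=1.0e-22))

# an inverted population (as in a maser) gives a negative excitation temperature
print(excitation_temperature(n_u=5.0, n_l=1.0, g_u=1, g_l=1, delta_E=1.0e-22))
```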
[ { "math_id": 0, "text": "\n\\frac{n_{\\rm u}}{n_{\\rm l}} = \\frac{g_{\\rm u}}{g_{\\rm l}} \\exp{ \\left(-\\frac{\\Delta E}{k T_{\\rm ex}} \\right) },\n" } ]
https://en.wikipedia.org/wiki?curid=8092026
8092200
Heavy-tailed distribution
Probability distribution In probability theory, heavy-tailed distributions are probability distributions whose tails are not exponentially bounded: that is, they have heavier tails than the exponential distribution. In many applications it is the right tail of the distribution that is of interest, but a distribution may have a heavy left tail, or both tails may be heavy. There are three important subclasses of heavy-tailed distributions: the fat-tailed distributions, the long-tailed distributions, and the subexponential distributions. In practice, all commonly used heavy-tailed distributions belong to the subexponential class, introduced by Jozef Teugels. There is still some discrepancy over the use of the term heavy-tailed. There are two other definitions in use. Some authors use the term to refer to those distributions which do not have all their power moments finite; and some others to those distributions that do not have a finite variance. The definition given in this article is the most general in use, and includes all distributions encompassed by the alternative definitions, as well as those distributions such as log-normal that possess all their power moments, yet which are generally considered to be heavy-tailed. (Occasionally, heavy-tailed is used for any distribution that has heavier tails than the normal distribution.) Definitions. Definition of heavy-tailed distribution. The distribution of a random variable "X" with distribution function "F" is said to have a heavy (right) tail if the moment generating function of "X", "MX"("t"), is infinite for all "t" &gt; 0. That means formula_0 This is also written in terms of the tail distribution function formula_1 as formula_2 Definition of long-tailed distribution. The distribution of a random variable "X" with distribution function "F" is said to have a long right tail if for all "t" &gt; 0, formula_3 or equivalently formula_4 This has the intuitive interpretation for a right-tailed long-tailed distributed quantity that if the long-tailed quantity exceeds some high level, the probability approaches 1 that it will exceed any other higher level. All long-tailed distributions are heavy-tailed, but the converse is false, and it is possible to construct heavy-tailed distributions that are not long-tailed. Subexponential distributions. Subexponentiality is defined in terms of convolutions of probability distributions. For two independent, identically distributed random variables formula_5 with a common distribution function formula_6, the convolution of formula_6 with itself, written formula_7 and called the convolution square, is defined using Lebesgue–Stieltjes integration by: formula_8 and the "n"-fold convolution formula_9 is defined inductively by the rule: formula_10 The tail distribution function formula_11 is defined as formula_12. A distribution formula_6 on the positive half-line is subexponential if formula_13 This implies that, for any formula_14, formula_15 The probabilistic interpretation of this is that, for a sum of formula_16 independent random variables formula_17 with common distribution formula_6, formula_18 This is often known as the principle of the single big jump or catastrophe principle. A distribution formula_6 on the whole real line is subexponential if the distribution formula_19 is. Here formula_20 is the indicator function of the positive half-line. Alternatively, a random variable formula_21 supported on the real line is subexponential if and only if formula_22 is subexponential. 
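The "single big jump" statement above can be seen in a simulation. The Monte Carlo sketch below (illustrative only; the Pareto shape, threshold and sample size are arbitrary choices) compares P[X1 + X2 > x], P[max(X1, X2) > x] and 2·P[X1 > x] for a Pareto distribution.

```python
import random

random.seed(0)
alpha, n, x = 1.5, 10**6, 50.0

def pareto():
    """Inverse-CDF sampling of a Pareto law with P[X > t] = t**(-alpha) for t >= 1."""
    return (1.0 - random.random()) ** (-1.0 / alpha)

sum_exceeds = max_exceeds = 0
for _ in range(n):
    x1, x2 = pareto(), pareto()
    sum_exceeds += (x1 + x2 > x)
    max_exceeds += (max(x1, x2) > x)

print(sum_exceeds / n)      # P[X1 + X2 > x], estimated
print(max_exceeds / n)      # P[max(X1, X2) > x], estimated
print(2 * x ** (-alpha))    # 2*P[X1 > x]; the three numbers are close for large x
```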
All subexponential distributions are long-tailed, but examples can be constructed of long-tailed distributions that are not subexponential. Common heavy-tailed distributions. All commonly used heavy-tailed distributions are subexponential. Those that are one-tailed include: Those that are two-tailed include: Relationship to fat-tailed distributions. A fat-tailed distribution is a distribution for which the probability density function, for large x, goes to zero as a power formula_23. Since such a power is always bounded below by the probability density function of an exponential distribution, fat-tailed distributions are always heavy-tailed. Some distributions, however, have a tail which goes to zero slower than an exponential function (meaning they are heavy-tailed), but faster than a power (meaning they are not fat-tailed). An example is the log-normal distribution . Many other heavy-tailed distributions such as the log-logistic and Pareto distribution are, however, also fat-tailed. Estimating the tail-index. There are parametric and non-parametric approaches to the problem of the tail-index estimation. To estimate the tail-index using the parametric approach, some authors employ GEV distribution or Pareto distribution; they may apply the maximum-likelihood estimator (MLE). Pickand's tail-index estimator. With formula_24 a random sequence of independent and same density function formula_25, the Maximum Attraction Domain of the generalized extreme value density formula_26, where formula_27. If formula_28 and formula_29, then the "Pickands" tail-index estimation is formula_30 where formula_31. This estimator converges in probability to formula_32. Hill's tail-index estimator. Let formula_33 be a sequence of independent and identically distributed random variables with distribution function formula_25, the maximum domain of attraction of the generalized extreme value distribution formula_26, where formula_27. The sample path is formula_34 where formula_16 is the sample size. If formula_35 is an intermediate order sequence, i.e. formula_36, formula_37 and formula_38, then the Hill tail-index estimator is formula_39 where formula_40 is the formula_41-th order statistic of formula_42. This estimator converges in probability to formula_32, and is asymptotically normal provided formula_43 is restricted based on a higher order regular variation property . Consistency and asymptotic normality extend to a large class of dependent and heterogeneous sequences, irrespective of whether formula_44 is observed, or a computed residual or filtered data from a large class of models and estimators, including mis-specified models and models with errors that are dependent. Note that both Pickand's and Hill's tail-index estimators commonly make use of logarithm of the order statistics. Ratio estimator of the tail-index. The ratio estimator (RE-estimator) of the tail-index was introduced by Goldie and Smith. It is constructed similarly to Hill's estimator but uses a non-random "tuning parameter". A comparison of Hill-type and RE-type estimators can be found in Novak. Estimation of heavy-tailed density. Nonparametric approaches to estimate heavy- and superheavy-tailed probability density functions were given in Markovich. 
These are approaches based on variable bandwidth and long-tailed kernel estimators; on the preliminary data transform to a new random variable at finite or infinite intervals, which is more convenient for the estimation and then inverse transform of the obtained density estimate; and "piecing-together approach" which provides a certain parametric model for the tail of the density and a non-parametric model to approximate the mode of the density. Nonparametric estimators require an appropriate selection of tuning (smoothing) parameters like a bandwidth of kernel estimators and the bin width of the histogram. The well known data-driven methods of such selection are a cross-validation and its modifications, methods based on the minimization of the mean squared error (MSE) and its asymptotic and their upper bounds. A discrepancy method which uses well-known nonparametric statistics like Kolmogorov-Smirnov's, von Mises and Anderson-Darling's ones as a metric in the space of distribution functions (dfs) and quantiles of the later statistics as a known uncertainty or a discrepancy value can be found in. Bootstrap is another tool to find smoothing parameters using approximations of unknown MSE by different schemes of re-samples selection, see e.g.
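The Hill estimator described above is short to implement. The sketch below (illustrative only; the Pareto shape, sample size and choice of k are arbitrary) simulates Pareto data with tail exponent α = 2 and averages the log-excesses of the top k order statistics; that average estimates 1/α, and its reciprocal, as in the displayed formula, recovers α.

```python
import math
import random

random.seed(1)
alpha, n, k = 2.0, 100_000, 2_000

# Pareto sample with P[X > t] = t**(-alpha) for t >= 1, via the inverse CDF
sample = sorted((1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n))

top = sample[-k:]                                   # the k largest order statistics
hill = sum(math.log(x / top[0]) for x in top) / k   # mean log-excess over X_(n-k+1)

print(hill)          # close to 1/alpha = 0.5
print(1.0 / hill)    # close to alpha = 2.0
```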
[ { "math_id": 0, "text": "\n\\int_{-\\infty}^\\infty e^{t x} \\,dF(x) = \\infty \\quad \\mbox{for all } t>0.\n" }, { "math_id": 1, "text": "\\overline{F}(x) \\equiv \\Pr[X>x] \\, " }, { "math_id": 2, "text": "\n\\lim_{x \\to \\infty} e^{t x}\\overline{F}(x) = \\infty \\quad \\mbox{for all } t >0.\\,\n" }, { "math_id": 3, "text": "\n\\lim_{x \\to \\infty} \\Pr[X>x+t\\mid X>x] =1, \\,\n" }, { "math_id": 4, "text": "\n\\overline{F}(x+t) \\sim \\overline{F}(x) \\quad \\mbox{as } x \\to \\infty. \\,\n" }, { "math_id": 5, "text": " X_1,X_2" }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "F^{*2}" }, { "math_id": 8, "text": "\n\\Pr[X_1+X_2 \\leq x] = F^{*2}(x) = \\int_{0}^x F(x-y)\\,dF(y),\n" }, { "math_id": 9, "text": "F^{*n}" }, { "math_id": 10, "text": "\nF^{*n}(x) = \\int_{0}^x F(x-y)\\,dF^{*n-1}(y).\n" }, { "math_id": 11, "text": "\\overline{F}" }, { "math_id": 12, "text": "\\overline{F}(x) = 1-F(x)" }, { "math_id": 13, "text": "\n\\overline{F^{*2}}(x) \\sim 2\\overline{F}(x) \\quad \\mbox{as } x \\to \\infty. \n" }, { "math_id": 14, "text": "n \\geq 1" }, { "math_id": 15, "text": "\n\\overline{F^{*n}}(x) \\sim n\\overline{F}(x) \\quad \\mbox{as } x \\to \\infty. \n" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "X_1,\\ldots,X_n" }, { "math_id": 18, "text": "\n\\Pr[X_1+ \\cdots +X_n>x] \\sim \\Pr[\\max(X_1, \\ldots,X_n)>x] \\quad \\text{as } x \\to \\infty. \n" }, { "math_id": 19, "text": "F I([0,\\infty))" }, { "math_id": 20, "text": "I([0,\\infty))" }, { "math_id": 21, "text": "X" }, { "math_id": 22, "text": "X^+ = \\max(0,X)" }, { "math_id": 23, "text": "x^{-a}" }, { "math_id": 24, "text": "(X_n , n \\geq 1)" }, { "math_id": 25, "text": "F \\in D(H(\\xi))" }, { "math_id": 26, "text": " H " }, { "math_id": 27, "text": "\\xi \\in \\mathbb{R}" }, { "math_id": 28, "text": "\\lim_{n\\to\\infty} k(n) = \\infty " }, { "math_id": 29, "text": "\\lim_{n\\to\\infty} \\frac{k(n)}{n}= 0" }, { "math_id": 30, "text": "\n\\xi^\\text{Pickands}_{(k(n),n)} =\\frac{1}{\\ln 2} \\ln \\left( \\frac{X_{(n-k(n)+1,n)} - X_{(n-2k(n)+1,n)}}{X_{(n-2k(n)+1,n)} - X_{(n-4k(n)+1,n)}}\\right),\n" }, { "math_id": 31, "text": "X_{(n-k(n)+1,n)}=\\max \\left(X_{n-k(n)+1},\\ldots ,X_{n}\\right)" }, { "math_id": 32, "text": "\\xi" }, { "math_id": 33, "text": "(X_t , t \\geq 1)" }, { "math_id": 34, "text": "{X_t: 1 \\leq t \\leq n}" }, { "math_id": 35, "text": "\\{k(n)\\}" }, { "math_id": 36, "text": "k(n) \\in \\{1,\\ldots,n-1\\}, " }, { "math_id": 37, "text": "k(n) \\to \\infty" }, { "math_id": 38, "text": "k(n)/n \\to 0" }, { "math_id": 39, "text": "\n\\xi^\\text{Hill}_{(k(n),n)} = \\left(\\frac 1 {k(n)} \\sum_{i=n-k(n)+1}^n \\ln(X_{(i,n)}) - \\ln (X_{(n-k(n)+1,n)})\\right)^{-1},\n" }, { "math_id": 40, "text": "X_{(i,n)}" }, { "math_id": 41, "text": "i" }, { "math_id": 42, "text": "X_1, \\dots, X_n" }, { "math_id": 43, "text": "k(n) \\to \\infty " }, { "math_id": 44, "text": "X_t" } ]
https://en.wikipedia.org/wiki?curid=8092200
8092698
Euclidean distance matrix
In mathematics, a Euclidean distance matrix is an "n"×"n" matrix representing the spacing of a set of "n" points in Euclidean space. For points formula_0 in "k"-dimensional space ℝ"k", the elements of their Euclidean distance matrix "A" are given by squares of distances between them. That is formula_1 where formula_2 denotes the Euclidean norm on ℝ"k". formula_3 In the context of (not necessarily Euclidean) distance matrices, the entries are usually defined directly as distances, not their squares. However, in the Euclidean case, squares of distances are used to avoid computing square roots and to simplify relevant theorems and algorithms. Euclidean distance matrices are closely related to Gram matrices (matrices of dot products, describing norms of vectors and angles between them). The latter are easily analyzed using methods of linear algebra. This allows to characterize Euclidean distance matrices and recover the points formula_0 that realize it. A realization, if it exists, is unique up to rigid transformations, i.e. distance-preserving transformations of Euclidean space (rotations, reflections, translations). In practical applications, distances are noisy measurements or come from arbitrary dissimilarity estimates (not necessarily metric). The goal may be to visualize such data by points in Euclidean space whose distance matrix approximates a given dissimilarity matrix as well as possible — this is known as multidimensional scaling. Alternatively, given two sets of data already represented by points in Euclidean space, one may ask how similar they are in shape, that is, how closely can they be related by a distance-preserving transformation — this is Procrustes analysis. Some of the distances may also be missing or come unlabelled (as an unordered set or multiset instead of a matrix), leading to more complex algorithmic tasks, such as the graph realization problem or the turnpike problem (for points on a line). Properties. By the fact that Euclidean distance is a metric, the matrix "A" has the following properties. In dimension "k", a Euclidean distance matrix has rank less than or equal to "k"+2. If the points formula_0 are in general position, the rank is exactly min("n", "k" + 2). Distances can be shrunk by any power to obtain another Euclidean distance matrix. That is, if formula_7 is a Euclidean distance matrix, then formula_8 is a Euclidean distance matrix for every 0&lt;"s"&lt;1. Relation to Gram matrix. The Gram matrix of a sequence of points formula_0 in "k"-dimensional space ℝ"k" is the "n"×"n" matrix formula_9 of their dot products (here a point formula_10 is thought of as a vector from 0 to that point): formula_11, where formula_12 is the angle between the vector formula_10 and formula_13. In particular formula_14 is the square of the distance of formula_10 from 0. Thus the Gram matrix describes norms and angles of vectors (from 0 to) formula_0. Let formula_15 be the "k"×"n" matrix containing formula_0 as columns. Then formula_16, because formula_17 (seeing formula_10 as a column vector). Matrices that can be decomposed as formula_18, that is, Gram matrices of some sequence of vectors (columns of formula_15), are well understood — these are precisely positive semidefinite matrices. To relate the Euclidean distance matrix to the Gram matrix, observe that formula_19 That is, the norms and angles determine the distances. Note that the Gram matrix contains additional information: distances from 0. 
Conversely, distances formula_20 between pairs of "n"+1 points formula_21 determine dot products between "n" vectors formula_22 (1≤"i"≤"n"): formula_23 (this is known as the polarization identity). Characterizations. For a "n"×"n" matrix "A", a sequence of points formula_0 in "k"-dimensional Euclidean space ℝ"k" is called a realization of "A" in ℝ"k" if "A" is their Euclidean distance matrix. One can assume without loss of generality that formula_24 (because translating by formula_25 preserves distances). &lt;templatestyles src="Math_theorem/styles.css" /&gt; This follows from the previous discussion because "G" is positive semidefinite of rank at most "k" if and only if it can be decomposed as formula_16 where "X" is a "k"×"n" matrix. Moreover, the columns of "X" give a realization in ℝ"k". Therefore, any method to decompose "G" allows to find a realization. The two main approaches are variants of Cholesky decomposition or using spectral decompositions to find the principal square root of "G", see Definite matrix#Decomposition. The statement of theorem distinguishes the first point formula_26. A more symmetric variant of the same theorem is the following: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Other characterizations involve Cayley–Menger determinants. In particular, these allow to show that a symmetric hollow "n"×"n" matrix is realizable in ℝ"k" if and only if every ("k"+3)×("k"+3) principal submatrix is. In other words, a semimetric on finitely many points is embedabble isometrically in ℝ"k" if and only if every "k"+3 points are. In practice, the definiteness or rank conditions may fail due to numerical errors, noise in measurements, or due to the data not coming from actual Euclidean distances. Points that realize optimally similar distances can then be found by semidefinite approximation (and low rank approximation, if desired) using linear algebraic tools such as singular value decomposition or semidefinite programming. This is known as multidimensional scaling. Variants of these methods can also deal with incomplete distance data. Unlabeled data, that is, a set or multiset of distances not assigned to particular pairs, is much more difficult to deal with. Such data arises, for example, in DNA sequencing (specifically, genome recovery from partial digest) or phase retrieval. Two sets of points are called homometric if they have the same multiset of distances (but are not necessarily related by a rigid transformation). Deciding whether a given multiset of "n"("n"-1)/2 distances can be realized in a given dimension "k" is strongly NP-hard. In one dimension this is known as the turnpike problem; it is an open question whether it can be solved in polynomial time. When the multiset of distances is given with error bars, even the one dimensional case is NP-hard. Nevertheless, practical algorithms exist for many cases, e.g. random points. Uniqueness of representations. Given a Euclidean distance matrix, the sequence of points that realize it is unique up to rigid transformations – these are isometries of Euclidean space: rotations, reflections, translations, and their compositions. &lt;templatestyles src="Math_theorem/styles.css" /&gt; In applications, when distances don't match exactly, Procrustes analysis aims to relate two point sets as close as possible via rigid transformations, usually using singular value decomposition. 
The ordinary Euclidean case is known as the orthogonal Procrustes problem or Wahba's problem (when observations are weighted to account for varying uncertainties). Examples of applications include determining orientations of satellites, comparing molecule structure (in cheminformatics), protein structure (structural alignment in bioinformatics), or bone structure (statistical shape analysis in biology). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
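The Gram-matrix construction described above gives a direct reconstruction recipe (essentially classical multidimensional scaling). The numpy sketch below (illustrative only; the random test points and tolerances are arbitrary) builds G from a squared-distance matrix via the polarization identity, factors it with an eigendecomposition, and checks that the recovered points reproduce the original distances.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))    # 5 test points in R^3

# squared Euclidean distance matrix
D2 = np.square(np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1))

# Gram matrix relative to the first point: g_ij = (d_1i^2 + d_1j^2 - d_ij^2) / 2
G = 0.5 * (D2[0][None, :] + D2[0][:, None] - D2)

w, V = np.linalg.eigh(G)            # G is positive semidefinite
w = np.clip(w, 0.0, None)           # clear tiny negative rounding errors
X = V * np.sqrt(w)                  # rows are recovered points, since X @ X.T equals G

D2_rec = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
print(np.allclose(D2, D2_rec))      # True: same Euclidean distance matrix
print(int(np.sum(w > 1e-9)))        # 3: the dimension needed to realize the distances
```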
[ { "math_id": 0, "text": "x_1,x_2,\\ldots,x_n" }, { "math_id": 1, "text": "\\begin{align}\nA & = (a_{ij}); \\\\\na_{ij} & = d_{ij}^2 \\;=\\; \\lVert x_i - x_j\\rVert^2\n\\end{align}\n" }, { "math_id": 2, "text": "\\|\\cdot\\|" }, { "math_id": 3, "text": "A = \\begin{bmatrix}\n0 & d_{12}^2 & d_{13}^2 & \\dots & d_{1n}^2 \\\\\nd_{21}^2 & 0 & d_{23}^2 & \\dots & d_{2n}^2 \\\\\nd_{31}^2 & d_{32}^2 & 0 & \\dots & d_{3n}^2 \\\\\n\\vdots&\\vdots & \\vdots & \\ddots&\\vdots& \\\\\nd_{n1}^2 & d_{n2}^2 & d_{n3}^2 & \\dots & 0 \\\\\n\\end{bmatrix} " }, { "math_id": 4, "text": "a_{ij} = a_{ji}" }, { "math_id": 5, "text": " \\sqrt{a_{ij}} \\le \\sqrt{a_{ik}} + \\sqrt{a_{kj}} " }, { "math_id": 6, "text": " a_{ij}\\ge 0" }, { "math_id": 7, "text": "A=(a_{ij})" }, { "math_id": 8, "text": "({a_{ij}}^s)" }, { "math_id": 9, "text": "G = (g_{ij})" }, { "math_id": 10, "text": "x_i" }, { "math_id": 11, "text": "g_{ij} = x_i \\cdot x_j = \\|x_i\\| \\|x_j\\| \\cos \\theta" }, { "math_id": 12, "text": "\\theta" }, { "math_id": 13, "text": "x_j" }, { "math_id": 14, "text": "g_{ii} = \\|x_i\\|^2" }, { "math_id": 15, "text": "X" }, { "math_id": 16, "text": "G = X^\\textsf{T} X" }, { "math_id": 17, "text": "g_{ij} = x_i^\\textsf{T} x_j" }, { "math_id": 18, "text": "X^\\textsf{T}X" }, { "math_id": 19, "text": "d_{ij}^2 = \\|x_i - x_j\\|^2 = (x_i - x_j)^\\textsf{T} (x_i - x_j) = x_i^\\textsf{T} x_i - 2x_i^\\textsf{T} x_j + x_j^\\textsf{T} x_j = g_{ii} -2g_{ij} + g_{jj}" }, { "math_id": 20, "text": "d_{ij}" }, { "math_id": 21, "text": "x_0,x_1,\\ldots,x_n" }, { "math_id": 22, "text": "x_i-x_0" }, { "math_id": 23, "text": "g_{ij} = (x_i-x_0) \\cdot (x_j-x_0) = \\frac{1}{2}\\left(\\|x_i-x_0\\|^2 + \\|x_j-x_0\\|^2 - \\|x_i - x_j\\|^2 \\right) = \\frac{1}{2}(d_{0i}^2 + d_{0j}^2 - d_{ij}^2)" }, { "math_id": 24, "text": "x_1 = \\mathbf{0}" }, { "math_id": 25, "text": "-x_1" }, { "math_id": 26, "text": "x_1" } ]
https://en.wikipedia.org/wiki?curid=8092698
809314
Motivic cohomology
Invariant of algebraic varieties and of more general schemes Motivic cohomology is an invariant of algebraic varieties and of more general schemes. It is a type of cohomology related to motives and includes the Chow ring of algebraic cycles as a special case. Some of the deepest problems in algebraic geometry and number theory are attempts to understand motivic cohomology. Motivic homology and cohomology. Let "X" be a scheme of finite type over a field "k". A key goal of algebraic geometry is to compute the Chow groups of "X", because they give strong information about all subvarieties of "X". The Chow groups of "X" have some of the formal properties of Borel–Moore homology in topology, but some things are missing. For example, for a closed subscheme "Z" of "X", there is an exact sequence of Chow groups, the localization sequence formula_0 whereas in topology this would be part of a long exact sequence. This problem was resolved by generalizing Chow groups to a bigraded family of groups, (Borel–Moore) motivic homology groups (which were first called higher Chow groups by Bloch). Namely, for every scheme "X" of finite type over a field "k" and integers "i" and "j", we have an abelian group "H""i"("X",Z("j")), with the usual Chow group being the special case formula_1 For a closed subscheme "Z" of a scheme "X", there is a long exact localization sequence for motivic homology groups, ending with the localization sequence for Chow groups: formula_2 In fact, this is one of a family of four theories constructed by Voevodsky: motivic cohomology, motivic cohomology with compact support, Borel-Moore motivic homology (as above), and motivic homology with compact support. These theories have many of the formal properties of the corresponding theories in topology. For example, the motivic cohomology groups "H""i"(X,Z("j")) form a bigraded ring for every scheme "X" of finite type over a field. When "X" is smooth of dimension "n" over "k", there is a Poincare duality isomorphism formula_3 In particular, the Chow group "CH""i"("X") of codimension-"i" cycles is isomorphic to "H"2"i"("X",Z("i")) when "X" is smooth over "k". The motivic cohomology "H""i"("X", Z("j")) of a smooth scheme "X" over "k" is the cohomology of "X" in the Zariski topology with coefficients in a certain complex of sheaves Z(j) on "X". (Some properties are easier to prove using the Nisnevich topology, but this gives the same motivic cohomology groups.) For example, Z(j) is zero for "j" &lt; 0, Z(0) is the constant sheaf Z, and Z(1) is isomorphic in the derived category of "X" to "G"m[−1]. Here "G"m (the multiplicative group) denotes the sheaf of invertible regular functions, and the shift [−1] means that this sheaf is viewed as a complex in degree 1. The four versions of motivic homology and cohomology can be defined with coefficients in any abelian group. The theories with different coefficients are related by the universal coefficient theorem, as in topology. Relations to other cohomology theories. Relation to K-theory. By Bloch, Lichtenbaum, Friedlander, Suslin, and Levine, there is a spectral sequence from motivic cohomology to algebraic K-theory for every smooth scheme "X" over a field, analogous to the Atiyah-Hirzebruch spectral sequence in topology: formula_4 As in topology, the spectral sequence degenerates after tensoring with the rationals. 
For arbitrary schemes of finite type over a field (not necessarily smooth), there is an analogous spectral sequence from motivic homology to G-theory (the K-theory of coherent sheaves, rather than vector bundles). Relation to Milnor K-theory. Motivic cohomology provides a rich invariant already for fields. (Note that a field "k" determines a scheme Spec("k"), for which motivic cohomology is defined.) Although motivic cohomology "H""i"("k", Z("j")) for fields "k" is far from understood in general, there is a description when "i" = "j": formula_5 where "K""j"M("k") is the "j"th Milnor K-group of "k". Since Milnor K-theory of a field is defined explicitly by generators and relations, this is a useful description of one piece of the motivic cohomology of "k". Map to étale cohomology. Let "X" be a smooth scheme over a field "k", and let "m" be a positive integer which is invertible in "k". Then there is a natural homomorphism (the cycle map) from motivic cohomology to étale cohomology: formula_6 where Z/"m"("j") on the right means the étale sheaf (μ"m")⊗"j", with μ"m" being the "m"th roots of unity. This generalizes the cycle map from the Chow ring of a smooth variety to étale cohomology. A frequent goal in algebraic geometry or number theory is to compute motivic cohomology, whereas étale cohomology is often easier to understand. For example, if the base field "k" is the complex numbers, then étale cohomology coincides with singular cohomology (with finite coefficients). A powerful result proved by Voevodsky, known as the Beilinson-Lichtenbaum conjecture, says that many motivic cohomology groups are in fact isomorphic to étale cohomology groups. This is a consequence of the norm residue isomorphism theorem. Namely, the Beilinson-Lichtenbaum conjecture (Voevodsky's theorem) says that for a smooth scheme "X" over a field "k" and "m" a positive integer invertible in "k", the cycle map formula_7 is an isomorphism for all "j" ≥ "i" and is injective for all "j" ≥ "i" − 1. Relation to motives. For any field "k" and commutative ring "R", Voevodsky defined an "R"-linear triangulated category called the derived category of motives over "k" with coefficients in "R", DM("k"; "R"). Each scheme "X" over "k" determines two objects in DM called the motive of "X", M("X"), and the compactly supported motive of "X", Mc("X"); the two are isomorphic if "X" is proper over "k". One basic point of the derived category of motives is that the four types of motivic homology and motivic cohomology all arise as sets of morphisms in this category. To describe this, first note that there are Tate motives "R"("j") in DM("k"; "R") for all integers "j", such that the motive of projective space is a direct sum of Tate motives: formula_8 where "M" ↦ "M"[1] denotes the shift or "translation functor" in the triangulated category DM("k"; "R"). In these terms, motivic cohomology (for example) is given by formula_9 for every scheme "X" of finite type over "k". When the coefficients "R" are the rational numbers, a modern version of a conjecture by Beilinson predicts that the subcategory of compact objects in DM(k; Q) is equivalent to the bounded derived category of an abelian category MM("k"), the category of mixed motives over "k". In particular, the conjecture would imply that motivic cohomology groups can be identified with Ext groups in the category of mixed motives. This is far from known. 
Concretely, Beilinson's conjecture would imply the Beilinson-Soulé conjecture that "H""i"(X,Q("j")) is zero for "i" &lt; 0, which is known only in a few cases. Conversely, a variant of the Beilinson-Soulé conjecture, together with Grothendieck's standard conjectures and Murre's conjectures on Chow motives, would imply the existence of an abelian category "MM"("k") as the heart of a t-structure on "DM"("k"; Q). More would be needed in order to identify Ext groups in "MM"("k") with motivic cohomology. For "k" a subfield of the complex numbers, a candidate for the abelian category of mixed motives has been defined by Nori. If a category "MM"("k") with the expected properties exists (notably that the Betti realization functor from "MM"("k") to Q-vector spaces is faithful), then it must be equivalent to Nori's category. Applications to Arithmetic Geometry. Values of L-functions. Let "X" be a smooth projective variety over a number field. The Bloch-Kato conjecture on values of L-functions predicts that the order of vanishing of an L-function of "X" at an integer point is equal to the rank of a suitable motivic cohomology group. This is one of the central problems of number theory, incorporating earlier conjectures by Deligne and Beilinson. The Birch–Swinnerton-Dyer conjecture is a special case. More precisely, the conjecture predicts the leading coefficient of the L-function at an integer point in terms of regulators and a height pairing on motivic cohomology. History. The first clear sign of a possible generalization from Chow groups to a more general motivic cohomology theory for algebraic varieties was Quillen's definition and development of algebraic K-theory (1973), generalizing the Grothendieck group "K"0 of vector bundles. In the early 1980s, Beilinson and Soulé observed that Adams operations gave a splitting of algebraic K-theory tensored with the rationals; the summands are now called motivic cohomology (with rational coefficients). Beilinson and Lichtenbaum made influential conjectures predicting the existence and properties of motivic cohomology. Most but not all of their conjectures have now been proved. Bloch's definition of higher Chow groups (1986) was the first integral (as opposed to rational) definition of motivic homology for schemes over a field "k" (and hence motivic cohomology, in the case of smooth schemes). The definition of higher Chow groups of "X" is a natural generalization of the definition of Chow groups, involving algebraic cycles on the product of "X" with affine space which meet a set of hyperplanes (viewed as the faces of a simplex) in the expected dimension. Finally, Voevodsky (building on his work with Suslin) defined the four types of motivic homology and motivic cohomology in 2000, along with the derived category of motives. Related categories were also defined by Hanamura and Levine. The work of Elmanto and Morrow has extended the construction of motivic cohomology to arbitrary quasi-compact, quasi-separated schemes over a field. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "CH_i(Z) \\rightarrow CH_i(X) \\rightarrow CH_i(X-Z) \\rightarrow 0," }, { "math_id": 1, "text": " CH_i(X) \\cong H_{2i}(X,\\mathbf{Z}(i))." }, { "math_id": 2, "text": "\\cdots\\rightarrow H_{2i+1}(X-Z,\\mathbf{Z}(i))\\rightarrow H_{2i}(Z,\\mathbf{Z}(i))\\rightarrow H_{2i}(X,\\mathbf{Z}(i))\\rightarrow H_{2i}(X-Z,\\mathbf{Z}(i))\\rightarrow 0." }, { "math_id": 3, "text": "H^i(X,\\mathbf{Z}(j))\\cong H_{2n-i}(X,\\mathbf{Z}(n-j))." }, { "math_id": 4, "text": "E_2^{pq}=H^p(X,\\mathbf{Z}(-q/2)) \\Rightarrow K_{-p-q}(X)." }, { "math_id": 5, "text": "K_j^M(k) \\cong H^j(k, \\mathbf{Z}(j))," }, { "math_id": 6, "text": "H^i(X,\\mathbf{Z}/m(j))\\rightarrow H^i_{et}(X,\\mathbf{Z}/m(j))," }, { "math_id": 7, "text": "H^i(X,\\mathbf{Z}/m(j))\\rightarrow H^i_{et}(X,\\mathbf{Z}/m(j))" }, { "math_id": 8, "text": "M(\\mathbf{P}^n_k)\\cong \\oplus_{j=0}^n R(j)[2j]," }, { "math_id": 9, "text": "H^i(X,R(j))\\cong \\text{Hom}_{DM(k; R)}(M(X),R(j)[i])" } ]
https://en.wikipedia.org/wiki?curid=809314
8093356
Compton edge
Greatest energy a photon scattered on an electron can transfer to it In gamma-ray spectrometry, the Compton edge is a feature of the measured gamma-ray energy spectrum that results from Compton scattering in a scintillator or a semiconductor detector. It is a measurement phenomenon caused by scattering within the detector, and is not present in the incident radiation. When a gamma ray scatters within the detector and the scattered photon escapes from the detector's volume, only a fraction of the incident energy is deposited in the detector. This fraction depends on the scattering angle of the photon, leading to a spectrum of energies corresponding to the entire range of possible scattering angles. The highest energy that can be deposited, corresponding to full backscatter, is called the "Compton edge". In mathematical terms, the Compton edge is the inflection point of the high-energy side of the Compton region. Background. In a Compton scattering process, an incident photon collides with an electron in a material. The amount of energy exchanged varies with angle, and is given by the formula: formula_0 or formula_1 Here formula_2 denotes the rest mass of the electron and formula_3 the scattering angle of the photon. The amount of energy transferred to the electron varies with the angle of deflection. As formula_3 approaches zero, none of the energy is transferred. The maximum amount of energy is transferred when formula_3 approaches 180 degrees. formula_4 formula_5 In a single scattering act, it is impossible for the photon to transfer any more energy via this process; thus, there is a sharp cutoff at this energy, leading to the name "Compton edge". If multiple photopeaks are present in the spectrum, each of them will have its own Compton edge. The part of the spectrum between the Compton edge and the photopeak is due to multiple subsequent Compton-scattering processes. The continuum of energies corresponding to Compton-scattered electrons is known as the "Compton continuum". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
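As a numerical illustration of the edge formula above, the short sketch below evaluates the maximum energy transferable to the electron for a given incident photon energy. It is not taken from the article: the rounded constant, the function names and the 662 keV example value (typical of a Cs-137 line) are illustrative assumptions.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy m_e c^2 in keV (rounded)

def compton_edge(e_kev: float) -> float:
    """Maximum energy (keV) transferred to the electron in a single scatter:
    E_T(max) = E * (1 - 1 / (1 + 2E / (m_e c^2)))."""
    return e_kev * (1.0 - 1.0 / (1.0 + 2.0 * e_kev / M_E_C2_KEV))

def scattered_photon_energy(e_kev: float, theta_rad: float) -> float:
    """Energy (keV) of the scattered photon at scattering angle theta."""
    return e_kev / (1.0 + (e_kev / M_E_C2_KEV) * (1.0 - math.cos(theta_rad)))

# A 662 keV gamma ray has its Compton edge near 478 keV.
print(round(compton_edge(662.0), 1))
# Full backscatter (theta = 180 degrees) gives the same maximum transfer.
print(round(662.0 - scattered_photon_energy(662.0, math.pi), 1))
```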
[ { "math_id": 0, "text": " \\frac{1}{E^\\prime} - \\frac{1}{E} = \\frac{1}{m_{\\text{e}} c^2}\\left(1-\\cos \\theta \\right) " }, { "math_id": 1, "text": " E^\\prime = \\frac{E}{1 + \\frac{E}{m_{\\text{e}} c^2}(1-\\cos\\theta)} " }, { "math_id": 2, "text": "m_{\\text{e}}" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": " E_T = E - E^\\prime " }, { "math_id": 5, "text": " E_{\\text{Compton}} = E_T (\\text{max}) = E \\left(1-\\frac{1}{1 + \\frac{2E}{m_{\\text{e}} c^2}} \\right)" } ]
https://en.wikipedia.org/wiki?curid=8093356
8094016
Hollow matrix
Several types of mathematical matrix containing zeroes In mathematics, a hollow matrix may refer to one of several related classes of matrix: a sparse matrix; a matrix with a large block of zeroes; or a matrix with diagonal entries all zero. Definitions. Sparse. A "hollow matrix" may be one with "few" non-zero entries: that is, a sparse matrix. Block of zeroes. A "hollow matrix" may be a square "n" × "n" matrix with an "r" × "s" block of zeroes where "r" + "s" &gt; "n". Diagonal entries all zero. A "hollow matrix" may be a square matrix whose diagonal elements are all equal to zero. That is, an "n" × "n" matrix "A" = ("aij") is hollow if "aij" = 0 whenever "i" = "j" (i.e. "aii" = 0 for all i). The most obvious example is the real skew-symmetric matrix. Other examples are the adjacency matrix of a finite simple graph, and a distance matrix or Euclidean distance matrix. In other words, any square matrix that takes the form formula_0 is a hollow matrix, where the symbol formula_1 denotes an arbitrary entry. For example, formula_2 is a hollow matrix. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
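For the diagonal-entries-all-zero sense, hollowness is straightforward to test numerically. The small sketch below is only an illustration and assumes NumPy; the function name and sample matrix are arbitrary.

```python
import numpy as np

def is_hollow(a: np.ndarray) -> bool:
    """True when the matrix is square and every diagonal entry is zero."""
    return a.ndim == 2 and a.shape[0] == a.shape[1] and bool(np.all(np.diag(a) == 0))

A = np.array([[0, 2, 6],
              [2, 0, 4],
              [9, 4, 0]])
print(is_hollow(A))                          # True
print(is_hollow(A + np.eye(3, dtype=int)))   # False: the diagonal is now nonzero
```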
[ { "math_id": 0, "text": "\\begin{pmatrix}\n 0 & \\ast & & \\ast & \\ast \\\\\n\\ast & 0 & & \\ast & \\ast \\\\\n & & \\ddots \\\\\n\\ast & \\ast & & 0 & \\ast \\\\\n\\ast & \\ast & & \\ast & 0\n\\end{pmatrix}" }, { "math_id": 1, "text": "\\ast" }, { "math_id": 2, "text": "\\begin{pmatrix}\n 0 & 2 & 6 & \\frac{1}{3} & 4 \\\\\n 2 & 0 & 4 & 8 & 0 \\\\\n 9 & 4 & 0 & 2 & 933 \\\\\n 1 & 4 & 4 & 0 & 6 \\\\\n 7 & 9 & 23 & 8 & 0\n\\end{pmatrix}" }, { "math_id": 3, "text": "L:V \\to V" }, { "math_id": 4, "text": "L(\\langle e \\rangle) \\cap \\langle e \\rangle = \\langle 0 \\rangle" }, { "math_id": 5, "text": "\\langle e \\rangle = \\{ \\lambda e : \\lambda \\in F\\}." } ]
https://en.wikipedia.org/wiki?curid=8094016
8094068
Winsorized mean
Statistical measure of central tendency A winsorized mean is a winsorized statistical measure of central tendency, much like the mean and median, and even more similar to the truncated mean. It involves the calculation of the mean after winsorizing — replacing given parts of a probability distribution or sample at the high and low end with the most extreme remaining values, typically doing so for an equal amount of both extremes; often 10 to 25 percent of the ends are replaced. The winsorized mean can equivalently be expressed as a weighted average of the truncated mean and the quantiles at which it is limited, which corresponds to replacing parts with the corresponding quantiles. Advantages. The winsorized mean is a useful estimator because by retaining the outliers without taking them too literally, it is less sensitive to observations at the extremes than the straightforward mean, and will still generate a reasonable estimate of central tendency or mean for almost all statistical models. In this regard it is referred to as a robust estimator. Drawbacks. The winsorized mean uses more information from the distribution or sample than the median. However, unless the underlying distribution is symmetric, the winsorized mean of a sample is unlikely to produce an unbiased estimator for either the mean or the median. Example. For a sample of 10 numbers (from "x"(1), the smallest, to "x"(10) the largest; order statistic notation) the 10% winsorized mean is formula_0 The key is in the repetition of "x"(2) and "x"(9): the extras substitute for the original values "x"(1) and "x"(10) which have been discarded and replaced. This is equivalent to a weighted average of 0.1 times the 5th percentile ("x"(2)), 0.8 times the 10% trimmed mean, and 0.1 times the 95th percentile ("x"(9)). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
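The 10-point example above can be reproduced directly. The sketch below is a minimal hand-rolled illustration rather than a reference implementation (library routines such as SciPy's scipy.stats.mstats.winsorize handle limits and ties more carefully); the function name and the sample data are arbitrary.

```python
def winsorized_mean(data, proportion=0.10):
    """Sort the sample, replace the lowest and highest `proportion` of the
    values by the nearest retained order statistics, then take the mean."""
    xs = sorted(data)
    n = len(xs)
    g = int(proportion * n)                     # values clipped at each end
    ws = [xs[g]] * g + xs[g:n - g] + [xs[n - g - 1]] * g
    return sum(ws) / n

sample = [1, 5, 7, 8, 9, 10, 14, 16, 24, 99]
# 10% winsorization replaces x_(1) by x_(2) and x_(10) by x_(9):
# (5 + 5 + 7 + 8 + 9 + 10 + 14 + 16 + 24 + 24) / 10 = 12.2
print(winsorized_mean(sample, 0.10))
```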
[ { "math_id": 0, "text": "\\frac{\\overbrace{x_{(2)} + x_{(2)}} + x_{(3)} + x_{(4)} + x_{(5)} + x_{(6)} + x_{(7)} + x_{(8)} + \\overbrace{x_{(9)} + x_{(9)}}}{10}. \\, " } ]
https://en.wikipedia.org/wiki?curid=8094068
8097831
Spot–future parity
Spot–future parity (or spot-futures parity) is a parity condition whereby, if an asset can be purchased today and held until the exercise of a futures contract, the value of the future should equal the current spot price adjusted for the cost of money, dividends, "convenience yield" and any carrying costs (such as storage). That is, if a person can purchase a good for price "S" and conclude a contract to sell it one month later at a price of "F", the price difference should be no greater than the cost of using money less any expenses (or earnings) from holding the asset; if the difference is greater, the person has an opportunity to buy and sell the "spots" and "futures" for a risk-free profit, "i.e." an arbitrage. Spot–future parity is an application of the law of one price; see also Rational pricing § Futures. The spot-future parity condition does not say that prices must be equal (once adjusted), but rather that when the condition is not met, it should be possible to sell one and purchase the other for a risk-free profit. In highly liquid and developed markets, actual prices on the spot and futures markets may effectively fulfill the condition. When the condition is consistently not met for a given asset, the implication is that some condition of the market prevents effective arbitrage; possible reasons include high transaction costs, regulations and legal restrictions, low liquidity, or poor enforceability of legal contracts. Spot–future parity can be used for virtually any asset where a future may be purchased, but is particularly common in currency markets, commodities, stock futures markets, and bond markets. It is also essential to price determination in swap markets. Mathematical expression. In the complete form: formula_0 Where: "F", "S" represent the cost of the good on the futures market and the spot market, respectively. "e" is the mathematical constant for the base of the natural logarithm. "r" is the applicable interest rate (for arbitrage, the cost of borrowing), stated at the continuous compounding rate. "y" is the storage cost over the life of the contract. "q" are any dividends accruing to the asset over the period between the spot contract (i.e. today) and the delivery date for the futures contract. "u" is the convenience yield, which includes any costs incurred (or lost benefits) due to not having physical possession of the asset during the contract period. "T" is the time period applicable (fraction of a year) to delivery of the forward contract. This may be simplified depending on the nature of the asset; it is often seen in the form below, which applies for an asset with no dividends, storage or convenience costs. Alternatively, r can be seen as the net total cost of carrying (that is, the sum of interest, dividends, convenience and storage). Note that the formulation assumes that transaction costs are insignificant. Simplified form: formula_1 Pricing of existing futures contracts. Existing futures contracts can be priced using elements of the spot-futures parity equation, where formula_2 is the settlement price of the existing contract, formula_3 is the current spot price and formula_4 is the (expected) value of the existing contract today: formula_5 which upon application of the spot-futures parity equation becomes: formula_6 Where formula_7 is the forward price today. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
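To illustrate the relationships above numerically, here is a minimal sketch that evaluates the full parity formula and the value of an existing contract. It is not from the article; the function names and the sample figures (a 100.00 spot price, 5% financing, six months to delivery) are illustrative assumptions, and all rates are treated as continuously compounded annual rates.

```python
import math

def forward_price(spot, r, T, storage=0.0, dividend_yield=0.0, convenience=0.0):
    """Spot-futures parity: F = S * exp((r + y - q - u) * T)."""
    return spot * math.exp((r + storage - dividend_yield - convenience) * T)

def existing_contract_value(spot, delivery_price, r, T):
    """Value today of an existing long contract struck at K: P0 = S - K*exp(-r*T),
    which equals (F0 - K)*exp(-r*T) by the parity relation."""
    return spot - delivery_price * math.exp(-r * T)

F = forward_price(100.0, r=0.05, T=0.5)
print(round(F, 2))                                                # about 102.53
print(round(existing_contract_value(100.0, 98.0, 0.05, 0.5), 2))  # about 4.42
```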
[ { "math_id": 0, "text": " F = Se^{(r+y-q-u)T}" }, { "math_id": 1, "text": " F = Se^{rT}" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "S_0" }, { "math_id": 4, "text": "P_0" }, { "math_id": 5, "text": " P_0 = S_0 - K e^{-rT}" }, { "math_id": 6, "text": " P_0 = (F_0 - K)e^{-rT}" }, { "math_id": 7, "text": "F_0" } ]
https://en.wikipedia.org/wiki?curid=8097831
8099018
Riesz space
Partially ordered vector space, ordered as a lattice In mathematics, a Riesz space, lattice-ordered vector space or vector lattice is a partially ordered vector space where the order structure is a lattice. Riesz spaces are named after Frigyes Riesz who first defined them in his 1928 paper "Sur la décomposition des opérations fonctionelles linéaires". Riesz spaces have wide-ranging applications. They are important in measure theory, in that important results are special cases of results for Riesz spaces. For example, the Radon–Nikodym theorem follows as a special case of the Freudenthal spectral theorem. Riesz spaces have also seen application in mathematical economics through the work of Greek-American economist and mathematician Charalambos D. Aliprantis. Definition. Preliminaries. If formula_0 is an ordered vector space (which by definition is a vector space over the reals) and if formula_1 is a subset of formula_0 then an element formula_2 is an upper bound (resp. lower bound) of formula_1 if formula_3 (resp. formula_4) for all formula_5 An element formula_6 in formula_0 is the least upper bound or supremum (resp. greatest lower bound or infimum) of formula_1 if it is an upper bound (resp. a lower bound) of formula_1 and if for any upper bound (resp. any lower bound) formula_7 of formula_8 formula_9 (resp. formula_10). Preordered vector lattice. A preordered vector lattice is a preordered vector space formula_11 in which every pair of elements has a supremum. More explicitly, a preordered vector lattice is a vector space endowed with a preorder formula_12 such that for any formula_13:
1. formula_14 implies formula_15
2. formula_14 and formula_16 imply formula_17
3. for any formula_18 the supremum formula_19 exists with respect to the preorder formula_20
The preorder, together with items 1 and 2, which make it "compatible with the vector space structure", makes formula_11 a preordered vector space. Item 3 says that the preorder is a join semilattice. Because the preorder is compatible with the vector space structure, one can show that any pair also has an infimum, making formula_11 also a meet semilattice, hence a lattice. A preordered vector space formula_11 is a preordered vector lattice if and only if it satisfies any of the following equivalent properties: Riesz space and vector lattices. A Riesz space or a vector lattice is a preordered vector lattice whose preorder is a partial order. Equivalently, it is an ordered vector space for which the ordering is a lattice. Note that many authors require that a vector lattice be a partially ordered vector space (rather than merely a preordered vector space) while others only require that it be a preordered vector space. We will henceforth assume that every Riesz space and every vector lattice is an ordered vector space but that a preordered vector lattice is not necessarily partially ordered. If formula_11 is an ordered vector space over formula_22 whose positive cone formula_23 (the elements formula_24) is generating (that is, such that formula_25), and if for every formula_26 either formula_27 or formula_28 exists, then formula_11 is a vector lattice. Intervals. An order interval in a partially ordered vector space is a convex set of the form formula_29 In an ordered real vector space, every interval of the form formula_30 is balanced. From axioms 1 and 2 above it follows that formula_31 and formula_32 imply formula_33 A subset is said to be order bounded if it is contained in some order interval. An order unit of a preordered vector space is any element formula_34 such that the set formula_30 is absorbing.
The set of all linear functionals on a preordered vector space formula_35 that map every order interval into a bounded set is called the order bound dual of formula_35 and denoted by formula_36 If a space is ordered then its order bound dual is a vector subspace of its algebraic dual. A subset formula_37 of a vector lattice formula_11 is called order complete if for every non-empty subset formula_38 such that formula_39 is order bounded in formula_40 both formula_41 and formula_42 exist and are elements of formula_43 We say that a vector lattice formula_11 is order complete if formula_11 is an order complete subset of formula_21 Classification. Finite-dimensional Riesz spaces are entirely classified by the Archimedean property: Theorem: Suppose that formula_0 is a vector lattice of finite-dimension formula_44 If formula_0 is Archimedean ordered then it is (a vector lattice) isomorphic to formula_45 under its canonical order. Otherwise, there exists an integer formula_46 satisfying formula_47 such that formula_0 is isomorphic to formula_48 where formula_49 has its canonical order, formula_50 is formula_51 with the lexicographical order, and the product of these two spaces has the canonical product order. The same result does not hold in infinite dimensions. For an example due to Kaplansky, consider the vector space V of functions on [0,1] that are continuous except at finitely many points, where they have a pole of second order. This space is lattice-ordered by the usual pointwise comparison, but cannot be written as ℝκ for any cardinal κ. On the other hand, epi-mono factorization in the category of ℝ-vector spaces also applies to Riesz spaces: every lattice-ordered vector space injects into a quotient of ℝκ by a solid subspace. Basic properties. Every Riesz space is a partially ordered vector space, but not every partially ordered vector space is a Riesz space. Note that for any subset formula_37 of formula_52 formula_53 whenever either the supremum or infimum exists (in which case they both exist). If formula_54 and formula_55 then formula_56 For all formula_57 in a Riesz space formula_52 formula_58 Absolute value. For every element formula_34 in a Riesz space formula_52 the absolute value of formula_59 denoted by formula_60 is defined to be formula_61 where this satisfies formula_62 and formula_63 For any formula_64 and any real number formula_65 we have formula_66 and formula_67 Disjointness. Two elements formula_68 in a vector lattice formula_0 are said to be lattice disjoint or disjoint if formula_69 in which case we write formula_70 Two elements formula_68 are disjoint if and only if formula_71 If formula_68 are disjoint then formula_72 and formula_73 where for any element formula_74 formula_75 and formula_76 We say that two sets formula_37 and formula_39 are disjoint if formula_6 and formula_7 are disjoint for all formula_77 and all formula_78 in which case we write formula_79 If formula_37 is the singleton set formula_80 then we will write formula_81 in place of formula_82 For any set formula_40 we define the disjoint complement to be the set formula_83 Disjoint complements are always bands, but the converse is not true in general. If formula_37 is a subset of formula_0 such that formula_84 exists, and if formula_39 is a subset lattice in formula_0 that is disjoint from formula_40 then formula_39 is a lattice disjoint from formula_85 Representation as a disjoint sum of positive elements. 
For any formula_86 let formula_87 and formula_88 noting that both of these elements are formula_89 and that formula_90 with formula_91 Then formula_92 and formula_93 are disjoint, and formula_90 is the unique representation of formula_34 as the difference of disjoint elements that are formula_94 For all formula_95 formula_96 and formula_97 If formula_55 and formula_14 then formula_98 Moreover, formula_14 if and only if formula_99 and formula_100 Every Riesz space is a distributive lattice; that is, it has the following equivalent properties: for all formula_101
1. formula_102
2. formula_103
3. formula_104
4. formula_105 and formula_106 together imply formula_107
Every Riesz space has the Riesz decomposition property. Order convergence. There are a number of meaningful non-equivalent ways to define convergence of sequences or nets with respect to the order structure of a Riesz space. A sequence formula_108 in a Riesz space formula_11 is said to converge monotonely if it is a monotone decreasing (resp. increasing) sequence and its infimum (supremum) formula_34 exists in formula_11 and is denoted formula_109 (resp. formula_110). A sequence formula_108 in a Riesz space formula_11 is said to converge in order to formula_34 if there exists a monotone converging sequence formula_111 in formula_11 such that formula_112 If formula_113 is a positive element of a Riesz space formula_11 then a sequence formula_114 in formula_11 is said to converge u-uniformly to formula_34 if for any formula_115 there exists an formula_116 such that formula_117 for all formula_118 Subspaces. The extra structure provided by these spaces provides for distinct kinds of Riesz subspaces. The collection of each kind of structure in a Riesz space (for example, the collection of all ideals) forms a distributive lattice. Sublattices. If formula_0 is a vector lattice then a vector sublattice is a vector subspace formula_119 of formula_0 such that for all formula_120 formula_27 belongs to formula_119 (where this supremum is taken in formula_0). It can happen that a subspace formula_119 of formula_0 is a vector lattice under its canonical order but is not a vector sublattice of formula_121 Ideals. A vector subspace formula_122 of a Riesz space formula_11 is called an ideal if it is solid, meaning that for formula_123 and formula_124 formula_125 implies that formula_126 The intersection of an arbitrary collection of ideals is again an ideal, which allows for the definition of a smallest ideal containing some non-empty subset formula_37 of formula_127 called the ideal generated by formula_43 An ideal generated by a singleton is called a principal ideal. Bands and σ-Ideals. A band formula_39 in a Riesz space formula_11 is defined to be an ideal with the extra property that, for any element formula_128 whose absolute value formula_129 is the supremum of an arbitrary subset of positive elements in formula_130 the element formula_131 is actually in formula_132 formula_133-ideals are defined similarly, with the words 'arbitrary subset' replaced with 'countable subset'. Clearly every band is a formula_133-ideal, but the converse is not true in general. The intersection of an arbitrary family of bands is again a band. As with ideals, for every non-empty subset formula_37 of formula_127 there exists a smallest band containing that subset, called the band generated by formula_43 A band generated by a singleton is called a principal band. Projection bands.
A band formula_39 in a Riesz space is called a projection band if formula_134 meaning that every element formula_128 can be written uniquely as a sum of two elements, formula_135 with formula_136 and formula_137 There then also exists a positive linear idempotent, or projection, formula_138 such that formula_139 The collection of all projection bands in a Riesz space forms a Boolean algebra. Some spaces do not have non-trivial projection bands (for example, formula_140), so this Boolean algebra may be trivial. Completeness. A vector lattice is complete if every subset has both a supremum and an infimum. A vector lattice is Dedekind complete if each set with an upper bound has a supremum and each set with a lower bound has an infimum. An order complete, regularly ordered vector lattice whose canonical image in its order bidual is order complete is called minimal and is said to be of minimal type. Subspaces, quotients, and products. Sublattices If formula_141 is a vector subspace of a preordered vector space formula_0 then the canonical ordering on formula_141 induced by formula_0's positive cone formula_23 is the preorder induced by the pointed convex cone formula_142 where this cone is proper if formula_23 is proper (that is, if formula_143). A sublattice of a vector lattice formula_0 is a vector subspace formula_141 of formula_0 such that for all formula_144 formula_145 belongs to formula_141 (importantly, note that this supremum is taken in formula_0 and not in formula_141). If formula_146 with formula_147 then the 2-dimensional vector subspace formula_141 of formula_0 defined by all maps of the form formula_148 (where formula_149) is a vector lattice under the induced order but is not a sublattice of formula_121 This is despite formula_0 being an order complete Archimedean ordered topological vector lattice. Furthermore, there exists a vector sublattice formula_116 of this space formula_0 such that formula_150 has empty interior in formula_0 but no positive linear functional on formula_116 can be extended to a positive linear functional on formula_121 Quotient lattices Let formula_141 be a vector subspace of an ordered vector space formula_0 having positive cone formula_151 let formula_152 be the canonical projection, and let formula_153 Then formula_154 is a cone in formula_155 that induces a canonical preordering on the quotient space formula_156 If formula_154 is a proper cone in formula_155 then formula_154 makes formula_155 into an ordered vector space. If formula_141 is formula_23-saturated then formula_154 defines the canonical order of formula_156 Note that formula_157 provides an example of an ordered vector space where formula_158 is not a proper cone. If formula_0 is a vector lattice and formula_116 is a solid vector subspace of formula_0 then formula_154 defines the canonical order of formula_155 under which formula_159 is a vector lattice and the canonical map formula_152 is a vector lattice homomorphism. Furthermore, if formula_0 is order complete and formula_141 is a band in formula_0 then formula_155 is isomorphic with formula_160 Also, if formula_141 is solid then the order topology of formula_155 is the quotient of the order topology on formula_121 If formula_0 is a topological vector lattice and formula_141 is a closed solid sublattice of formula_0 then formula_161 is also a topological vector lattice.
Product If formula_1 is any set then the space formula_162 of all functions from formula_1 into formula_0 is canonically ordered by the proper cone formula_163 Suppose that formula_164 is a family of preordered vector spaces and that the positive cone of formula_165 is formula_166 Then formula_167 is a pointed convex cone in formula_168 which determines a canonical ordering on formula_169; formula_23 is a proper cone if all formula_170 are proper cones. Algebraic direct sum The algebraic direct sum formula_171 of formula_164 is a vector subspace of formula_169 that is given the canonical subspace ordering inherited from formula_172 If formula_173 are ordered vector subspaces of an ordered vector space formula_0 then formula_0 is the ordered direct sum of these subspaces if the canonical algebraic isomorphism of formula_0 onto formula_174 (with the canonical product order) is an order isomorphism. Spaces of linear maps. A cone formula_23 in a vector space formula_0 is said to be generating if formula_175 is equal to the whole vector space. If formula_0 and formula_176 are two non-trivial ordered vector spaces with respective positive cones formula_177 and formula_178 then formula_177 is generating in formula_0 if and only if the set formula_179 is a proper cone in formula_180 which is the space of all linear maps from formula_0 into formula_181 In this case the ordering defined by formula_23 is called the canonical ordering of formula_182 More generally, if formula_141 is any vector subspace of formula_183 such that formula_184 is a proper cone, the ordering defined by formula_184 is called the canonical ordering of formula_185 A linear map formula_186 between two preordered vector spaces formula_0 and formula_187 with respective positive cones formula_23 and formula_188 is called positive if formula_189 If formula_0 and formula_187 are vector lattices with formula_187 order complete and if formula_190 is the set of all positive linear maps from formula_0 into formula_187 then the subspace formula_191 of formula_192 is an order complete vector lattice under its canonical order; furthermore, formula_141 contains exactly those linear maps that map order intervals of formula_0 into order intervals of formula_193 Positive functionals and the order dual. A linear function formula_131 on a preordered vector space is called positive if formula_54 implies formula_194 The set of all positive linear forms on a vector space, denoted by formula_195 is a cone equal to the polar of formula_196 The order dual of an ordered vector space formula_0 is the set, denoted by formula_197 defined by formula_198 Although formula_199 there do exist ordered vector spaces for which set equality does not hold. Vector lattice homomorphism. Suppose that formula_0 and formula_187 are preordered vector lattices with positive cones formula_23 and formula_188 and let formula_186 be a map. Then formula_113 is a preordered vector lattice homomorphism if formula_113 is linear and if any one of the following equivalent conditions hold: A pre-ordered vector lattice homomorphism that is bijective is a pre-ordered vector lattice isomorphism. A pre-ordered vector lattice homomorphism between two Riesz spaces is called a vector lattice homomorphism; if it is also bijective, then it is called a vector lattice isomorphism. 
If formula_113 is a non-zero linear functional on a vector lattice formula_0 with positive cone formula_23 then the following are equivalent: An extreme ray of the cone formula_23 is a set formula_200 where formula_201 formula_34 is non-zero, and if formula_202 is such that formula_203 then formula_204 for some formula_205 such that formula_206 A vector lattice homomorphism from formula_0 into formula_187 is a topological homomorphism when formula_0 and formula_187 are given their respective order topologies. Projection properties. There are numerous projection properties that Riesz spaces may have. A Riesz space is said to have the (principal) projection property if every (principal) band is a projection band. The so-called main inclusion theorem relates the following additional properties to the (principal) projection property: a Riesz space may be super Dedekind complete (SDC), Dedekind complete (DC), Dedekind formula_133-complete, or Archimedean. These properties are related as follows: SDC implies DC; DC implies both Dedekind formula_133-completeness and the projection property; both Dedekind formula_133-completeness and the projection property separately imply the principal projection property; and the principal projection property implies the Archimedean property. None of the reverse implications hold, but Dedekind formula_133-completeness and the projection property together imply DC. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
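As a concrete illustration of the lattice operations discussed in this article, Euclidean n-space with the coordinatewise (pointwise) order is a Riesz space in which suprema and infima are computed elementwise. The following sketch is illustrative only (it assumes NumPy, and the helper names and sample vectors are arbitrary); it checks the decomposition of an element into its positive and negative parts, the expression for the absolute value, the identity x + y = sup{x, y} + inf{x, y}, and the disjointness of the positive and negative parts.

```python
import numpy as np

# R^n with the coordinatewise order is a Riesz space: sup and inf are the
# elementwise maximum and minimum, so lattice operations are easy to compute.
def sup(x, y): return np.maximum(x, y)
def inf(x, y): return np.minimum(x, y)
def pos(x):    return np.maximum(x, 0)    # positive part, sup{x, 0}
def neg(x):    return np.maximum(-x, 0)   # negative part, sup{-x, 0}
def absval(x): return np.maximum(x, -x)   # absolute value, sup{x, -x}

x = np.array([ 3.0, -2.0, 0.0,  5.0])
y = np.array([-1.0,  4.0, 2.0, -7.0])

print(np.array_equal(x, pos(x) - neg(x)))                # x = x+ minus x-
print(np.array_equal(absval(x), pos(x) + neg(x)))        # |x| = x+ plus x-
print(np.array_equal(x + y, sup(x, y) + inf(x, y)))      # x + y = sup + inf
print(np.array_equal(inf(pos(x), neg(x)), np.zeros(4)))  # x+ and x- disjoint
```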
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "b \\in X" }, { "math_id": 3, "text": "s \\leq b" }, { "math_id": 4, "text": "s \\geq b" }, { "math_id": 5, "text": "s \\in S." }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "b" }, { "math_id": 8, "text": "S," }, { "math_id": 9, "text": "a \\leq b" }, { "math_id": 10, "text": "a \\geq b" }, { "math_id": 11, "text": "E" }, { "math_id": 12, "text": "\\,\\leq,\\," }, { "math_id": 13, "text": "x, y, z \\in E" }, { "math_id": 14, "text": "x \\leq y" }, { "math_id": 15, "text": "x + z \\leq y + z." }, { "math_id": 16, "text": "0 \\leq a," }, { "math_id": 17, "text": "a x \\leq a y." }, { "math_id": 18, "text": "x, y \\in E," }, { "math_id": 19, "text": "x \\vee y" }, { "math_id": 20, "text": "\\,(\\leq).\\," }, { "math_id": 21, "text": "E." }, { "math_id": 22, "text": "\\R" }, { "math_id": 23, "text": "C" }, { "math_id": 24, "text": "\\,\\geq 0" }, { "math_id": 25, "text": "E = C - C" }, { "math_id": 26, "text": "x, y \\in C" }, { "math_id": 27, "text": "\\sup\\{x, y\\}" }, { "math_id": 28, "text": "\\inf \\{x, y\\}" }, { "math_id": 29, "text": "[a, b] = \\{ x : a \\leq x \\leq b \\}." }, { "math_id": 30, "text": "[-x, x]" }, { "math_id": 31, "text": "x, y \\in [a, b]" }, { "math_id": 32, "text": "t \\in (0, 1)" }, { "math_id": 33, "text": "t x (1 - t) y \\in [a, b]." }, { "math_id": 34, "text": "x" }, { "math_id": 35, "text": "V" }, { "math_id": 36, "text": "V^b." }, { "math_id": 37, "text": "A" }, { "math_id": 38, "text": "B \\subseteq A" }, { "math_id": 39, "text": "B" }, { "math_id": 40, "text": "A," }, { "math_id": 41, "text": "\\sup B" }, { "math_id": 42, "text": "\\inf B" }, { "math_id": 43, "text": "A." }, { "math_id": 44, "text": "n." }, { "math_id": 45, "text": "\\R^{n}" }, { "math_id": 46, "text": "k" }, { "math_id": 47, "text": "2 \\leq k \\leq n" }, { "math_id": 48, "text": "\\R^k_L \\times \\R^{n-k}" }, { "math_id": 49, "text": "\\R^{n-k}" }, { "math_id": 50, "text": "\\R^k_L" }, { "math_id": 51, "text": "\\R^k" }, { "math_id": 52, "text": "X," }, { "math_id": 53, "text": "\\sup A = - \\inf (-A)" }, { "math_id": 54, "text": "x \\geq 0" }, { "math_id": 55, "text": "y \\geq 0" }, { "math_id": 56, "text": "[0, x] + [0, y] = [0, x + y]." }, { "math_id": 57, "text": "a, b, x, \\text{ and } y" }, { "math_id": 58, "text": "a - \\inf (x, y) + b = \\sup (a - x + b, a - y + b)." }, { "math_id": 59, "text": "x," }, { "math_id": 60, "text": "|x|," }, { "math_id": 61, "text": "| x|:= \\sup \\{x, -x\\}," }, { "math_id": 62, "text": "-|x| \\leq x \\leq |x|" }, { "math_id": 63, "text": "|x| \\geq 0." }, { "math_id": 64, "text": "x, y \\in X" }, { "math_id": 65, "text": "r," }, { "math_id": 66, "text": "|r x| = |r| |x|" }, { "math_id": 67, "text": "|x + y| \\leq |x| + |y|." }, { "math_id": 68, "text": "x \\text{ and } y" }, { "math_id": 69, "text": "\\inf \\{|x|, |y|\\} = 0," }, { "math_id": 70, "text": "x \\perp y." }, { "math_id": 71, "text": "\\sup\\{|x|, |y|\\} = |x| + |y|." }, { "math_id": 72, "text": "| x + y|=|x|+|y |" }, { "math_id": 73, "text": "(x + y)^+ = x^+ + y^+," }, { "math_id": 74, "text": "z," }, { "math_id": 75, "text": "z^+ := \\sup \\{z, 0\\}" }, { "math_id": 76, "text": "z^- := \\sup \\{-z, 0\\}." }, { "math_id": 77, "text": "a \\in A" }, { "math_id": 78, "text": "b \\in B," }, { "math_id": 79, "text": "A \\perp B." }, { "math_id": 80, "text": "\\{ a \\}" }, { "math_id": 81, "text": "a \\perp B" }, { "math_id": 82, "text": "\\{a\\} \\perp B." 
}, { "math_id": 83, "text": "A^{\\perp} := \\left\\{ x \\in X : x \\perp A \\right\\}." }, { "math_id": 84, "text": "x = \\sup A" }, { "math_id": 85, "text": "\\{x\\}." }, { "math_id": 86, "text": "x \\in X," }, { "math_id": 87, "text": "x^+ := \\sup \\{x, 0\\}" }, { "math_id": 88, "text": "x^- := \\sup \\{-x, 0\\}," }, { "math_id": 89, "text": "\\geq 0" }, { "math_id": 90, "text": "x = x^+ - x^-" }, { "math_id": 91, "text": "|x|= x^+ + x^-." }, { "math_id": 92, "text": "x^+" }, { "math_id": 93, "text": "x^{-}" }, { "math_id": 94, "text": "\\geq 0." }, { "math_id": 95, "text": "x, y \\in X," }, { "math_id": 96, "text": "\\left|x^+ - y^+\\right| \\leq|x - y|" }, { "math_id": 97, "text": "x + y = \\sup \\{x, y\\} + \\inf \\{x, y\\}." }, { "math_id": 98, "text": "x^+ \\leq y." }, { "math_id": 99, "text": "x^+ \\leq y^+" }, { "math_id": 100, "text": "x^- \\leq y^-." }, { "math_id": 101, "text": "x, y, z \\in X" }, { "math_id": 102, "text": "x \\wedge (y \\vee z) = (x \\wedge y) \\vee (x \\wedge z)" }, { "math_id": 103, "text": "x \\vee (y \\wedge z) = (x \\vee y) \\wedge (x \\vee z)" }, { "math_id": 104, "text": "(x \\wedge y) \\vee (y \\wedge z) \\vee (z \\wedge x) = (x \\vee y) \\wedge (y \\vee z) \\wedge (z \\vee x)." }, { "math_id": 105, "text": "x \\wedge z = y \\wedge z" }, { "math_id": 106, "text": "x \\vee z = y \\vee z" }, { "math_id": 107, "text": "x = y." }, { "math_id": 108, "text": "\\left\\{x_n\\right\\}" }, { "math_id": 109, "text": "x_n \\downarrow x" }, { "math_id": 110, "text": "x_n \\uparrow x" }, { "math_id": 111, "text": "\\left\\{p_n\\right\\}" }, { "math_id": 112, "text": "\\left|x_n - x\\right| < p_n \\downarrow 0." }, { "math_id": 113, "text": "u" }, { "math_id": 114, "text": "\\left\\{ x_n \\right\\}" }, { "math_id": 115, "text": "r > 0" }, { "math_id": 116, "text": "N" }, { "math_id": 117, "text": "\\left|x_n - x\\right| < r u" }, { "math_id": 118, "text": "n > N." }, { "math_id": 119, "text": "F" }, { "math_id": 120, "text": "x, y \\in F," }, { "math_id": 121, "text": "X." }, { "math_id": 122, "text": "I" }, { "math_id": 123, "text": "f \\in I" }, { "math_id": 124, "text": "g \\in E," }, { "math_id": 125, "text": "|g| \\leq |f|" }, { "math_id": 126, "text": "g \\in I." }, { "math_id": 127, "text": "E," }, { "math_id": 128, "text": "f \\in E" }, { "math_id": 129, "text": "|f|" }, { "math_id": 130, "text": "B," }, { "math_id": 131, "text": "f" }, { "math_id": 132, "text": "B." }, { "math_id": 133, "text": "\\sigma" }, { "math_id": 134, "text": "E = B \\oplus B^{\\bot}," }, { "math_id": 135, "text": "f = u + v" }, { "math_id": 136, "text": "u \\in B" }, { "math_id": 137, "text": "v \\in B^{\\bot}." }, { "math_id": 138, "text": "P_B : E \\to E," }, { "math_id": 139, "text": "P_B(f) = u." }, { "math_id": 140, "text": "C([0, 1])" }, { "math_id": 141, "text": "M" }, { "math_id": 142, "text": "C \\cap M," }, { "math_id": 143, "text": "C \\cap (- C) = \\varnothing" }, { "math_id": 144, "text": "x, y \\in M," }, { "math_id": 145, "text": "\\sup_{}{}_X (x, y)" }, { "math_id": 146, "text": "X = L^p([0, 1], \\mu)" }, { "math_id": 147, "text": "0 < p < 1," }, { "math_id": 148, "text": "t \\mapsto a t + b" }, { "math_id": 149, "text": "a, b \\in \\R" }, { "math_id": 150, "text": "N \\cap C" }, { "math_id": 151, "text": "C," }, { "math_id": 152, "text": "\\pi : X \\to X / M" }, { "math_id": 153, "text": "\\hat{C} := \\pi(C)." }, { "math_id": 154, "text": "\\hat{C}" }, { "math_id": 155, "text": "X / M" }, { "math_id": 156, "text": "X / M." 
}, { "math_id": 157, "text": "X=\\R^2_{0}" }, { "math_id": 158, "text": "\\pi(C)" }, { "math_id": 159, "text": "L / M" }, { "math_id": 160, "text": "M^{\\bot}." }, { "math_id": 161, "text": "X / L" }, { "math_id": 162, "text": "X^S" }, { "math_id": 163, "text": "\\left\\{ f \\in X^S : f(s) \\in C \\text{ for all } s \\in S \\right\\}." }, { "math_id": 164, "text": "\\left\\{ X_\\alpha : \\alpha \\in A \\right\\}" }, { "math_id": 165, "text": "X_\\alpha" }, { "math_id": 166, "text": "C_\\alpha." }, { "math_id": 167, "text": "C := \\prod_{\\alpha} C_\\alpha" }, { "math_id": 168, "text": "\\prod_\\alpha X_\\alpha," }, { "math_id": 169, "text": "\\prod_\\alpha X_\\alpha" }, { "math_id": 170, "text": "C_\\alpha" }, { "math_id": 171, "text": "\\bigoplus_\\alpha X_\\alpha" }, { "math_id": 172, "text": "\\prod_\\alpha X_\\alpha." }, { "math_id": 173, "text": "X_1, \\ldots, X_n" }, { "math_id": 174, "text": "\\prod_\\alpha X_{\\alpha}" }, { "math_id": 175, "text": "C - C" }, { "math_id": 176, "text": "W" }, { "math_id": 177, "text": "P" }, { "math_id": 178, "text": "Q," }, { "math_id": 179, "text": "C = \\{ u \\in \\operatorname{L}(X; W) : u(P) \\subseteq Q \\}" }, { "math_id": 180, "text": "\\operatorname{L}(X; W)," }, { "math_id": 181, "text": "W." }, { "math_id": 182, "text": "\\operatorname{L}(X; W)." }, { "math_id": 183, "text": "\\operatorname{L}(X; W)" }, { "math_id": 184, "text": "C \\cap M" }, { "math_id": 185, "text": "M." }, { "math_id": 186, "text": "u : X \\to Y" }, { "math_id": 187, "text": "Y" }, { "math_id": 188, "text": "D" }, { "math_id": 189, "text": "u(X) \\subseteq D." }, { "math_id": 190, "text": "H" }, { "math_id": 191, "text": "M := H - H" }, { "math_id": 192, "text": "\\operatorname{L}(X; Y)" }, { "math_id": 193, "text": "Y." }, { "math_id": 194, "text": "f(x) \\geq 0." }, { "math_id": 195, "text": "C^*," }, { "math_id": 196, "text": "- C." }, { "math_id": 197, "text": "X^+," }, { "math_id": 198, "text": "X^+ := C^* - C^*." }, { "math_id": 199, "text": "X^+ \\subseteq X^b," }, { "math_id": 200, "text": "\\{ r x : r \\geq 0 \\}" }, { "math_id": 201, "text": "x \\in C," }, { "math_id": 202, "text": "y \\in C" }, { "math_id": 203, "text": "x - y \\in C" }, { "math_id": 204, "text": "y = s x" }, { "math_id": 205, "text": "s" }, { "math_id": 206, "text": "0 \\leq s \\leq 1." }, { "math_id": 207, "text": "y" }, { "math_id": 208, "text": "n x \\leq y" }, { "math_id": 209, "text": "n" }, { "math_id": 210, "text": "x = 0" }, { "math_id": 211, "text": "f \\leq g" }, { "math_id": 212, "text": "f(x) \\leq g(x)" }, { "math_id": 213, "text": "L^p" }, { "math_id": 214, "text": "\\R^2" } ]
https://en.wikipedia.org/wiki?curid=8099018
8099221
Ordered vector space
Vector space with a partial order In mathematics, an ordered vector space or partially ordered vector space is a vector space equipped with a partial order that is compatible with the vector space operations. Definition. Given a vector space formula_4 over the real numbers formula_5 and a preorder formula_6 on the set formula_7 the pair formula_8 is called a preordered vector space and we say that the preorder formula_6 is compatible with the vector space structure of formula_4 and call formula_6 a vector preorder on formula_4 if for all formula_9 and formula_10 with formula_11 the following two axioms are satisfied:
1. formula_3 implies formula_12
2. formula_13 implies formula_14
If formula_6 is a partial order compatible with the vector space structure of formula_4 then formula_8 is called an ordered vector space and formula_6 is called a vector partial order on formula_15 The two axioms imply that translations and positive homotheties are automorphisms of the order structure and the mapping formula_16 is an isomorphism to the dual order structure. Ordered vector spaces are ordered groups under their addition operation. Note that formula_3 if and only if formula_17 Positive cones and their equivalence to orderings. A subset formula_18 of a vector space formula_4 is called a cone if for all real formula_19 formula_20 A cone is called pointed if it contains the origin. A cone formula_18 is convex if and only if formula_23 Equivalently, formula_18 is convex if and only if for all formula_21 we have formula_22. The intersection of any non-empty family of cones (resp. convex cones) is again a cone (resp. convex cone); the same is true of the union of an increasing (under set inclusion) family of cones (resp. convex cones). A cone formula_18 in a vector space formula_4 is said to be generating if formula_24 Given a preordered vector space formula_7 the subset formula_25 of all elements formula_0 in formula_8 satisfying formula_26 is a pointed convex cone (that is, a convex cone containing formula_27) called the positive cone of formula_4 and denoted by formula_28 The elements of the positive cone are called positive. If formula_0 and formula_2 are elements of a preordered vector space formula_29 then formula_3 if and only if formula_30 The positive cone is generating if and only if formula_4 is a directed set under formula_31 Given any pointed convex cone formula_18 one may define a preorder formula_6 on formula_4 that is compatible with the vector space structure of formula_4 by declaring for all formula_32 that formula_3 if and only if formula_33 the positive cone of this resulting preordered vector space is formula_34 There is thus a one-to-one correspondence between pointed convex cones and vector preorders on formula_15 If formula_4 is preordered then we may form an equivalence relation on formula_4 by defining formula_0 is equivalent to formula_2 if and only if formula_3 and formula_35 if formula_36 is the equivalence class containing the origin then formula_36 is a vector subspace of formula_4 and formula_37 is an ordered vector space under the relation: formula_38 if and only if there exist formula_39 and formula_40 such that formula_41 A subset formula_18 of a vector space formula_4 is called a proper cone if it is a convex cone satisfying formula_42 Explicitly, formula_18 is a proper cone if (1) formula_43 (2) formula_44 for all formula_19 and (3) formula_42 The intersection of any non-empty family of proper cones is again a proper cone.
Each proper cone formula_18 in a real vector space induces an order on the vector space by defining formula_3 if and only if formula_45 and furthermore, the positive cone of this ordered vector space will be formula_34 Therefore, there exists a one-to-one correspondence between the proper convex cones of formula_4 and the vector partial orders on formula_15 By a total vector ordering on formula_4 we mean a total order on formula_4 that is compatible with the vector space structure of formula_15 The family of total vector orderings on a vector space formula_4 is in one-to-one correspondence with the family of all proper cones that are maximal under set inclusion. A total vector ordering "cannot" be Archimedean if its dimension, when considered as a vector space over the reals, is greater than 1. If formula_46 and formula_47 are two orderings of a vector space with positive cones formula_48 and formula_49 respectively, then we say that formula_46 is finer than formula_47 if formula_50 Examples. The real numbers with the usual ordering form a totally ordered vector space. For all integers formula_51 the Euclidean space formula_52 considered as a vector space over the reals with the lexicographic ordering forms a preordered vector space whose order is Archimedean if and only if formula_53. Pointwise order. If formula_47 is any set and if formula_4 is a vector space (over the reals) of real-valued functions on formula_54 then the pointwise order on formula_4 is given by, for all formula_55 formula_56 if and only if formula_57 for all formula_58 Spaces that are typically assigned this order include: The space formula_67 of all measurable almost-everywhere bounded real-valued maps on formula_68 where the preorder is defined for all formula_69 by formula_56 if and only if formula_57 almost everywhere. Intervals and the order bound dual. An order interval in a preordered vector space is a set of the form formula_70 From axioms 1 and 2 above it follows that formula_71 and formula_72 imply that formula_73 belongs to formula_74 thus these order intervals are convex. A subset is said to be order bounded if it is contained in some order interval. In a preordered real vector space, if formula_26 then the interval of the form formula_75 is balanced. An order unit of a preordered vector space is any element formula_0 such that the set formula_75 is absorbing. The set of all linear functionals on a preordered vector space formula_4 that map every order interval into a bounded set is called the order bound dual of formula_4 and denoted by formula_76 If a space is ordered then its order bound dual is a vector subspace of its algebraic dual. A subset formula_77 of an ordered vector space formula_4 is called order complete if for every non-empty subset formula_78 such that formula_79 is order bounded in formula_80 both formula_81 and formula_82 exist and are elements of formula_83 We say that an ordered vector space formula_4 is order complete if formula_4 is an order complete subset of formula_15 Examples. If formula_8 is a preordered vector space over the reals with order unit formula_84 then the map formula_85 is a sublinear functional. Properties. If formula_4 is a preordered vector space then for all formula_32 Spaces of linear maps. A cone formula_18 is said to be generating if formula_102 is equal to the whole vector space.
If formula_4 and formula_103 are two non-trivial ordered vector spaces with respective positive cones formula_48 and formula_49 then formula_48 is generating in formula_4 if and only if the set formula_104 is a proper cone in formula_105 which is the space of all linear maps from formula_4 into formula_106 In this case, the ordering defined by formula_18 is called the canonical ordering of formula_107 More generally, if formula_108 is any vector subspace of formula_109 such that formula_110 is a proper cone, the ordering defined by formula_110 is called the canonical ordering of formula_111 Positive functionals and the order dual. A linear function formula_112 on a preordered vector space is called positive if it satisfies either of the following equivalent conditions: The set of all positive linear forms on a vector space with positive cone formula_115 called the dual cone and denoted by formula_116 is a cone equal to the polar of formula_117 The preorder induced by the dual cone on the space of linear functionals on formula_4 is called the dual preorder. The order dual of an ordered vector space formula_4 is the set, denoted by formula_118 defined by formula_119 Although formula_120 there do exist ordered vector spaces for which set equality does not hold. Special types of ordered vector spaces. Let formula_4 be an ordered vector space. We say that an ordered vector space formula_4 is Archimedean ordered and that the order of formula_4 is Archimedean if whenever formula_0 in formula_4 is such that formula_121 is majorized (that is, there exists some formula_122 such that formula_123 for all formula_124) then formula_125 A topological vector space (TVS) that is an ordered vector space is necessarily Archimedean if its positive cone is closed. We say that a preordered vector space formula_4 is regularly ordered and that its order is regular if it is Archimedean ordered and formula_25 distinguishes points in formula_15 This property guarantees that there are sufficiently many positive linear forms to be able to successfully use the tools of duality to study ordered vector spaces. An ordered vector space is called a vector lattice if for all elements formula_0 and formula_126 the supremum formula_127 and infimum formula_128 exist. Subspaces, quotients, and products. Throughout let formula_4 be a preordered vector space with positive cone formula_34 Subspaces If formula_108 is a vector subspace of formula_4 then the canonical ordering on formula_108 induced by formula_4's positive cone formula_18 is the partial order induced by the pointed convex cone formula_129 where this cone is proper if formula_18 is proper. Quotient space Let formula_108 be a vector subspace of an ordered vector space formula_7 formula_130 be the canonical projection, and let formula_131 Then formula_132 is a cone in formula_133 that induces a canonical preordering on the quotient space formula_134 If formula_132 is a proper cone in formula_133 then formula_132 makes formula_133 into an ordered vector space. If formula_108 is formula_18-saturated then formula_132 defines the canonical order of formula_134 Note that formula_135 provides an example of an ordered vector space where formula_136 is not a proper cone.
If formula_4 is also a topological vector space (TVS) and if for each neighborhood formula_137 of the origin in formula_4 there exists a neighborhood formula_138 of the origin such that formula_139 then formula_132 is a normal cone for the quotient topology. If formula_4 is a topological vector lattice and formula_108 is a closed solid sublattice of formula_4 then formula_140 is also a topological vector lattice. Product If formula_47 is any set then the space formula_141 of all functions from formula_47 into formula_4 is canonically ordered by the proper cone formula_142 Suppose that formula_143 is a family of preordered vector spaces and that the positive cone of formula_144 is formula_145 Then formula_146 is a pointed convex cone in formula_147 which determines a canonical ordering on formula_148 formula_18 is a proper cone if all formula_149 are proper cones. Algebraic direct sum The algebraic direct sum formula_150 of formula_143 is a vector subspace of formula_151 that is given the canonical subspace ordering inherited from formula_152 If formula_153 are ordered vector subspaces of an ordered vector space formula_4 then formula_4 is the ordered direct sum of these subspaces if the canonical algebraic isomorphism of formula_4 onto formula_154 (with the canonical product order) is an order isomorphism. Only the second order is, as a subset of formula_171 closed; see partial orders in topological spaces. For the third order the two-dimensional "intervals" formula_172 are open sets which generate the topology. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
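The orderings described in this article are easy to experiment with numerically. The following Python sketch is an illustration added here, not part of the article; the helper names lex_leq and prod_leq are ad hoc. It compares the lexicographic order with the coordinatewise (pointwise) order on formula_1 and exhibits the non-Archimedean behaviour of the lexicographic order noted above: every multiple of (0, 1) is majorized by (1, 0), yet (0, 1) is strictly positive in that order.

# Illustrative sketch (not from the article): two vector orders on R^2.
# lex_leq is the lexicographic order, prod_leq the coordinatewise order.

def lex_leq(u, v):
    (a, b), (c, d) = u, v
    return a < c or (a == c and b <= d)

def prod_leq(u, v):
    (a, b), (c, d) = u, v
    return a <= c and b <= d

x, y = (0.0, 1.0), (1.0, 0.0)

# The lexicographic order is total: any two vectors are comparable.
assert lex_leq(x, y) or lex_leq(y, x)

# The coordinatewise order is only partial: x and y are incomparable.
assert not prod_leq(x, y) and not prod_leq(y, x)

# Non-Archimedean behaviour: n*(0, 1) <= (1, 0) lexicographically for every n,
# although (0, 1) is strictly positive in that order.
for n in (1, 10, 1000, 10**6):
    assert lex_leq((n * x[0], n * x[1]), y)

print("lexicographic: total, non-Archimedean; coordinatewise: partial order")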
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "\\Reals^2" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "x \\leq y" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "\\Reals" }, { "math_id": 6, "text": "\\,\\leq\\," }, { "math_id": 7, "text": "X," }, { "math_id": 8, "text": "(X, \\leq)" }, { "math_id": 9, "text": "x, y, z \\in X" }, { "math_id": 10, "text": "r \\in \\Reals" }, { "math_id": 11, "text": "r \\geq 0" }, { "math_id": 12, "text": "x + z \\leq y + z," }, { "math_id": 13, "text": "y \\leq x" }, { "math_id": 14, "text": "r y \\leq r x." }, { "math_id": 15, "text": "X." }, { "math_id": 16, "text": "x \\mapsto -x" }, { "math_id": 17, "text": "-y \\leq -x." }, { "math_id": 18, "text": "C" }, { "math_id": 19, "text": "r > 0," }, { "math_id": 20, "text": "r C \\subseteq C," }, { "math_id": 21, "text": "c,c'\\in C" }, { "math_id": 22, "text": "c+c'\\in C" }, { "math_id": 23, "text": "C + C \\subseteq C." }, { "math_id": 24, "text": "X = C - C." }, { "math_id": 25, "text": "X^+" }, { "math_id": 26, "text": "x \\geq 0" }, { "math_id": 27, "text": "0" }, { "math_id": 28, "text": "\\operatorname{PosCone} X." }, { "math_id": 29, "text": "(X, \\leq)," }, { "math_id": 30, "text": "y - x \\in X^+." }, { "math_id": 31, "text": "\\,\\leq." }, { "math_id": 32, "text": "x, y \\in X," }, { "math_id": 33, "text": "y - x \\in C;" }, { "math_id": 34, "text": "C." }, { "math_id": 35, "text": "y \\leq x;" }, { "math_id": 36, "text": "N" }, { "math_id": 37, "text": "X / N" }, { "math_id": 38, "text": "A \\leq B" }, { "math_id": 39, "text": "a \\in A" }, { "math_id": 40, "text": "b \\in B" }, { "math_id": 41, "text": "a \\leq b." }, { "math_id": 42, "text": "C \\cap (- C) = \\{0\\}." }, { "math_id": 43, "text": "C + C \\subseteq C," }, { "math_id": 44, "text": "r C \\subseteq C" }, { "math_id": 45, "text": "y - x \\in C," }, { "math_id": 46, "text": "R" }, { "math_id": 47, "text": "S" }, { "math_id": 48, "text": "P" }, { "math_id": 49, "text": "Q," }, { "math_id": 50, "text": "P \\subseteq Q." }, { "math_id": 51, "text": "n \\geq 0," }, { "math_id": 52, "text": "\\Reals^n" }, { "math_id": 53, "text": "n = 1" }, { "math_id": 54, "text": "S," }, { "math_id": 55, "text": "f, g \\in X," }, { "math_id": 56, "text": "f \\leq g" }, { "math_id": 57, "text": "f(s) \\leq g(s)" }, { "math_id": 58, "text": "s \\in S." }, { "math_id": 59, "text": "\\ell^\\infty(S, \\Reals)" }, { "math_id": 60, "text": "S." }, { "math_id": 61, "text": "c_0(\\Reals)" }, { "math_id": 62, "text": "0." }, { "math_id": 63, "text": "C(S, \\Reals)" }, { "math_id": 64, "text": "n," }, { "math_id": 65, "text": "C(\\{1, \\dots, n\\}, \\Reals)" }, { "math_id": 66, "text": "S = \\{1, \\dots, n\\}" }, { "math_id": 67, "text": "\\mathcal{L}^\\infty(\\Reals, \\Reals)" }, { "math_id": 68, "text": "\\Reals," }, { "math_id": 69, "text": "f, g \\in \\mathcal{L}^\\infty(\\Reals, \\Reals)" }, { "math_id": 70, "text": "\\begin{alignat}{4}\n[a, b] &= \\{x : a \\leq x \\leq b\\}, \\\\[0.1ex]\n[a, b[ &= \\{x : a \\leq x < b\\}, \\\\\n]a, b] &= \\{x : a < x \\leq b\\}, \\text{ or } \\\\\n]a, b[ &= \\{x : a < x < b\\}. \\\\\n\\end{alignat}" }, { "math_id": 71, "text": "x, y \\in [a, b]" }, { "math_id": 72, "text": "0 < t < 1" }, { "math_id": 73, "text": "t x + (1 - t) y" }, { "math_id": 74, "text": "[a, b];" }, { "math_id": 75, "text": "[-x, x]" }, { "math_id": 76, "text": "X^{\\operatorname{b}}." 
}, { "math_id": 77, "text": "A" }, { "math_id": 78, "text": "B \\subseteq A" }, { "math_id": 79, "text": "B" }, { "math_id": 80, "text": "A," }, { "math_id": 81, "text": "\\sup B" }, { "math_id": 82, "text": "\\inf B" }, { "math_id": 83, "text": "A." }, { "math_id": 84, "text": "u," }, { "math_id": 85, "text": "p(x) := \\inf \\{t \\in \\Reals : x \\leq t u\\}" }, { "math_id": 86, "text": "y \\geq 0" }, { "math_id": 87, "text": "x + y \\geq 0." }, { "math_id": 88, "text": "r < 0" }, { "math_id": 89, "text": "r x \\geq r y." }, { "math_id": 90, "text": "y = \\sup \\{x, y\\}" }, { "math_id": 91, "text": "x = \\inf \\{x, y\\}" }, { "math_id": 92, "text": "\\sup \\{x, y\\}" }, { "math_id": 93, "text": "\\inf \\{-x, -y\\}" }, { "math_id": 94, "text": "\\inf \\{-x, -y\\} = - \\sup \\{x, y\\}." }, { "math_id": 95, "text": "\\inf \\{x, y\\}" }, { "math_id": 96, "text": "z \\in X," }, { "math_id": 97, "text": "\\sup \\{x + z, y + z\\} = z + \\sup \\{x, y\\}," }, { "math_id": 98, "text": "\\inf \\{x + z, y + z\\} = z + \\inf \\{x, y\\}" }, { "math_id": 99, "text": "x + y = \\inf\\{x, y\\} + \\sup \\{x, y\\}." }, { "math_id": 100, "text": "\\sup \\{0, x\\}" }, { "math_id": 101, "text": "x \\in X." }, { "math_id": 102, "text": "C - C" }, { "math_id": 103, "text": "W" }, { "math_id": 104, "text": "C = \\{u \\in L(X; W) : u(P) \\subseteq Q\\}" }, { "math_id": 105, "text": "L(X; W)," }, { "math_id": 106, "text": "W." }, { "math_id": 107, "text": "L(X; W)." }, { "math_id": 108, "text": "M" }, { "math_id": 109, "text": "L(X; W)" }, { "math_id": 110, "text": "C \\cap M" }, { "math_id": 111, "text": "M." }, { "math_id": 112, "text": "f" }, { "math_id": 113, "text": "f(x) \\geq 0." }, { "math_id": 114, "text": "f(x) \\leq f(y)." }, { "math_id": 115, "text": "C," }, { "math_id": 116, "text": "C^*," }, { "math_id": 117, "text": "-C." }, { "math_id": 118, "text": "X^+," }, { "math_id": 119, "text": "X^+ := C^* - C^*." }, { "math_id": 120, "text": "X^+ \\subseteq X^b," }, { "math_id": 121, "text": "\\{n x : n \\in \\N\\}" }, { "math_id": 122, "text": "y \\in X" }, { "math_id": 123, "text": "n x \\leq y" }, { "math_id": 124, "text": "n \\in \\N" }, { "math_id": 125, "text": "x \\leq 0." }, { "math_id": 126, "text": "y," }, { "math_id": 127, "text": "\\sup (x, y)" }, { "math_id": 128, "text": "\\inf (x, y)" }, { "math_id": 129, "text": "C \\cap M," }, { "math_id": 130, "text": "\\pi : X \\to X / M" }, { "math_id": 131, "text": "\\hat{C} := \\pi(C)." }, { "math_id": 132, "text": "\\hat{C}" }, { "math_id": 133, "text": "X / M" }, { "math_id": 134, "text": "X / M." }, { "math_id": 135, "text": "X = \\Reals^2_0" }, { "math_id": 136, "text": "\\pi(C)" }, { "math_id": 137, "text": "V" }, { "math_id": 138, "text": "U" }, { "math_id": 139, "text": "[(U + N) \\cap C] \\subseteq V + N" }, { "math_id": 140, "text": "X / L" }, { "math_id": 141, "text": "X^S" }, { "math_id": 142, "text": "\\left\\{f \\in X^S : f(s) \\in C \\text{ for all } s \\in S\\right\\}." }, { "math_id": 143, "text": "\\left\\{X_\\alpha : \\alpha \\in A\\right\\}" }, { "math_id": 144, "text": "X_\\alpha" }, { "math_id": 145, "text": "C_\\alpha." }, { "math_id": 146, "text": "C := \\prod_\\alpha C_\\alpha" }, { "math_id": 147, "text": "\\prod_\\alpha X_\\alpha," }, { "math_id": 148, "text": "\\prod_\\alpha X_\\alpha;" }, { "math_id": 149, "text": "C_\\alpha" }, { "math_id": 150, "text": "\\bigoplus_\\alpha X_\\alpha" }, { "math_id": 151, "text": "\\prod_\\alpha X_\\alpha" }, { "math_id": 152, "text": "\\prod_\\alpha X_\\alpha." 
}, { "math_id": 153, "text": "X_1, \\dots, X_n" }, { "math_id": 154, "text": "\\prod_\\alpha X_\\alpha" }, { "math_id": 155, "text": "(a, b) \\leq (c, d)" }, { "math_id": 156, "text": "a < c" }, { "math_id": 157, "text": "(a = c \\text{ and } b \\leq d)." }, { "math_id": 158, "text": "x > 0" }, { "math_id": 159, "text": "(x = 0 \\text{ and } y \\geq 0)," }, { "math_id": 160, "text": "-\\pi / 2 < \\theta \\leq \\pi / 2," }, { "math_id": 161, "text": "a \\leq c" }, { "math_id": 162, "text": "b \\leq d" }, { "math_id": 163, "text": "\\leq" }, { "math_id": 164, "text": "y \\geq 0," }, { "math_id": 165, "text": "0 \\leq \\theta \\leq \\pi / 2," }, { "math_id": 166, "text": "(a < c \\text{ and } b < d)" }, { "math_id": 167, "text": "(a = c \\text{ and } b = d)" }, { "math_id": 168, "text": "(x > 0 \\text{ and } y > 0)" }, { "math_id": 169, "text": "x = y = 0)," }, { "math_id": 170, "text": "0 < \\theta < \\pi / 2," }, { "math_id": 171, "text": "\\Reals^4," }, { "math_id": 172, "text": "p < x < q" }, { "math_id": 173, "text": "x_i \\leq y_i" }, { "math_id": 174, "text": "i = 1, \\dots, n." }, { "math_id": 175, "text": "[0, 1]" }, { "math_id": 176, "text": "f(x) \\leq g(x)" }, { "math_id": 177, "text": "[0, 1]." } ]
https://en.wikipedia.org/wiki?curid=8099221
8099272
Acoustic contrast factor
In acoustics, the acoustic contrast factor is a number that describes the relationship between the densities and the sound velocities of two media, or equivalently (because of the form of the expression), the relationship between the densities and compressibilities of two media. It is most often used in the context of biomedical ultrasonic imaging techniques using acoustic contrast agents and in the field of ultrasonic manipulation of particles (acoustophoresis) much smaller than the wavelength using ultrasonic standing waves. In the latter context, the acoustic contrast factor is the number which, depending on its sign, tells whether a given type of particle in a given medium will be attracted to the pressure nodes or anti-nodes. Example - particle in a medium. In an ultrasonic standing wave field, a small spherical particle (formula_0, where formula_1 is the particle radius, and formula_2 is the wavelength) suspended in an inviscid fluid will move under the effect of an acoustic radiation force. The direction of its movement is governed by the physical properties of the particle and the surrounding medium, expressed in the form of an acoustophoretic contrast factor formula_3. Given the compressibilities formula_4 and formula_5 and densities formula_6 and formula_7 of the medium and particle, respectively, the acoustic contrast factor formula_3 can be expressed as: formula_8 For a positive value of formula_9, the particles will be attracted to the pressure nodes. For a negative value of formula_9, the particles will be attracted to the pressure anti-nodes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
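As a numerical illustration of the expression above, the following Python sketch evaluates formula_3 for a small particle in a fluid. The material values are rough assumptions (approximately a polystyrene bead in water), not data taken from this article.

# Illustrative sketch: evaluate the acoustic contrast factor defined above.
# Material values below are rough assumptions (about polystyrene in water).

def acoustic_contrast_factor(rho_p, rho_m, beta_p, beta_m):
    """phi = (5*rho_p - 2*rho_m) / (2*rho_p + rho_m) - beta_p / beta_m"""
    return (5 * rho_p - 2 * rho_m) / (2 * rho_p + rho_m) - beta_p / beta_m

rho_m, beta_m = 1000.0, 4.5e-10   # medium: water (kg/m^3, 1/Pa), approximate
rho_p, beta_p = 1050.0, 2.5e-10   # particle: polystyrene, approximate

phi = acoustic_contrast_factor(rho_p, rho_m, beta_p, beta_m)
print(f"phi = {phi:+.2f}")  # positive, so the particle is drawn to pressure nodes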
[ { "math_id": 0, "text": "a \\ll \\lambda" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "\\phi" }, { "math_id": 4, "text": "\\beta_m" }, { "math_id": 5, "text": "\\beta_p" }, { "math_id": 6, "text": "\\rho_m" }, { "math_id": 7, "text": "\\rho_p" }, { "math_id": 8, "text": "\n\\phi = {\\frac{5\\rho_p-2\\rho_m}{2\\rho_p+\\rho_m}}-{\\frac{\\beta_p}{\\beta_m}}\n" }, { "math_id": 9, "text": " \\phi " } ]
https://en.wikipedia.org/wiki?curid=8099272
80993
Luffa
Genus of vines Luffa is a genus of tropical and subtropical vines in the pumpkin, squash and gourd family (Cucurbitaceae). In everyday non-technical usage, the luffa, also spelled loofah or less frequently loofa, usually refers to the fruits of the species "Luffa aegyptiaca" and "Luffa acutangula". It is cultivated and eaten as a vegetable, but must be harvested at a young stage of development to be edible. The vegetable is popular in India, China, Nepal, Bhutan, Bangladesh and Vietnam. When the fruit fully ripens, it becomes too fibrous for eating. The fully developed fruit is the source of the loofah scrubbing sponge. Names. The name "luffa" was taken by European botanists in the 17th century from the Arabic name "lūf". In North America it is sometimes known as "Chinese okra", and in Spanish as "estropajo". Uses. Fibers. The fruit section of "L. aegyptiaca" may be allowed to mature and used as a bath or kitchen sponge after being processed to remove everything except the network of xylem fibers. If the loofah is allowed to fully ripen and then dried on the vine, the flesh disappears, leaving only the fibrous skeleton and seeds, which can be easily shaken out. Marketed as "luffa" or "loofah", the sponge is used as a body scrub in the shower. In Paraguay, panels are made out of luffa combined with other vegetable matter and recycled plastic. These can be used to create furniture and construct houses. Food. Luffa is a popular food item. There are various ways to prepare it including in soups or stir frys. Indian subcontinent. In Hindi-speaking North India states, it is called "torai" (), and cooked as vegetable. In eastern-UP it is also called "nenua" while in central/Western India, specially in Madhya Pradesh, it is called "gilki" (). "Torai" is reserved for ridge gourd and is less popular than "gilki" in central western India. In Punjabi-speaking Punjab, sponge gourd is called "tori" (ਤੋਰੀ), while ridged gourd is called "ram tori" (ਰਾਮ ਤੋਰੀ) and the fruit and flowers are mostly used in dishes. In Bhojpuri speaking regions, it is called "ghiura" (घिउरा). Apart from the fruit of the vegetable, flowers are also used as a vegetable as "chokha", "tarua", "pakoda", etc. In Nepal and Nepali language speaking Indian states, sponge /smooth gourd is called "ghiraula" (घिरौंला), while the ridged variety is called "pate ghiraula" (पाटे घिरौंला). Both are popular vegetables usually cooked with tomatoes, potatoes and served with rice. In Gujarat, ridge gourd and sponge gourd are known as "turya" (તુરીયા) and "galka" (ગલકા) in Gujarati respectively. Ridge gourd is called "ghissori" or "ghissora" (ઘિસ્સોરી/ઘિસ્સોરા) in Kutchi. They are simple yet popular vegetables, usually made with a plentiful tomato gravy and garnished with green chillies and fresh coriander. When cooked roti is shredded by hand and mixed into it, it is colloquially known as "rotli shaak ma bhuseli". Alternatively, this dish is also eaten mixed with plain cooked rice. In Bengali-speaking Bangladesh and the Indian state of West Bengal, ridge gourd is called "jhinge" (), while sponge gourd is known as "dhundhul" (), both being popular vegetables. They are eaten, fried or cooked with shrimp, fish, or meat. In the Odia language of Odisha, ridge gourd (luffa acutangula) is known as "janhi" (), while sponge gourd (luffa aegyptiaca) is called "tarada"(), both accompanying many vegetarian and non-vegetarian dishes, most notably in dishes like "khira santula", where it is boiled with minuscule spices and simmered in milk. 
Another popular version involves mashing it in groundnut oil, herbs, peanuts and topping it with the peeled skin pieces. In Assamese speaking areas of Assam, it is called "bhula" (ভোল, luffa aegyptiaca) and is cooked with sour fish curry along with "taro". A related species is called "jika" (জিকা, Luffa acutangula), which is used as a vegetable in a curry, chutney and stir fry . In Tamil language of Tamil Nadu, "Luffa acutangula" (ridged gourd) is called "peerkangai" (பீர்க்கங்காய்) and "Luffa aegyptiaca" / "Luffa cylindrica" (sponge gourd) is called "nurai peerkankai" (நுரை பீர்க்கங்காய்) and are used as vegetables to make "peerkangai kootu", "poriyal", and "thogayal". Even the skin is used to make chutney. In Karnataka's Kannada speaking areas, sponge gourd is better known as "tuppa dahirekayi" (ಟುಪ್ಪಾದ ಹೀರೆಕಾಯಿ), literally translating to "buttersquash" in English, while ridge gourd is known as "hirekayi" (ಹೀರೆಕಾಯಿ) in standard Kannada. Naturally growing in this region, it's consumed when it is still tender and green. It is used as a vegetable in curries, but also as a snack, "bhajji", dipped in chickpea batter and deep fried. In Tulu language, ridge gourd is known as Peere(ಪೀರೆ) and is used to prepare chutney and ajethna. In both Telangana and Andhra Pradesh Telugu dialects, ridge gourd is generally called "beerakaya" (బీరకాయ), while sponge gourd is called "nethi beerakaya" (సేతి బీరకాయ). It is used in making Dal, Fry, Roti Pacchadi, and wet curry. In Malayalam language of Kerala, ridge gourd is commonly called "peechinga" (പീച്ചിങ്ങ) and "poththanga" in the Palakkad dialect, while sponge gourd is called "Eeṇilla peechinga" (ഏണില്ല പീച്ചിങ്ങ). It is also used as a vegetable, cooked with dal or stir fried. The fully matured fruit is used as a natural scrub in rural Kerala. In some places such as Wayanad, it grows as a creeper on fences. In Marathi-speaking Maharashtra, its called (दोडका, ridge gourd luffa) and "ghosaļ" (घोसाळ ,smooth/sponge luffa) which are common vegetables, prepared with either crushed dried peanuts or with beans. In Meitei language of Manipur, ridge gourd is called (ꯁꯦꯕꯣꯠ) and sponge gourd is called (ꯁꯦꯕꯣꯠ ꯍꯦꯀꯞ), which is cooked with other ingredients like potato, dried fish, fermented fish and served. They are also steamed before consuming or crushed () with other ingredients and served with steamed rice ("chaak"). Fried ones () are also favorites for many. "Sebot" is also eaten as a green vegetable. Other Asian cuisines. In Sri Lanka, it's called වැටකොළු (Waeṭakola, the Luffa acutangula variety) in Sinhalese and is a common ingredient in curries, even in dried forms. In Vietnamese cuisine, the gourd is called "" and is a common ingredient in soups and stir-fried dishes. In China and Taiwan (where it is called , or in English, "silk melon"), Indonesia (where it is called "oyong"), and the Philippines (where it is called "patola" in Tagalog and "kabatiti" in Ilokano), in Timor-Leste it is also called "patola" or "batola" in Tetum and in Manipur, India, (where it is called ) the luffa is eaten as a green vegetable in various dishes. In Japan it is called "hechima" () and is cultivated all over the country during summer. It is commonly used as a green vegetable in traditional dishes of the Ryukyu Islands (where it is called "naabeeraa"). In other regions it is also grown for uses other than food. In Nepal it is called "ghiraula" and consumed as a vegetable at a young age. When it becomes ripe and dried, it is used as a body scrubbing material during bathing. 
Western cuisines. Luffa is also known as "Chinese okra" in Canada and the U.S. Other uses. In Japan, in regions other than the Ryukyu Islands and Kyushu, it is predominantly grown for use as a sponge or for applying soap, shampoo, and lotion. As with bitter melon, many people grow it outside building windows as a natural sunscreen in summer. Role in food chain. Luffa species are used as food plants by the larvae of some Lepidoptera species, including "Hypercompe albicornis" and "Zeugodacus tau". Mechanical properties. The luffa sponge is a biological cellular material. These materials often exhibit exceptional mechanical properties at low densities. While their mechanical performance tends to fall behind manmade materials, such as alloys, ceramics, plastics, and composites, as a structural material, they have long term sustainability for the natural environment. When compressed longitudinally, a luffa sponge is able to absorb comparable energy per unit mass as aluminum foam. Luffa sponges are composed of a complex network of fiber bundles connected to form a 3-dimensional, highly-porous network. The hierarchical structure of luffa sponges results in mechanical properties that vary with the component of sponge tested. Specifically, the mechanical properties of fiber bundles differ from those of blocks from the bulk of the sponge, which differ from those of the cross sections of the entire sponge. Fiber-bundles. Uniaxial tensile tests of fiber bundles isolated from the inner surface provide insight this basic strut element of the luffa sponges. These fiber bundles vary in diameter from 0.3 to 0.5 mm. Each fiber bundle has a low density core region not occupied by fibers. The stress-strain response of the fiber bundles is nearly linear elastic all the way until fracture, suggesting the absence of work hardening. The slope of the linear region of the stress-strain curve, or Young’s modulus, is 236* MPa. The highest stress achieved before fracture, or ultimate tensile strength, is 103 MPa. The strain at which failure occurs, or failure strain, is small at only 5%. The mechanical properties of fiber bundles decrease dramatically when the size of the hollow region inside the bundle increases. Despite their low tensile strength, the fiber bundles have a high specific modulus of 2.07– 4.05 MPa⋅m3/kg, and their overall properties are improved when a high ratio of their cross sectional area is occupied by fibers, they are evenly distributed, and there is strong adhesion between fibers. Bulk-sponge. Block samples (height: 12.69 ± 2.35mm, width: 11.30 ± 2.88mm, length: 13.10 ± 2.64mm) cut from the core region and hoop region of the luffa sponge exhibit different mechanical behaviors under compression depending on both the orientation they are loaded in as well as the location in the sponge they are sampled from. The hoop region consists of the section of sponge located around the outside between the inner and outer surfaces, while the core region is from the sponge center. Samples from both the hoop and core regions exhibited yielding when compressed in the longitudinal direction due to the buckling of fibers. With the highly aligned fibers from the inner surface removed from the hoop region block samples, this yield behavior disappears. In general, the inner surface fibers most significant impact the longitudinal properties of the luffa sponge column followed by the circumferential properties. There is no noticeable contribution to the radial properties. 
Additionally, the core region exhibits lower yield stress and energy absorption (as determined by the area under the stress-strain curve) compared to the hoop region due to its greater porosity. Overall, the stress-strain curves of block samples exhibit three stages of mechanical behavior common to porous materials. Namely, the samples follow linear elasticity for strains less than 10%, followed by a plateau for strains from 10% to 60%, and finally a stress increase associated with densification at strains greater than 60%. Segment samples created from cross sections of the entire luffa sponge (diameter: 92.51 ± 6.15mm, height: 19.76 ± 4.95mm) when tested in compression exhibit this same characteristic behavior. The three stages can be described by the equations: formula_0 for formula_1 (linear elasticity), formula_2 for formula_3 (the plateau), and formula_4 for formula_5 (densification). In the above equations, formula_6 is the Young's modulus and formula_7 the yield strength of the sponge material. These are chosen to best fit experimental data. The strain at the elastic limit, where the plateau region begins, is denoted as formula_8, while the strain at the onset of the densification region is formula_9. formula_10 Here formula_11 is the density of the bulk sponge and formula_12 is the density of its constituent, the fiber bundle. The constant D defines the strain at the onset of densification as well as the stress relationship in the densification region. It is determined by fitting experimental data. Dynamic loading. The mechanical properties of Luffa sponges change under different strain rates. Specifically, energy absorption, compressive stress, and plateau stress (which, in the case of foam materials, corresponds to the yield stress) are enhanced by increasing the strain rate. One explanation for this is that the luffa fibers undergo more axial deformation when dynamically loaded (high strain rates) than when quasi-statically loaded (low strain rates). References.
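The three-stage model above can be evaluated with a few lines of code. The following Python sketch is purely illustrative: the parameter values are assumptions chosen so that the three stages join continuously, not measurements reported for luffa sponge.

# Illustrative sketch of the three-stage compression model described above:
# linear elasticity, a stress plateau, then densification.

def sponge_stress(strain, E, sigma_p, eps_e, eps_D, D, m):
    """Stress (MPa) as a piecewise function of compressive strain."""
    if strain <= eps_e:                          # stage 1: linear elasticity
        return E * strain
    if strain <= eps_D * (1.0 - 1.0 / D):        # stage 2: plateau
        return sigma_p
    return (sigma_p / D) * (eps_D / (eps_D - strain)) ** m   # stage 3: densification

# Assumed demonstration parameters (not measured values); eps_D = 1 - 1.4*(rho*/rho_s)
# evaluates to 0.72 for an assumed relative density of 0.2.
params = dict(E=2.0, sigma_p=0.2, eps_e=0.1, eps_D=0.72, D=6.0, m=1.0)
for eps in (0.05, 0.30, 0.65):
    print(f"strain {eps:.2f} -> stress {sponge_stress(eps, **params):.3f} MPa")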
[ { "math_id": 0, "text": "\\sigma=E^*\\varepsilon\n\t\n" }, { "math_id": 1, "text": "\\varepsilon\\le\\varepsilon_e" }, { "math_id": 2, "text": "\\sigma=\\sigma_p^* " }, { "math_id": 3, "text": "\\varepsilon_e<\\varepsilon\\le\\varepsilon_D(1-1/D)" }, { "math_id": 4, "text": "\\sigma=\\sigma_p^*/D{(\\varepsilon_D/\\varepsilon_D-\\varepsilon)}^m " }, { "math_id": 5, "text": " \\varepsilon>\\varepsilon_D(1-1/D)" }, { "math_id": 6, "text": "E^*" }, { "math_id": 7, "text": "\\sigma_p^*" }, { "math_id": 8, "text": "\\varepsilon_e" }, { "math_id": 9, "text": "\\varepsilon_D" }, { "math_id": 10, "text": "\\varepsilon_D=1-1.4(\\rho^*/\\rho_s)" }, { "math_id": 11, "text": "\\rho^*" }, { "math_id": 12, "text": "\\rho_s" } ]
https://en.wikipedia.org/wiki?curid=80993
8099349
Betz's law
Aerodynamic power limitation for wind turbines In aerodynamics, Betz's law indicates the maximum power that can be extracted from the wind, independent of the design of a wind turbine in open flow. It was published in 1919 by the German physicist Albert Betz. The law is derived from the principles of conservation of mass and momentum of the air stream flowing through an idealized "actuator disk" that extracts energy from the wind stream. According to Betz's law, no wind turbine of any mechanism can capture more than 16/27 (59.3%) of the kinetic energy in wind. The factor 16/27 (0.593) is known as Betz's coefficient. Practical utility-scale wind turbines achieve at peak 75–80% of the Betz limit. The Betz limit is based on an open-disk actuator. If a diffuser is used to collect additional wind flow and direct it through the turbine, more energy can be extracted, but the limit still applies to the cross-section of the entire structure. Concepts. Betz's law applies to all Newtonian fluids, including wind. If all of the energy coming from wind movement through a turbine were extracted as useful energy, the wind speed afterward would drop to zero. If the wind stopped moving at the exit of the turbine, then no more fresh wind could get in; it would be blocked. In order to keep the wind moving through the turbine, there has to be some wind movement, however small, on the other side with some wind speed greater than zero. Betz's law shows that as air flows through a certain area, and as wind speed slows from losing energy to extraction from a turbine, the airflow must distribute to a wider area. As a result, geometry limits the maximum efficiency of any turbine. Independent discoveries. British scientist Frederick W. Lanchester derived the same maximum in 1915. The leader of the Russian aerodynamic school, Nikolay Zhukowsky, also published the same result for an ideal wind turbine in 1920, the same year as Betz. It is thus an example of Stigler's law, which posits that no scientific discovery is named after its actual discoverer. Proof. The Betz Limit is the maximum possible energy that can be extracted by an infinitely thin rotor from a fluid flowing at a certain speed. In order to calculate the maximum theoretical efficiency of a thin rotor (of, for example, a wind turbine), one imagines it to be replaced by a disc that removes energy from the fluid passing through it. At a certain distance behind this disc, the fluid that has passed through the disc has a reduced, but nonzero, velocity. Application of conservation of mass (continuity equation). Applying conservation of mass to the control volume, the mass flow rate (the mass of fluid flowing per unit time) is formula_0 where "v"1 is the speed in the front of the rotor, "v"2 is the speed downstream of the rotor, "v" is the speed at the fluid power device, "ρ" is the fluid density, formula_1 is the area of the turbine, and formula_2 and formula_3 are the areas of the fluid before and after the turbine (the inlet and outlet of the control volume). The density times the area and speed must be equal in each of the three regions: before the turbine, while going through the turbine, and past the turbine. The force exerted on the wind by the rotor is the mass of air multiplied by its acceleration: formula_4 Power and work. 
The incremental work done by the force may be written formula_5 and the power (rate of work done) of the wind is formula_6 Substituting the force "F" computed above into the power equation yields the power extracted from the wind, formula_7 However, power can be computed another way, by using the kinetic energy. Applying the conservation of energy equation to the control volume yields formula_8 Substituting the mass flow rate from the continuity equation yields formula_9 Both of these expressions for power are valid; one was derived by examining the incremental work, and the other by the conservation of energy. Equating these two expressions yields formula_10 The density can't be zero for any "v" and "S," so formula_11 or formula_12 The "constant wind velocity across the rotor" may be taken as the average of the upstream and downstream velocities. This is arguably the most counter-intuitive stage of the derivation of Betz's law. It is a direct consequence of the "axial flow" assumption, which disallows any radial mass flow in the actuator disk region. With no mass escape and a constant diameter in the actuator region, the air speed cannot change in the interaction region. Thus no energy can be extracted other than at the front and back of the interaction region, fixing the airspeed of the actuator disk to be the average. (Removing that restriction may allow higher performance than Betz's law allows, but other radial effects must also be considered. This constant velocity effect is distinct from the radial kinetic energy loss that is also ignored.) Betz's law and coefficient of performance. Returning to the previous expression for power based on kinetic energy: formula_13 By differentiating formula_14 with respect to formula_15 for a given fluid speed and a given area S, one finds the "maximum" or "minimum" value for formula_14. The result is that formula_14 reaches maximum value when formula_16. Substituting this value results in formula_17 The power obtainable from a cylinder of fluid with cross-sectional area S and velocity "v"1 is formula_18 The reference power for the Betz efficiency calculation is the power in a moving fluid in a cylinder with cross-sectional area S and velocity "v"1: formula_19 The power coefficient "C"P (= "P"/"P"wind) is the dimensionless ratio of the extractable power "P" to the kinetic power "P"wind available in the undisturbed stream. It has a maximum value of 16/27 = 0.593 (or 59.3%; however, coefficients of performance are usually expressed as a decimal, not a percentage). The resulting expression is: formula_20 Modern large wind turbines achieve peak values for "C"P in the range of 0.45 to 0.50, about 75–85% of the theoretically possible maximum. In high wind speed, where the turbine is operating at its rated power, the turbine rotates (pitches) its blades to lower "C"P to protect itself from damage. The power in the wind increases by a factor of 8 from 12.5 to 25 m/s, so "C"P must fall accordingly, getting as low as 0.06 for winds of 25 m/s. Understanding the Betz results. The speed ratio formula_16 between outgoing and incoming wind implies that the outgoing air has only formula_21 the kinetic energy of the incoming air, and that formula_22 of the energy of the incoming air was extracted. This is a correct calculation, but it only considers the incoming air which eventually travels through the rotor. The last step in calculating the Betz efficiency is to divide the calculated power extracted from the flow by a reference power. 
As its reference power, the Betz analysis uses the power of air upstream moving at "v"1 through the cross-sectional area S of the rotor. Since formula_23 at the Betz limit, the rotor extracts formula_22 of formula_24, or formula_25 of the incoming kinetic energy. Because the cross-sectional area of wind flowing through the rotor changes, there must be some flow of air in the directions perpendicular to the axis of the rotor. Any kinetic energy associated with this radial flow has no effect on the calculation because the calculation considers only the initial and final states of the air in the system. Upper Bounds on wind turbines. Although it is often touted as the definitive upper bound on energy extraction by any possible wind turbine, it is not. Despite the misleading title of his article, neither Betz nor Lanchester ever made such an "unconditional" claim. Notably, a wind turbine operating at the Betz maximum efficiency has a non-zero wind velocity wake. Any actuator disk placed downstream of the first will extract added power, and so the combined dual-actuator complex exceeds the Betz limit. The second actuator disk could be, but need not be, in the far field wind zone (parallel streamline) for this consideration to hold. The reason for this surprising exception to a law based solely on energy and flux conservation laws lurks in the seemingly modest assumption of transverse uniformity of the axial wind profile within the stream lines. For example, the aforementioned dual actuator wind turbine has, downstream, a transverse wind profile that has two distinct velocities and thus is not bound by the limits of the single actuator disk. Mathematically, the derivation for a single actuator disk implicitly embeds the assumption that the wind does not change velocity as it transits the "infinitely thin" actuator; in contrast, in the dual actuator hybrid, the wind does change velocity as it transits, invalidating the derivation's key step requiring constant velocity. A single infinitely thin actuator cannot change the velocity because it would otherwise not conserve flux, but in the hybrid pair, flux can be shed (outside the cross-section) between the actuators, allowing a different final outlet velocity than the inlet velocity. Physical multi-coaxial-rotor wind turbines have been analyzed. Although these do not exceed the Betz limit in practice, this may be attributable to the fact that rotors not only have losses but must also obey angular momentum and the Blade element momentum theory, which limits their efficiency below the Betz limit. Modern research has suggested that a more relaxed higher bound of formula_26 can be achieved when the "unneeded assumptions" in the derivation of Betz's law are removed. Economic relevance. Most real wind turbines are aerodynamically "thin", making them approximate the assumptions of Betz's law. To the extent that a typical wind turbine approximates those assumptions, the Betz limit places an approximate upper bound on the annual energy that can be extracted at a site. Even if a hypothetical wind blew consistently for a full year, a wind turbine well approximated by the actuator disk model could extract no more than the Betz limit of the energy contained in that year's wind. Essentially, increasing system economic efficiency results from increased production per unit, measured per square meter of vane exposure. An increase in system efficiency is required to bring down the cost of electrical power production. 
Efficiency increases may be the result of engineering of the wind capture devices, such as the configuration and dynamics of wind turbines, that may increase the power generation from these systems within the Betz limit. System efficiency increases in power application, transmission or storage may also contribute to a lower cost of power per unit. Points of interest. The assumptions of the Betz derivation impose some physical restrictions on the nature of wind turbines it applies to (identical inlet/outlet velocity for example). But beyond those assumptions, the Betz limit has no dependence on the internal mechanics of the wind extraction system, therefore S may take any form provided that the flow travels from the entrance to the control volume to the exit, and the control volume has uniform entry and exit velocities. Any extraneous effects can only decrease the performance of the system (usually a turbine) since this analysis was idealized to disregard friction. Any non-ideal effects would detract from the energy available in the incoming fluid, lowering the overall efficiency. Some manufacturers and inventors have made claims of exceeding the limit by using nozzles and other wind diversion devices, usually by misrepresenting the Betz limit and calculating only the rotor area and not the total input of air contributing to the wind energy extracted from the system. The Betz limit has no relevance when calculating turbine efficiency in a mobile application such as a wind-powered vehicle, as here the efficiency could theoretically approach 100% minus blade losses if the fluid flow through the turbine disc (or equivalent) were only retarded imperceptibly. As this would require an infinitely large structure, practical devices rarely achieve 90% or over. The amount of power extracted from the fluid flow at high turbine efficiencies is less than the Betz limit, which is not the same type of efficiency. Modern development. In 1934 H. Glauert derived the expression for turbine efficiency, when the angular component of velocity is taken into account, by applying an energy balance across the rotor plane. Due to the Glauert model, efficiency is below the Betz limit, and asymptotically approaches this limit when the tip speed ratio goes to infinity. In 2001, Gorban, Gorlov and Silantyev introduced an exactly solvable model (GGS), that considers non-uniform pressure distribution and curvilinear flow across the turbine plane (issues not included in the Betz approach). They utilized and modified the Kirchhoff model, which describes the turbulent wake behind the actuator as the "degenerated" flow and uses the Euler equation outside the degenerate area. The GGS model predicts that peak efficiency is achieved when the flow through the turbine is approximately 61% of the total flow which is very similar to the Betz result of &lt;templatestyles src="Fraction/styles.css" /&gt;2⁄3 for a flow resulting in peak efficiency, but the GGS predicted that the peak efficiency itself is much smaller: 30.1%. In 2008, viscous computations based on computational fluid dynamics (CFD) were applied to wind turbine modeling and demonstrated satisfactory agreement with experiment. Computed optimal efficiency is, typically, between the Betz limit and the GGS solution.
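The maximization step in the derivation above is simple to reproduce numerically. The following Python sketch, added here as an illustration, evaluates the power coefficient expression formula_20 over a grid of speed ratios and confirms that the maximum of 16/27 ≈ 0.593 occurs at a ratio of 1/3.

# Illustrative sketch: numerically confirm the Betz maximum derived above.
# C_P(r) = 0.5 * (1 + r - r**2 - r**3), with r = v2 / v1.

def power_coefficient(r):
    return 0.5 * (1.0 + r - r**2 - r**3)

best_r = max((i / 100000 for i in range(100001)), key=power_coefficient)
print(f"optimum v2/v1 ~ {best_r:.5f}")                     # ~ 0.33333
print(f"maximum C_P   ~ {power_coefficient(best_r):.5f}")  # ~ 0.59259
print(f"16/27         = {16 / 27:.5f}")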
[ { "math_id": 0, "text": "\\dot m = \\rho A_1 v_1 = \\rho S v = \\rho A_2 v_2," }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "A_1" }, { "math_id": 3, "text": "A_2" }, { "math_id": 4, "text": " \\begin{align}\n F &= ma \\\\\n &= m \\frac {dv} {dt} \\\\\n &= \\dot m \\, \\Delta v \\\\\n &= \\rho S v (v_1 - v_2).\n\\end{align} " }, { "math_id": 5, "text": "dE = F \\, dx," }, { "math_id": 6, "text": "P = \\frac{dE}{dt} = F \\frac{dx}{dt} = F v." }, { "math_id": 7, "text": "P = \\rho S v^2 (v_1 - v_2)." }, { "math_id": 8, "text": "P = \\frac{\\Delta E}{\\Delta t} = \\tfrac12 \\dot m (v_1^2 - v_2^2)." }, { "math_id": 9, "text": "P = \\tfrac12 \\rho S v (v_1^2 - v_2^2)." }, { "math_id": 10, "text": "P = \\tfrac12 \\rho S v (v_1^2 - v_2^2) = \\rho S v^2 (v_1 - v_2)." }, { "math_id": 11, "text": "\\tfrac12 (v_1^2 - v_2^2) = \\tfrac12 (v_1 - v_2) (v_1 + v_2) = v (v_1 - v_2)," }, { "math_id": 12, "text": "v = \\tfrac12 (v_1 + v_2)." }, { "math_id": 13, "text": "\\begin{align}\nP &= \\tfrac12 \\dot m (v_1^2 - v_2^2) \\\\\n& = \\tfrac12 \\rho S v (v_1^2 - v_2^2) \\\\\n&= \\tfrac14 \\rho S (v_1 + v_2) (v_1^2 - v_2^2) \\\\\n&= \\tfrac14 \\rho S v_1^3 \\left(1 + \\left(\\frac{v_2}{v_1}\\right) - \\left(\\frac{v_2}{v_1}\\right)^2 - \\left(\\frac{v_2}{v_1}\\right)^3\\right).\n\\end{align}" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "\\tfrac{v_2}{v_1}" }, { "math_id": 16, "text": "\\tfrac{v_2}{v_1} = \\tfrac13" }, { "math_id": 17, "text": "P_\\text{max} = \\tfrac{16}{27} \\cdot \\tfrac{1}{2} \\rho S v_1^3." }, { "math_id": 18, "text": "P = C_\\text{P} \\cdot \\tfrac12 \\rho S v_1^3." }, { "math_id": 19, "text": "P_\\text{wind} = \\tfrac12 \\rho S v_1^3." }, { "math_id": 20, "text": "C_P \\left(\\frac{v_2}{v_1}\\right) = \\tfrac12 \\left(1 + \\left(\\frac{v_2}{v_1}\\right) - \\left(\\frac{v_2}{v_1}\\right)^2 - \\left(\\frac{v_2}{v_1}\\right)^3\\right)" }, { "math_id": 21, "text": "(\\tfrac13)^2 = \\tfrac19" }, { "math_id": 22, "text": "\\tfrac89" }, { "math_id": 23, "text": "A_1 = \\tfrac23 S" }, { "math_id": 24, "text": "\\tfrac 23" }, { "math_id": 25, "text": "\\tfrac{16}{27}," }, { "math_id": 26, "text": "\\tfrac{2}{3}" } ]
https://en.wikipedia.org/wiki?curid=8099349
8099744
Adaptive quadrature
Adaptive quadrature is a numerical integration method in which the integral of a function formula_0 is approximated using static quadrature rules on adaptively refined subintervals of the region of integration. Generally, adaptive algorithms are just as efficient and effective as traditional algorithms for "well behaved" integrands, but are also effective for "badly behaved" integrands for which traditional algorithms may fail. General scheme. Adaptive quadrature follows the general scheme 1. procedure integrate ( f, a, b, τ ) 2. formula_1 3. formula_2 4. if "ε" &gt; "τ" then 5. m = (a + b) / 2 6. Q = integrate(f, a, m, τ/2) + integrate(f, m, b, τ/2) 7. endif 8. return Q An approximation formula_3 to the integral of formula_0 over the interval formula_4 is computed (line 2), as well as an error estimate formula_5 (line 3). If the estimated error is larger than the required tolerance formula_6(line 4), the interval is subdivided (line 5) and the quadrature is applied on both halves separately (line 6). Either the initial estimate or the sum of the recursively computed halves is returned (line 7). The important components are the quadrature rule itself formula_7 the error estimator formula_8 and the logic for deciding which interval to subdivide, and when to terminate. There are several variants of this scheme. The most common will be discussed later. Basic rules. The quadrature rules generally have the form formula_9 where the nodes formula_10 and weights formula_11 are generally precomputed. In the simplest case, Newton–Cotes formulas of even degree are used, where the nodes formula_10 are evenly spaced in the interval: formula_12 When such rules are used, the points at which formula_0 has been evaluated can be re-used upon recursion: A similar strategy is used with Clenshaw–Curtis quadrature, where the nodes are chosen as formula_13 Or, when Fejér quadrature is used, formula_14 Other quadrature rules, such as Gaussian quadrature or Gauss-Kronrod quadrature, may also be used. An algorithm may elect to use different quadrature methods on different subintervals, for example using a high-order method only where the integrand is smooth. Error estimation. Some quadrature algorithms generate a sequence of results which should approach the correct value. Otherwise one can use a "null rule" which has the form of the above quadrature rule, but whose value would be zero for a simple integrand (for example, if the integrand were a polynomial of the appropriate degree). See: Subdivision logic. "Local" adaptive quadrature makes the acceptable error for a given interval proportional to the length of that interval. This criterion can be difficult to satisfy if the integrands are badly behaved at only a few points, for example with a few step discontinuities. Alternatively, one could require only that the sum of the errors on each of the subintervals be less than the user's requirement. This would be "global" adaptive quadrature. Global adaptive quadrature can be more efficient (using fewer evaluations of the integrand) but is generally more complex to program and may require more working space to record information on the current set of intervals. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
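The general scheme above can be made concrete in a few lines. The following Python sketch is one possible realization rather than a canonical one: it assumes Simpson's rule as the basic quadrature rule and uses the difference between a whole-interval estimate and the sum of two half-interval estimates as the error estimate.

# Illustrative sketch of the recursive scheme above, using Simpson's rule as
# the basic rule and |halves - whole| as the error estimate.
import math

def simpson(f, a, b):
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def integrate(f, a, b, tau):
    m = 0.5 * (a + b)
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    if abs(halves - whole) > tau:     # estimated error too large: subdivide
        return integrate(f, a, m, tau / 2) + integrate(f, m, b, tau / 2)
    return halves                     # accept the refined estimate

print(integrate(math.sin, 0.0, math.pi, 1e-8))                  # ~ 2.0
print(integrate(lambda x: math.sqrt(abs(x)), -1.0, 1.0, 1e-6))  # ~ 4/3, a "badly behaved" integrand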
[ { "math_id": 0, "text": "f(x)" }, { "math_id": 1, "text": "Q \\approx \\int_a^b f(x)\\,\\mathrm{d}x" }, { "math_id": 2, "text": "\\varepsilon \\approx \\left|Q - \\int_a^b f(x)\\,\\mathrm{d}x\\right|" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "[a,b]" }, { "math_id": 5, "text": "\\varepsilon" }, { "math_id": 6, "text": "\\tau" }, { "math_id": 7, "text": "Q \\approx \\int_a^bf(x)\\,\\mathrm{d}x ," }, { "math_id": 8, "text": "\\varepsilon \\approx \\left|Q - \\int_a^bf(x)\\,\\mathrm{d}x\\right| ," }, { "math_id": 9, "text": "Q_n \\quad = \\quad \\sum_{i=0}^n w_if(x_i) \\quad \\approx \\quad \\int_a^b f(x)\\,\\mathrm{d}x" }, { "math_id": 10, "text": "x_i" }, { "math_id": 11, "text": "w_i" }, { "math_id": 12, "text": "x_i = a + \\frac{b - a}{n} i." }, { "math_id": 13, "text": "x_i = \\cos\\left( \\frac{2i}{n}\\pi \\right)." }, { "math_id": 14, "text": "x_i = \\cos\\left( \\frac{2(i+0.5)}{n+1}\\pi \\right)." } ]
https://en.wikipedia.org/wiki?curid=8099744
8101374
Ultraconnected space
Property of topological spaces In mathematics, a topological space is said to be ultraconnected if no two nonempty closed sets are disjoint. Equivalently, a space is ultraconnected if and only if the closures of two distinct points always have non trivial intersection. Hence, no T1 space with more than one point is ultraconnected. Properties. Every ultraconnected space formula_0 is path-connected (but not necessarily arc connected). If formula_1 and formula_2 are two points of formula_0 and formula_3 is a point in the intersection formula_4, the function formula_5 defined by formula_6 if formula_7, formula_8 and formula_9 if formula_10, is a continuous path between formula_1 and formula_2. Every ultraconnected space is normal, limit point compact, and pseudocompact. Examples. The following are examples of ultraconnected topological spaces. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
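For a finite topological space, the definition can be checked by brute force. The following Python sketch is an illustration added here; the three-point space it uses (the excluded point topology on {1, 2, 3} with excluded point 1) is an assumed example, not one taken from this article.

# Illustrative sketch: brute-force ultraconnectedness check for a finite space.
# Assumed example: X = {1, 2, 3} with the excluded point topology for the point 1
# (the open sets are the sets not containing 1, together with X itself).
from itertools import combinations

X = frozenset({1, 2, 3})
open_sets = [frozenset(), frozenset({2}), frozenset({3}), frozenset({2, 3}), X]
closed_sets = [X - U for U in open_sets]        # complements of the open sets

nonempty_closed = [C for C in closed_sets if C]
ultraconnected = all(A & B for A, B in combinations(nonempty_closed, 2))
print("ultraconnected:", ultraconnected)        # True: every closed set contains 1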
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "b" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "\\operatorname{cl}\\{a\\}\\cap\\operatorname{cl}\\{b\\}" }, { "math_id": 5, "text": "f:[0,1]\\to X" }, { "math_id": 6, "text": "f(t)=a" }, { "math_id": 7, "text": "0 \\le t < 1/2" }, { "math_id": 8, "text": "f(1/2)=p" }, { "math_id": 9, "text": "f(t)=b" }, { "math_id": 10, "text": "1/2 < t \\le 1" } ]
https://en.wikipedia.org/wiki?curid=8101374
810250
Periodic sequence
Sequence for which the same terms are repeated over and over In mathematics, a periodic sequence (sometimes called a cycle or orbit) is a sequence for which the same terms are repeated over and over: "a"1, "a"2, ..., "a""p",  "a"1, "a"2, ..., "a""p",  "a"1, "a"2, ..., "a""p", ... The number "p" of repeated terms is called the period. Definition. A (purely) periodic sequence (with period "p"), or a "p"-periodic sequence, is a sequence "a"1, "a"2, "a"3, ... satisfying "a""n"+"p" = "a""n" for all values of "n". If a sequence is regarded as a function whose domain is the set of natural numbers, then a periodic sequence is simply a special type of periodic function. The smallest "p" for which a periodic sequence is "p"-periodic is called its least period or exact period. Examples. Every constant function is 1-periodic. The sequence formula_0 is periodic with least period 2. The sequence of digits in the decimal expansion of 1/7 is periodic with period 6: formula_1 More generally, the sequence of digits in the decimal expansion of any rational number is eventually periodic (see below). The sequence of powers of −1 is periodic with period two: formula_2 More generally, the sequence of powers of any root of unity is periodic. The same holds true for the powers of any element of finite order in a group. A periodic point for a function "f" : "X" → "X" is a point x whose orbit formula_3 is a periodic sequence. Here, formula_4 means the composition of f applied to x. Periodic points are important in the theory of dynamical systems. Every function from a finite set to itself has a periodic point; cycle detection is the algorithmic problem of finding such a point. Identities. Partial Sums. If the sequence "a"1, "a"2, "a"3, ... is periodic with period "p", then its partial sums satisfy formula_5 where "k" and "m" are non-negative integers. Generalizations. A sequence is eventually periodic if it can be made periodic by dropping some finite number of terms from the beginning. Equivalently, the last condition can be stated as formula_6 for some "r" and sufficiently large "k". For example, the sequence of digits in the decimal expansion of 1/56 is eventually periodic: 1 / 56 = 0 . 0 1 7  8 5 7 1 4 2  8 5 7 1 4 2  8 5 7 1 4 2  ... A sequence is asymptotically periodic if its terms approach those of a periodic sequence. That is, the sequence "x"1, "x"2, "x"3, ... is asymptotically periodic if there exists a periodic sequence "a"1, "a"2, "a"3, ... for which formula_7 For example, the sequence 1 / 3,  2 / 3,  1 / 4,  3 / 4,  1 / 5,  4 / 5,  ... is asymptotically periodic, since its terms approach those of the periodic sequence 0, 1, 0, 1, 0, 1, ... References.
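The definitions above translate directly into code. The following Python sketch is an illustration added here (the helper name least_period is ad hoc): it finds the least period of the sequence obtained by repeating a block of terms, and checks the partial-sum identity formula_5 for the period-6 digit sequence of 1/7.

# Illustrative sketch: least period of a repeating block, plus a check of the
# partial-sum identity for periodic sequences stated above.

def least_period(block):
    """Least p such that repeating `block` forever gives a p-periodic sequence."""
    n = len(block)
    for p in range(1, n + 1):
        if n % p == 0 and block == block[:p] * (n // p):
            return p
    return n

digits_1_7 = [1, 4, 2, 8, 5, 7]        # repeating digits of 1/7
print(least_period(digits_1_7 * 3))    # -> 6
print(least_period([1, 2] * 5))        # -> 2

# Partial sums: the first k*p + m terms sum to k*(one period) + (first m terms).
p, k, m = 6, 4, 3
seq = digits_1_7 * (k + 1)             # enough terms of the periodic sequence
lhs = sum(seq[:k * p + m])
rhs = k * sum(digits_1_7) + sum(digits_1_7[:m])
print(lhs == rhs)                      # -> True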
[ { "math_id": 0, "text": "1,2,1,2,1,2\\dots" }, { "math_id": 1, "text": "\\frac{1}{7} = 0.142857\\,142857\\,142857\\,\\ldots" }, { "math_id": 2, "text": "-1,1,-1,1,-1,1,\\ldots" }, { "math_id": 3, "text": "x,\\, f(x),\\, f(f(x)),\\, f^3(x),\\, f^4(x),\\, \\ldots" }, { "math_id": 4, "text": "f^n(x)" }, { "math_id": 5, "text": "\\sum_{n=1}^{kp+m} a_{n} = k*\\sum_{n=1}^{p} a_{n} + \\sum_{n=1}^{m} a_{n}" }, { "math_id": 6, "text": "a_{k+r} = a_k" }, { "math_id": 7, "text": "\\lim_{n\\rightarrow\\infty} x_n - a_n = 0." } ]
https://en.wikipedia.org/wiki?curid=810250
8103136
ABX test
Double-blind audio quality testing method An ABX test is a method of comparing two choices of sensory stimuli to identify detectable differences between them. A subject is presented with two known samples (sample A, the first reference, and sample B, the second reference) followed by one unknown sample X that is randomly selected from either A or B. The subject is then required to identify X as either A or B. If X cannot be identified reliably with a low p-value in a predetermined number of trials, then the null hypothesis cannot be rejected and it cannot be proven that there is a perceptible difference between A and B. ABX tests can easily be performed as double-blind trials, eliminating any possible unconscious influence from the researcher or the test supervisor. Because samples A and B are provided just prior to sample X, the difference does not have to be discerned using long-term memory or past experience. Thus, the ABX test answers whether or not, under the test circumstances, a perceptual difference can be found. ABX tests are commonly used in evaluations of digital audio data compression methods; sample A is typically an uncompressed sample, and sample B is a compressed version of A. Audible compression artifacts that indicate a shortcoming in the compression algorithm can be identified with subsequent testing. ABX tests can also be used to compare the different degrees of fidelity loss between two different audio formats at a given bitrate. ABX tests can be used to audition input, processing, and output components as well as cabling: virtually any audio product or prototype design. History. The history of ABX testing and naming dates back to 1950 in a paper published by two Bell Labs researchers, W. A. Munson and Mark B. Gardner, titled " Standardizing Auditory Tests". The purpose of the present paper is to describe a test procedure which has shown promise in this direction and to give descriptions of equipment which have been found helpful in minimizing the variability of the test results. The procedure, which we have called the "ABX" test, is a modification of the method of paired comparisons. An observer is presented with a time sequence of three signals for each judgment he is asked to make. During the first time interval he hears signal A, during the second, signal B, and finally signal X. His task is to indicate whether the sound heard during the X interval was more like that during the A interval or more like that during the B interval. For a threshold test, the A interval is quiet, the B interval is signal, and the X interval is either quiet or signal. The test has evolved to other variations such as subject control over duration and sequence of testing. One such example was the hardware ABX comparator in 1977, built by the ABX company in Troy, Michigan, and documented by one of its founders, David Clark. Refinements to the A/B test The author's first experience with double-blind audibility testing was as a member of the SMWTMS Audio Club in early 1977. A button was provided which would select at random component A or B. Identifying one of these, the X component was greatly hampered by not having the known A and B available for reference. This was corrected by using three interlocked pushbuttons, A, B, and X. Once an X was selected, it would remain that particular A or B until it was decided to move on to another random selection. However, another problem quickly became obvious. There was always an audible relay transition time delay when switching from A to B. 
When switching from A to X, however, the time delay would be missing if X was really A and present if X was really B. This extraneous cue was removed by inserting a fixed length dropout time when any change was made. The dropout time was selected to be 50 ms which produces a slight consistent click while allowing subjectively instant comparison. The ABX company is now defunct and hardware comparators in general as commercial offerings extinct. Myriad of software tools exist such as Foobar ABX plug-in for performing file comparisons. But hardware equipment testing requires building custom implementations. Hardware tests. ABX test equipment utilizing relays to switch between two different hardware paths can help determine if there are perceptual differences in cables and components. Video, audio and digital transmission paths can be compared. If the switching is microprocessor controlled, double-blind tests are possible. Loudspeaker level and line level audio comparisons could be performed on an ABX test device offered for sale as the "ABX Comparator" by QSC Audio Products from 1998 to 2004. Other hardware solutions have been fabricated privately by individuals or organizations for internal testing. Confidence. If only one ABX trial were performed, random guessing would incur a 50% chance of choosing the correct answer, the same as flipping a coin. In order to make a statement having some degree of confidence, many trials must be performed. By increasing the number of trials, the likelihood of statistically asserting a person's ability to distinguish A and B is enhanced for a given confidence level. A 95% confidence level is commonly considered statistically significant. The company QSC, in the ABX Comparator user manual, recommended a minimum of ten listening trials in each round of tests. QSC recommended that no more than 25 trials be performed, as subject fatigue can set in, making the test less sensitive (less likely to reveal one's actual ability to discern the difference between A and B). However, a more sensitive test can be obtained by pooling the results from a number of such tests using separate individuals or tests from the same subject conducted in between rest breaks. For a large number of total trials N, a significant result (one with 95% confidence) can be claimed if the number of correct responses exceeds formula_0. Important decisions are normally based on a higher level of confidence, since an erroneous "significant result" would be claimed in one of 20 such tests simply by chance. Software tests. The foobar2000 and the Amarok audio players support software-based ABX testing, the latter using a third-party script. Lacinato ABX is a cross-platform audio testing tool for Linux, Windows, and 64-bit Mac. Lacinato WebABX is a web-based cross-browser audio ABX tool. Open source aveX was mainly developed for Linux which also provides test-monitoring from a remote computer. ABX patcher is an ABX implementation for Max/MSP. More ABX software can be found at the archived PCABX website. Codec listening tests. A codec listening test is a scientific study designed to compare two or more lossy audio codecs, usually with respect to perceived fidelity or compression efficiency. Potential flaws. ABX is a type of forced choice testing. A subject's choices can be on merit, i.e. the subject indeed honestly tried to identify whether X seemed closer to A or B. But uninterested or tired subjects might choose randomly without even trying. 
If not caught, this may dilute the results of other subjects who intently took the test and subject the outcome to Simpson's paradox, resulting in false summary results. Simply looking at the outcome totals of the test ("m" out of "n" answers correct) cannot reveal occurrences of this problem. This problem becomes more acute if the differences are small. The user may get frustrated and simply aim to finish the test by voting randomly. In this regard, forced-choice tests such as ABX tend to favor negative outcomes when differences are small if proper protocols are not used to guard against this problem. Best practices call for both the inclusion of controls and the screening of subjects: A major consideration is the inclusion of appropriate control conditions. Typically, control conditions include the presentation of unimpaired audio materials, introduced in ways that are unpredictable to the subjects. It is the differences between judgement of these control stimuli and the potentially impaired ones that allows one to conclude that the grades are actual assessments of the impairments. 3.2.2 Post-screening of subjects Post-screening methods can be roughly separated into at least two classes; one is based on inconsistencies compared with the mean result and another relies on the ability of the subject to make correct identifications. The first class is never justifiable. Whenever a subjective listening test is performed with the test method recommended here, the required information for the second class of post-screening is automatically available. A suggested statistical method for doing this is described in Attachment 1.' The methods are primarily used to eliminate subjects who cannot make the appropriate discriminations. The application of a post-screening method may clarify the tendencies in a test result. However, bearing in mind the variability of subjects’ sensitivities to different artefacts, caution should be exercised. Other flaws include lack of subject training and familiarization with the test and content selected: 4.1 Familiarization or training phase Prior to formal grading, subjects must be allowed to become thoroughly familiar with the test facilities, the test environment, the grading process, the grading scales and the methods of their use. Subjects should also become thoroughly familiar with the artefacts under study. For the most sensitive tests they should be exposed to all the material they will be grading later in the formal grading sessions. During familiarization or training, subjects should be preferably together in groups (say, consisting of three subjects), so that they can interact freely and discuss the artefacts they detect with each other. Other problems might arise from the ABX equipment itself, as outlined by Clark, where the equipment provides a tell, allowing the subject to identify the source. Lack of transparency of the ABX fixture creates similar problems. Since auditory tests and many other sensory tests rely on short-term memory, which only lasts a few seconds, it is critical that the test fixture allows the subject to identify short segments that can be compared quickly. Pops and glitches in switching apparatus likewise must be eliminated, as they may dominate or otherwise interfere with the stimuli being tested in what is stored in the subject's short-term memory. Alternatives. Algorithmic Audio Compression Evaluation. Since ABX testing requires human beings for evaluation of lossy audio codecs, it is time-consuming and costly. 
Therefore, cheaper approaches have been developed, e.g. PEAQ, which is an implementation of the ODG. MUSHRA. In MUSHRA, the subject is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. A 0–100 rating scale makes it possible to rate very small differences, and the hidden version still provides discrimination checks. Discrimination testing. Alternative general methods are used in discrimination testing, such as paired comparison, duo–trio, and triangle testing. Of these, duo–trio and triangle testing are particularly close to ABX testing. Schematically: In this context, ABX testing is also known as "duo–trio" in "balanced reference" mode – both knowns are presented as references, rather than one alone. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
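The significance criterion quoted in the Confidence section above (a 95%-confidence result when the number of correct responses exceeds roughly formula_0) can be checked against an exact one-sided binomial test. The following Python sketch is illustrative only; it is not taken from any ABX standard or product, and the 0.05 threshold and the function names are assumptions of the example.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting at least `correct` answers
    right out of `trials` if the listener is purely guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def significant(correct: int, trials: int, alpha: float = 0.05) -> bool:
    """True if pure guessing would reach this score with probability below alpha."""
    return abx_p_value(correct, trials) < alpha

if __name__ == "__main__":
    # Compare the exact test with the N/2 + sqrt(N) rule of thumb quoted above.
    for n in (10, 16, 25, 100):
        threshold = next(m for m in range(n + 1) if significant(m, n))
        rule_of_thumb = n / 2 + n ** 0.5
        print(f"N={n:3d}: first significant score = {threshold}/{n}, "
              f"N/2 + sqrt(N) = {rule_of_thumb:.1f}")
```

With N = 16 trials, for instance, the exact test first becomes significant at 12 correct answers, which agrees with the N/2 + √N rule of thumb.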
[ { "math_id": 0, "text": "N/2+\\sqrt{N}" } ]
https://en.wikipedia.org/wiki?curid=8103136
8103917
Wiener deconvolution
In mathematics, Wiener deconvolution is an application of the Wiener filter to the noise problems inherent in deconvolution. It works in the frequency domain, attempting to minimize the impact of deconvolved noise at frequencies which have a poor signal-to-noise ratio. The Wiener deconvolution method has widespread use in image deconvolution applications, as the frequency spectrum of most visual images is fairly well behaved and may be estimated easily. Wiener deconvolution is named after Norbert Wiener. Definition. Given a system: formula_0 where formula_1 denotes convolution and: Our goal is to find some formula_7 so that we can estimate formula_2 as follows: formula_8 where formula_9 is an estimate of formula_2 that minimizes the mean square error formula_10, with formula_11 denoting the expectation. The Wiener deconvolution filter provides such a formula_7. The filter is most easily described in the frequency domain: formula_12 where: The filtering operation may either be carried out in the time-domain, as above, or in the frequency domain: formula_24 and then performing an inverse Fourier transform on formula_25 to obtain formula_9. Note that in the case of images, the arguments formula_3 and formula_26 above become two-dimensional; however the result is the same. Interpretation. The operation of the Wiener filter becomes apparent when the filter equation above is rewritten: formula_27 Here, formula_28 is the inverse of the original system, formula_29 is the signal-to-noise ratio, and formula_30 is the ratio of the pure filtered signal to noise spectral density. When there is zero noise (i.e. infinite signal-to-noise), the term inside the square brackets equals 1, which means that the Wiener filter is simply the inverse of the system, as we might expect. However, as the noise at certain frequencies increases, the signal-to-noise ratio drops, so the term inside the square brackets also drops. This means that the Wiener filter attenuates frequencies according to their filtered signal-to-noise ratio. The Wiener filter equation above requires us to know the spectral content of a typical image, and also that of the noise. Often, we do not have access to these exact quantities, but we may be in a situation where good estimates can be made. For instance, in the case of photographic images, the signal (the original image) typically has strong low frequencies and weak high frequencies, while in many cases the noise content will be relatively flat with frequency. Derivation. As mentioned above, we want to produce an estimate of the original signal that minimizes the mean square error, which may be expressed: formula_31 . The equivalence to the previous definition of formula_32, can be derived using Plancherel theorem or Parseval's theorem for the Fourier transform. If we substitute in the expression for formula_25, the above can be rearranged to formula_33 If we expand the quadratic, we get the following: formula_34 However, we are assuming that the noise is independent of the signal, therefore: formula_35 Substituting the power spectral densities formula_36 and formula_37, we have: formula_38 To find the minimum error value, we calculate the Wirtinger derivative with respect to formula_13 and set it equal to zero. formula_39 This final equality can be rearranged to give the Wiener filter.
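A minimal numerical sketch of the frequency-domain filter defined above, written with NumPy. The blur kernel, the noise level, the assumption that the signal power spectrum S(f) is known exactly, and the flat noise spectrum N(f) are all choices made for this illustration; in practice, as noted above, these spectra must be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# True signal x(t) and a known impulse response h(t) (both chosen arbitrarily).
t = np.linspace(0.0, 1.0, 512, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
h = np.zeros_like(x)
h[:16] = 1.0 / 16.0                      # simple moving-average blur

# Observation y = (h * x)(t) + n(t), with circular convolution done via FFT.
H = np.fft.fft(h)
noise_sigma = 0.05
y = np.fft.ifft(np.fft.fft(x) * H).real + noise_sigma * rng.standard_normal(x.size)

# Wiener deconvolution filter G(f) = H*(f) S(f) / (|H(f)|^2 S(f) + N(f)).
# S(f) is taken from the true signal and N(f) is assumed flat -- both are
# assumptions made only so that the example is self-contained.
S = np.abs(np.fft.fft(x)) ** 2
N = np.full_like(S, noise_sigma ** 2 * x.size)
G = np.conj(H) * S / (np.abs(H) ** 2 * S + N)

x_hat = np.fft.ifft(G * np.fft.fft(y)).real
print("error of raw observation :", np.linalg.norm(y - x))
print("error of Wiener estimate :", np.linalg.norm(x_hat - x))
```

The estimate is obtained entirely in the frequency domain, exactly as in the definition: multiply the spectrum of the observation by G(f) and transform back.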
[ { "math_id": 0, "text": "\\ y(t) = (h*x)(t) + n(t)" }, { "math_id": 1, "text": "*" }, { "math_id": 2, "text": "\\ x(t)" }, { "math_id": 3, "text": "\\ t " }, { "math_id": 4, "text": "\\ h(t)" }, { "math_id": 5, "text": "\\ n(t)" }, { "math_id": 6, "text": "\\ y(t)" }, { "math_id": 7, "text": "\\ g(t)" }, { "math_id": 8, "text": "\\ \\hat{x}(t) = (g*y)(t)" }, { "math_id": 9, "text": "\\ \\hat{x}(t)" }, { "math_id": 10, "text": "\\ \\epsilon(t) = \\mathbb{E} \\left| x(t) - \\hat{x}(t) \\right|^2" }, { "math_id": 11, "text": "\\ \\mathbb{E}" }, { "math_id": 12, "text": "\\ G(f) = \\frac{H^*(f)S(f)}{ |H(f)|^2 S(f) + N(f) }" }, { "math_id": 13, "text": "\\ G(f)" }, { "math_id": 14, "text": "\\ H(f)" }, { "math_id": 15, "text": "\\ S(f) = \\mathbb{E}|X(f)|^2 " }, { "math_id": 16, "text": "\\ N(f) = \\mathbb{E}|V(f)|^2 " }, { "math_id": 17, "text": "X(f)" }, { "math_id": 18, "text": "Y(f)" }, { "math_id": 19, "text": "V(f)" }, { "math_id": 20, "text": "x(t)" }, { "math_id": 21, "text": "y(t)" }, { "math_id": 22, "text": "n(t)" }, { "math_id": 23, "text": "{}^*" }, { "math_id": 24, "text": "\\ \\hat{X}(f) = G(f)Y(f)" }, { "math_id": 25, "text": "\\ \\hat{X}(f)" }, { "math_id": 26, "text": "\\ f " }, { "math_id": 27, "text": "\n\\begin{align}\n G(f) & = \\frac{1}{H(f)} \\left[ \\frac{ 1 }{ 1 + 1/(|H(f)|^2 \\mathrm{SNR}(f))} \\right]\n\\end{align}\n" }, { "math_id": 28, "text": "\\ 1/H(f)" }, { "math_id": 29, "text": "\\ \\mathrm{SNR}(f) = S(f)/N(f)" }, { "math_id": 30, "text": "\\ |H(f)|^2 \\mathrm{SNR}(f)" }, { "math_id": 31, "text": "\\ \\epsilon(f) = \\mathbb{E} \\left| X(f) - \\hat{X}(f) \\right|^2" }, { "math_id": 32, "text": "\\epsilon" }, { "math_id": 33, "text": "\n\\begin{align}\n \\epsilon(f) & = \\mathbb{E} \\left| X(f) - G(f)Y(f) \\right|^2 \\\\\n & = \\mathbb{E} \\left| X(f) - G(f) \\left[ H(f)X(f) + V(f) \\right] \\right|^2 \\\\\n & = \\mathbb{E} \\big| \\left[ 1 - G(f)H(f) \\right] X(f) - G(f)V(f) \\big|^2\n\\end{align}\n" }, { "math_id": 34, "text": "\n\\begin{align} \n \\epsilon(f) & = \\Big[ 1-G(f)H(f) \\Big] \\Big[ 1-G(f)H(f) \\Big]^*\\, \\mathbb{E}|X(f)|^2 \\\\\n & {} - \\Big[ 1-G(f)H(f) \\Big] G^*(f)\\, \\mathbb{E}\\Big\\{X(f)V^*(f)\\Big\\} \\\\\n & {} - G(f) \\Big[ 1-G(f)H(f) \\Big]^*\\, \\mathbb{E}\\Big\\{V(f)X^*(f)\\Big\\} \\\\\n & {} + G(f) G^*(f)\\, \\mathbb{E}|V(f)|^2\n\\end{align}\n" }, { "math_id": 35, "text": "\\ \\mathbb{E}\\Big\\{X(f)V^*(f)\\Big\\} = \\mathbb{E}\\Big\\{V(f)X^*(f)\\Big\\} = 0" }, { "math_id": 36, "text": "\\ S(f) " }, { "math_id": 37, "text": "\\ N(f) " }, { "math_id": 38, "text": "\n \\epsilon(f) = \\Big[ 1-G(f)H(f) \\Big]\\Big[ 1-G(f)H(f) \\Big]^ * S(f) + G(f)G^*(f)N(f)\n" }, { "math_id": 39, "text": "\\ \n \\frac{d\\epsilon(f)}{dG(f)} = 0 \\Rightarrow G^*(f)N(f) - H(f)\\Big[1 - G(f)H(f)\\Big]^* S(f) = 0\n" } ]
https://en.wikipedia.org/wiki?curid=8103917
8104537
Crosswordese
Terms found more frequently in crosswords Crosswordese is the group of words frequently found in US crossword puzzles but seldom found in everyday conversation. The words are usually short, three to five letters, with letter combinations which crossword constructors find useful in the creation of crossword puzzles, such as words that start and/or end with vowels, abbreviations consisting entirely of consonants, unusual combinations of letters, and words consisting almost entirely of frequently used letters. Such words are needed in almost every puzzle to some extent. Too much crosswordese in a crossword puzzle is frowned upon by crossword-makers and crossword enthusiasts. Knowing the language of "crosswordese" is helpful to constructors and solvers alike. According to Marc Romano, "to do well solving crosswords, you absolutely need to keep a running mental list of 'crosswordese', the set of recurring words that constructors reach for whenever they are heading for trouble in a particular section of the grid". The popularity of individual words and names of crosswordese, and the way they are clued, changes over time. For instance, ITO was occasionally clued in the 1980s and 1990s in reference to dancer Michio Itō and actor Robert Ito, then boomed in the late 1990s and 2000s when judge Lance Ito was a household name, and has since fallen somewhat, and when it appears today, the clue typically references figure skater Midori Ito or uses the partial phrase "I to" (as in ["How was ___ know?"]). List of crosswordese. "When applicable, example clues will be denoted in square brackets and answers will be denoted in all caps, e.g. [Example clue] for ANSWER." Portions of phrases are occasionally used as fill in the blank clues. For instance, "Et tu, Brute?" might appear in a puzzle's clue sheet as "_____, Brute?" Architecture. &lt;templatestyles src="Div col/styles.css"/&gt; Biblical references. &lt;templatestyles src="Div col/styles.css"/&gt; Brand and trade names. &lt;templatestyles src="Div col/styles.css"/&gt; Computers and the Internet. &lt;templatestyles src="Div col/styles.css"/&gt; Currency and business. &lt;templatestyles src="Div col/styles.css"/&gt; Directions. Many puzzles ask for the direction from one place to another. These directions always fall between the standard octaval compass points—i.e., North (N – 0° or 360°), Northeast (NE – 45°), East (E – 90°), etc. The directions asked for on clue sheets are usually approximations. Starting at north and going clockwise, the directions are: &lt;templatestyles src="Div col/styles.css"/&gt; Fictional characters. &lt;templatestyles src="Div col/styles.css"/&gt; Food and drink. &lt;templatestyles src="Div col/styles.css"/&gt; Foreign words. &lt;templatestyles src="Div col/styles.css"/&gt; Geography. Proper names. &lt;templatestyles src="Div col/styles.css"/&gt; General terms. &lt;templatestyles src="Div col/styles.css"/&gt; Interjections. &lt;templatestyles src="Div col/styles.css"/&gt; Jargon and slang. &lt;templatestyles src="Div col/styles.css"/&gt; Language. Because of crossword rules that restrict the usage of two-letter words, only entries of three or more letters have been listed. Often these letters are clued as puns, e.g. the clue [Puzzle center?] for ZEES, referring to the two Zs in the center of the word "puzzle". The "zed" spelling of Z is often indicated by a reference to a Commonwealth country, where that is the standard pronunciation (e.g. [British puzzle center?] for ZEDS). Greek letters often appear as well, such as ETA. 
Latin words and phrases. &lt;templatestyles src="Div col/styles.css"/&gt; Manmade items. &lt;templatestyles src="Div col/styles.css"/&gt; Music. &lt;templatestyles src="Div col/styles.css"/&gt; Names. &lt;templatestyles src="Div col/styles.css"/&gt; Nature. &lt;templatestyles src="Div col/styles.css"/&gt; Poetic phrases and terms. &lt;templatestyles src="Div col/styles.css"/&gt; Prefixes. &lt;templatestyles src="Div col/styles.css"/&gt; Suffixes. &lt;templatestyles src="Div col/styles.css"/&gt; Religion and mythology. &lt;templatestyles src="Div col/styles.css"/&gt; Roman numerals. Many puzzles ask for Roman numerals either as answers or as portions of answers. For instance: Standard Roman numerals run from 1 to 3999, or I to MMMCMXCIX. The first ten Roman numerals are: formula_0 The following table shows the numerals used in crossword puzzles. Science. &lt;templatestyles src="Div col/styles.css"/&gt; Sports and gaming. &lt;templatestyles src="Div col/styles.css"/&gt; Team nicknames. &lt;templatestyles src="Div col/styles.css"/&gt; Scoreboard abbreviations. &lt;templatestyles src="Div col/styles.css"/&gt; Titles of books, plays, movies, etc.. &lt;templatestyles src="Div col/styles.css"/&gt; Titles used by royalty and the nobility. &lt;templatestyles src="Div col/styles.css"/&gt; Transportation. &lt;templatestyles src="Div col/styles.css"/&gt; U.S. states and Canadian provinces. Postal abbreviations: Since the late 1970s, the post offices in the United States and Canada have used computerized letter sorting. This prompted the creation of the two-capital-letter abbreviations used today for all states and most provinces (i.e., "MN" for Minnesota and "QC" for Quebec). Previously, when mail was sorted by hand, many states and provinces had abbreviations of three to five letters. Many of these longer abbreviations are now part of crosswordese. (Notes: (1) Except for Texas, states with four- or five-letter names were generally spelled out. (2) Other states and provinces not shown below had the same two-letter abbreviations that are still used today.) &lt;templatestyles src="Div col/styles.css"/&gt; Weaponry and warfare. &lt;templatestyles src="Div col/styles.css"/&gt; Miscellaneous crosswordese. &lt;templatestyles src="Div col/styles.css"/&gt; Outdated crosswordese. "These once-common terms are especially rare or never found in new puzzles." &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
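Because year clues resolve mechanically to Roman numerals, constructors and solvers can generate the needed entries with a few lines of code. The converter below is a generic sketch; the clue years are arbitrary examples and are not drawn from any particular puzzle.

```python
ROMAN = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Convert 1..3999 to a standard Roman numeral."""
    if not 1 <= n <= 3999:
        raise ValueError("standard Roman numerals cover 1-3999 only")
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

if __name__ == "__main__":
    # e.g. a year clue whose answer must fit a known number of squares
    for year in (1492, 1787, 1914, 2024):
        numeral = to_roman(year)
        print(f"{year} -> {numeral}  ({len(numeral)} squares)")
```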
[ { "math_id": 0, "text": "\\mathrm{I,\\;II,\\;III,\\;IV,\\;V,\\;VI,\\;VII,\\;VIII,\\;IX, and \\;X.}" } ]
https://en.wikipedia.org/wiki?curid=8104537
8105109
Common Vulnerability Scoring System
Standard for assessing computer system vulnerabilitiesThe Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of computer system security vulnerabilities. CVSS attempts to assign severity scores to vulnerabilities, allowing responders to prioritize responses and resources according to threat. Scores are calculated based on a formula that depends on several metrics that approximate ease and impact of an exploit. Scores range from 0 to 10, with 10 being the most severe. While many use only the CVSS Base score for determining severity, temporal and environmental scores also exist, to factor in availability of mitigations and how widespread vulnerable systems are within an organization, respectively. The current version of CVSS (CVSSv4.0) was released in November 2023. CVSS is not intended to be used as a method for patch management prioritization, but is used like that regardless. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; History. Research by the National Infrastructure Advisory Council (NIAC) in 2003/2004 led to the launch of CVSS version 1 (CVSSv1) in February 2005, with the goal of being "designed to provide open and universally standard severity ratings of software vulnerabilities". This initial draft had not been subject to peer review or review by other organizations. In April 2005, NIAC selected the Forum of Incident Response and Security Teams (FIRST) to become the custodian of CVSS for future development. Feedback from vendors using CVSSv1 in production suggested there were "significant issues with the initial draft of CVSS". Work on CVSS version 2 (CVSSv2) began in April 2005 with the final specification being launched in June 2007. Further feedback resulted in work beginning on CVSS version 3 in 2012, ending with CVSSv3.0 being released in June 2015. Terminology. The CVSS assessment measures three areas of concern: A numerical score is generated for each of these metric groups. A vector string (or simply "vector" in CVSSv2) represents the values of all the metrics as a block of text. Version 2. Complete documentation for CVSSv2 is available from FIRST. A summary is provided below. Base metrics. Access Vector. The access vector (AV) shows how a vulnerability may be exploited. Access Complexity. The access complexity (AC) metric describes how easy or difficult it is to exploit the discovered vulnerability. Authentication. The authentication (Au) metric describes the number of times that an attacker must authenticate to a target to exploit it. It does not include (for example) authentication to a network in order to gain access. For locally exploitable vulnerabilities, this value should only be set to Single or Multiple if further authentication is required after initial access. Impact metrics. Confidentiality. The confidentiality (C) metric describes the impact on the confidentiality of data processed by the system. Integrity. The Integrity (I) metric describes the impact on the integrity of the exploited system. Availability. The availability (A) metric describes the impact on the availability of the target system. Attacks that consume network bandwidth, processor cycles, memory, or any other resources affect the availability of a system. Calculations. These six metrics are used to calculate the exploitability and impact sub-scores of the vulnerability. These sub-scores are used to calculate the overall base score. 
formula_0 formula_1 formula_2 formula_3 The metrics are concatenated to produce the CVSS Vector for the vulnerability. Example. A buffer overflow vulnerability affects web server software that allows a remote user to gain partial control of the system, including the ability to cause it to shut down: This would give an exploitability sub-score of 10, and an impact sub-score of 8.5, giving an overall base score of 9.0. The vector for the base score in this case would be AV:N/AC:L/Au:N/C:P/I:P/A:C. The score and vector are normally presented together to allow the recipient to fully understand the nature of the vulnerability and to calculate their own environmental score if necessary. Temporal metrics. The value of temporal metrics change over the lifetime of the vulnerability, as exploits are developed, disclosed and automated and as mitigations and fixes are made available. Exploitability. The exploitability (E) metric describes the current state of exploitation techniques or automated exploitation code. Remediation Level. The remediation level (RL) of a vulnerability allows the temporal score of a vulnerability to decrease as mitigations and official fixes are made available. Report Confidence. The report confidence (RC) of a vulnerability measures the level of confidence in the existence of the vulnerability and also the credibility of the technical details of the vulnerability. Calculations. These three metrics are used in conjunction with the base score that has already been calculated to produce the temporal score for the vulnerability with its associated vector. The formula used to calculate the temporal score is: formula_4 Example. To continue with the example above, if the vendor was first informed of the vulnerability by a posting of proof-of-concept code to a mailing list, the initial temporal score would be calculated using the values shown below: This would give a temporal score of 7.3, with a temporal vector of E:P/RL:U/RC:UC (or a full vector of AV:N/AC:L/Au:N/C:P/I:P/A:C/E:P/RL:U/RC:UC). If the vendor then confirms the vulnerability, then the score rises to 8.1, with a temporal vector of E:P/RL:U/RC:C A temporary fix from the vendor would reduce the score back to 7.3 (E:P/RL:T/RC:C), while an official fix would reduce it further to 7.0 (E:P/RL:O/RC:C). As it is not possible to be confident that every affected system has been fixed or patched, the temporal score cannot reduce below a certain level based on the vendor's actions, and may increase if an automated exploit for the vulnerability is developed. Environmental metrics. The environmental metrics use the base and current temporal score to assess the severity of a vulnerability in the context of the way that the vulnerable product or software is deployed. This measure is calculated subjectively, typically by affected parties. Collateral Damage Potential. The collateral damage potential (CDP) metric measures the potential loss or impact on either physical assets such as equipment (and lives), or the financial impact upon the affected organisation if the vulnerability is exploited. Target Distribution. The target distribution (TD) metric measures the proportion of vulnerable systems in the environment. Impact Subscore Modifier. Three further metrics assess the specific security requirements for confidentiality (CR), integrity (IR) and availability (AR), allowing the environmental score to be fine-tuned according to the users' environment. Calculations. 
The five environmental metrics are used in conjunction with the previously assessed base and temporal metrics to calculate the environmental score and to produce the associated environmental vector. formula_5 formula_6 formula_7 Example. If the aforementioned vulnerable web server were used by a bank to provide online banking services, and a temporary fix was available from the vendor, then the environmental score could be assessed as: This would give an environmental score of 8.2, and an environmental vector of CDP:MH/TD:H/CR:H/IR:H/AR:L. This score is within the range 7.0-10.0, and therefore constitutes a critical vulnerability in the context of the affected bank's business. Criticism of Version 2. Several vendors and organizations expressed dissatisfaction with CVSSv2. Risk Based Security, which manages the Open Source Vulnerability Database, and the Open Security Foundation jointly published a public letter to FIRST regarding the shortcomings and failures of CVSSv2. The authors cited a lack of granularity in several metrics, which results in CVSS vectors and scores that do not properly distinguish vulnerabilities of different type and risk profiles. The CVSS scoring system was also noted as requiring too much knowledge of the exact impact of the vulnerability. Oracle introduced the new metric value of "Partial+" for Confidentiality, Integrity, and Availability, to fill perceived gaps in the description between Partial and Complete in the official CVSS specifications. Version 3. To address some of these criticisms, development of CVSS version 3 was started in 2012. The final specification was named CVSSv3.0 and released in June 2015. In addition to a Specification Document, a User Guide and Examples document were also released. Several metrics were changed, added, and removed. The numerical formulas were updated to incorporate the new metrics while retaining the existing scoring range of 0-10. Textual severity ratings of None (0), Low (0.1-3.9), Medium (4.0-6.9), High (7.0-8.9), and Critical (9.0-10.0) were defined, similar to the categories NVD defined for CVSSv2 that were not part of that standard. Changes from Version 2. Base metrics. In the Base vector, the new metrics User Interaction (UI) and Privileges Required (PR) were added to help distinguish vulnerabilities that required user interaction or user or administrator privileges to be exploited. Previously, these concepts were part of the Access Vector metric of CVSSv2. UI can take the values None or Required; attacks that do not require logging in as a user are considered more severe. PR can take the values None, Low, or High; similarly, attacks requiring fewer privileges are more severe. The Base vector also saw the introduction of the new Scope (S) metric, which was designed to make clear which vulnerabilities may be exploited and then used to attack other parts of a system or network. These new metrics allow the Base vector to more clearly express the type of vulnerability being evaluated. The Confidentiality, Integrity, and Availability (C, I, A) metrics were updated to have scores consisting of None, Low, or High, rather than the None, Partial, and Complete of CVSSv2. This allows more flexibility in determining the impact of a vulnerability on CIA metrics. Access Complexity was renamed Attack Complexity (AC) to make clear that access privileges were moved to a separate metric. 
This metric now describes how repeatable exploit of this vulnerability may be; AC is High if the attacker requires perfect timing or other circumstances (other than user interaction, which is also a separate metric) which may not be easily duplicated on future attempts. Attack Vector (AV) saw the inclusion of a new metric value of Physical (P), to describe vulnerabilities that require physical access to the device or system to perform. Temporal metrics. The Temporal metrics were essentially unchanged from CVSSv2. Environmental metrics. The Environmental metrics of CVSSv2 were completely removed and replaced with essentially a second Base score, known as the Modified vector. The Modified Base is intended to reflect differences within an organization or company compared to the world as a whole. New metrics to capture the importance of Confidentiality, Integrity, and Availability to a specific environment were added. Criticism of Version 3. In a blog post in September 2015, the CERT Coordination Center discussed limitations of CVSSv2 and CVSSv3.0 for use in scoring vulnerabilities in emerging technology systems such as the Internet of Things. Version 3.1. A minor update to CVSS was released on June 17, 2019. The goal of CVSSv3.1 was to clarify and improve upon the existing CVSSv3.0 standard without introducing new metrics or metric values, allowing for frictionless adoption of the new standard by both scoring providers and scoring consumers alike. Usability was a prime consideration when making improvements to the CVSS standard. Several changes being made in CVSSv3.1 are to improve the clarity of concepts introduced in CVSSv3.0, and thereby improve the overall ease of use of the standard. FIRST has used input from industry subject-matter experts to continue to enhance and refine CVSS to be more and more applicable to the vulnerabilities, products, and platforms being developed over the past 15 years and beyond. The primary goal of CVSS is to provide a deterministic and repeatable way to score the severity of a vulnerability across many different constituencies, allowing consumers of CVSS to use this score as input to a larger decision matrix of risk, remediation, and mitigation specific to their particular environment and risk tolerance. Updates to the CVSSv3.1 specification include clarification of the definitions and explanation of existing base metrics such as Attack Vector, Privileges Required, Scope, and Security Requirements. A new standard method of extending CVSS, called the CVSS Extensions Framework, was also defined, allowing a scoring provider to include additional metrics and metric groups while retaining the official Base, Temporal, and Environmental Metrics. The additional metrics allow industry sectors such as privacy, safety, automotive, healthcare, etc., to score factors that are outside the core CVSS standard. Finally, the CVSS Glossary of Terms has been expanded and refined to cover all terms used throughout the CVSSv3.1 documentation. Version 4.0. In June 2023, a public preview of CVSSv4.0 was released, bringing a number of improvements. Version 4.0 was officially released in November 2023. Adoption. Versions of CVSS have been adopted as the primary method for quantifying the severity of vulnerabilities by a wide range of organizations and companies, including: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
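The CVSSv2 base and temporal formulas quoted in the Calculations subsections above can be reproduced directly in code. The sketch below hard-codes the metric weights from the CVSSv2 specification (for example Network = 1.0, Low access complexity = 0.71, no authentication = 0.704, Partial impact = 0.275, Complete impact = 0.660); these constants should be verified against the FIRST documentation. The example vector is the buffer-overflow example scored 9.0 earlier.

```python
ACCESS_VECTOR    = {"L": 0.395, "A": 0.646, "N": 1.0}
ACCESS_COMPLEX   = {"H": 0.35, "M": 0.61, "L": 0.71}
AUTHENTICATION   = {"M": 0.45, "S": 0.56, "N": 0.704}
IMPACT           = {"N": 0.0, "P": 0.275, "C": 0.660}
EXPLOITABILITY_T = {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0, "ND": 1.0}
REMEDIATION      = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0}
REPORT_CONF      = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}

def base_score(av, ac, au, c, i, a):
    """CVSSv2 base score from the six base metrics."""
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEX[ac] * AUTHENTICATION[au]
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    f_impact = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

def temporal_score(base, e, rl, rc):
    """CVSSv2 temporal score: base score scaled by the three temporal metrics."""
    return round(base * EXPLOITABILITY_T[e] * REMEDIATION[rl] * REPORT_CONF[rc], 1)

if __name__ == "__main__":
    b = base_score("N", "L", "N", "P", "P", "C")
    print("AV:N/AC:L/Au:N/C:P/I:P/A:C ->", b)                                # 9.0
    print("E:POC/RL:U/RC:UC           ->", temporal_score(b, "POC", "U", "UC"))   # 7.3
    print("E:POC/RL:OF/RC:C           ->", temporal_score(b, "POC", "OF", "C"))   # 7.0
```

Running the script reproduces the base score of 9.0 and the temporal scores of 7.3 and 7.0 from the worked example above.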
[ { "math_id": 0, "text": "\n\\textsf{Exploitability} = 20 \\times \\textsf{AccessVector}\\times\\textsf{AccessComplexity}\\times\\textsf{Authentication}\n" }, { "math_id": 1, "text": "\n\\textsf{Impact} = 10.41 \\times (1-(1-\\textsf{ConfImpact}) \\times (1-\\textsf{IntegImpact}) \\times (1-\\textsf{AvailImpact}))\n" }, { "math_id": 2, "text": "\nf(\\textsf{Impact}) =\n\\begin{cases}\n0, & \\text{if }\\textsf{Impact}\\text{ = 0} \\\\\n1.176, & \\text{otherwise }\n\\end{cases}\n" }, { "math_id": 3, "text": "\n\\textsf{BaseScore} = \\textsf{roundTo1Decimal}( ((0.6 \\times \\textsf{Impact}) +(0.4 \\times \\textsf{Exploitability})-1.5) \\times f(\\textsf{Impact}))\n" }, { "math_id": 4, "text": "\n\\textsf{TemporalScore} = \\textsf{roundTo1Decimal}(\\textsf{BaseScore} \\times \\textsf{Exploitability} \\times \\textsf{RemediationLevel} \\times \\textsf{ReportConfidence})\n" }, { "math_id": 5, "text": "\n\\textsf{AdjustedImpact} = \\min(10,10.41 \\times (1-(1-\\textsf{ConfImpact} \\times \\textsf{ConfReq}) \\times (1-\\textsf{IntegImpact} \\times \\textsf{IntegReq}) \\times (1-\\textsf{AvailImpact} \\times \\textsf{AvailReq})))\n" }, { "math_id": 6, "text": "\n\\textsf{AdjustedTemporal} = \\textsf{TemporalScore}\\text{ recomputed with the }\\textsf{BaseScore}\\text{s }\\textsf{Impact}\\text{ sub-equation replaced with the }\\textsf{AdjustedImpact}\\text{ equation}\n" }, { "math_id": 7, "text": "\n\\textsf{EnvironmentalScore} = \\textsf{roundTo1Decimal}((\\textsf{AdjustedTemporal}+(10-\\textsf{AdjustedTemporal}) \\times \\textsf{CollateralDamagePotential}) \\times \\textsf{TargetDistribution})\n" } ]
https://en.wikipedia.org/wiki?curid=8105109
8106467
Preclosure operator
Closure operator In topology, a preclosure operator or Čech closure operator is a map between subsets of a set, similar to a topological closure operator, except that it is not required to be idempotent. That is, a preclosure operator obeys only three of the four Kuratowski closure axioms. Definition. A preclosure operator on a set formula_0 is a map formula_1 formula_2 where formula_3 is the power set of formula_4 The preclosure operator has to satisfy the following properties: 1. formula_5 2. formula_6 3. formula_7 The last axiom implies the following: 4. formula_8 implies formula_9. Topology. A set formula_10 is closed (with respect to the preclosure) if formula_11. A set formula_12 is open (with respect to the preclosure) if its complement formula_13 is closed. The collection of all open sets generated by the preclosure operator is a topology; however, this topology does not capture the notion of convergence associated with the operator, and one should consider a pretopology instead. Examples. Premetrics. Given a premetric formula_14 on formula_0, formula_15 is a preclosure on formula_4 Sequential spaces. The sequential closure operator formula_16 is a preclosure operator. Given a topology formula_17 with respect to which the sequential closure operator is defined, the topological space formula_18 is a sequential space if and only if the topology formula_19 generated by formula_16 is equal to formula_20 that is, if formula_21 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
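On a finite set the premetric example above can be made concrete and the axioms checked by brute force. The premetric values below are arbitrary assumptions chosen only to illustrate the construction (it is neither symmetric nor a metric); the example also shows that idempotence can fail, which is exactly what separates a preclosure from a Kuratowski closure operator.

```python
from itertools import chain, combinations

X = ("a", "b", "c")

# An arbitrary premetric d(x, y) >= 0 with d(x, x) = 0 (illustrative values only).
d = {
    ("a", "a"): 0, ("a", "b"): 1, ("a", "c"): 2,
    ("b", "a"): 0, ("b", "b"): 0, ("b", "c"): 1,
    ("c", "a"): 3, ("c", "b"): 0, ("c", "c"): 0,
}

def preclosure(A):
    """[A]_p = {x in X : d(x, A) = 0}, with the empty set mapped to itself."""
    A = frozenset(A)
    if not A:
        return frozenset()
    return frozenset(x for x in X if min(d[(x, a)] for a in A) == 0)

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Axioms: [empty]_p = empty, A is contained in [A]_p, [A u B]_p = [A]_p u [B]_p.
assert preclosure(()) == frozenset()
for A in subsets(X):
    assert set(A) <= preclosure(A)
    for B in subsets(X):
        assert preclosure(set(A) | set(B)) == preclosure(A) | preclosure(B)

# Idempotence may fail, so this operator is a preclosure but not a closure:
print(preclosure({"a"}), preclosure(preclosure({"a"})))
```

With these values, [{a}]_p = {a, b} while [[{a}]_p]_p = {a, b, c}, so the operator is not idempotent.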
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "[\\ \\ ]_p" }, { "math_id": 2, "text": "[\\ \\ ]_p:\\mathcal{P}(X) \\to \\mathcal{P}(X)" }, { "math_id": 3, "text": "\\mathcal{P}(X)" }, { "math_id": 4, "text": "X." }, { "math_id": 5, "text": " [\\varnothing]_p = \\varnothing \\! " }, { "math_id": 6, "text": " A \\subseteq [A]_p " }, { "math_id": 7, "text": " [A \\cup B]_p = [A]_p \\cup [B]_p" }, { "math_id": 8, "text": "A \\subseteq B" }, { "math_id": 9, "text": "[A]_p \\subseteq [B]_p" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "[A]_p=A" }, { "math_id": 12, "text": "U \\subset X" }, { "math_id": 13, "text": "A = X \\setminus U" }, { "math_id": 14, "text": "d" }, { "math_id": 15, "text": "[A]_p = \\{x \\in X : d(x,A)=0\\}" }, { "math_id": 16, "text": "[\\ \\ ]_\\text{seq}" }, { "math_id": 17, "text": "\\mathcal{T}" }, { "math_id": 18, "text": "(X,\\mathcal{T})" }, { "math_id": 19, "text": "\\mathcal{T}_\\text{seq}" }, { "math_id": 20, "text": "\\mathcal{T}," }, { "math_id": 21, "text": "\\mathcal{T}_\\text{seq} = \\mathcal{T}." } ]
https://en.wikipedia.org/wiki?curid=8106467
81094
Proposition
Bearer of truth or falsity A proposition is a central concept in the philosophy of language, semantics, logic, and related fields, often characterized as the primary bearer of truth or falsity. Propositions are also often characterized as being the kind of thing that declarative sentences denote. For instance the sentence "The sky is blue" denotes the proposition that the sky is blue. However, crucially, propositions are not themselves linguistic expressions. For instance, the English sentence "Snow is white" denotes the same proposition as the German sentence "Schnee ist weiß" even though the two sentences are not the same. Similarly, propositions can also be characterized as the objects of belief and other propositional attitudes. For instance if one believes that the sky is blue, what one believes is the proposition that the sky is blue. A proposition can also be thought of as a kind of idea: Collins Dictionary has a definition for "proposition" as "a statement or an idea that people can consider or discuss whether it is true." Formally, propositions are often modeled as functions which map a possible world to a truth value. For instance, the proposition that the sky is blue can be modeled as a function which would return the truth value formula_0 if given the actual world as input, but would return formula_1 if given some alternate world where the sky is green. However, a number of alternative formalizations have been proposed, notably the structured propositions view. Propositions have played a large role throughout the history of logic, linguistics, philosophy of language, and related disciplines. Some researchers have doubted whether a consistent definition of propositionhood is possible, David Lewis even remarking that "the conception we associate with the word ‘proposition’ may be something of a jumble of conflicting desiderata". The term is often used broadly and has been used to refer to various related concepts. Historical usage. By Aristotle. Aristotelian logic identifies a categorical proposition as a sentence which affirms or denies a predicate of a subject, optionally with the help of a copula. An Aristotelian proposition may take the form of "All men are mortal" or "Socrates is a man." In the first example, the subject is "men", predicate is "mortal" and copula is "are", while in the second example, the subject is "Socrates", the predicate is "a man" and copula is "is". By the logical positivists. Often, propositions are related to closed formulae (or logical sentence) to distinguish them from what is expressed by an open formula. In this sense, propositions are "statements" that are truth-bearers. This conception of a proposition was supported by the philosophical school of logical positivism. Some philosophers argue that some (or all) kinds of speech or actions besides the declarative ones also have propositional content. For example, yes–no questions present propositions, being inquiries into the truth value of them. On the other hand, some signs can be declarative assertions of propositions, without forming a sentence nor even being linguistic (e.g. traffic signs convey definite meaning which is either true or false). Propositions are also spoken of as the content of beliefs and similar intentional attitudes, such as desires, preferences, and hopes. For example, "I desire "that I have a new car"", or "I wonder "whether it will snow"" (or, whether it is the case that "it will snow"). 
Desire, belief, doubt, and so on, are thus called propositional attitudes when they take this sort of content. By Russell. Bertrand Russell held that propositions were structured entities with objects and properties as constituents. One important difference between Ludwig Wittgenstein's view (according to which a proposition is the set of possible worlds/states of affairs in which it is true) is that on the Russellian account, two propositions that are true in all the same states of affairs can still be differentiated. For instance, the proposition "two plus two equals four" is distinct on a Russellian account from the proposition "three plus three equals six". If propositions are sets of possible worlds, however, then all mathematical truths (and all other necessary truths) are the same set (the set of all possible worlds). Relation to the mind. In relation to the mind, propositions are discussed primarily as they fit into propositional attitudes. Propositional attitudes are simply attitudes characteristic of folk psychology (belief, desire, etc.) that one can take toward a proposition (e.g. 'it is raining,' 'snow is white,' etc.). In English, propositions usually follow folk psychological attitudes by a "that clause" (e.g. "Jane believes "that" it is raining"). In philosophy of mind and psychology, mental states are often taken to primarily consist in propositional attitudes. The propositions are usually said to be the "mental content" of the attitude. For example, if Jane has a mental state of believing that it is raining, her mental content is the proposition 'it is raining.' Furthermore, since such mental states are "about" something (namely, propositions), they are said to be intentional mental states. Explaining the relation of propositions to the mind is especially difficult for non-mentalist views of propositions, such as those of the logical positivists and Russell described above, and Gottlob Frege's view that propositions are Platonist entities, that is, existing in an abstract, non-physical realm. So some recent views of propositions have taken them to be mental. Although propositions cannot be particular thoughts since those are not shareable, they could be types of cognitive events or properties of thoughts (which could be the same across different thinkers). Philosophical debates surrounding propositions as they relate to propositional attitudes have also recently centered on whether they are internal or external to the agent, or whether they are mind-dependent or mind-independent entities. For more, see the entry on internalism and externalism in philosophy of mind. Treatment in logic. Aristotelian logic. As noted above, in Aristotelian logic a proposition is a particular kind of sentence (a declarative sentence) that affirms or denies a predicate of a subject, optionally with the help of a copula. Aristotelian propositions take forms like "All men are mortal" and "Socrates is a man." Syntactic characterization. In modern logic, the term "proposition" is often used for sentences of a formal language. In this usage, propositions are formal syntactic objects which can be studied independently of the meaning they would receive from a semantics. Propositions are also called sentences, statements, statement forms, formulas, and well-formed formulas, though these terms are usually not synonymous within a single text. A formal language begins with different types of symbols. 
These types can include variables, operators, function symbols, predicate (or relation) symbols, quantifiers, and propositional constants.(Grouping symbols such as delimiters are often added for convenience in using the language, but do not play a logical role.) Symbols are concatenated together according to recursive rules, in order to construct strings to which truth-values will be assigned. The rules specify how the operators, function and predicate symbols, and quantifiers are to be concatenated with other strings. A proposition is then a string with a specific form. The form that a proposition takes depends on the type of logic. The type of logic called propositional, sentential, or statement logic includes only operators and propositional constants as symbols in its language. The propositions in this language are propositional constants, which are considered atomic propositions, and composite (or compound) propositions, which are composed by recursively applying operators to propositions. "Application" here is simply a short way of saying that the corresponding concatenation rule has been applied. The types of logics called predicate, quantificational, or "n"-order logic include variables, operators, predicate and function symbols, and quantifiers as symbols in their languages. The propositions in these logics are more complex. First, one typically starts by defining a term as follows: For example, if "+" is a binary function symbol and "x", "y", and "z" are variables, then "x"+("y"+"z") is a term, which might be written with the symbols in various orders. Once a term is defined, a proposition can then be defined as follows: For example, if "=" is a binary predicate symbol and "∀" is a quantifier, then ∀"x","y","z" [("x" = "y") → ("x"+"z" = "y"+"z")] is a proposition. This more complex structure of propositions allows these logics to make finer distinctions between inferences, i.e., to have greater expressive power. Semantic characterization. Propositions are standardly understood semantically as indicator functions that take a possible world and return a truth value. For example, the proposition that the sky is blue could be represented as a function formula_2 such that formula_3 for every world formula_4 if any, where the sky is blue, and formula_5 for every world formula_6 if any, where it is not. A proposition can be modeled equivalently with the inverse image of formula_7 under the indicator function, which is sometimes called the "characteristic set" of the proposition. For instance, if formula_8 and formula_9 are the only worlds in which the sky is blue, the proposition that the sky is blue could be modeled as the set formula_10. Numerous refinements and alternative notions of proposition-hood have been proposed including inquisitive propositions and structured propositions. Propositions are called structured propositions if they have constituents, in some broad sense. Assuming a structured view of propositions, one can distinguish between singular propositions (also Russellian propositions, named after Bertrand Russell) which are about a particular individual, general propositions, which are not about any particular individual, and particularized propositions, which are about a particular individual but do not contain that individual as a constituent. Objections to propositions. Attempts to provide a workable definition of proposition include the following: Two meaningful declarative sentences express the same proposition, if and only if they mean the same thing. 
which defines "proposition" in terms of synonymity. For example, "Snow is white" (in English) and "Schnee ist weiß" (in German) are different sentences, but they say the same thing, so they express the same proposition. Another definition of proposition is: Two meaningful declarative sentence-tokens express the same proposition, if and only if they mean the same thing. The above definitions can result in two identical sentences/sentence-tokens appearing to have the same meaning, and thus expressing the same proposition and yet having different truth-values, as in "I am Spartacus" said by Spartacus and said by John Smith, and "It is Wednesday" said on a Wednesday and on a Thursday. These examples reflect the problem of ambiguity in common language, resulting in a mistaken equivalence of the statements. “I am Spartacus” spoken by Spartacus is the declaration that the individual speaking is called Spartacus and it is true. When spoken by John Smith, it is a declaration about a different speaker and it is false. The term “I” means different things, so “I am Spartacus” means different things. A related problem is when identical sentences have the same truth-value, yet express different propositions. The sentence “I am a philosopher” could have been spoken by both Socrates and Plato. In both instances, the statement is true, but means something different. These problems are addressed in predicate logic by using a variable for the problematic term, so that “X is a philosopher” can have Socrates or Plato substituted for X, illustrating that “Socrates is a philosopher” and “Plato is a philosopher” are different propositions. Similarly, “I am Spartacus” becomes “X is Spartacus”, where X is replaced with terms representing the individuals Spartacus and John Smith. In other words, the example problems can be averted if sentences are formulated with precision such that their terms have unambiguous meanings. A number of philosophers and linguists claim that all definitions of a proposition are too vague to be useful. For them, it is just a misleading concept that should be removed from philosophy and semantics. W. V. Quine, who granted the existence of sets in mathematics, maintained that the indeterminacy of translation prevented any meaningful discussion of propositions, and that they should be discarded in favor of sentences. P. F. Strawson, on the other hand, advocated for the use of the term "statement". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
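The possible-worlds treatment in the Semantic characterization section can be modelled directly by identifying each proposition with its characteristic set of worlds, so that negation, conjunction and entailment become ordinary set operations. The worlds and atomic facts below are invented solely for this illustration.

```python
# Each possible world is described by the atomic facts that hold in it
# (a toy model; the worlds and facts are assumptions of the example).
WORLDS = frozenset({"w1", "w2", "w3", "w4"})
FACTS = {
    "w1": {"sky_is_blue", "snow_is_white"},
    "w2": {"sky_is_blue"},
    "w3": {"snow_is_white"},
    "w4": set(),
}

def proposition(fact):
    """The proposition that `fact` holds = the set of worlds where it holds."""
    return frozenset(w for w in WORLDS if fact in FACTS[w])

def negation(p):
    return WORLDS - p

def conjunction(p, q):
    return p & q

def entails(p, q):
    """p entails q iff every world that makes p true also makes q true."""
    return p <= q

sky = proposition("sky_is_blue")
snow = proposition("snow_is_white")

print("sky is blue:", sorted(sky))                       # its characteristic set
print("both:", sorted(conjunction(sky, snow)))
print("'both' entails 'sky is blue':", entails(conjunction(sky, snow), sky))
print("necessary truth:", negation(conjunction(sky, negation(sky))) == WORLDS)
```

On this coarse-grained model every necessary truth is the same object (the set of all worlds), which is precisely the feature the Russellian structured-proposition view objects to.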
[ { "math_id": 0, "text": "T " }, { "math_id": 1, "text": "F " }, { "math_id": 2, "text": " f " }, { "math_id": 3, "text": "f(w)=T" }, { "math_id": 4, "text": " w ," }, { "math_id": 5, "text": "f(v)=F" }, { "math_id": 6, "text": " v ," }, { "math_id": 7, "text": "T" }, { "math_id": 8, "text": " w " }, { "math_id": 9, "text": " w' " }, { "math_id": 10, "text": " \\{w, w'\\} " } ]
https://en.wikipedia.org/wiki?curid=81094
810954
Underemployment equilibrium
Economic situation In Keynesian economics, underemployment equilibrium is a situation with a persistent shortfall relative to full employment and potential output so that unemployment is higher than at the NAIRU or the "natural" rate of unemployment. Theoretical framework. Origin. The concept of underemployment equilibrium originates from analyzing underemployment in the context of General Equilibrium Theory, a branch of microeconomics. It describes a steady economic state in which consumption and production output are both suboptimal – many economic agents in the economy are producing less than they could produce in some other equilibrium states.[1] Economic theory dictates that an underemployment equilibrium possesses certain stability features under standard assumptions[2] – the "invisible hand" (market force) cannot, by itself, alter the equilibrium outcome to a more socially desirable equilibrium.[3] Exogenous forces such as fiscal policy have to be implemented in order to drive the economy to a better state. Formal definition. In an economy formula_0: every economic agent "h" has a utility function formula_1 and an initial endowment of wealth formula_2 every firm "f" has a production function formula_3 every agent's share of firm "f" is formula_4 An underemployment equilibrium, given a price vector "p", is defined as a consumption-production vector formula_5 such that:[4] for every firm "f", producing formula_6 maximizes its profit; for every economic agent "h", consuming formula_7 maximizes its utility; and the market clears, meaning that the sum of optimal consumptions of all agents, formula_8 , equals the sum of their initial endowments, formula_9 plus the sum of optimal production plans of all firms, formula_10[1] Causes. Given a well-defined economy[2][4], there can be many stable equilibrium states – some are more desirable than others from a social welfare point of view. Many factors contribute to the existence of undesirable equilibria, among which two are crucial for underemployment equilibrium: oversupply and insufficient demand. When the labor force is overeducated for the skill level of available employment opportunities in the economy, an underemployment equilibrium will occur. Insufficient demand addresses the same issue at the macro level. When there are many fewer job opportunities than unemployed individuals, the unemployment rate is high. Moreover, well-qualified workers will face a tougher job market and thus have to settle for jobs originally meant for less skilled individuals. "Oversupply" here refers to an excess in both labor quantity and quality. Forms of underemployment equilibrium. Overqualification. Overqualification is the most common form of underemployment equilibrium and is a direct result of oversupply. It describes the situation in which individuals work in professions that require less education, skill, experience or ability than they possess. In economic terms, these agents are producing less than their socially optimal output. Collectively, when many individuals produce below their full potential, the economy is in a sub-optimal underemployment equilibrium.[5] Overstaffing. Overstaffing refers to the state in which firms or other organizations that act as employers in an economy hire more people than they need. This is much less common than overqualification. This redundancy invalidates unemployment rates as a signal for the existence of underemployment equilibrium. 
When firms are overstaffed, they cannot achieve their maximum profit levels, which leads to undesirable social consequences such as low GDP growth. Organizations plagued by overstaffing, including not-for-profit organizations, cannot achieve maximum efficiency, and their ability to create value according to their mission, vision and purpose will be hampered.[6] Applications and historical examples. Underemployment during the Great Depression. During the Great Depression of the 1930s, the U.S. unemployment rate reached 25% and the GDP growth rate fell to −13%.[7] The U.S. economy in this period can be characterized by an underemployment equilibrium. On the one hand, many outside forces (including financial instability, hyper-inflation, lack of capital, etc.) created a negative shock to job market demand. On the other hand, the first two decades of the 20th century saw rapid advancement in production technologies, which effectively eliminated a large number of skilled jobs. Both of the above forces helped create insufficient demand in the labor market during that time, causing an underemployment equilibrium. This particular underemployment equilibrium took the form of overqualification, characterized by a high unemployment rate and low household incomes. Underemployment in the aftermath of the 2008 financial crisis. Graduates entering the job market in 2012 faced very tough competition,[8] caused by an oversupply of skilled workers, including fresh graduates and people who were laid off during the 2008 financial crisis. Graduates did not have enough time to respond to the 2008 financial crisis and continued to finish their degrees, only to find that there were not enough jobs upon graduation. This underemployment equilibrium state is characterized by overqualification – many college graduates took positions designed for less educated individuals due to gloomy job market conditions.[8] Data. The Bureau of Labor Statistics has calculated the monthly “Underemployment Rate” since January 1948. The underemployment rate has a cyclical trend and is generally higher during recession periods. Similar to the unemployment rate, the underemployment rate varies for different subgroups of the labor force. For example, individuals with Ph.D.s enjoy a low underemployment rate, while individuals with a high school diploma or less usually suffer from a high underemployment rate.[9] References. &lt;templatestyles src="Reflist/styles.css" /&gt; Reference list. 1. Jean-Jacques Herings, P. “Underemployment Equilibria”. "The New Palgrave Dictionary of Economics". Second Edition. Palgrave Macmillan, 2008. 2. Arrow, Kenneth J.; Block, H. D.; Hurwicz, Leonid. “On the Stability of the Competitive Equilibrium, II”. "Econometrica", Vol. 27, No. 1 (Jan., 1959), pp. 82–109. 3. Feldman, D. C. “The nature, antecedents and consequences of underemployment”. "Journal of Management", 22(3), 385–407. 4. Truman F. Bewley. "General Equilibrium, Overlapping Generations Models, and Optimal Growth Theory". Harvard University Press, 2007. 5. Erdogan, B., &amp; Bauer, T. N. “Perceived overqualification and its outcomes: The moderating role of empowerment”. "Journal of Applied Psychology", 94(2), 557–565. 6. Felices, G. “Assessing the Extent of Labour Hoarding”. "Bank of England Quarterly Bulletin", 43(2), 198–206. 7. Frank, Robert H.; Bernanke, Ben S. "Principles of Macroeconomics" (3rd ed.). Boston: McGraw-Hill/Irwin. p. 98. 8. Shierholz, Heidi; Sabadish, Natalie; Wething, Hilary.
“The Class of 2012, Labor market for young graduates remains grim”. Economic Policy Institute Report: Jobs and Unemployment. May 3, 2012. http://www.epi.org/publication/bp340-labor-market-young-graduates/ 9. Economic Policy Institute. Underemployment Rate. http://www.economytrack.org/underemployment.php
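As a toy numerical check of the market-clearing condition in the formal definition above (total consumption equals total endowment plus the firms' total net output, the formula_10 term), the sketch below uses made-up numbers that are not drawn from the cited sources.

```python
import numpy as np

# Two goods, two agents, one firm; all values are invented for illustration.
endowments   = np.array([[4.0, 1.0],    # agent 1's endowment of goods 1 and 2
                         [2.0, 3.0]])   # agent 2's endowment
net_output   = np.array([1.0, -0.5])    # firm's production plan y* (uses good 2 to make good 1)
consumptions = np.array([[3.5, 2.0],    # agent 1's consumption x*
                         [3.5, 1.5]])   # agent 2's consumption x*

total_supply = endowments.sum(axis=0) + net_output
total_demand = consumptions.sum(axis=0)
print("market clears:", np.allclose(total_demand, total_supply))
```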
[ { "math_id": 0, "text": "E=((u^h,e^h )_h,(Y^f,(\\theta^fh )_h )_f)" }, { "math_id": 1, "text": "u^h" }, { "math_id": 2, "text": "e^h;" }, { "math_id": 3, "text": "Y^f;" }, { "math_id": 4, "text": "(\\theta^fh )_h." }, { "math_id": 5, "text": "(x^*,y^* )" }, { "math_id": 6, "text": "y^*" }, { "math_id": 7, "text": "x^*" }, { "math_id": 8, "text": "\\sum x^*" }, { "math_id": 9, "text": "\\sum e^*," }, { "math_id": 10, "text": "\\sum y^*." } ]
https://en.wikipedia.org/wiki?curid=810954
811030
Schwinger–Dyson equation
Equations for correlation functions in QFT The Schwinger–Dyson equations (SDEs) or Dyson–Schwinger equations, named after Julian Schwinger and Freeman Dyson, are general relations between correlation functions in quantum field theories (QFTs). They are also referred to as the Euler–Lagrange equations of quantum field theories, since they are the equations of motion corresponding to the Green's function. They form a set of infinitely many functional differential equations, all coupled to each other, sometimes referred to as the infinite tower of SDEs. In his paper "The S-Matrix in Quantum electrodynamics", Dyson derived relations between different S-matrix elements, or more specific "one-particle Green's functions", in quantum electrodynamics, by summing up infinitely many Feynman diagrams, thus working in a perturbative approach. Starting from his variational principle, Schwinger derived a set of equations for Green's functions non-perturbatively, which generalize Dyson's equations to the Schwinger–Dyson equations for the Green functions of quantum field theories. Today they provide a non-perturbative approach to quantum field theories and applications can be found in many fields of theoretical physics, such as solid-state physics and elementary particle physics. Schwinger also derived an equation for the two-particle irreducible Green functions, which is nowadays referred to as the inhomogeneous Bethe–Salpeter equation. Derivation. Given a polynomially bounded functional formula_0 over the field configurations, then, for any state vector (which is a solution of the QFT), formula_1, we have formula_2 where formula_3 is the action functional and formula_4 is the time ordering operation. Equivalently, in the density state formulation, for any (valid) density state formula_5, we have formula_6 This infinite set of equations can be used to solve for the correlation functions nonperturbatively. To make the connection to diagrammatic techniques (like Feynman diagrams) clearer, it is often convenient to split the action formula_3 as formula_7 where the first term is the quadratic part and formula_8 is an invertible symmetric (antisymmetric for fermions) covariant tensor of rank two in the deWitt notation whose inverse, formula_9 is called the bare propagator and formula_10 is the "interaction action". Then, we can rewrite the SD equations as formula_11 If formula_0 is a functional of formula_12, then for an operator formula_13, formula_14 is defined to be the operator which substitutes formula_13 for formula_12. For example, if formula_15 and formula_16 is a functional of formula_17, then formula_18 If we have an "analytic" (a function that is locally given by a convergent power series) functional formula_19 (called the generating functional) of formula_17 (called the source field) satisfying formula_20 then, from the properties of the functional integrals formula_21 the Schwinger–Dyson equation for the generating functional is formula_22 If we expand this equation as a Taylor series about formula_23, we get the entire set of Schwinger–Dyson equations. An example: "φ"4. To give an example, suppose formula_24 for a real field "φ". Then, formula_25 The Schwinger–Dyson equation for this particular example is: formula_26 Note that since formula_27 is not well-defined because formula_28 is a distribution in "x"1, "x"2 and "x"3, this equation needs to be regularized. 
In this example, the bare propagator D is the Green's function for formula_29 and so the Schwinger–Dyson set of equations reads formula_30 and formula_31 etc. References. Further reading. There are not many books that treat the Schwinger–Dyson equations. Here are three standard references: There are also review articles on applications of the Schwinger–Dyson equations to specific fields of physics. For applications to Quantum Chromodynamics there are
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "|\\psi\\rangle" }, { "math_id": 2, "text": "\\left\\langle\\psi\\left|\\mathcal{T}\\left\\{\\frac{\\delta}{\\delta\\varphi}F[\\varphi]\\right\\}\\right|\\psi\\right\\rangle = -i\\left\\langle\\psi\\left|\\mathcal{T}\\left\\{F[\\varphi]\\frac{\\delta}{\\delta\\varphi}S[\\varphi]\\right\\}\\right|\\psi\\right\\rangle" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "\\mathcal{T}" }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\rho\\left(\\mathcal{T}\\left\\{\\frac{\\delta}{\\delta\\varphi}F[\\varphi]\\right\\}\\right) = -i\\rho\\left(\\mathcal{T}\\left\\{ F[\\varphi] \\frac{\\delta}{\\delta\\varphi}S[\\varphi]\\right\\}\\right)." }, { "math_id": 7, "text": "S[\\varphi] = \\frac{1}{2}\\varphi^{i}D^{-1}_{ij}\\varphi^{j} + S_{\\text{int}}[\\varphi]," }, { "math_id": 8, "text": "D^{-1}" }, { "math_id": 9, "text": "D" }, { "math_id": 10, "text": "S_{\\text{int}}[\\varphi]" }, { "math_id": 11, "text": "\\langle\\psi|\\mathcal{T}\\{F \\varphi^j\\}|\\psi\\rangle=\\langle\\psi|\\mathcal{T}\\{iF_{,i}D^{ij}-FS_{\\text{int},i}D^{ij}\\}|\\psi\\rangle." }, { "math_id": 12, "text": "\\varphi" }, { "math_id": 13, "text": "K" }, { "math_id": 14, "text": "F[K]" }, { "math_id": 15, "text": "F[\\varphi]=\\frac{\\partial^{k_1}}{\\partial x_1^{k_1}}\\varphi(x_1)\\cdots \\frac{\\partial^{k_n}}{\\partial x_n^{k_n}}\\varphi(x_n)" }, { "math_id": 16, "text": "G" }, { "math_id": 17, "text": "J" }, { "math_id": 18, "text": "F\\left[-i\\frac{\\delta}{\\delta J}\\right]G[J]=(-i)^n \\frac{\\partial^{k_1}}{\\partial x_1^{k_1}}\\frac{\\delta}{\\delta J(x_1)} \\cdots \\frac{\\partial^{k_n}}{\\partial x_n^{k_n}}\\frac{\\delta}{\\delta J(x_n)} G[J]." }, { "math_id": 19, "text": "Z" }, { "math_id": 20, "text": "\\frac{\\delta^n Z}{\\delta J(x_1) \\cdots \\delta J(x_n)}[0]=i^n Z[0] \\langle\\varphi(x_1)\\cdots \\varphi(x_n)\\rangle," }, { "math_id": 21, "text": "{\\left \\langle \\frac{\\delta \\mathcal{S}}{\\delta \\varphi(x)}\\left[\\varphi \\right] + J(x)\\right\\rangle}_J=0," }, { "math_id": 22, "text": "\\frac{\\delta S}{\\delta \\varphi(x)}\\left[-i \\frac{\\delta}{\\delta J} \\right] Z[J] + J(x)Z[J]=0." }, { "math_id": 23, "text": "J = 0" }, { "math_id": 24, "text": "S[\\varphi]=\\int d^dx \\left (\\frac{1}{2} \\partial^\\mu \\varphi(x) \\partial_\\mu \\varphi(x) -\\frac{1}{2}m^2\\varphi(x)^2 -\\frac{\\lambda}{4!}\\varphi(x)^4\\right )" }, { "math_id": 25, "text": "\\frac{\\delta S}{\\delta \\varphi(x)}=-\\partial_\\mu \\partial^\\mu \\varphi(x) -m^2 \\varphi(x) - \\frac{\\lambda}{3!}\\varphi^3(x)." 
}, { "math_id": 26, "text": "i\\partial_\\mu \\partial^\\mu \\frac{\\delta}{\\delta J(x)}Z[J]+im^2\\frac{\\delta}{\\delta J(x)}Z[J]-\\frac{i\\lambda}{3!}\\frac{\\delta^3}{\\delta J(x)^3} Z[J] + J(x)Z[J] = 0" }, { "math_id": 27, "text": "\\frac{\\delta^3}{\\delta J(x)^3}" }, { "math_id": 28, "text": "\\frac{\\delta^3}{\\delta J(x_1)\\delta J(x_2) \\delta J(x_3)} Z[J]" }, { "math_id": 29, "text": "-\\partial^\\mu \\partial_\\mu-m^2" }, { "math_id": 30, "text": "\n\\begin{align}\n& \\langle\\psi\\mid\\mathcal{T}\\{ \\varphi(x_0) \\varphi(x_1)\\} \\mid \\psi\\rangle \\\\[4pt]\n= {} & iD(x_0,x_1) +\\frac{\\lambda}{3!}\\int d^dx_2 \\, D(x_0,x_2) \\langle \\psi \\mid \\mathcal{T} \\{\\varphi(x_1)\\varphi(x_2)\\varphi(x_2)\\varphi(x_2)\\} \\mid \\psi\\rangle\n\\end{align}\n" }, { "math_id": 31, "text": "\n\\begin{align}\n& \\langle\\psi\\mid\\mathcal{T}\\{\\varphi(x_0) \\varphi(x_1) \\varphi(x_2) \\varphi(x_3)\\} \\mid \\psi\\rangle \\\\[6pt]\n= {} & iD(x_0,x_1)\\langle\\psi\\mid\\mathcal{T}\\{\\varphi(x_2)\\varphi(x_3)\\}\\mid\\psi\\rangle + iD(x_0,x_2)\\langle\\psi\\mid\\mathcal{T}\\{\\varphi(x_1)\\varphi(x_3)\\}\\mid\\psi\\rangle \\\\[4pt]\n& {} + iD(x_0,x_3)\\langle\\psi\\mid\\mathcal{T}\\{\\varphi(x_1)\\varphi(x_2)\\}\\mid\\psi\\rangle \\\\[4pt]\n& {} + \\frac{\\lambda}{3!}\\int d^dx_4 \\, D(x_0,x_4)\\langle\\psi\\mid\\mathcal{T}\\{\\varphi(x_1)\\varphi(x_2)\\varphi(x_3)\\varphi(x_4)\\varphi(x_4)\\varphi(x_4)\\}\\mid\\psi\\rangle\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=811030
8111079
Gravitational wave
Propagating spacetime ripple Gravitational waves are transient displacements in a gravitational field—generated by the motion or acceleration of gravitating masses—that radiate outward from their source at the speed of light. They were first proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves. In 1916, Albert Einstein demonstrated that gravitational waves result from his general theory of relativity as ripples in spacetime. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, since that law is predicated on the assumption that physical interactions propagate instantaneously (at infinite speed) – showing one of the ways the methods of Newtonian physics are unable to explain phenomena associated with relativity. In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang. The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery. The first direct observation of gravitational waves was made in 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves. Introduction. In Albert Einstein's general theory of relativity, gravity is treated as a phenomenon resulting from the curvature of spacetime. This curvature is caused by the presence of mass. Generally, the more mass that is contained within a given volume of space, the greater the curvature of spacetime will be at the boundary of its volume. As objects with mass move around in spacetime, the curvature changes to reflect the changed locations of those objects. In certain circumstances, accelerating objects generate changes in this curvature which propagate outwards at the speed of light in a wave-like manner. These propagating phenomena are known as gravitational waves. As a gravitational wave passes an observer, that observer will find spacetime distorted by the effects of strain. Distances between objects increase and decrease rhythmically as the wave passes, at a frequency equal to that of the wave. The magnitude of this effect is inversely proportional to the distance from the source. Inspiraling binary neutron stars are predicted to be a powerful source of gravitational waves as they coalesce, due to the very large acceleration of their masses as they orbit close to one another. However, due to the astronomical distances to these sources, the effects when measured on Earth are predicted to be very small, having strains of less than 1 part in 1020. 
Scientists demonstrate the existence of these waves with highly sensitive detectors at multiple observation sites. As of 2012, the LIGO and VIRGO observatories were the most sensitive detectors, operating at resolutions of about one part in . The Japanese detector KAGRA was completed in 2019; its first joint detection with LIGO and VIRGO was reported in 2021. Another European ground-based detector, the Einstein Telescope, is under development. A space-based observatory, the Laser Interferometer Space Antenna, is also being developed by the European Space Agency. Gravitational waves do not strongly interact with matter in the way that electromagnetic radiation does. This allows for the observation of events involving exotic objects in the distant universe that cannot be observed with more traditional means such as optical telescopes or radio telescopes; accordingly, gravitational wave astronomy gives new insights into the workings of the universe. In particular, gravitational waves could be of interest to cosmologists as they offer a possible way of observing the very early universe. This is not possible with conventional astronomy, since before recombination the universe was opaque to electromagnetic radiation. Precise measurements of gravitational waves will also allow scientists to test more thoroughly the general theory of relativity. In principle, gravitational waves can exist at any frequency. Very low frequency waves are detected using pulsar timing arrays. Astronomers monitor the timing of approximately 100 pulsars spread widely across our galaxy over the course of years. Detectable changes in the arrival time of their signals can result from passing gravitational waves generated by merging supermassive black holes with wavelengths measured in light-years. These timing changes can be used to locate the source of the waves. Using this technique, astronomers have discovered the 'hum' of various SMBH mergers occurring in the universe. Stephen Hawking and Werner Israel list different frequency bands for gravitational waves that could plausibly be detected, ranging from 10−7 Hz up to 1011 Hz. Speed of gravity. The speed of gravitational waves in the general theory of relativity is equal to the "speed of light" in vacuum, c. Within the theory of special relativity, the constant c is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity. Thus, the speed of "light" is also the speed of gravitational waves, and, further, the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if any exist, requires an as-yet unavailable theory of quantum gravity). In August 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light. History.
The possibility of gravitational waves and that those might travel at the speed of light was discussed in 1893 by Oliver Heaviside, using the analogy between the inverse-square law of gravitation and the electrostatic force. In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. In 1915 Einstein published his general theory of relativity, a complete relativistic theory of gravitation. He conjectured, like Poincare, that the equation would produce gravitational waves, but, as he mentions in a letter to Schwarzschild in February 1916, these could not be similar to electromagnetic waves. Electromagnetic waves can be produced by dipole motion, requiring both a positive and a negative charge. Gravitation has no equivalent to negative charge. Einstein continued to work through the complexity of the equations of general relativity to find an alternative wave model. The result was published in June 1916, and there he came to the conclusion that the gravitational wave must propagate with the speed of light, and there must, in fact, be three types of gravitational waves dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl. However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they "propagate at the speed of thought". This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagate at the speed of light regardless of coordinate system. In 1936, Einstein and Nathan Rosen submitted a paper to "Physical Review" in which they claimed gravitational waves could not exist in the full general theory of relativity because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson, who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in "Physical Review" again. Nonetheless, his assistant Leopold Infeld, who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere. In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor. At the time, Pirani's work was overshadowed by the community's focus on a different question: whether gravitational waves could transmit energy. This matter was settled by a thought experiment proposed by Richard Feynman during the first "GR" conference at Chapel Hill in 1957. 
In short, his argument known as the "sticky bead argument" notes that if one takes a rod with beads then the effect of a passing gravitational wave would be to move the beads along the rod; friction would then produce heat, implying that the passing wave had done work. Shortly after, Hermann Bondi published a detailed version of the "sticky bead argument". This later led to a series of articles (1959 to 1989) by Bondi and Pirani that established the existence of plane wave solutions for gravitational waves. Paul Dirac further postulated the existence of gravitational waves, declaring them to have "physical significance" in his 1959 lecture at the Lindau Meetings. Further, it was Dirac who predicted gravitational waves with a well defined energy density in 1964. After the Chapel Hill conference, Joseph Weber started designing and building the first gravitational wave detectors now known as Weber bars. In 1969, Weber claimed to have detected the first gravitational waves, and by 1970 he was "detecting" signals regularly from the Galactic Center; however, the frequency of detection soon raised doubts on the validity of his observations as the implied rate of energy loss of the Milky Way would drain our galaxy of energy on a timescale much shorter than its inferred age. These doubts were strengthened when, by the mid-1970s, repeated experiments from other groups building their own Weber bars across the globe failed to find any signals, and by the late 1970s consensus was that Weber's results were spurious. In the same period, the first indirect evidence of gravitational waves was discovered. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar, which earned them the 1993 Nobel Prize in Physics. Pulsar timing observations over the next decade showed a gradual decay of the orbital period of the Hulse–Taylor pulsar that matched the loss of energy and angular momentum in gravitational radiation predicted by general relativity. This indirect detection of gravitational waves motivated further searches, despite Weber's discredited result. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. The idea of using a laser interferometer for this seems to have been floated independently by various people, including M. E. Gertsenshtein and V. I. Pustovoit in 1962, and Vladimir B. Braginskiĭ in 1966. The first prototypes were developed in the 1970s by Robert L. Forward and Rainer Weiss. In the decades that followed, ever more sensitive instruments were constructed, culminating in the construction of GEO600, LIGO, and Virgo. After years of producing null results, improved detectors became operational in 2015. On 11 February 2016, the LIGO-Virgo collaborations announced the first observation of gravitational waves, from a signal (dubbed GW150914) detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. 
The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The confidence level of this being an observation of gravitational waves was 99.99994%. A year earlier, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background. However, they were later forced to retract this result. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. In 2023, NANOGrav, EPTA, PPTA, and IPTA announced that they had found evidence of a universal gravitational wave background. The North American Nanohertz Observatory for Gravitational Waves states that this background was created over cosmological time scales by supermassive black holes, identifying the distinctive Hellings-Downs curve in 15 years of radio observations of 25 pulsars. Similar results were published by the European Pulsar Timing Array, which claimed a formula_0 significance. They expect that a formula_1 significance will be achieved by 2025 by combining the measurements of several collaborations. Effects of passing. Gravitational waves are constantly passing Earth; however, even the strongest have a minuscule effect and their sources are generally at a great distance. For example, the waves given off by the cataclysmic final merger of GW150914 reached Earth after travelling over a billion light-years, as a ripple in spacetime that changed the length of a 4 km LIGO arm by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. This tiny effect from even extreme gravitational waves makes them observable on Earth only with the most sophisticated detectors. The effects of a passing gravitational wave, in an extremely exaggerated form, can be visualized by imagining a perfectly flat region of spacetime with a group of motionless test particles lying in a plane, e.g., the surface of a computer screen. As a gravitational wave passes through the particles along a line perpendicular to the plane of the particles, i.e., following the observer's line of vision into the screen, the particles will follow the distortion in spacetime, oscillating in a "cruciform" manner, as shown in the animations. The area enclosed by the test particles does not change and there is no motion along the direction of propagation. The oscillations depicted in the animation are exaggerated for the purpose of discussion – in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula.
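A small numerical sketch of this distortion can be written down from the standard linearized-gravity (transverse–traceless gauge) rule for the displacement of free test particles by a wave travelling perpendicular to their plane. The strain amplitude below is enormously exaggerated so that the effect is visible, and the specific numbers are purely illustrative.

import numpy as np

# For a wave travelling along z, linearized general relativity displaces a
# free test particle at (x, y) by
#   dx = (h_plus * x + h_cross * y) / 2,   dy = (h_cross * x - h_plus * y) / 2.
h_amp = 0.2                                        # exaggerated strain
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
x, y = np.cos(theta), np.sin(theta)                # ring of test particles, unit radius

def displaced(h_plus, h_cross):
    dx = 0.5 * (h_plus * x + h_cross * y)
    dy = 0.5 * (h_cross * x - h_plus * y)
    return x + dx, y + dy

for t in np.linspace(0.0, 1.0, 5):                 # one period of a 1 Hz wave
    h_plus = h_amp * np.cos(2.0 * np.pi * t)       # pure "plus" polarization
    xs, ys = displaced(h_plus, 0.0)
    print(f"t={t:.2f}  x-extent {np.ptp(xs):.3f}  y-extent {np.ptp(ys):.3f}")
# The ring is alternately stretched along x and squeezed along y, then the
# reverse, with the enclosed area unchanged to first order; using h_cross
# instead gives the same pattern rotated by 45 degrees.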
As with other waves, there are a number of characteristics used to describe a gravitational wave: The speed, wavelength, and frequency of a gravitational wave are related by the equation "c" = "λf", just like the equation for a light wave. For example, the animations shown here oscillate roughly once every two seconds. This would correspond to a frequency of 0.5 Hz, and a wavelength of about 600 000 km, or 47 times the diameter of the Earth. In the above example, it is assumed that the wave is linearly polarized with a "plus" polarization, written "h"+. Polarization of a gravitational wave is just like polarization of a light wave except that the polarizations of a gravitational wave are 45 degrees apart, as opposed to 90 degrees. In particular, in a "cross"-polarized gravitational wave, "h"×, the effect on the test particles would be basically the same, but rotated by 45 degrees, as shown in the second animation. Just as with light polarization, the polarizations of gravitational waves may also be expressed in terms of circularly polarized waves. Gravitational waves are polarized because of the nature of their source. Sources. In general terms, gravitational waves are radiated by objects whose motion involves acceleration and its change, provided that the motion is not perfectly spherically symmetric (like an expanding or contracting sphere) or rotationally symmetric (like a spinning disk or sphere). A simple example of this principle is a spinning dumbbell. If the dumbbell spins around its axis of symmetry, it will not radiate gravitational waves; if it tumbles end over end, as in the case of two planets orbiting each other, it will radiate gravitational waves. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. In an extreme case, such as when the two weights of the dumbbell are massive stars like neutron stars or black holes, orbiting each other quickly, then significant amounts of gravitational radiation would be given off. Some more detailed examples: More technically, the second time derivative of the quadrupole moment (or the "l"-th time derivative of the "l"-th multipole moment) of an isolated system's stress–energy tensor must be non-zero in order for it to emit gravitational radiation. This is analogous to the changing dipole moment of charge or current that is necessary for the emission of electromagnetic radiation. Binaries. Gravitational waves carry energy away from their sources and, in the case of orbiting bodies, this is associated with an in-spiral or decrease in orbit. Imagine for example a simple system of two masses – such as the Earth–Sun system – moving slowly compared to the speed of light in circular orbits. Assume that these two masses orbit each other in a circular orbit in the "x"–"y" plane. To a good approximation, the masses follow simple Keplerian orbits. However, such an orbit represents a changing quadrupole moment. That is, the system will give off gravitational waves. In theory, the loss of energy through gravitational radiation could eventually drop the Earth into the Sun. However, the total energy of the Earth orbiting the Sun (kinetic energy + gravitational potential energy) is about 1.14×1036 joules of which only 200 watts (joules per second) is lost through gravitational radiation, leading to a decay in the orbit by about 1×10-15 meters per day or roughly the diameter of a proton. 
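A rough numerical check of these figures can be made with the standard quadrupole-formula results for a circular two-body orbit; the luminosity expression used below is the textbook circular-orbit formula, and the decay rate is the expression quoted in the next paragraph. The constants are ordinary reference values, and the outputs should be read as order-of-magnitude checks only.

G, c = 6.674e-11, 2.998e8                      # SI units
M_sun, M_earth = 1.989e30, 5.972e24            # kg
r = 1.496e11                                   # Earth-Sun separation, m

# Wavelength of the 0.5 Hz animation example: c / f, roughly 600,000 km.
print("wavelength:", c / 0.5 / 1e3, "km")

# Gravitational-wave luminosity of a circular binary (quadrupole formula).
P = (32.0 / 5.0) * G**4 / c**5 * (M_sun * M_earth)**2 * (M_sun + M_earth) / r**5
print("radiated power:", P, "W")               # about 200 W

# Orbital decay rate dr/dt and the shrinkage over one day.
drdt = -(64.0 / 5.0) * G**3 / c**5 * M_sun * M_earth * (M_sun + M_earth) / r**3
print("decay per day:", drdt * 86400.0, "m")   # about -1e-15 m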
At this rate, it would take the Earth approximately 3×1013 times more than the current age of the universe to spiral onto the Sun. This estimate overlooks the decrease in "r" over time, but the radius varies only slowly for most of the time and plunges at later stages, as formula_2 with formula_3 the initial radius and formula_4 the total time needed to fully coalesce. More generally, the rate of orbital decay can be approximated by formula_5 where "r" is the separation between the bodies, "t" time, "G" the gravitational constant, "c" the speed of light, and "m"1 and "m"2 the masses of the bodies. This leads to an expected time to merger of formula_6 Compact binaries. Compact stars like white dwarfs and neutron stars can be constituents of binaries. For example, a pair of solar mass neutron stars in a circular orbit at a separation of 1.89×108 m (189,000 km) has an orbital period of 1,000 seconds, and an expected lifetime of 1.30×1013 seconds or about 414,000 years. Such a system could be observed by LISA if it were not too far away. A far greater number of white dwarf binaries exist with orbital periods in this range. White dwarf binaries have masses in the order of the Sun, and diameters in the order of the Earth. They cannot get much closer together than 10,000 km before they will merge and explode in a supernova which would also end the emission of gravitational waves. Until then, their gravitational radiation would be comparable to that of a neutron star binary. When the orbit of a neutron star binary has decayed to 1.89×106 m (1890 km), its remaining lifetime is about 130,000 seconds or 36 hours. The orbital frequency will vary from 1 orbit per second at the start, to 918 orbits per second when the orbit has shrunk to 20 km at merger. The majority of gravitational radiation emitted will be at twice the orbital frequency. Just before merger, the inspiral could be observed by LIGO if such a binary were close enough. LIGO has only a few minutes to observe this merger out of a total orbital lifetime that may have been billions of years. In August 2017, LIGO and Virgo observed the first binary neutron star inspiral in GW170817, and 70 observatories collaborated to detect the electromagnetic counterpart, a kilonova in the galaxy NGC 4993, 40 megaparsecs away, emitting a short gamma ray burst (GRB 170817A) seconds after the merger, followed by a longer optical transient (AT 2017gfo) powered by r-process nuclei. Advanced LIGO detectors should be able to detect such events up to 200 megaparsecs away. Within this range of the order 40 events are expected per year. Black hole binaries. Black hole binaries emit gravitational waves during their in-spiral, merger, and ring-down phases. Hence, in the early 1990s the physics community rallied around a concerted effort to predict the waveforms of gravitational waves from these systems with the Binary Black Hole Grand Challenge Alliance. The largest amplitude of emission occurs during the merger phase, which can be modeled with the techniques of numerical relativity. The first direct detection of gravitational waves, GW150914, came from the merger of two black holes. Supernova. A supernova is a transient astronomical event that occurs during the last stellar evolutionary stages of a massive star's life, whose dramatic and catastrophic destruction is marked by one final titanic explosion. 
This explosion can happen in one of many ways, but in all of them a significant proportion of the matter in the star is blown away into the surrounding space at extremely high velocities (up to 10% of the speed of light). Unless there is perfect spherical symmetry in these explosions (i.e., unless matter is spewed out evenly in all directions), there will be gravitational radiation from the explosion. This is because gravitational waves are generated by a changing quadrupole moment, which can happen only when there is asymmetrical movement of masses. Since the exact mechanism by which supernovae take place is not fully understood, it is not easy to model the gravitational radiation emitted by them. Spinning neutron stars. As noted above, a mass distribution will emit gravitational radiation only when there is spherically asymmetric motion among the masses. A spinning neutron star will generally emit no gravitational radiation because neutron stars are highly dense objects with a strong gravitational field that keeps them almost perfectly spherical. In some cases, however, there might be slight deformities on the surface called "mountains", which are bumps extending no more than 10 centimeters (4 inches) above the surface, that make the spinning spherically asymmetric. This gives the star a quadrupole moment that changes with time, and it will emit gravitational waves until the deformities are smoothed out. Inflation. Many models of the Universe suggest that there was an inflationary epoch in the early history of the Universe when space expanded by a large factor in a very short amount of time. If this expansion was not symmetric in all directions, it may have emitted gravitational radiation detectable today as a gravitational wave background. This background signal is too weak for any currently operational gravitational wave detector to observe, and it is thought it may be decades before such an observation can be made. Properties and behaviour. Energy, momentum, and angular momentum. Water waves, sound waves, and electromagnetic waves are able to carry energy, momentum, and angular momentum and by doing so they carry those away from the source. Gravitational waves perform the same function. Thus, for example, a binary system loses angular momentum as the two orbiting objects spiral towards each other—the angular momentum is radiated away by gravitational waves. The waves can also carry off linear momentum, a possibility that has some interesting implications for astrophysics. After two supermassive black holes coalesce, emission of linear momentum can produce a "kick" with amplitude as large as 4000 km/s. This is fast enough to eject the coalesced black hole completely from its host galaxy. Even if the kick is too small to eject the black hole completely, it can remove it temporarily from the nucleus of the galaxy, after which it will oscillate about the center, eventually coming to rest. A kicked black hole can also carry a star cluster with it, forming a hyper-compact stellar system. Or it may carry gas, allowing the recoiling black hole to appear temporarily as a "naked quasar". The quasar SDSS J092712.65+294344.0 is thought to contain a recoiling supermassive black hole. Redshifting. Like electromagnetic waves, gravitational waves should exhibit shifting of wavelength and frequency due to the relative velocities of the source and observer (the Doppler effect), but also due to distortions of spacetime, such as cosmic expansion. 
Redshifting "of" gravitational waves is different from redshifting "due to" gravity (gravitational redshift). Quantum gravity, wave-particle aspects, and graviton. In the framework of quantum field theory, the graviton is the name given to a hypothetical elementary particle speculated to be the force carrier that mediates gravity. However the graviton is not yet proven to exist, and no scientific model yet exists that successfully reconciles general relativity, which describes gravity, and the Standard Model, which describes all other fundamental forces. Attempts, such as quantum gravity, have been made, but are not yet accepted. If such a particle exists, it is expected to be massless (because the gravitational force appears to have unlimited range) and must be a spin-2 boson. It can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field must couple to (interact with) the stress–energy tensor in the same way that the gravitational field does; therefore if a massless spin-2 particle were ever discovered, it would be likely to be the graviton without further distinction from other massless spin-2 particles. Such a discovery would unite quantum theory with gravity. Significance for study of the early universe. Due to the weakness of the coupling of gravity to matter, gravitational waves experience very little absorption or scattering, even as they travel over astronomical distances. In particular, gravitational waves are expected to be unaffected by the opacity of the very early universe. In these early phases, space had not yet become "transparent", so observations based upon light, radio waves, and other electromagnetic radiation that far back into time are limited or unavailable. Therefore, gravitational waves are expected in principle to have the potential to provide a wealth of observational data about the very early universe. Determining direction of travel. The difficulty in directly detecting gravitational waves means it is also difficult for a single detector to identify by itself the direction of a source. Therefore, multiple detectors are used, both to distinguish signals from other "noise" by confirming the signal is not of earthly origin, and also to determine direction by means of triangulation. This technique uses the fact that the waves travel at the speed of light and will reach different detectors at different times depending on their source direction. Although the differences in arrival time may be just a few milliseconds, this is sufficient to identify the direction of the origin of the wave with considerable precision. Only in the case of GW170814 were three detectors operating at the time of the event, therefore, the direction is precisely defined. The detection by all three instruments led to a very accurate estimate of the position of the source, with a 90% credible region of just 60 deg2, a factor 20 more accurate than before. Gravitational wave astronomy. During the past century, astronomy has been revolutionized by the use of new methods for observing the universe. Astronomical observations were initially made using visible light. Galileo Galilei pioneered the use of telescopes to enhance these observations. However, visible light is only a small portion of the electromagnetic spectrum, and not all objects in the distant universe shine strongly in this particular band. More information may be found, for example, in radio wavelengths. 
Using radio telescopes, astronomers have discovered pulsars and quasars, for example. Observations in the microwave band led to the detection of faint imprints of the Big Bang, a discovery Stephen Hawking called the "greatest discovery of the century, if not all time". Similar advances in observations using gamma rays, x-rays, ultraviolet light, and infrared light have also brought new insights to astronomy. As each of these regions of the spectrum has opened, new discoveries have been made that could not have been made otherwise. The astronomy community hopes that the same holds true of gravitational waves. Gravitational waves have two important and unique properties. First, there is no need for any type of matter to be present nearby in order for the waves to be generated by a binary system of uncharged black holes, which would emit no electromagnetic radiation. Second, gravitational waves can pass through any intervening matter without being scattered significantly. Whereas light from distant stars may be blocked out by interstellar dust, for example, gravitational waves will pass through essentially unimpeded. These two features allow gravitational waves to carry information about astronomical phenomena heretofore never observed by humans. The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10−7 to 105 Hz). An astrophysical source at the high-frequency end of the gravitational-wave spectrum (above 105 Hz and probably 1010 Hz) generates relic gravitational waves that are theorized to be faint imprints of the Big Bang like the cosmic microwave background. At these high frequencies it is potentially possible that the sources may be "man made" that is, gravitational waves generated and detected in the laboratory. A supermassive black hole, created from the merger of the black holes at the center of two merging galaxies detected by the Hubble Space Telescope, is theorized to have been ejected from the merger center by gravitational waves. Detection. Indirect detection. Although the waves from the Earth–Sun system are minuscule, astronomers can point to other sources for which the radiation should be substantial. One important example is the Hulse–Taylor binary – a pair of stars, one of which is a pulsar. The characteristics of their orbit can be deduced from the Doppler shifting of radio signals given off by the pulsar. Each of the stars is about 1.4 M☉ and the size of their orbits is about 1/75 of the Earth–Sun orbit, just a few times larger than the diameter of our own Sun. The combination of greater masses and smaller separation means that the energy given off by the Hulse–Taylor binary will be far greater than the energy given off by the Earth–Sun system – roughly 1022 times as much. The information about the orbit can be used to predict how much energy (and angular momentum) would be radiated in the form of gravitational waves. As the binary system loses energy, the stars gradually draw closer to each other, and the orbital period decreases. The resulting trajectory of each star is an inspiral, a spiral with decreasing radius. General relativity precisely describes these trajectories; in particular, the energy radiated in gravitational waves determines the rate of decrease in the period, defined as the time interval between successive periastrons (points of closest approach of the two stars). 
For the Hulse–Taylor pulsar, the predicted current change in radius is about 3 mm per orbit, and the change in the 7.75 hr period is about 2 seconds per year. Following a preliminary observation showing an orbital energy loss consistent with gravitational waves, careful timing observations by Taylor and Joel Weisberg dramatically confirmed the predicted period decrease to within 10%. With the improved statistics of more than 30 years of timing data since the pulsar's discovery, the observed change in the orbital period currently matches the prediction from gravitational radiation assumed by general relativity to within 0.2 percent. In 1993, spurred in part by this indirect detection of gravitational waves, the Nobel Committee awarded the Nobel Prize in Physics to Hulse and Taylor for "the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation." The lifetime of this binary system, from the present to merger is estimated to be a few hundred million years. Inspirals are very important sources of gravitational waves. Any time two compact objects (white dwarfs, neutron stars, or black holes) are in close orbits, they send out intense gravitational waves. As they spiral closer to each other, these waves become more intense. At some point they should become so intense that direct detection by their effect on objects on Earth or in space is possible. This direct detection is the goal of several large-scale experiments. The only difficulty is that most systems like the Hulse–Taylor binary are so far away. The amplitude of waves given off by the Hulse–Taylor binary at Earth would be roughly "h" ≈ 10−26. There are some sources, however, that astrophysicists expect to find that produce much greater amplitudes of "h" ≈ 10−20. At least eight other binary pulsars have been discovered. Difficulties. Gravitational waves are not easily detectable. When they reach the Earth, they have a small amplitude with strain approximately 10−21, meaning that an extremely sensitive detector is needed, and that other sources of noise can overwhelm the signal. Gravitational waves are expected to have frequencies 10−16 Hz &lt; "f" &lt; 104 Hz.&lt;ref name="arXiv:gr-qc/9506086"&gt;&lt;/ref&gt; Ground-based detectors. Though the Hulse–Taylor observations were very important, they give only "indirect" evidence for gravitational waves. A more conclusive observation would be a "direct" measurement of the effect of a passing gravitational wave, which could also provide more information about the system that generated it. Any such direct detection is complicated by the extraordinarily small effect the waves would produce on a detector. The amplitude of a spherical wave will fall off as the inverse of the distance from the source (the 1/"R" term in the formulas for "h" above). Thus, even waves from extreme systems like merging binary black holes die out to very small amplitudes by the time they reach the Earth. Astrophysicists expect that some gravitational waves passing the Earth may be as large as "h" ≈ 10−20, but generally no bigger. Resonant antennas. A simple device theorised to detect the expected wave motion is called a Weber bar – a large, solid bar of metal isolated from outside vibrations. This type of instrument was the first type of gravitational wave detector. Strains in space due to an incident gravitational wave excite the bar's resonant frequency and could thus be amplified to detectable levels. 
Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. With this instrument, Joseph Weber claimed to have detected daily signals of gravitational waves. His results, however, were contested in 1974 by physicists Richard Garwin and David Douglass. Modern forms of the Weber bar are still operated, cryogenically cooled, with superconducting quantum interference devices to detect vibration. Weber bars are not sensitive enough to detect anything but extremely powerful gravitational waves. MiniGRAIL is a spherical gravitational wave antenna using this principle. It is based at Leiden University, consisting of an exactingly machined 1,150 kg sphere cryogenically cooled to 20 millikelvins. The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers. There are currently two detectors focused on the higher end of the gravitational wave spectrum (10−7 to 105 Hz): one at University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Both detectors are expected to be sensitive to periodic spacetime strains of "h" ~ , given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of "h" ~ , with an expectation to reach a sensitivity of "h" ~ . The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters ≈1011 Hz (100 GHz) and "h" ≈10−30 to 10−32. Interferometers. A more sensitive class of detector uses a laser Michelson interferometer to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). After years of development ground-based interferometers made the first detection of gravitational waves in 2015. Currently, the most sensitive is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO has three detectors: one in Livingston, Louisiana, one at the Hanford site in Richland, Washington and a third (formerly installed as a second detector at Hanford) that is planned to be moved to India. Each observatory has two light storage arms that are 4 kilometers in length. These are at 90 degree angles to each other, with the light passing through 1 m diameter vacuum tubes running the entire 4 kilometers. A passing gravitational wave will slightly stretch one arm as it shortens the other. This is the motion to which an interferometer is most sensitive. Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10−18 m. 
LIGO should be able to detect gravitational waves as small as "h" ~ . Upgrades to LIGO and Virgo should increase the sensitivity still further. Another highly sensitive interferometer, KAGRA, which is located in the Kamioka Observatory in Japan, is in operation since February 2020. A key point is that a tenfold increase in sensitivity (radius of 'reach') increases the volume of space accessible to the instrument by one thousand times. This increases the rate at which detectable signals might be seen from one per tens of years of observation, to tens per year. Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly; one analogy is to rainfall – the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals of low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these 'stationary' (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other 'non-stationary' noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All of these must be taken into account and excluded by analysis before detection may be considered a true gravitational wave event. Einstein@Home. The simplest gravitational waves are those with constant frequency. The waves given off by a spinning, non-axisymmetric neutron star would be approximately monochromatic: a pure tone in acoustics. Unlike signals from supernovae or binary black holes, these signals evolve little in amplitude or frequency over the period it would be observed by ground-based detectors. However, there would be some change in the measured signal, because of Doppler shifting caused by the motion of the Earth. Despite the signals being simple, detection is extremely computationally expensive, because of the long stretches of data that must be analysed. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise. Space-based interferometers. Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to heat, shot noise, and artifacts caused by cosmic rays and solar wind. Using pulsar timing arrays. Pulsars are rapidly rotating stars. A pulsar emits beams of radio waves that, like lighthouse beams, sweep through the sky as the pulsar rotates. 
The signal from a pulsar can be detected by radio telescopes as a series of regularly spaced pulses, essentially like the ticks of a clock. GWs affect the time it takes the pulses to travel from the pulsar to a telescope on Earth. A pulsar timing array uses millisecond pulsars to seek out perturbations due to GWs in measurements of the time of arrival of pulses to a telescope, in other words, to look for deviations in the clock ticks. To detect GWs, pulsar timing arrays search for a distinct quadrupolar pattern of correlation and anti-correlation between the time of arrival of pulses from different pulsar pairs as a function of their angular separation in the sky. Although pulsar pulses travel through space for hundreds or thousands of years to reach us, pulsar timing arrays are sensitive to perturbations in their travel time of much less than a millionth of a second. The most likely source of GWs to which pulsar timing arrays are sensitive are supermassive black hole binaries, which form from the collision of galaxies. In addition to individual binary systems, pulsar timing arrays are sensitive to a stochastic background of GWs made from the sum of GWs from many galaxy mergers. Other potential signal sources include cosmic strings and the primordial background of GWs from cosmic inflation. Globally there are three active pulsar timing array projects. The North American Nanohertz Observatory for Gravitational Waves uses data collected by the Arecibo Radio Telescope and Green Bank Telescope. The Australian Parkes Pulsar Timing Array uses data from the Parkes radio-telescope. The European Pulsar Timing Array uses data from the four largest telescopes in Europe: the Lovell Telescope, the Westerbork Synthesis Radio Telescope, the Effelsberg Telescope and the Nancay Radio Telescope. These three groups also collaborate under the title of the International Pulsar Timing Array project. In June 2023, NANOGrav published the 15-year data release, which contained the first evidence for a stochastic gravitational wave background. In particular, it included the first measurement of the Hellings-Downs curve, the tell-tale sign of the gravitational wave origin of the observed background. Primordial gravitational wave. Primordial gravitational waves are gravitational waves observed in the cosmic microwave background. They were allegedly detected by the BICEP2 instrument, an announcement made on 17 March 2014, which was withdrawn on 30 January 2015 ("the signal can be entirely attributed to dust in the Milky Way"). LIGO and Virgo observations. On 11 February 2016, the LIGO collaboration announced the first observation of gravitational waves, from a signal detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. 
The gravitational waves were observed at a statistical significance of more than 5 sigma (in other words, a probability of about 99.99997% that the signal was not a chance fluctuation), the level conventionally required to count as evidence of a discovery in physics. Since then LIGO and Virgo have reported more gravitational wave observations from merging black hole binaries. On 16 October 2017, the LIGO and Virgo collaborations announced the first-ever detection of gravitational waves originating from the coalescence of a binary neutron star system. The observation of the GW170817 transient, which occurred on 17 August 2017, allowed the masses of the neutron stars involved to be constrained to between 0.86 and 2.26 solar masses. Further analysis allowed a greater restriction of the mass values to the interval 1.17–1.60 solar masses, with the total system mass measured to be 2.73–2.78 solar masses. The inclusion of the Virgo detector in the observation effort allowed for an improvement of the localization of the source by a factor of 10. This in turn facilitated the electromagnetic follow-up of the event. In contrast to the case of binary black hole mergers, binary neutron star mergers were expected to yield an electromagnetic counterpart, that is, a light signal associated with the event. A gamma-ray burst (GRB 170817A) was detected by the Fermi Gamma-ray Space Telescope, occurring 1.7 seconds after the gravitational wave transient. The signal, originating near the galaxy NGC 4993, was associated with the neutron star merger. This was corroborated by the electromagnetic follow-up of the event (AT 2017gfo), involving 70 telescopes and observatories and yielding observations over a large region of the electromagnetic spectrum which further confirmed the neutron star nature of the merged objects and the associated kilonova. In 2021, the detection of the first two neutron star-black hole binaries by the LIGO and VIRGO detectors was published in the Astrophysical Journal Letters, allowing the first bounds to be set on the abundance of such systems. No neutron star-black hole binary had ever been observed using conventional means before the gravitational observation. Microscopic sources. In 1964, L. Halpern and B. Laurent theoretically proved that gravitational spin-2 electron transitions are possible in atoms. Compared to electric and magnetic transitions the emission probability is extremely low. Stimulated emission was discussed as a way of increasing the efficiency of the process. Due to the lack of mirrors or resonators for gravitational waves, they determined that a single pass GASER (a kind of laser emitting gravitational waves) is practically unfeasible. In 1998, the possibility of a different implementation of the above theoretical analysis was proposed by Giorgio Fontana. The required coherence for a practical GASER could be obtained by Cooper pairs in superconductors that are characterized by a macroscopic collective wave-function. Cuprate high temperature superconductors are characterized by the presence of s-wave and d-wave Cooper pairs. Transitions between s-wave and d-wave are gravitational spin-2. Out of equilibrium conditions can be induced by injecting s-wave Cooper pairs from a low temperature superconductor, for instance lead or niobium, which is pure s-wave, by means of a Josephson junction with high critical current.
The amplification mechanism can be described as the effect of superradiance, and 10 cubic centimeters of cuprate high temperature superconductor seem sufficient for the mechanism to properly work. A detailed description of the approach can be found in "High Temperature Superconductors as Quantum Sources of Gravitational Waves: The HTSC GASER", Chapter 3 of this book. In fiction. An episode of the 1962 Russian science-fiction novel "Space Apprentice" by Arkady and Boris Strugatsky depicts an experiment that monitors the propagation of gravitational waves at the expense of annihilating a chunk of asteroid 15 Eunomia the size of Mount Everest. In Stanislaw Lem's 1986 novel "Fiasco", a "gravity gun" or "gracer" (gravity amplification by collimated emission of resonance) is used to reshape a collapsar, so that the protagonists can exploit the extreme relativistic effects and make an interstellar journey. In Greg Egan's 1997 novel "Diaspora", the analysis of a gravitational wave signal from the inspiral of a nearby binary neutron star reveals that its collision and merger is imminent, implying a large gamma-ray burst is going to impact the Earth. In Liu Cixin's 2006 "Remembrance of Earth's Past" series, gravitational waves are used as an interstellar broadcast signal, which serves as a central plot point in the conflict between civilizations within the galaxy. See also. References.
[ { "math_id": 0, "text": "3\\sigma" }, { "math_id": 1, "text": "5\\sigma" }, { "math_id": 2, "text": "r(t)=r_0\\left(1-\\frac{t}{t_\\text{coalesce}} \\right)^{1/4}," }, { "math_id": 3, "text": "r_0" }, { "math_id": 4, "text": "t_\\text{coalesce}" }, { "math_id": 5, "text": "\\frac{\\mathrm{d}r}{\\mathrm{d}t} = - \\frac{64}{5}\\, \\frac{G^3}{c^5}\\, \\frac{(m_1m_2)(m_1+m_2)}{r^3}\\ , " }, { "math_id": 6, "text": "t= \\frac{5}{256}\\, \\frac{c^5}{G^3}\\, \\frac{r^4}{(m_1m_2)(m_1+m_2)}. " } ]
https://en.wikipedia.org/wiki?curid=8111079
8112825
Bulgarian conjugation
Bulgarian conjugation is the creation of derived forms of a Bulgarian verb from its principal parts by inflection. It is affected by person, number, gender, tense, mood and voice. Bulgarian verbs are conventionally divided into three conjugations according to the thematic vowel they use in the present tense: In a dictionary, Bulgarian verbs are listed with their first-person-singular-present-tense form, due to the lack of an infinitive. This form is called the citation form. Bulgarian verbs are conjugated using the formula: formula_0 where thematic vowel and inflection suffix are only optional. The stem of the verb is what is left of the citation form after removing its final letter. Sometimes in the course of conjugation, the stem may undergo some alterations. In this article, any alteration of the stem is coloured in blue, the thematic vowels are coloured in red, and the inflectional endings in green. Inflectional suffixes. Personal endings. Below are the endings for all finite forms: 1 When unstressed. 2 When stressed. 3 Only some irregular first conjugation verbs. Non-finite form endings. These are the endings for the non-finite forms: Thematic vowels. Below is a table of the thematic vowels. They are inserted between the stem and the ending. Finite forms. Present Tense. First conjugation. Verbs from the first conjugation use the thematic vowel е (/ɛ/) between the stem and the personal endings, except in first person singular and third person plural, where the endings are added directly to the stem. All verbs with citation forms ending in а use the endings -а and -ат in first person singular and third person plural respectively. All verbs with stems ending in a vowel use the endings -я and -ят in first person singular and third person plural respectively. All verbs with citation forms ending in я also use the endings -я and -ят in first person singular and third person plural. All verbs with stems ending in -к (/k/) or -г (/g/) change to -ч (/tʃ/) and -ж (/ʒ/) respectively, before the thematic vowel е. This change is not limited solely to the present tense and happens always before /i/, /ɛ/ and the yat vowel. Second conjugation. Verbs from the second conjugation use the thematic vowel и (/i/) between the stem and the personal endings, except in first person singular and third person plural, where the endings are added directly to the stem. All verbs with stems not ending in -ж (/ʒ/), -ч (/tʃ/) or -ш (/ʃ/) use the endings -я and -ят in first person singular and third person plural. All verbs with stems ending in -ж (/ʒ/), -ч (/tʃ/) or -ш (/ʃ/) use the endings -а and -ат in first person singular and third person plural. Third conjugation. Strictly speaking, verbs from the third conjugation are athematic, because the personal endings are added directly to the stem with no thematic vowel in between. It may seem that the vowel а (/ə/) is inserted between them, but that vowel is actually part of the stem. All verbs have stems ending in either а or я. Past Imperfect. The past imperfect always follows the stress patterns of the present tense. First and second conjugation. These verbs use the old yat vowel between the stem and the personal endings. When stressed, it is pronounced as /ja/ (written я) or /a/ (written а) after /ʒ/, /tʃ/ and /ʃ/. When unstressed, it is pronounced as /ɛ/ (written е). In second and third person singular it is always pronounced as /ɛ/. Additionally, after ж (/ʒ/), ч (/tʃ/) and ш (/ʃ/) the stressed yat vowel can be pronounced either as /a/ (as above) or as /ɛ/. 
The latter forms have fallen largely into disuse. Third conjugation. Verbs from the third conjugation use no thematic vowel, the endings are added directly to the stem. Past Aorist. In the first and second conjugation, verbs are additionally divided into classes according to the thematic vowel they use. In the third conjugation, verbs are divided into classes according to the final vowel of the stem. Stress. Verbs with stress on the stem can keep it there or move it to the thematic vowel (or the final vowel of the stem in the case of the athematic third conjugation verbs). However, this shift can only happen if the verb is unprefixed or if it is imperfective. Forms with unshifted stress are usually typical for the eastern dialects and forms with shifted stress for the western dialects. However, the latter forms have become stylistically marked as dialectal and should be avoided and used only to distinguish otherwise homonymous forms. Prefixed perfective verbs with stress on the stem do not change it. Verbs with stress on the thematic vowel keep it there with the exception of first conjugation verbs of the first class and a few others. First conjugation. First class. These verbs have the vowel о (/o̝/) or е (/ɛ/) in second and third person singular between the stem and the personal endings. The stems of these verbs end in д (/d̪/), т (/t̪/), с (/s/), з (/z/) and к (/k/). This class contains only 23 main verbs, which, however, are some of the most frequently used and there are hundreds of prefixed verbs formed from them: Although the stem of the verb тъка ends in к (/k/), it is not part of this class, it belongs to the next one. An important feature of regular verbs from the first class is that the stress always moves on the last syllable of the stem (unless it is already there). This stress position is kept in the past active aorist participle, the past passive participle and the verbal noun. The verbs (съ-)блекá, влекá, (в-)ля́за and секá have the old yat vowel in the stem, which alternates between я (/ja/) and е (/ɛ/) according to the pronunciation in the eastern dialects. Second class. These verbs use the thematic vowel а (/a/ or /ə/) between the stem and the personal endings. The stems of these verbs end in a consonant different from ж /ʒ/, ч /tʃ/, ш /ʃ/, д (/d̪/), т (/t̪/), с (/s/), з (/z/) or к (/k/), and their citation forms end in а. This class contains over 400 main verbs. Stems ending in -ер (/ɛr/), such as "бера", "пера" and "дера", lose the е (/ɛ/): In the verbs греба and гриза the stress moves to the stem. This is so because they used to belong to the first class. Third class. These verbs also use the thematic vowel а (/a/ or /ə/). This class is almost identical to the previous one, the only difference is that the citation form ends in я. It contains 23 main verbs: The verb дремя contains the yat vowel which alternates between я (/ja/) and е (/ɛ/): Fourth class. These verbs use the vowel а (/a/ or /ə/) between the stem and the personal endings. The stems of these verbs end in one of the consonants ж /ʒ/, ч /tʃ/ or ш /ʃ/, which in the aorist change to з/г (/z,g/), k (/k/) and с (/s/) respectively. 
This class contains 27 main verbs: Change from /ʒ/ to /z/: бли́жа, въ́ржа, ка́жа, ли́жа, ма́жа, ни́жа, ре́жа, хари́жа Change from /ʒ/ to /g/: лъ́жа, стри́жа, стъ́ржа Change from /tʃ/ to /k/: ба́уча, дъ́вча, мя́уча, пла́ча, сму́ча, су́ча, тъ́пча Change from /ʃ/ to /s/: бри́ша, бъ́рша, мири́ша, (о-)па́ша, пи́ша, ре́ша*, уйди́ша, уйдурди́ша, че́ша The verb режа has the yat vowel, which alternates between я (/ja/) and е (/ɛ/). The verbs глождя (глозгах), дращя (драсках) and пощя (посках) used to belong to this class but now have completely migrated to the second conjugation. Fifth class. This class uses the yat vowel between the stem and the personal endings. It is consistently pronounced as я (/ja/) in all forms. The stems of these verbs end in a consonant + р (/r/), except for the defective verb ща. This is the smallest class, containing only 6 main verbs: Sixth class. This class uses the thematic vowel я (/ja/ or /jə/). The class contains a small number of verbs, whose stems end in the vowels а (/a/) or е (/ɛ/): Some verbs belong both to this and to the next class. Some examples are: вея, блея, рея, шляя се, etc. Seventh class. These verbs do not use any thematic vowel. The personal endings are added directly to the stem, which almost always ends in a vowel, either а (/a/), я (/ja/), е (/ɛ/), и (/i/), у (/u/) or ю (/ju/). This class contains over 250 main verbs, some of which are: Stems in /a/ or /ja/ : веща́я, вита́я, влия́я, гада́я, жела́я, (по-)зна́я, игра́я, копа́я, мечта́я, обеща́я, сия́я, скуча́я, четра́я, etc. Stems in /ɛ/ : венче́я, възмъже́я, върше́я, гре́я, дебеле́я, живе́я, ле́я, пе́я, се́я, тъмне́я, etc. Stems in /i/ (only 9 main verbs) : би́я, ви́я, гни́я, кри́я, ми́я, пи́я, ри́я, три́я, ши́я. Stems in /u/ or /ju/ (only 6 main verbs) : (на-)ду́я, плу́я, плю́я, (об-)у́я, (на-)хлу́я, чу́я. All stems ending in /ɛ/ actually end in the old yat vowel which is pronounced as я (/ja/) or а (/a/) after /ʒ/, /tʃ/ and /ʃ/ in all aorist forms. Stems ending in -ем (/ɛm/) are also considered to belong in this class since they do not use a thematic vowel. They are a special case because the stem loses the м (/m/) before adding the personal endings. Second conjugation. First class. These verbs use the vowel и (/i/) between the stem and the personal endings. There are both stems ending in a consonant and stems ending in a vowel. The vast majority of the verbs from second conjugation belong to this class. Second class. These verbs use the old yat vowel between the stem and the personal endings. It is consistently pronounces as я (/ja/) in all forms. The majority of the stems end in a consonant (different from ж /ʒ/, ч /tʃ/ or ш /ʃ/) but there a few ending in a vowel. These verbs are characterized by the fact that the stress always falls on the thematic vowel across all forms, not exclusively in the aorist. This class contains 76 main verbs. Third class. These verbs use the vowel а (/a/) between the personal endings and the stem, which always ends in ж /ʒ/ or ч /tʃ/. The stress is always on the thematic vowel in all forms, just as in the previous class. The class contains 27 main verbs: Third conjugation. All verbs conjugate in the same fashion (without a thematic vowel, simply by adding the personal endings directly to the stem), nevertheless, Bulgarian grammar books divide them into two classes, depending on the final vowel of the stem. First class. These are stems ending in а (/a/). The vast majority of third conjugation verbs belong to this class. Second class. 
The stems of these verbs end in я (/ja/). This class is much smaller compared to the first one. Imperative mood. Inflected imperative forms exist only for the second person. The other persons use periphrastic constructions. All regular verbs, regardless of conjugation, form the imperative mood in the same way: Some verbs, most notably stems ending in з (/z/) from the first class of the first conjugation, and a few other frequently used ones, use only the bare stem without a thematic vowel: Non-finite forms. Present Active Participle. Only imperfective verbs have a present active participle. It is formed from the first-person-singular-past-imperfect form of the verb by removing the final х (/x/) and adding щ (/ʃt̪/). It is inflected as a regular adjective (see the endings). Past Active Aorist Participle. It is formed from the first-person-singular-past-aorist form of the verb by removing the final х (/x/) and adding л (/ɫ/), after that it is inflected as an adjective (see the endings). Only verbs from the first class of the first conjugation form it somewhat differently: the thematic vowel о (/o̝/) is removed and the л (/ɫ/) is added directly to the stem with some additional changes, namely: The past active aorist participle keeps the stress of the past aorist, either shifted or not. 1 Since the yat vowel is followed by a syllable containing и (/i/) it is pronounced as е (/ɛ/). 2 Although the yat vowel is followed by a syllable containing и (/i/), it is not pronounced as е (/ɛ/). Past Active Imperfect Participle. It is formed from the first-person-singular-past-imperfect form of the verb by removing the final х (/x/) and adding л (/ɫ/). It is inflected as a regular adjective, but without definite forms, since it is never used as an actual adjective, but only in certain verbal constructions (see the endings). 1 Since the yat vowel is followed by a syllable containing и (/i/) it is pronounced as е (/ɛ/). 2 Although the yat vowel is followed by a syllable containing и (/i/), it is not pronounced as е (/ɛ/). Past Passive Participle. Only transitive verbs have a past passive participle. It is formed from the first-person-singular-past-aorist form of the verb by removing the final х (/x/) and adding н (/n/) or т (/t̪/), after that it is inflected as an adjective (see the endings). Verbs from the first class of the first conjugation and the first class of the second conjugation change the thematic vowel of the past aorist to е (/ɛ/). The vast majority of the verbs use the ending н (/n/), only some verbs from the first conjugation use т (/t̪/), namely all verbs with stems ending in н (/n/) from the second class, and a few verbs from the seventh class (all stems ending in /i/, /u/, /ju/, /ɛm/ and a few others). Some verbs from the seventh class can use both endings. Although the past passive participle is formed from the past aorist, it does not have a stress shift, it always keeps the stress of the present tense, except for first conjugation verbs from the first class, and the verbs греба and гриза which used to belong to the first class. 1 Notice that the thematic vowel о (/o̝/) is changed to е (/ɛ/). 2 The consonant к (/k/) changes to ч (/tʃ/) before the front vowel е (/ɛ/). 3 Since the yat vowel is followed by a syllable containing и (/i/) it is pronounced as е (/ɛ/). 4 Notice that there is no stress shift, unlike the past aorist and the past active aorist participle. 5 Notice that the thematic vowel и (/i/) is changed to е (/ɛ/). Adverbial Participle. 
Only imperfective verbs have an adverbial participle. Verbs from the first and second conjugation use the thematic vowel е (/ɛ/) between the stem and the ending -йки (/jkʲi/). Verbs from the third conjugation just add the ending without using a thematic vowel. This participle is immutable. The adverbial participle keeps the stress of the present tense. Verbal Noun. Only imperfective verbs have a verbal noun. It is formed either from the first-person-singular-past-imperfect or -past aorist form of the verb (or from both). The final х (/x/) is removed and the ending не (/nɛ/) is added. After that it is inflected as neuter noun (see the endings). If the thematic vowel is о (/o̝/), и (/i/) or the yat vowel, it is changed to е (/ɛ/) before adding the ending. Stems ending in н (/n/) from the second class of the first conjugation, and stems ending in е (/ɛ/), и (/i/), у (/u/) and ю (/ju/) from the seventh class use only the past imperfect to form the verbal noun. All verbs from the first class of the second conjugation use only the past aorist. The remaining verbs may use only the past aorist, only the past imperfect or both. This is not determined by which conjugation or class a verb belongs to, it is an inherent characteristic of each verb. When the verbal noun is formed from the past aorist, it does not have a stress shift, it usually keeps the stress of the present tense, except for first-conjugation verbs from the first class, and a few other verbs which move the stress further back on the ending. 1 Notice that the thematic vowel о (/o̝/) is changed to е (/ɛ/). 3 Since the yat vowel is followed by a syllable containing е (/ɛ/), it is pronounced as е (/ɛ/). 3 Since both the vowel о (/o̝/) and the yat vowel are changed to е (/ɛ/), the two forms of the verbal noun are written the same, but pronounced differently, they differ by stress position. 4 Notice that the stress is on the ending. 5 Notice that there is no stress shift, just like the past passive participle and unlike the past aorist and the past active aorist participle. 6 Notice that the thematic vowel и (/i/) is changed to е (/ɛ/). 7 Since the past aorist and imperfect forms are identical the two forms of the verbal noun are also identical.
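The conjugation scheme given at the start of the article, verb form = stem + thematic vowel + inflectional suffix, can be made concrete with a short script. The following Python sketch is only an illustration and rests on the description above together with the standard present-tense personal endings: it handles a single regular first-conjugation verb whose citation form ends in а (чета, "to read") and ignores stress placement and stem alternations.

# Minimal sketch of "verb form = stem + thematic vowel + inflectional suffix"
# for the present tense of a regular first-conjugation verb (citation form in -а).
# Illustration only: stress and stem alternations are not modelled.

PRESENT_ENDINGS = {
    ("1", "sg"): (False, "а"),   # ending added directly to the stem
    ("2", "sg"): (True, "ш"),
    ("3", "sg"): (True, ""),
    ("1", "pl"): (True, "м"),
    ("2", "pl"): (True, "те"),
    ("3", "pl"): (False, "ат"),  # ending added directly to the stem
}

def present_first_conjugation(stem, thematic_vowel="е"):
    forms = {}
    for (person, number), (uses_theme, ending) in PRESENT_ENDINGS.items():
        theme = thematic_vowel if uses_theme else ""
        forms[(person, number)] = stem + theme + ending
    return forms

# Citation form "чета" -> stem "чет-"; expected output:
# чета, четеш, чете, четем, четете, четат
for (person, number), form in present_first_conjugation("чет").items():
    print(person, number, form)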
[ { "math_id": 0, "text": "\\mathrm{verb\\ form} = \\mbox{stem} + \\mbox{thematic vowel} + \\mbox{inflectional suffix}" } ]
https://en.wikipedia.org/wiki?curid=8112825
8113126
Globally hyperbolic manifold
In mathematical physics, global hyperbolicity is a certain condition on the causal structure of a spacetime manifold (that is, a Lorentzian manifold). It is called hyperbolic in analogy with the linear theory of wave propagation, where the future state of a system is specified by initial conditions. (In turn, the leading symbol of the wave operator is that of a hyperboloid.) This is relevant to Albert Einstein's theory of general relativity, and potentially to other metric gravitational theories. Definitions. There are several equivalent definitions of global hyperbolicity. Let "M" be a smooth connected Lorentzian manifold without boundary. We make the following preliminary definitions: The following conditions are equivalent: If any of these conditions are satisfied, we say "M" is "globally hyperbolic". If "M" is a smooth connected Lorentzian manifold with boundary, we say it is globally hyperbolic if its interior is globally hyperbolic. Other equivalent characterizations of global hyperbolicity make use of the notion of Lorentzian distance formula_5 where the supremum is taken over all the formula_6 causal curves connecting the points (by convention d=0 if there is no such curve). They are Remarks. Global hyperbolicity, in the first form given above, was introduced by Leray in order to consider well-posedness of the Cauchy problem for the wave equation on the manifold. In 1970 Geroch proved the equivalence of definitions 1 and 2. Definition 3 under the assumption of strong causality and its equivalence to the first two was given by Hawking and Ellis. As mentioned, in older literature, the condition of causality in the first and third definitions of global hyperbolicity given above is replaced by the stronger condition of "strong causality". In 2007, Bernal and Sánchez showed that the condition of strong causality can be replaced by causality. In particular, any globally hyperbolic manifold as defined in 3 is strongly causal. Later Hounnonkpe and Minguzzi proved that for quite reasonable spacetimes, more precisely those of dimension larger than three which are non-compact or non-totally vicious, the 'causal' condition can be dropped from definition 3. In definition 3 the closure of formula_3 seems strong (in fact, the closures of the sets formula_8 imply "causal simplicity", the level of the causal hierarchy of spacetimes which stays just below global hyperbolicity). It is possible to remedy this problem strengthening the causality condition as in definition 4 proposed by Minguzzi in 2009. This version clarifies that global hyperbolicity sets a compatibility condition between the causal relation and the notion of compactness: every causal diamond is contained in a compact set and every inextendible causal curve escapes compact sets. Observe that the larger the family of compact sets the easier for causal diamonds to be contained on some compact set but the harder for causal curves to escape compact sets. Thus global hyperbolicity sets a balance on the abundance of compact sets in relation to the causal structure. Since finer topologies have less compact sets we can also say that the balance is on the number of open sets given the causal relation. Definition 4 is also robust under perturbations of the metric (which in principle could introduce closed causal curves). In fact using this version it has been shown that global hyperbolicity is stable under metric perturbations. 
In 2003, Bernal and Sánchez showed that any globally hyperbolic manifold "M" has a smooth embedded three-dimensional Cauchy surface, and furthermore that any two Cauchy surfaces for "M" are diffeomorphic. In particular, "M" is diffeomorphic to the product of a Cauchy surface with formula_9. It was previously well known that any Cauchy surface of a globally hyperbolic manifold is an embedded three-dimensional formula_10 submanifold, any two of which are homeomorphic, and such that the manifold splits topologically as the product of the Cauchy surface and formula_9. In particular, a globally hyperbolic manifold is foliated by Cauchy surfaces. In view of the initial value formulation for Einstein's equations, global hyperbolicity is seen to be a very natural condition in the context of general relativity, in the sense that given arbitrary initial data, there is a unique maximal globally hyperbolic solution of Einstein's equations.
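A standard example, included here purely as an illustration, is Minkowski spacetime \((\mathbb{R}^4, \eta)\) with \(\eta = -dt^2 + dx^2 + dy^2 + dz^2\). It is globally hyperbolic: each level set \(\Sigma_{t_0} = \{t = t_0\}\) is a smooth spacelike Cauchy surface, met exactly once by every inextendible causal curve, and for causally related points \(q \le p\) the causal diamond \(J^-(p) \cap J^+(q)\) is a compact double cone, so the compactness condition in definition 3 holds. Consistent with the splitting result above, \(\mathbb{R}^4 \cong \mathbb{R} \times \Sigma_0\).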
[ { "math_id": 0, "text": "J^+(p)" }, { "math_id": 1, "text": "J^-(p)" }, { "math_id": 2, "text": "\\mathcal{C}^0" }, { "math_id": 3, "text": "J^-(p)\\cap J^+(q)" }, { "math_id": 4, "text": "{J^-(p)\\cap J^+(q)}" }, { "math_id": 5, "text": "d(p,q):=\\sup_\\gamma l(\\gamma)" }, { "math_id": 6, "text": "C^1" }, { "math_id": 7, "text": "d" }, { "math_id": 8, "text": "J^\\pm(p)" }, { "math_id": 9, "text": "\\mathbb{R}" }, { "math_id": 10, "text": "C^0" } ]
https://en.wikipedia.org/wiki?curid=8113126
8113769
Schur–Zassenhaus theorem
Theorem in group theory The Schur–Zassenhaus theorem is a theorem in group theory which states that if formula_0 is a finite group, and formula_1 is a normal subgroup whose order is coprime to the order of the quotient group formula_2, then formula_0 is a semidirect product (or split extension) of formula_1 and formula_2. An alternative statement of the theorem is that any normal Hall subgroup formula_1 of a finite group formula_0 has a complement in formula_0. Moreover if either formula_1 or formula_2 is solvable then the Schur–Zassenhaus theorem also states that all complements of formula_1 in formula_0 are conjugate. The assumption that either formula_1 or formula_2 is solvable can be dropped as it is always satisfied, but all known proofs of this require the use of the much harder Feit–Thompson theorem. The Schur–Zassenhaus theorem at least partially answers the question: "In a composition series, how can we classify groups with a certain set of composition factors?" The other part, which is where the composition factors do not have coprime orders, is tackled in extension theory. History. The Schur–Zassenhaus theorem was introduced by Zassenhaus (1937, 1958, Chapter IV, section 7). Theorem 25, which he credits to Issai Schur, proves the existence of a complement, and theorem 27 proves that all complements are conjugate under the assumption that formula_1 or formula_2 is solvable. It is not easy to find an explicit statement of the existence of a complement in Schur's published works, though the results of Schur (1904, 1907) on the Schur multiplier imply the existence of a complement in the special case when the normal subgroup is in the center. Zassenhaus pointed out that the Schur–Zassenhaus theorem for non-solvable groups would follow if all groups of odd order are solvable, which was later proved by Feit and Thompson. Ernst Witt showed that it would also follow from the Schreier conjecture (see Witt (1998, p.277) for Witt's unpublished 1937 note about this), but the Schreier conjecture has only been proved using the classification of finite simple groups, which is far harder than the Feit–Thompson theorem. Examples. If we do not impose the coprime condition, the theorem is not true: consider for example the cyclic group formula_3 and its normal subgroup formula_4. Then if formula_3 were a semidirect product of formula_4 and formula_5 then formula_3 would have to contain two elements of order 2, but it only contains one. Another way to explain this impossibility of splitting formula_3 (i.e. expressing it as a semidirect product) is to observe that the automorphisms of formula_4 are the trivial group, so the only possible [semi]direct product of formula_4 with itself is a direct product (which gives rise to the Klein four-group, a group that is non-isomorphic with formula_3). An example where the Schur–Zassenhaus theorem does apply is the symmetric group on 3 symbols, formula_6, which has a normal subgroup of order 3 (isomorphic with formula_7) which in turn has index 2 in formula_6 (in agreement with the theorem of Lagrange), so formula_8. Since 2 and 3 are relatively prime, the Schur–Zassenhaus theorem applies and formula_9. Note that the automorphism group of formula_7 is formula_4 and the automorphism of formula_7 used in the semidirect product that gives rise to formula_6 is the non-trivial automorphism that permutes the two non-identity elements of formula_7. 
Furthermore, the three subgroups of order 2 in formula_6 (any of which can serve as a complement to formula_7 in formula_6) are conjugate to each other. The non-triviality of the (additional) conjugacy conclusion can be illustrated with the Klein four-group formula_10 as the non-example. Any of the three proper subgroups of formula_10 (all of which have order 2) is normal in formula_10; fixing one of these subgroups, any of the other two remaining (proper) subgroups complements it in formula_10, but none of these three subgroups of formula_10 is a conjugate of any other one, because formula_10 is abelian. The quaternion group has normal subgroups of order 4 and 2 but is not a [semi]direct product. Schur's papers at the beginning of the 20th century introduced the notion of central extension to address examples such as formula_3 and the quaternions. Proof. The existence of a complement to a normal Hall subgroup "H" of a finite group "G" can be proved in the following steps:
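Independently of the proof itself, the conclusion of the theorem for the symmetric group on three symbols discussed above can be checked by direct enumeration. The following Python sketch is illustrative only: it verifies that the subgroup generated by a 3-cycle is normal in formula_6, that a chosen subgroup of order 2 meets it trivially, and that together they give the whole group, which is exactly the splitting formula_9.

# Brute-force check of the splitting S_3 = C_3 ⋊ C_2 (illustration only).
# Permutations of {0, 1, 2} are stored as tuples p with p[i] = image of i.
from itertools import permutations

def compose(p, q):                      # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

e = (0, 1, 2)
G = set(permutations(range(3)))          # S_3, order 6
N = {e, (1, 2, 0), (2, 0, 1)}            # subgroup generated by a 3-cycle, order 3
H = {e, (1, 0, 2)}                       # candidate complement, order 2

# N is normal: conjugation by any element of G maps N into N.
assert all(compose(compose(g, n), inverse(g)) in N for g in G for n in N)
# The complement meets N only in the identity ...
assert N & H == {e}
# ... and N·H already exhausts the group, so G is a semidirect product of N and H.
assert {compose(n, h) for n in N for h in H} == G
print("S_3 splits as C_3 ⋊ C_2 with complement", H - {e})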
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "G/N" }, { "math_id": 3, "text": "C_4" }, { "math_id": 4, "text": "C_2" }, { "math_id": 5, "text": "C_4 / C_2 \\cong C_2" }, { "math_id": 6, "text": "S_3" }, { "math_id": 7, "text": "C_3" }, { "math_id": 8, "text": "S_3 / C_3 \\cong C_2" }, { "math_id": 9, "text": "S_3 \\cong C_3 \\rtimes C_2" }, { "math_id": 10, "text": "V" } ]
https://en.wikipedia.org/wiki?curid=8113769
8117002
Newton–Wigner localization
Scheme for obtaining the position operator Newton–Wigner localization (named after Theodore Duddell Newton and Eugene Wigner) is a scheme for obtaining a position operator for massive relativistic quantum particles. It is known to largely conflict with the Reeh–Schlieder theorem outside of a very limited scope. The Newton–Wigner position operators x1, x2, x3, are the premier notion of position in relativistic quantum mechanics of a single particle. They enjoy the same commutation relations with the 3 space momentum operators and transform under rotations in the same way as the x, y, z in ordinary QM. Though formally they have the same properties with respect to p1, p2, p3, as the position in ordinary QM, they have additional properties: One of these is that formula_0 This ensures that the free particle moves at the expected velocity with the given momentum/energy. Apparently these notions were discovered when attempting to define a self adjoint operator in the relativistic setting that resembled the position operator in basic quantum mechanics in the sense that at low momenta it approximately agreed with that operator. It also has several famous strange behaviors (see the Hegerfeldt theorem in particular), one of which is seen as the motivation for having to introduce quantum field theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " [x_i \\, , p_0 ] = p_i/p_0 ~." } ]
https://en.wikipedia.org/wiki?curid=8117002
81173
Rayleigh fading
Radio signal statistical model Rayleigh fading is a statistical model for the effect of a propagation environment on a radio signal, such as that used by wireless devices. Rayleigh fading models assume that the magnitude of a signal that has passed through such a transmission medium (also called a communication channel) will vary randomly, or fade, according to a Rayleigh distribution — the radial component of the sum of two uncorrelated Gaussian random variables. Rayleigh fading is viewed as a reasonable model for tropospheric and ionospheric signal propagation as well as the effect of heavily built-up urban environments on radio signals. Rayleigh fading is most applicable when there is no dominant propagation along a line of sight between the transmitter and receiver. If there is a dominant line of sight, Rician fading may be more applicable. Rayleigh fading is a special case of two-wave with diffuse power (TWDP) fading. The model. Rayleigh fading is a reasonable model when there are many objects in the environment that scatter the radio signal before it arrives at the receiver. The central limit theorem holds that, if there is sufficiently much scatter, the channel impulse response will be well-modelled as a Gaussian process irrespective of the distribution of the individual components. If there is no dominant component to the scatter, then such a process will have zero mean and phase evenly distributed between 0 and 2π radians. The envelope of the channel response will therefore be Rayleigh distributed. Calling this random variable formula_0, it will have a probability density function: formula_1 where formula_2. Often, the gain and phase elements of a channel's distortion are conveniently represented as a complex number. In this case, Rayleigh fading is exhibited by the assumption that the real and imaginary parts of the response are modelled by independent and identically distributed zero-mean Gaussian processes so that the amplitude of the response is the sum of two such processes. Applicability. The requirement that there be many scatterers present means that Rayleigh fading can be a useful model in heavily built-up city centres where there is no line of sight between the transmitter and receiver and many buildings and other objects attenuate, reflect, refract, and diffract the signal. Experimental work in Manhattan has found near-Rayleigh fading there. In tropospheric and ionospheric signal propagation the many particles in the atmospheric layers act as scatterers and this kind of environment may also approximate Rayleigh fading. If the environment is such that, in addition to the scattering, there is a strongly dominant signal seen at the receiver, usually caused by a line of sight, then the mean of the random process will no longer be zero, varying instead around the power-level of the dominant path. Such a situation may be better modelled as Rician fading. Note that Rayleigh fading is a small-scale effect. There will be bulk properties of the environment such as path loss and shadowing upon which the fading is superimposed. How rapidly the channel fades will be affected by how fast the receiver and/or transmitter are moving. Motion causes Doppler shift in the received signal components. The figures show the power variation over 1 second of a constant signal after passing through a single-path Rayleigh fading channel with a maximum Doppler shift of 10 Hz and 100 Hz. 
These Doppler shifts correspond to velocities of about 6 km/h (4 mph) and 60 km/h (40 mph) respectively at 1800 MHz, one of the operating frequencies for GSM mobile phones. This is the classic shape of Rayleigh fading. Note in particular the 'deep fades' where signal strength can drop by a factor of several thousand, or 30–40 dB. Properties. Since it is based on a well-studied distribution with special properties, the Rayleigh distribution lends itself to analysis, and the key features that affect the performance of a wireless network have analytic expressions. Note that the parameters discussed here are for a non-static channel. If a channel is not changing with time, it does not fade and instead remains at some particular level. Separate instances of the channel in this case will be uncorrelated with one another, owing to the assumption that each of the scattered components fades independently. Once relative motion is introduced between any of the transmitter, receiver, and scatterers, the fading becomes correlated and varying in time. Level crossing rate. The level crossing rate is a measure of the rapidity of the fading. It quantifies how often the fading crosses some threshold, usually in the positive-going direction. For Rayleigh fading, the level crossing rate is: formula_3 where formula_4 is the maximum Doppler shift and formula_5 is the threshold level normalised to the root mean square (RMS) signal level: formula_6 Average fade duration. The average fade duration quantifies how long the signal spends below the threshold formula_5. For Rayleigh fading, the average fade duration is: formula_7 The level crossing rate and average fade duration taken together give a useful means of characterizing the severity of the fading over time. For a particular normalized threshold value formula_8, the product of the average fade duration and the level crossing rate is a constant and is given by formula_9 Doppler power spectral density. The Doppler power spectral density of a fading channel describes how much spectral broadening it causes. This shows how a pure frequency, e.g., a pure sinusoid, which is an impulse in the frequency domain, is spread out across frequency when it passes through the channel. It is the Fourier transform of the time-autocorrelation function. For Rayleigh fading with a vertical receive antenna with equal sensitivity in all directions, this has been shown to be: formula_10 where formula_11 is the frequency shift relative to the carrier frequency. This equation is valid only for values of formula_11 between formula_12; the spectrum is zero outside this range. This spectrum is shown in the figure for a maximum Doppler shift of 10 Hz. The 'bowl shape' or 'bathtub shape' is the classic form of this Doppler spectrum. Generating Rayleigh fading. As described above, a Rayleigh fading channel itself can be modelled by generating the real and imaginary parts of a complex number according to independent normal Gaussian variables. However, it is sometimes the case that it is simply the amplitude fluctuations that are of interest (such as in the figure shown above). There are two main approaches to this. In both cases, the aim is to produce a signal that has the Doppler power spectrum given above and the equivalent autocorrelation properties. Jakes's model. In his book, Jakes popularised a model for Rayleigh fading based on summing sinusoids. Let the scatterers be uniformly distributed around a circle at angles formula_13 with formula_14 rays emerging from each scatterer. 
The Doppler shift on ray formula_15 is formula_16 and, with formula_17 such scatterers, the Rayleigh fading of the formula_18 waveform over time formula_19 can be modelled as: formula_20 Here, formula_21 and the formula_22 and formula_23 are model parameters, with formula_21 usually set to zero, formula_22 chosen so that there is no cross-correlation between the real and imaginary parts of formula_24: formula_25 and formula_23 used to generate multiple waveforms. If a single-path channel is being modelled, so that there is only one waveform, then formula_26 can be zero. If a multipath, frequency-selective channel is being modelled so that multiple waveforms are needed, Jakes suggests that uncorrelated waveforms are given by formula_27 In fact, it has been shown that the waveforms are correlated among themselves (they have non-zero cross-correlation) except in special circumstances. The model is also deterministic (it has no random element to it once the parameters are chosen). A modified Jakes's model chooses slightly different spacings for the scatterers and scales their waveforms using Walsh–Hadamard sequences to ensure zero cross-correlation. Setting formula_28 results in the following model, usually termed the Dent model or the modified Jakes model: formula_29 The weighting functions formula_30 are the formula_14th Walsh–Hadamard sequence in formula_15. Since these have zero cross-correlation by design, this model results in uncorrelated waveforms. The phases formula_26 can be initialised randomly and have no effect on the correlation properties. The fast Walsh transform can be used to efficiently generate samples using this model. Jakes's model also popularised the Doppler spectrum associated with Rayleigh fading, and, as a result, this Doppler spectrum is often termed Jakes's spectrum. Filtered white noise. Another way to generate a signal with the required Doppler power spectrum is to pass a white Gaussian noise signal through a Gaussian filter with a frequency response equal to the square root of the Doppler spectrum required. Although simpler than the models above, and non-deterministic, this approach presents some implementation questions related to needing high-order filters to approximate the irrational square-root function in the response and to sampling the Gaussian waveform at an appropriate rate. Butterworth filter as Doppler power spectral density. The Doppler PSD can also be modeled via a Butterworth filter as: formula_31 where "f" is the frequency, formula_32 is the Butterworth filter response, "B" is the normalization constant, "k" is the filter order and formula_33 is the cutoff frequency, which should be selected with respect to the maximum Doppler shift. References. <templatestyles src="Reflist/styles.css" />
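As a minimal numerical illustration of the underlying statistical model (and not of the Jakes, Dent, or filtered-noise generators described above), the following Python sketch draws the channel gain as a complex number with independent zero-mean Gaussian real and imaginary parts and checks that the resulting envelope behaves as the Rayleigh distribution predicts. The samples here are uncorrelated in time; reproducing a Doppler spectrum would additionally require one of the sum-of-sinusoids or filtering approaches above.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Complex gain with i.i.d. zero-mean Gaussian real and imaginary parts,
# scaled so that the mean power E[|h|^2] is 1.
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
envelope = np.abs(h)                         # Rayleigh-distributed magnitude

print("mean power E[|h|^2]      :", np.mean(envelope**2))          # ~ 1.0
print("P(envelope < 0.1)        :", np.mean(envelope < 0.1))       # ~ 1 - exp(-0.01) ≈ 0.01
print("1st-percentile power (dB):", 10 * np.log10(np.percentile(envelope**2, 1)))  # ≈ -20 dB

Deep fades of 30–40 dB, as in the figures discussed above, correspond to the far tail of this envelope distribution.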
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "p_R(r) = \\frac{2r} \\Omega e^{-r^2/\\Omega},\\ r\\geq 0" }, { "math_id": 2, "text": "\\Omega = \\operatorname E(R^2)" }, { "math_id": 3, "text": "\\mathrm{LCR} = \\sqrt{2\\pi}f_d\\rho e^{-\\rho^2}" }, { "math_id": 4, "text": "f_d" }, { "math_id": 5, "text": "\\,\\!\\rho" }, { "math_id": 6, "text": "\\rho = \\frac{R_\\mathrm{threshold}}{R_\\mathrm{rms}}." }, { "math_id": 7, "text": "\\mathrm{AFD} = \\frac{e^{\\rho^2}-1}{\\rho f_d \\sqrt{2\\pi}}." }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "\\mathrm{AFD} \\times \\mathrm{LCR} = 1 - e^{-\\rho^2}. " }, { "math_id": 10, "text": "S(\\nu) = \\frac{1}{\\pi f_d \\sqrt{1 - \\left(\\frac \\nu {f_d}\\right)^2}}," }, { "math_id": 11, "text": "\\,\\!\\nu" }, { "math_id": 12, "text": "\\pm f_d" }, { "math_id": 13, "text": "\\alpha_n" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "\\,\\!f_n = f_d\\cos\\alpha_n " }, { "math_id": 17, "text": "M" }, { "math_id": 18, "text": "k^\\text{th}" }, { "math_id": 19, "text": "t" }, { "math_id": 20, "text": "\n\\begin{align}\nR(t,k) = 2\\sqrt{2}\\left[\\sum_{n=1}^M \\right. & \\left(\\cos\\beta_n + j\\sin\\beta_n\\right)\\cos\\left(2 \\pi f_n t + \\theta_{n,k}\\right) \\\\[4pt]\n& \\left. {} + \\frac 1 {\\sqrt{2}} \\left(\\cos\\alpha + j\\sin\\alpha\\right)\\cos(2 \\pi f_d t)\\right].\n\\end{align}\n" }, { "math_id": 21, "text": "\\,\\!\\alpha" }, { "math_id": 22, "text": "\\,\\!\\beta_n" }, { "math_id": 23, "text": "\\,\\!\\theta_{n,k}" }, { "math_id": 24, "text": "R(t)" }, { "math_id": 25, "text": "\\,\\!\\beta_n = \\frac{\\pi n}{M+1}" }, { "math_id": 26, "text": "\\,\\!\\theta_n" }, { "math_id": 27, "text": "\\theta_{n,k} = \\beta_n + \\frac{2\\pi(k-1)}{M+1}." }, { "math_id": 28, "text": "\\alpha_n = \\frac{\\pi(n-0.5)}{2M} \\text{ and }\\beta_n = \\frac{\\pi n} M," }, { "math_id": 29, "text": "R(t,k) = \\sqrt{\\frac 2 M} \\sum_{n=1}^M A_k(n)\\left( \\cos\\beta_n + j\\sin\\beta_n \\right)\\cos\\left(2\\pi f_d t \\cos\\alpha_n + \\theta_n\\right)." }, { "math_id": 30, "text": "A_k(n)" }, { "math_id": 31, "text": "S_{Doppler}(f) = |H_{Butterworth}|^2 = \\frac{B}{\\sqrt{1 - (f/f_0)^{2k}}}" }, { "math_id": 32, "text": "H_{Butterworth}" }, { "math_id": 33, "text": "f_0" } ]
https://en.wikipedia.org/wiki?curid=81173
8118739
Qualitative economics
Qualitative economics is the representation and analysis of information about the direction of change (+, -, or 0) in some economic variable(s) as related to change in some other economic variable(s). For the non-zero case, what makes the change "qualitative" is that its direction but not its magnitude is specified. Typical exercises of qualitative economics include comparative-static changes studied in microeconomics or macroeconomics and comparative equilibrium-growth states in a macroeconomic growth model. A simple example illustrating qualitative change is from macroeconomics. Let: "GDP" = nominal gross domestic product, a measure of national income; "M" = money supply; "T" = total taxes. Monetary theory hypothesizes a positive relationship between "GDP", the dependent variable, and "M", the independent variable. Equivalent ways to represent such a qualitative relationship between them are as a signed functional relationship and as a signed derivative: formula_0 or formula_1 where the '+' indexes a positive relationship of "GDP" to "M", that is, as "M" increases, "GDP" increases as a result. Another model of GDP hypothesizes that "GDP" has a negative relationship to "T". This can be represented similarly to the above, with a theoretically appropriate sign change as indicated: formula_2 or formula_3 That is, as "T" increases, "GDP" decreases as a result. A combined model uses both "M" and "T" as independent variables. The hypothesized relationships can be equivalently represented as signed functional relationships and signed partial derivatives (suitable for more than one independent variable): formula_4 or formula_5 formula_6 Qualitative hypotheses occur in the earliest history of formal economics, but they appear in formal economic models only from the late 1930s, beginning with Hicks's model of general equilibrium in a competitive economy. A classic exposition of qualitative economics is Samuelson, 1947. There Samuelson identifies qualitative restrictions and the hypotheses of maximization and stability of equilibrium as the three fundamental sources of "meaningful" theorems, that is, hypotheses about empirical data that could conceivably be refuted by empirical data. Notes. <templatestyles src="Reflist/styles.css" />
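The signed relationships above can be checked mechanically once a functional form is written down. The following Python sketch uses a toy functional form chosen purely for illustration (it has no empirical content and is not part of the original exposition) and confirms that its partial derivatives carry the hypothesized signs.

import sympy as sp

M, T, a, b = sp.symbols("M T a b", positive=True)

# Toy functional form consistent with the qualitative hypotheses:
# GDP increases with the money supply M and decreases with total taxes T.
GDP = a * sp.log(M) - b * T

dGDP_dM = sp.diff(GDP, M)    # a/M
dGDP_dT = sp.diff(GDP, T)    # -b

print(dGDP_dM, dGDP_dM.is_positive)   # a/M  True  (the '+' relationship to M)
print(dGDP_dT, dGDP_dT.is_negative)   # -b   True  (the '-' relationship to T)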
[ { "math_id": 0, "text": " GDP = f(\\overset{+}{M}) \\quad\\! " }, { "math_id": 1, "text": "\\quad\\frac{ df(M) }{ dM} > 0." }, { "math_id": 2, "text": " GDP = f(\\overset{-}{T}) \\quad\\! " }, { "math_id": 3, "text": "\\quad\\frac{ df(T)}{ dT} < 0." }, { "math_id": 4, "text": " GDP = f(\\overset{+}{M},\\overset{-}{ T}) \\,\\! \\quad " }, { "math_id": 5, "text": "\\quad\\frac{\\partial f(M, T)}{\\partial M} > 0,\\quad " }, { "math_id": 6, "text": "\\frac{\\partial f(M, T)}{\\partial T} < 0." } ]
https://en.wikipedia.org/wiki?curid=8118739
812032
Monkey saddle
Mathematical surface defined by z = x³ – 3xy² In mathematics, the monkey saddle is the surface defined by the equation formula_0 or in cylindrical coordinates formula_1 It belongs to the class of saddle surfaces, and its name derives from the observation that a saddle for a monkey would require two depressions for the legs and one for the tail. The point (0, 0, 0) on the monkey saddle corresponds to a degenerate critical point of the function z(x, y) at (0, 0). The monkey saddle has an isolated umbilical point with zero Gaussian curvature at the origin, while the curvature is strictly negative at all other points. One can relate the rectangular and cylindrical equations using complex numbers formula_2 formula_3 By replacing 3 in the cylindrical equation with any integer k ≥ 1, one can create a saddle with k depressions. Another orientation of the monkey saddle is the "Smelt petal" defined by formula_4 so that the "z"-axis of the monkey saddle corresponds to the direction (1, 1, 1) in the Smelt petal. Horse saddle. The term "horse saddle" may be used in contrast to monkey saddle, to designate an ordinary saddle surface in which "z"("x","y") has a saddle point, a local minimum or maximum in every direction of the "xy"-plane. In contrast, the monkey saddle has a stationary point of inflection in every direction.
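The degeneracy at the origin can be verified by a routine computation, included here only as an illustration. For z = x^3 - 3xy^2,

z_x = 3x^2 - 3y^2, \qquad z_y = -6xy, \qquad H = \begin{pmatrix} 6x & -6y \\ -6y & -6x \end{pmatrix}, \qquad \det H = -36\,(x^2 + y^2).

The only critical point is the origin (z_x = 0 and z_y = 0 force x = y = 0), and there both the gradient and the entire Hessian H vanish, so the critical point is degenerate. Away from the origin \det H < 0, and the Gaussian curvature of the graph, K = \det H / (1 + z_x^2 + z_y^2)^2, is strictly negative, consistent with the statement above about the curvature.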
[ { "math_id": 0, "text": " z = x^3 - 3xy^2, \\, " }, { "math_id": 1, "text": "z = \\rho^3 \\cos(3\\varphi)." }, { "math_id": 2, "text": "x+iy = r e^{i\\varphi}:" }, { "math_id": 3, "text": " z = x^3 - 3xy^2 = \\operatorname{Re} [(x+iy)^3] = \\operatorname{Re}[r^3 e^{3i\\varphi}] = r^3\\cos(3\\varphi)." }, { "math_id": 4, "text": "x+y+z+xyz=0," } ]
https://en.wikipedia.org/wiki?curid=812032
8121006
Harmonic superspace
In supersymmetry, harmonic superspace is one way of dealing with supersymmetric theories with 8 real SUSY generators in a manifestly covariant manner. It turns out that the 8 real SUSY generators are pseudoreal, and after complexification, correspond to the tensor product of a four-dimensional Dirac spinor with the fundamental representation of SU(2)R. The relevant quotient space is formula_0, which is a 2-sphere/Riemann sphere. Harmonic superspace describes N=2 D=4, N=1 D=5, and N=(1,0) D=6 SUSY in a manifestly covariant manner. There are many possible coordinate systems over S2, but the one chosen not only involves redundant coordinates, but also happens to be a coordinatization of formula_1. We only get S2 "after" a projection over formula_2. This is of course the Hopf fibration. Consider the left action of SU(2)R upon itself. We can then extend this to the space of complex valued smooth functions over SU(2)R. In particular, we have the subspace of functions which transform as the fundamental representation under SU(2)R. The fundamental representation (up to isomorphism, of course) is a two-dimensional complex vector space. Let us denote the indices of this representation by i,j,k...=1,2. The subspace of interest consists of two copies of the fundamental representation. Under the right action by U(1)R, which commutes with any left action, one copy has a "charge" of +1, and the other of -1. Let us label the basis functions formula_3. formula_4. The redundancy in the coordinates is given by formula_5. Everything can be interpreted in terms of algebraic geometry. The projection is given by the "gauge transformation" formula_6 where φ is any real number. Think of S3 as a U(1)R-principal bundle over S2 with a nonzero first Chern class. Then, "fields" over S2 are characterized by an integral U(1)R charge given by the right action of U(1)R. For instance, u+ has a charge of +1, and u− of -1. By convention, fields with a charge of +r are denoted by a superscript with r +'s, and ditto for fields with a charge of -r. R-charges are additive under the multiplication of fields. The SUSY charges are formula_7, and the corresponding fermionic coordinates are formula_8. Harmonic superspace is given by the product of ordinary extended superspace (with 8 real fermionic coordinates) with S2 with the nontrivial U(1)R bundle over it. The product is somewhat twisted in that the fermionic coordinates are also charged under U(1)R. This charge is given by formula_9. We can define the covariant derivatives formula_10 with the property that they supercommute with the SUSY transformations, and formula_11 where "f" is any function of the harmonic variables. Similarly, define formula_12 and formula_13. A chiral superfield "q" with an R-charge of "r" satisfies formula_14. A scalar hypermultiplet is given by a chiral superfield formula_15. We have the additional constraint formula_16. According to the Atiyah–Singer index theorem, the solution space to the previous constraint is a two-dimensional complex manifold. Relation to quaternions. 
Now consider the subspace of unit quaternions with no real component, which is isomorphic to S2. Each element of this subspace can act as the imaginary number "i" in a complex subalgebra of the quaternions. So, for each element of S2, we can use the corresponding imaginary unit to define a complex-real structure over the extended superspace with 8 real SUSY generators. The totality of all CR structures for each point in S2 is harmonic superspace. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "SU(2)_R/U(1)_R \\approx S^2 \\simeq \\mathbb{CP}^1" }, { "math_id": 1, "text": "SU(2)_R \\approx S^3" }, { "math_id": 2, "text": "U(1)_R \\approx S^1" }, { "math_id": 3, "text": "u^{\\pm i}" }, { "math_id": 4, "text": "\\left(u^{+i}\\right)^* = u^-_i" }, { "math_id": 5, "text": "u^{+i}u^-_i = 1" }, { "math_id": 6, "text": "u^{\\pm i} \\to e^{\\pm i \\phi} u^{\\pm i}" }, { "math_id": 7, "text": "Q^{i\\alpha}" }, { "math_id": 8, "text": "\\theta^{i\\alpha}" }, { "math_id": 9, "text": "\\theta^{\\pm \\alpha}= u^{\\pm}_i \\theta^{i\\alpha}" }, { "math_id": 10, "text": "D^{\\pm}_{\\alpha}" }, { "math_id": 11, "text": "D^{\\pm}_{\\alpha}f(u)=0" }, { "math_id": 12, "text": "D^{++} \\equiv u^{+i}\\frac{\\partial}{\\partial u^{-i}}" }, { "math_id": 13, "text": "D^{--} \\equiv u^{-i}\\frac{\\partial}{\\partial u^{+i}}" }, { "math_id": 14, "text": "D^+_{\\alpha}q=0" }, { "math_id": 15, "text": "q^+" }, { "math_id": 16, "text": "D^{++}q^+ = J^{+++}(q^+,\\, u)" }, { "math_id": 17, "text": "SU(2)_R" } ]
https://en.wikipedia.org/wiki?curid=8121006
81211
Fading
Term in wireless communications In wireless communications, fading is the variation of signal attenuation over variables like time, geographical position, and radio frequency. Fading is often modeled as a random process. In wireless systems, fading may either be due to multipath propagation, referred to as multipath-induced fading, weather (particularly rain), or shadowing from obstacles affecting the wave propagation, sometimes referred to as shadow fading. A fading channel is a communication channel that experiences fading. Key concepts. The presence of reflectors in the environment surrounding a transmitter and receiver create multiple paths that a transmitted signal can traverse. As a result, the receiver sees the superposition of multiple copies of the transmitted signal, each traversing a different path. Each signal copy will experience differences in attenuation, delay and phase shift while traveling from the source to the receiver. This can result in either constructive or destructive interference, which amplifies or attenuates the signal power seen at the receiver. Strong destructive interference is frequently referred to as a deep fade and may result in temporary failure of communication due to a severe drop in the channel signal-to-noise ratio. A common example of deep fade is the experience of stopping at a traffic light and hearing an FM broadcast degenerate into static, while the signal is re-acquired if the vehicle moves only a fraction of a meter. The loss of the broadcast is caused by the vehicle stopping at a point where the signal experienced severe destructive interference. Cellular phones can also exhibit similar momentary fades. Fading channel models are often used to model the effects of electromagnetic transmission of information over the air in cellular networks and broadcast communication. Fading channel models are also used in underwater acoustic communications to model the distortion caused by the water. Types. Slow versus fast fading. The terms "slow" and "fast" fading refer to the rate at which the magnitude and phase change imposed by the channel on the signal changes. The coherence time is a measure of the minimum time required for the magnitude change or phase change of the channel to become uncorrelated from its previous value. In a fast-fading channel, the transmitter may take advantage of the variations in the channel conditions using time diversity to help increase robustness of the communication to a temporary deep fade. Although a deep fade may temporarily erase some of the information transmitted, use of an error-correcting code coupled with successfully transmitted bits during other time instances (interleaving) can allow for the erased bits to be recovered. In a slow-fading channel, it is not possible to use time diversity because the transmitter sees only a single realization of the channel within its delay constraint. A deep fade therefore lasts the entire duration of transmission and cannot be mitigated using coding. The coherence time of the channel is related to a quantity known as the Doppler spread of the channel. When a user (or reflectors in its environment) is moving, the user's velocity causes a shift in the frequency of the signal transmitted along each signal path. This phenomenon is known as the Doppler shift. Signals traveling along different paths can have different Doppler shifts, corresponding to different rates of change in phase. 
The difference in Doppler shifts between different signal components contributing to a signal fading channel tap is known as the Doppler spread. Channels with a large Doppler spread have signal components that are each changing independently in phase over time. Since fading depends on whether signal components add constructively or destructively, such channels have a very short coherence time. In general, coherence time is inversely related to Doppler spread, typically expressed as formula_0 where formula_1 is the coherence time, formula_2 is the Doppler spread. This equation is just an approximation, to be exact, see Coherence time. Block fading. Block fading is where the fading process is approximately constant for a number of symbol intervals. A channel can be 'doubly block-fading' when it is block fading in both the time and frequency domains. Many wireless communications channels are dynamic by nature, and are commonly modeled as block fading. In these channels each block of symbol goes through a statistically independent transformation. Typically the slowly-varying channels based on jakes model of Rayleigh spectrum is used for block fading in an OFDM system. Selective fading. Selective fading or frequency selective fading is a radio propagation anomaly caused by partial cancellation of a radio signal by itself — the signal arrives at the receiver by two different paths, and at least one of the paths is changing (lengthening or shortening). This typically happens in the early evening or early morning as the various layers in the ionosphere move, separate, and combine. The two paths can both be skywave or one be groundwave. Selective fading manifests as a slow, cyclic disturbance; the cancellation effect, or "null", is deepest at one particular frequency, which changes constantly, sweeping through the received audio. As the carrier frequency of a signal is varied, the magnitude of the change in amplitude will vary. The coherence bandwidth measures the separation in frequency after which two signals will experience uncorrelated fading. Since different frequency components of the signal are affected independently, it is highly unlikely that all parts of the signal will be simultaneously affected by a deep fade. Certain modulation schemes such as orthogonal frequency-division multiplexing (OFDM) and code-division multiple access (CDMA) are well-suited to employing frequency diversity to provide robustness to fading. OFDM divides the wideband signal into many slowly modulated narrowband subcarriers, each exposed to flat fading rather than frequency selective fading. This can be combated by means of error coding, simple equalization or adaptive bit loading. Inter-symbol interference is avoided by introducing a guard interval between the symbols called a cyclic prefix. CDMA uses the rake receiver to deal with each echo separately. Frequency-selective fading channels are also "dispersive", in that the signal energy associated with each symbol is spread out in time. This causes transmitted symbols that are adjacent in time to interfere with each other. Equalizers are often deployed in such channels to compensate for the effects of the intersymbol interference. The echoes may also be exposed to Doppler shift, resulting in a time varying channel model. 
The effect can be counteracted by applying some diversity scheme, for example OFDM (with subcarrier interleaving and forward error correction), or by using two receivers with separate antennas spaced a quarter-wavelength apart, or a specially designed diversity receiver with two antennas. Such a receiver continuously compares the signals arriving at the two antennas and presents the better signal. Upfade. Upfade is a special case of fading, used to describe constructive interference, in situations where a radio signal gains strength. Some multipath conditions cause a signal's amplitude to be increased in this way because signals travelling by different paths arrive at the receiver in phase and become additive to the main signal. Hence, the total signal that reaches the receiver will be stronger than the signal would otherwise have been without the multipath conditions. The effect is also noticeable in wireless LAN systems. Models. Examples of fading models for the distribution of the attenuation are: Mitigation. Fading can cause poor performance in a communication system because it can result in a loss of signal power without reducing the power of the noise. This signal loss can be over some or all of the signal bandwidth. Fading can also be a problem as it changes over time: communication systems are often designed to adapt to such impairments, but the fading can change faster than the adaptations can be made. In such cases, the probability of experiencing a fade (and associated bit errors as the signal-to-noise ratio drops) on the channel becomes the limiting factor in the link's performance. The effects of fading can be combated by using diversity to transmit the signal over multiple channels that experience independent fading and coherently combining them at the receiver. The probability of experiencing a fade in this composite channel is then proportional to the probability that all the component channels simultaneously experience a fade, a much more unlikely event. Diversity can be achieved in time, frequency, or space. Common techniques used to overcome signal fading include: Besides diversity, techniques such as application of cyclic prefix (e.g. in OFDM) and channel estimation and equalization can also be used to tackle fading. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. &lt;br&gt;
[ { "math_id": 0, "text": "T_c \\approx \\frac{1}{D_s}" }, { "math_id": 1, "text": "T_c" }, { "math_id": 2, "text": "D_s" } ]
https://en.wikipedia.org/wiki?curid=81211
8121479
Quasi-set theory
Quasi-set theory is a formal mathematical theory for dealing with collections of objects, some of which may be indistinguishable from one another. Quasi-set theory is mainly motivated by the assumption that certain objects treated in quantum physics are indistinguishable and do not have individuality. Motivation. The American Mathematical Society sponsored a 1974 meeting to evaluate the resolution and consequences of the 23 problems Hilbert proposed in 1900. An outcome of that meeting was a new list of mathematical problems, the first of which, due to Manin (1976, p. 36), questioned whether classical set theory was an adequate paradigm for treating collections of indistinguishable elementary particles in quantum mechanics. He suggested that such collections cannot be sets in the usual sense, and that the study of such collections required a "new language". The use of the term "quasi-set" follows a suggestion in da Costa's 1980 monograph "Ensaio sobre os Fundamentos da Lógica" (see da Costa and Krause 1994), in which he explored possible semantics for what he called "Schrödinger Logics". In these logics, the concept of identity is restricted to some objects of the domain, and has motivation in Schrödinger's claim that the concept of identity does not make sense for elementary particles (Schrödinger 1952). Thus in order to provide a semantics that fits the logic, da Costa submitted that "a theory of quasi-sets should be developed", encompassing "standard sets" as particular cases, yet da Costa did not develop this theory in any concrete way. To the same end and independently of da Costa, Dalla Chiara and di Francia (1993) proposed a theory of "quasets" to enable a semantic treatment of the language of microphysics. The first quasi-set theory was proposed by D. Krause in his PhD thesis in 1990 (see Krause 1992). A related physics theory, based on the logic of adding fundamental indistinguishability to equality and inequality, was developed and elaborated independently in the book "The Theory of Indistinguishables" by A. F. Parker-Rhodes. Summary of the theory. We now expound Krause's (1992) axiomatic theory formula_0, the first quasi-set theory; other formulations and improvements have since appeared. For an updated paper on the subject, see French and Krause (2010). Krause builds on the set theory ZFU, consisting of Zermelo-Fraenkel set theory with an ontology extended to include two kinds of urelements: "m"-atoms, intended to represent the quantum objects to which the notion of identity is taken not to apply, and "M"-atoms, which behave like the urelements of ordinary ZFU. Quasi-sets ("q-sets") are collections resulting from applying axioms, very similar to those for ZFU, to a basic domain composed of "m"-atoms, "M"-atoms, and aggregates of these. The axioms of formula_0 include equivalents of extensionality, but in a weaker form, termed "weak extensionality axiom"; axioms asserting the existence of the empty set, unordered pair, union set, and power set; the axiom of separation; an axiom stating that the image of a q-set under a q-function is also a q-set; q-set equivalents of the axioms of infinity, regularity, and choice. Q-set theories based on other set-theoretical frameworks are, of course, possible. formula_0 has a primitive concept of quasi-cardinal, governed by eight additional axioms, intuitively standing for the quantity of objects in a collection. The quasi-cardinal of a quasi-set is not defined in the usual sense (by means of ordinals) because the "m"-atoms are assumed (absolutely) indistinguishable.
Furthermore, it is possible to define a translation from the language of ZFU into the language of formula_0 in such a way that there is a 'copy' of ZFU in formula_0. In this copy, all the usual mathematical concepts can be defined, and the 'sets' (in reality, the 'formula_0-sets') turn out to be those q-sets whose transitive closure contains no m-atoms. In formula_0 there may exist q-sets, called "pure" q-sets, whose elements are all m-atoms, and the axiomatics of formula_0 provides the grounds for saying that nothing in formula_0 distinguishes the elements of a pure q-set from one another, for certain pure q-sets. Within the theory, the idea that there is more than one entity in "x" is expressed by an axiom stating that the power quasi-set of "x" has quasi-cardinal 2^qc("x"), where qc("x") is the quasi-cardinal of "x" (which is a cardinal obtained in the 'copy' of ZFU just mentioned). What exactly does this mean? Consider the level 2"p" of a sodium atom, in which there are six indiscernible electrons. Even so, physicists reason as if there are in fact six entities in that level, and not only one. In this way, by saying that the quasi-cardinal of the power quasi-set of "x" is 2^qc("x") (suppose that "qc"("x") = 6 to follow the example), we are not excluding the hypothesis that there can exist six subquasi-sets of "x" that are 'singletons', although we cannot distinguish among them. Whether or not there are six elements in "x" is something that cannot be ascribed by the theory (although the notion is compatible with the theory). If the theory could answer this question, the elements of "x" would be individualized and hence counted, contradicting the basic assumption that they cannot be distinguished. In other words, we may consistently (within the axiomatics of formula_0) reason as if there are six entities in "x", but "x" must be regarded as a collection whose elements cannot be discerned as individuals. Using quasi-set theory, we can express some facts of quantum physics without introducing symmetry conditions (Krause et al. 1999, 2005). As is well known, in order to express indistinguishability, the particles are deemed to be "individuals", say by attaching them to coordinates or to adequate functions/vectors like |ψ&gt;. Thus, given two quantum systems labeled |ψ1⟩ and |ψ2⟩ at the outset, we need to consider a function like |ψ12⟩ = |ψ1⟩|ψ2⟩ ± |ψ2⟩|ψ1⟩ (except for certain constants), which keeps the quanta indistinguishable by permutations; the probability density of the joint system does not depend on which quantum is #1 and which is #2. (Note that precision requires that we talk of "two" quanta without distinguishing them, which is impossible in conventional set theories.) In formula_0, we can dispense with this "identification" of the quanta; for details, see Krause et al. (1999, 2005) and French and Krause (2006). Quasi-set theory is a way to operationalize Heinz Post's (1963) claim that quanta should be deemed indistinguishable "right from the start." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathfrak{Q}" } ]
https://en.wikipedia.org/wiki?curid=8121479
812290
DIIS
DIIS (direct inversion in the iterative subspace or direct inversion of the iterative subspace), also known as Pulay mixing, is a technique for extrapolating the solution to a set of linear equations by directly minimizing an error residual (e.g. a Newton–Raphson step size) with respect to a linear combination of known sample vectors. DIIS was developed by Peter Pulay in the field of computational quantum chemistry with the intent to accelerate and stabilize the convergence of the Hartree–Fock self-consistent field method. At a given iteration, the approach constructs a linear combination of approximate error vectors from previous iterations. The coefficients of the linear combination are determined so as to best approximate, in a least squares sense, the null vector. The newly determined coefficients are then used to extrapolate the function variable for the next iteration. Details. At each iteration, an approximate error vector e"i", corresponding to the variable value p"i", is determined. After sufficient iterations, a linear combination of "m" previous error vectors is constructed formula_0 The DIIS method seeks to minimize the norm of e"m"+1 under the constraint that the coefficients sum to one. The reason why the coefficients must sum to one can be seen if we write the trial vector as the sum of the exact solution (pf) and an error vector. In the DIIS approximation, we get: formula_1 We minimize the second term; it is clear that the coefficients must sum to one if we want to recover the exact solution. The minimization is done by a Lagrange multiplier technique. Introducing an undetermined multiplier "λ", a Lagrangian is constructed as formula_2 Setting the derivatives of "L" with respect to the coefficients and the multiplier to zero leads to a system of ("m" + 1) linear equations to be solved for the "m" coefficients (and the Lagrange multiplier). formula_3 Moving the minus sign to "λ" results in an equivalent symmetric problem. formula_4 The coefficients are then used to update the variable as formula_5 Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
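As an illustration of the extrapolation step described above, the following short Python sketch builds the bordered linear system and applies the resulting coefficients to a toy fixed-point problem. The toy problem, the convergence tolerance and the variable names are assumptions made only for this example and are not part of Pulay's original formulation.
<syntaxhighlight lang="python">
import numpy as np

def diis_step(trial_vectors, error_vectors):
    """Solve the bordered (symmetric) system for the coefficients c_i and
    return the extrapolated variable sum_i c_i p_i."""
    m = len(error_vectors)
    B = np.zeros((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = np.dot(error_vectors[i], error_vectors[j])  # Gram matrix of the error vectors
    B[:m, m] = 1.0          # constraint column enforcing sum_i c_i = 1
    B[m, :m] = 1.0          # constraint row
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    solution = np.linalg.solve(B, rhs)
    c = solution[:m]        # the last entry is (minus) the Lagrange multiplier
    return sum(ci * pi for ci, pi in zip(c, trial_vectors))

# Toy usage: accelerate the fixed-point iteration p <- g(p) = A p + b,
# using e = g(p) - p as the error vector at each step.
A = np.array([[0.6, 0.2], [0.1, 0.7]])
b = np.array([1.0, 2.0])
g = lambda p: A @ p + b

p = np.zeros(2)
history_p, history_e = [], []
for _ in range(8):
    new_p = g(p)
    err = new_p - p
    if np.linalg.norm(err) < 1e-9:
        break
    history_p.append(new_p)
    history_e.append(err)
    p = diis_step(history_p, history_e)
print("DIIS estimate:", p, " exact:", np.linalg.solve(np.eye(2) - A, b))
</syntaxhighlight>
For this linear toy problem the extrapolation reaches the fixed point after a few iterations; in a self-consistent field calculation the same machinery would be applied to quantities such as the Fock matrix and a suitable error vector.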
[ { "math_id": 0, "text": "\\mathbf e_{m+1}=\\sum_{i = 1}^m\\ c_i\\mathbf e_i." }, { "math_id": 1, "text": "\n\\begin{align}\n\\mathbf p &= \\sum_i c_i \\left( \\mathbf p^\\text{f} + \\mathbf e_i \\right) \\\\\n &= \\mathbf p^\\text{f} \\sum_i c_i + \\sum_i c_i \\mathbf e_i\n\\end{align}\n" }, { "math_id": 2, "text": "\n\\begin{align}\nL&=\\left\\|\\mathbf e_{m+1}\\right\\|^2-2\\lambda\\left(\\sum_i\\ c_i-1\\right),\\\\\n&=\\sum_{ij}c_jB_{ji}c_i-2\\lambda\\left(\\sum_i\\ c_i-1\\right),\\text{ where } B_{ij}=\\langle\\mathbf e_j, \\mathbf e_i\\rangle.\n\\end{align}\n" }, { "math_id": 3, "text": "\\begin{bmatrix} \nB_{11} & B_{12} & B_{13} & ... & B_{1m} & -1 \\\\\nB_{21} & B_{22} & B_{23} & ... & B_{2m} & -1 \\\\ \nB_{31} & B_{32} & B_{33} & ... & B_{3m} & -1 \\\\ \n\\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\nB_{m1} & B_{m2} & B_{m3} & ... & B_{mm} & -1 \\\\\n1 & 1 & 1 & ... & 1 & 0\n\\end{bmatrix} \\begin{bmatrix} c_1 \\\\ c_2 \\\\ c_3 \\\\ \\vdots \\\\ c_m \\\\ \\lambda \\end{bmatrix}=\n\\begin{bmatrix} 0 \\\\ 0 \\\\ 0 \\\\ \\vdots \\\\ 0 \\\\ 1 \\end{bmatrix}\n" }, { "math_id": 4, "text": "\\begin{bmatrix} \nB_{11} & B_{12} & B_{13} & ... & B_{1m} & 1 \\\\\nB_{21} & B_{22} & B_{23} & ... & B_{2m} & 1 \\\\ \nB_{31} & B_{32} & B_{33} & ... & B_{3m} & 1 \\\\ \n\\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\nB_{m1} & B_{m2} & B_{m3} & ... & B_{mm} & 1 \\\\\n1 & 1 & 1 & ... & 1 & 0\n\\end{bmatrix} \\begin{bmatrix} c_1 \\\\ c_2 \\\\ c_3 \\\\ \\vdots \\\\ c_m \\\\ -\\lambda \\end{bmatrix}=\n\\begin{bmatrix} 0 \\\\ 0 \\\\ 0 \\\\ \\vdots \\\\ 0 \\\\ 1 \\end{bmatrix}\n" }, { "math_id": 5, "text": "\\mathbf p_{m+1}=\\sum_{i = 1}^m c_i\\mathbf p_i." } ]
https://en.wikipedia.org/wiki?curid=812290
8123
D
4th letter of the Latin alphabet D, or d, is the fourth letter of the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is "dee" (pronounced ), plural "dees". History. The Semitic letter Dāleth may have developed from the logogram for a fish or a door. There are many different Egyptian hieroglyphs that might have inspired this. In Semitic, Ancient Greek and Latin, the letter represented ; in the Etruscan alphabet the letter was archaic but still retained. The equivalent Greek letter is delta, Δ. The minuscule (lower-case) form of 'd' consists of a lower-story left bowl and a stem ascender. It most likely developed by gradual variations on the majuscule (capital) form 'D', and is now composed as a stem with a full lobe to the right. In handwriting, it was common to start the arc to the left of the vertical stroke, resulting in a serif at the top of the arc. This serif was extended while the rest of the letter was reduced, resulting in an angled stroke and loop. The angled stroke slowly developed into a vertical stroke. Use in writing systems. English. In English, ⟨d⟩ generally represents the voiced alveolar plosive . D is the tenth most frequently used letter in the English language. Other languages. In most languages that use the Latin alphabet, ⟨d⟩ generally represents the voiced alveolar or voiced dental plosive . In the Vietnamese alphabet, it represents the sound in northern dialects or in southern dialects. In Fijian, it represents a prenasalized stop . In some languages where voiceless unaspirated stops contrast with voiceless aspirated stops, ⟨d⟩ represents an unaspirated , while ⟨t⟩ represents an aspirated . Examples of such languages include Icelandic, Scottish Gaelic, Navajo and the pinyin transliteration of Mandarin. Other systems. In the International Phonetic Alphabet, ⟨d⟩ represents the voiced alveolar plosive . Other representations. Computing. The Latin letters ⟨D⟩ and ⟨d⟩ have Unicode encodings and . These are the same code points as those used in ASCII and ISO 8859. There are also precomposed character encodings for ⟨D⟩ and ⟨d⟩ with diacritics, for most of those listed above; the remainder are produced using combining diacritics. Variant forms of the letter have unique code points for specialist use: the alphanumeric symbols set in mathematics and science, plosive sounds in linguistics and halfwidth and fullwidth forms for legacy CJK font compatibility. Other. In British Sign Language (BSL), the letter 'd' is indicated by signing with the right hand held with the index and thumb extended and slightly curved, and the tip of the thumb and finger held against the extended index of the left hand. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\partial" } ]
https://en.wikipedia.org/wiki?curid=8123
81231
Pyrometer
Type of thermometer sensing radiation A pyrometer, or radiation thermometer, is a type of remote sensing thermometer used to measure the temperature of distant objects. Various forms of pyrometers have historically existed. In the modern usage, it is a device that from a distance determines the temperature of a surface from the amount of the thermal radiation it emits, a process known as pyrometry, a type of "radiometry". The word pyrometer comes from the Greek word for fire, "πῦρ" ("pyr"), and "meter", meaning to measure. The word pyrometer was originally coined to denote a device capable of measuring the temperature of an object by its incandescence, visible light emitted by a body which is at least red-hot. Infrared thermometers can also measure the temperature of cooler objects, down to room temperature, by detecting their infrared radiation flux. Modern pyrometers are available for a wide range of wavelengths and are generally called "radiation thermometers". Principle. A pyrometer is based on the principle that the intensity of light received by the observer depends upon the distance of the observer from the source and the temperature of the distant source. A modern pyrometer has an optical system and a detector. The optical system focuses the thermal radiation onto the detector. The output signal of the detector (temperature "T") is related to the thermal radiation or irradiance formula_0 of the target object through the Stefan–Boltzmann law, with the constant of proportionality σ, called the Stefan–Boltzmann constant, and the emissivity ε of the object: formula_1 This output is used to infer the object's temperature from a distance, with no need for the pyrometer to be in thermal contact with the object; most other thermometers (e.g. thermocouples and resistance temperature detectors (RTDs)) are placed in thermal contact with the object and allowed to reach thermal equilibrium. Pyrometry of gases presents difficulties. These are most commonly overcome by using thin-filament pyrometry or soot pyrometry. Both techniques involve small solids in contact with hot gases. History. The term "pyrometer" was coined in the 1730s by Pieter van Musschenbroek, better known as the inventor of the Leyden jar. His device, of which no surviving specimens are known, may now be called a dilatometer because it measured the dilation of a metal rod. The earliest example of a pyrometer thought to be in existence is the Hindley Pyrometer held by the London Science Museum, dating from 1752, produced for the Royal collection. The pyrometer was a well known enough instrument that it was described in some detail by the mathematician Euler in 1760. Around 1782 potter Josiah Wedgwood invented a different type of pyrometer (or rather a pyrometric device) to measure the temperature in his kilns, which first compared the color of clay fired at known temperatures, but was eventually upgraded to measuring the shrinkage of pieces of clay, which depended on kiln temperature (see Wedgwood scale for details). Later examples used the expansion of a metal bar. In the 1860s–1870s brothers William and Werner Siemens developed a platinum resistance thermometer, initially to measure temperature in undersea cables, but then adapted for measuring temperatures in metallurgy up to 1000 °C, hence deserving the name of a pyrometer. Around 1890 Henry Louis Le Chatelier developed the thermoelectric pyrometer. The first disappearing-filament pyrometer was built by L. Holborn and F. Kurlbaum in 1901.
This device had a thin electrical filament between an observer's eye and an incandescent object. The current through the filament was adjusted until it was of the same colour (and hence temperature) as the object, and no longer visible; it was calibrated to allow temperature to be inferred from the current. The temperature returned by the vanishing-filament pyrometer and others of its kind, called brightness pyrometers, is dependent on the emissivity of the object. With greater use of brightness pyrometers, it became obvious that problems existed with relying on knowledge of the value of emissivity. Emissivity was found to change, often drastically, with surface roughness, bulk and surface composition, and even the temperature itself. To get around these difficulties, the "ratio" or "two-color" pyrometer was developed. They rely on the fact that Planck's law, which relates temperature to the intensity of radiation emitted at individual wavelengths, can be solved for temperature if Planck's statement of the intensities at two different wavelengths is divided. This solution assumes that the emissivity is the same at both wavelengths and cancels out in the division. This is known as the gray-body assumption. Ratio pyrometers are essentially two brightness pyrometers in a single instrument. The operational principles of the ratio pyrometers were developed in the 1920s and 1930s, and they were commercially available in 1939. As the ratio pyrometer came into popular use, it was determined that many materials, of which metals are an example, do not have the same emissivity at two wavelengths. For these materials, the emissivity does not cancel out, and the temperature measurement is in error. The amount of error depends on the emissivities and the wavelengths where the measurements are taken. Two-color ratio pyrometers cannot measure whether a material's emissivity is wavelength-dependent. To more accurately measure the temperature of real objects with unknown or changing emissivities, multiwavelength pyrometers were envisioned at the US National Institute of Standards and Technology and described in 1992. Multiwavelength pyrometers use three or more wavelengths and mathematical manipulation of the results to attempt to achieve accurate temperature measurement even when the emissivity is unknown, changing or differs according to wavelength of measurement. Applications. Pyrometers are suited especially to the measurement of moving objects or any surfaces that cannot be reached or cannot be touched. Contemporary multispectral pyrometers are suitable for measuring high temperatures inside combustion chambers of gas turbine engines with high accuracy. Temperature is a fundamental parameter in metallurgical furnace operations. Reliable and continuous measurement of the metal temperature is essential for effective control of the operation. Smelting rates can be maximized, slag can be produced at the optimal temperature, fuel consumption is minimized and refractory life may also be lengthened. Thermocouples were the traditional devices used for this purpose, but they are unsuitable for continuous measurement because they melt and degrade. Salt bath furnaces operate at temperatures up to 1300 °C and are used for heat treatment. At very high working temperatures with intense heat transfer between the molten salt and the steel being treated, precision is maintained by measuring the temperature of the molten salt. Most errors are caused by slag on the surface, which is cooler than the salt bath. 
The "tuyère pyrometer" is an optical instrument for temperature measurement through the tuyeres, which are normally used for feeding air or reactants into the bath of the furnace. A steam boiler may be fitted with a pyrometer to measure the steam temperature in the superheater. A hot air balloon is equipped with a pyrometer for measuring the temperature at the top of the envelope in order to prevent overheating of the fabric. Pyrometers may be fitted to experimental gas turbine engines to measure the surface temperature of turbine blades. Such pyrometers can be paired with a tachometer to tie the pyrometer output with the position of an individual turbine blade. Timing combined with a radial position encoder allows engineers to determine the temperature at exact points on blades moving past the probe. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "j^\\star" }, { "math_id": 1, "text": "\n j^\\star = \\varepsilon \\sigma T^4.\n" } ]
https://en.wikipedia.org/wiki?curid=81231
8124
Delta (letter)
Fourth letter in the Greek alphabet Delta (; uppercase Δ, lowercase δ; , "délta", ) is the fourth letter of the Greek alphabet. In the system of Greek numerals it has a value of 4. It was derived from the Phoenician letter dalet 𐤃. Letters that come from delta include Latin D and Cyrillic Д. A river delta (originally, the delta of the Nile River) is so named because its shape approximates the triangular uppercase letter delta. Contrary to a popular legend, this use of the word "delta" was not coined by Herodotus. Pronunciation. In Ancient Greek, delta represented a voiced dental plosive . In Modern Greek, it represents a voiced dental fricative , like the "th" in "that" or "this" (while in foreign words the sound is instead commonly transcribed as ντ). Delta is romanized as "d" or "dh". Uppercase. The uppercase letter Δ is used to denote, among other things: a change or difference in a quantity, as in the slope formula_0; the Laplace operator, formula_1; the discriminant of a quadratic polynomial, formula_2; and the area of a triangle, formula_3. Lowercase. The lowercase letter δ (or 𝛿) can be used to denote: Computer encodings. These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style.
[ { "math_id": 0, "text": "\n \\frac{y_2 - y_1}{x_2 - x_1} = \\frac{\\Delta y}{\\Delta x},\n" }, { "math_id": 1, "text": "\\Delta f = \\sum_{i=1}^n {\\frac{\\partial^2 f}{\\partial x_i^2}}." }, { "math_id": 2, "text": "\\Delta = b^2 - 4ac." }, { "math_id": 3, "text": "\\Delta = \\tfrac{1}{2} a b \\sin{C}." } ]
https://en.wikipedia.org/wiki?curid=8124
81242
Pulse-width modulation
Representation of a signal as a rectangular wave with varying duty cycle Pulse-width modulation (PWM), also known as pulse-duration modulation (PDM) or pulse-length modulation (PLM), is any method of representing a signal as a rectangular wave with a varying duty cycle (and for some methods also a varying period). PWM is useful for controlling the average power or amplitude delivered by an electrical signal. The average value of voltage (and current) fed to the load is controlled by switching the supply between 0 and 100% at a rate faster than it takes the load to change significantly. The longer the switch is on, the higher the total power supplied to the load. Along with maximum power point tracking (MPPT), it is one of the primary methods of controlling the output of solar panels to a level that can be utilized by a battery. PWM is particularly suited for running inertial loads such as motors, which are not as easily affected by this discrete switching. The goal of PWM is to control a load; however, the PWM switching frequency must be selected carefully in order to smoothly do so. The PWM switching frequency can vary greatly depending on load and application. For example, switching only has to be done several times a minute in an electric stove; 100 or 120 Hz (double of the utility frequency) in a lamp dimmer; between a few kilohertz (kHz) and tens of kHz for a motor drive; and well into the tens or hundreds of kHz in audio amplifiers and computer power supplies. Choosing a switching frequency that is too high for the application may cause premature failure of mechanical control components despite getting smooth control of the load. Selecting a switching frequency that is too low for the application causes oscillations in the load. The main advantage of PWM is that power loss in the switching devices is very low. When a switch is off there is practically no current, and when it is on and power is being transferred to the load, there is almost no voltage drop across the switch. Power loss, being the product of voltage and current, is thus in both cases close to zero. PWM also works well with digital controls, which, because of their on/off nature, can easily set the needed duty cycle. PWM has also been used in certain communication systems where its duty cycle has been used to convey information over a communications channel. In electronics, many modern microcontrollers (MCUs) integrate PWM controllers exposed to external pins as peripheral devices under firmware control. These are commonly used for direct current (DC) motor control in robotics, switched-mode power supply regulation, and other applications. Duty cycle. The term "duty cycle" describes the proportion of 'on' time to the regular interval or 'period' of time; a low duty cycle corresponds to low power, because the power is off for most of the time. Duty cycle is expressed in percent, 100% being fully on. When a digital signal is on half of the time and off the other half of the time, the digital signal has a duty cycle of 50% and resembles a "square" wave. When a digital signal spends more time in the on state than the off state, it has a duty cycle of &gt;50%. When a digital signal spends more time in the off state than the on state, it has a duty cycle of &lt;50%. History. The Corliss steam engine was patented in 1849. It used pulse-width modulation to control the intake valve of a steam engine cylinder. A centrifugal governor was used to provide automatic feedback.
Some machines (such as a sewing machine motor) require partial or variable power. In the past, control (such as in a sewing machine's foot pedal) was implemented by use of a rheostat connected in series with the motor to adjust the amount of current flowing through the motor. It was an inefficient scheme, as this also wasted power as heat in the resistor element of the rheostat, but tolerable because the total power was low. While the rheostat was one of several methods of controlling power (see autotransformers and Variac for more info), a low cost and efficient power switching/adjustment method was yet to be found. This mechanism also needed to be able to drive motors for fans, pumps and robotic servomechanisms, and needed to be compact enough to interface with lamp dimmers. PWM emerged as a solution for this complex problem. The Philips N.V. company designed an optical scanning system (published in 1946) for variable-area film soundtracks which produced a PWM signal. It was intended to reduce noise when playing back a film soundtrack. The proposed system had a threshold between "white" and "black" parts of the soundtrack. One early application of PWM was in the Sinclair X10, a 10 W audio amplifier available in kit form in the 1960s. At around the same time, PWM started to be used in AC motor control. Of note, for about a century, some variable-speed electric motors have had decent efficiency, but they were somewhat more complex than constant-speed motors, and sometimes required bulky external electrical apparatus, such as a bank of variable power resistors or rotating converters such as the Ward Leonard drive. Principle. Periodic pulse wave. If we consider a periodic pulse wave formula_2 with period formula_3, low value formula_0, a high value formula_1 and a constant duty cycle D (Figure 1), the average value of the waveform is given by: formula_4 As formula_2 is a pulse wave, its value is formula_1 for formula_5 and formula_0 for formula_6. The above expression then becomes: formula_7 This latter expression can be simplified considerably in the many cases where formula_8, since then formula_9. From this, the average value of the signal (formula_10) is directly dependent on the duty cycle D. However, by varying (i.e. modulating) the duty cycle (and possibly also the period), the following more advanced pulse-width modulated waves allow variation of the average value of the waveform. Intersective method PWM. The intersective method is a simple way to generate a PWM output signal (magenta in above figure) with fixed period and varying duty cycle by using a comparator to switch the PWM output state when the input waveform (red) intersects with a sawtooth or a triangle waveform (blue). Depending on the type of sawtooth or triangle waveform (green in below figure), intersective PWM signals (blue in the below figure) can be aligned in three manners: Time proportioning. Many digital circuits can generate PWM signals (e.g., many microcontrollers have PWM outputs). They normally use a counter that increments periodically (it is connected directly or indirectly to the clock of the circuit) and is reset at the end of every period of the PWM. When the counter value is more than the reference value, the PWM output changes state from high to low (or low to high). This technique is referred to as time proportioning, particularly as time-proportioning control – which "proportion" of a fixed cycle time is spent in the high state.
The incremented and periodically reset counter is the discrete version of the intersecting method's sawtooth. The analog comparator of the intersecting method becomes a simple integer comparison between the current counter value and the digital (possibly digitized) reference value. The duty cycle can only be varied in discrete steps, as a function of the counter resolution. However, a high-resolution counter can provide quite satisfactory performance. Spectrum. The resulting spectra (of the three alignments) are similar. Each contains a dc component, a base sideband containing the modulating signal, and phase modulated carriers at each harmonic of the frequency of the pulse. The amplitudes of the harmonic groups are restricted by a formula_11 envelope (sinc function) and extend to infinity. The infinite bandwidth is caused by the nonlinear operation of the pulse-width modulator. In consequence, a digital PWM suffers from aliasing distortion that significantly reduces its applicability for modern communication systems. By limiting the bandwidth of the PWM kernel, aliasing effects can be avoided. On the contrary, delta modulation and delta-sigma modulation are random processes that produce a continuous spectrum without distinct harmonics. While intersective PWM uses a fixed period but a varying duty cycle, the period of delta and delta-sigma modulated PWMs varies in addition to their duty cycle. Delta modulation. Delta modulation produces a PWM signal (magenta in above figure) which changes state whenever its integral (blue) hits the limits (green) surrounding the input (red). Asynchronous delta-sigma PWM. Asynchronous (i.e. unclocked) delta-sigma modulation produces a PWM output (blue in bottom plot) which is subtracted from the input signal (green in top plot) to form an error signal (blue in top plot). This error is integrated (magenta in middle plot). When the integral of the error exceeds the limits (the upper and lower grey lines in middle plot), the PWM output changes state. By integrating the difference between the PWM output and the input signal, delta-sigma modulation shapes the noise of the resulting spectrum so that more of it lies at higher frequencies, above the input signal's band. Space vector modulation. Space vector modulation is a PWM control algorithm for multi-phase AC generation, in which the reference signal is sampled regularly; after each sample, non-zero active switching vectors adjacent to the reference vector and one or more of the zero switching vectors are selected for the appropriate fraction of the sampling period in order to synthesize the reference signal as the average of the used vectors. Direct torque control (DTC). Direct torque control is a method used to control AC motors. It is closely related to delta modulation (see above). Motor torque and magnetic flux are estimated and these are controlled to stay within their hysteresis bands by turning on a new combination of the device's semiconductor switches each time either signal tries to deviate out of its band. PWM sampling theorem. The process of PWM conversion is non-linear and it is generally supposed that low pass filter signal recovery is imperfect for PWM. The PWM sampling theorem shows that PWM conversion can be perfect: Any bandlimited baseband signal whose amplitude is within ±0.637 can be represented by a PWM waveform of unit amplitude (±1). The number of pulses in the waveform is equal to the number of Nyquist samples and the peak constraint is independent of whether the waveform is two-level or three-level.
For comparison, the Nyquist–Shannon sampling theorem can be summarized as: If you have a signal that is bandlimited to a bandwidth of f0 then you can collect all the information there is in that signal by sampling it at discrete times, as long as your sample rate is greater than 2f0. Applications. Servos. PWM is used to control servomechanisms; see servo control. Telecommunications. In telecommunications, PWM is a form of signal modulation where the widths of the pulses correspond to specific data values encoded at one end and decoded at the other. Pulses of various lengths (the information itself) will be sent at regular intervals (the carrier frequency of the modulation). _ _ _ _ _ _ _ _ Clock | | | | | | | | | | | | | | | | __| |____| |____| |____| |____| |____| |____| |____| |____ _ __ ____ ____ _ PWM signal | | | | | | | | | | _________| |____| |___| |________| |_| |___________ Data 0 1 2 4 0 4 1 0 The inclusion of a clock signal is not necessary, as the leading edge of the data signal can be used as the clock if a small offset is added to each data value in order to avoid a data value with a zero length pulse. _ __ ___ _____ _ _____ __ _ PWM signal | | | | | | | | | | | | | | | | __| |____| |___| |__| |_| |____| |_| |___| |_____ Data 0 1 2 4 0 4 1 0 Power delivery. PWM can be used to control the amount of power delivered to a load without incurring the losses that would result from linear power delivery by resistive means. Drawbacks to this technique are that the power drawn by the load is not constant but rather discontinuous (see Buck converter), and energy delivered to the load is not continuous either. However, the load may be inductive, and with a sufficiently high frequency and when necessary using additional passive electronic filters, the pulse train can be smoothed and average analog waveform recovered. Power flow into the load can be continuous. Power flow from the supply is not constant and will require energy storage on the supply side in most cases. (In the case of an electrical circuit, a capacitor to absorb energy stored in (often parasitic) supply side inductance.) High frequency PWM power control systems are easily realisable with semiconductor switches. As explained above, almost no power is dissipated by the switch in either on or off state. However, during the transitions between on and off states, both voltage and current are nonzero and thus power is dissipated in the switches. By quickly changing the state between fully on and fully off (typically less than 100 nanoseconds), the power dissipation in the switches can be quite low compared to the power being delivered to the load. Modern semiconductor switches such as MOSFETs or insulated-gate bipolar transistors (IGBTs) are well suited components for high-efficiency controllers. Frequency converters used to control AC motors may have efficiencies exceeding 98%. Switching power supplies have lower efficiency due to low output voltage levels (often even less than 2 V for microprocessors are needed) but still more than 70–80% efficiency can be achieved. Variable-speed computer fan controllers usually use PWM, as it is far more efficient when compared to a potentiometer or rheostat. (Neither of the latter is practical to operate electronically; they would require a small drive motor.) Light dimmers for home use employ a specific type of PWM control. Home-use light dimmers typically include electronic circuitry that suppresses current flow during defined portions of each cycle of the AC line voltage. 
Adjusting the brightness of light emitted by a light source is then merely a matter of setting at what voltage (or phase) in the AC half-cycle the dimmer begins to provide electric current to the light source (e.g. by using an electronic switch such as a triac). In this case the PWM duty cycle is the ratio of the conduction time to the duration of the half AC cycle defined by the frequency of the AC line voltage (50 Hz or 60 Hz depending on the country). These rather simple types of dimmers can be effectively used with inert (or relatively slow reacting) light sources such as incandescent lamps, for example, for which the additional modulation in supplied electrical energy which is caused by the dimmer causes only negligible additional fluctuations in the emitted light. Some other types of light sources such as light-emitting diodes (LEDs), however, turn on and off extremely rapidly and would perceivably flicker if supplied with low-frequency drive voltages. Perceivable flicker effects from such rapid response light sources can be reduced by increasing the PWM frequency. If the light fluctuations are sufficiently rapid (faster than the flicker fusion threshold), the human visual system can no longer resolve them and the eye perceives the time average intensity without flicker. In electric cookers, continuously variable power is applied to the heating elements such as the hob or the grill using a device known as a simmerstat. This consists of a thermal oscillator running at approximately two cycles per minute and the mechanism varies the duty cycle according to the knob setting. The thermal time constant of the heating elements is several minutes so that the temperature fluctuations are too small to matter in practice. Voltage regulation. PWM is also used in efficient voltage regulators. By switching the voltage to the load with the appropriate duty cycle, the output will approximate a voltage at the desired level. The switching noise is usually filtered with an inductor and a capacitor. One method measures the output voltage. When it is lower than the desired voltage, it turns on the switch. When the output voltage is above the desired voltage, it turns off the switch. Audio effects and amplification. Varying the duty cycle of a pulse waveform in a synthesis instrument creates useful timbral variations. Some synthesizers have a duty-cycle trimmer for their square-wave outputs, and that trimmer can be set by ear; the 50% point (true square wave) is distinctive because even-numbered harmonics essentially disappear at 50%. Pulse waves, usually 50%, 25%, and 12.5%, make up the soundtracks of classic video games. The term PWM as used in sound (music) synthesis refers to the ratio between the high and low level being secondarily modulated with a low-frequency oscillator. This gives a sound effect similar to chorus or slightly detuned oscillators played together. (In fact, PWM is equivalent to the sum of two sawtooth waves with one of them inverted.) Class-D amplifiers produce a PWM equivalent of a lower frequency input signal that can be sent to a loudspeaker via a suitable filter network to block the carrier and recover the original lower frequency signal. Since they switch power directly from the high supply rail and low supply rail, these amplifiers have efficiency above 90% and can be relatively compact and light, even for large power outputs. For a few decades, industrial and military PWM amplifiers have been in common use, often for driving servomotors. 
Field-gradient coils in MRI machines are driven by relatively high-power PWM amplifiers. Historically, a crude form of PWM has been used to play back PCM digital sound on the PC speaker, which is driven by only two voltage levels, typically 0 V and 5 V. By carefully timing the duration of the pulses, and by relying on the speaker's physical filtering properties (limited frequency response, self-inductance, etc.) it was possible to obtain an approximate playback of mono PCM samples, although at a very low quality, and with greatly varying results between implementations. The Sega 32X uses PWM to play sample-based sound in its games. In more recent times, the Direct Stream Digital sound encoding method was introduced, which uses a generalized form of pulse-width modulation called pulse-density modulation, at a high enough sampling rate (typically in the order of MHz) to cover the whole acoustic frequencies range with sufficient fidelity. This method is used in the SACD format, and reproduction of the encoded audio signal is essentially similar to the method used in class-D amplifiers. Electrical. SPWM (sine–triangle pulse-width modulation) signals are used in micro-inverter design (used in solar and wind power applications). These switching signals are fed to the FETs that are used in the device. The device's efficiency depends on the harmonic content of the PWM signal. There is much research on eliminating unwanted harmonics and improving the fundamental strength, some of which involves using a modified carrier signal instead of a classic sawtooth signal in order to decrease power losses and improve efficiency. Another common application is in robotics where PWM signals are used to control the speed of the robot by controlling the motors. Soft-blinking LED indicator. PWM techniques would typically be used to make some indicator (like an LED) "soft blink". The light will slowly go from dark to full intensity, and slowly dimmed to dark again. Then it repeats. The period would be several soft blinks per second up to several seconds for one blink. An indicator of this type would not disturb as much as a "hard-blinking" on/off indicator. The indicator lamp on the Apple iBook G4, PowerBook 6,7 (2005) was of this type. This kind of indicator is also called "pulsing glow", as opposed to calling it "flashing". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
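As an illustration of the intersective method and the duty-cycle relationship described in the Principle section, the following Python sketch compares a sinusoidal reference against a triangle carrier and checks that the per-period average of the resulting PWM wave tracks the reference. The sample rate, carrier frequency and reference signal are arbitrary assumptions made for the example.
<syntaxhighlight lang="python">
import numpy as np

fs = 100_000.0                               # sample rate, Hz
t = np.arange(int(fs * 0.02)) / fs           # 20 ms of signal
carrier_freq = 1_000.0                       # PWM switching frequency, Hz
reference = 0.5 + 0.4 * np.sin(2 * np.pi * 50.0 * t)   # reference in [0, 1]

# Symmetric triangle carrier sweeping [0, 1] once per switching period.
phase = (t * carrier_freq) % 1.0
carrier = 2.0 * np.abs(phase - 0.5)

pwm = (reference > carrier).astype(float)    # comparator output: 0 or 1

# The fraction of each carrier period spent high equals the reference value,
# so the per-period average of the PWM wave approximates the reference.
period = int(fs / carrier_freq)
averages = pwm[: len(pwm) // period * period].reshape(-1, period).mean(axis=1)
centers = (np.arange(len(averages)) + 0.5) * period / fs
print("per-period duty cycles:", np.round(averages[:5], 2))
print("reference at those times:", np.round(0.5 + 0.4 * np.sin(2 * np.pi * 50.0 * centers[:5]), 2))
</syntaxhighlight>
A low-pass filter applied to such a waveform, as discussed under Power delivery, recovers an approximation of the original reference signal.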
[ { "math_id": 0, "text": "y_\\text{min}" }, { "math_id": 1, "text": "y_\\text{max}" }, { "math_id": 2, "text": "f(t)" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "\\bar{y} = \\frac{1}{T}\\int^T_0f(t)\\,dt" }, { "math_id": 5, "text": "0 < t < D \\cdot T" }, { "math_id": 6, "text": "D \\cdot T < t < T" }, { "math_id": 7, "text": "\\begin{align}\n \\bar{y} &= \\frac{1}{T} \\left(\\int_0^{DT} y_\\text{max}\\,dt + \\int_{DT}^T y_\\text{min}\\,dt\\right)\\\\\n &= \\frac{1}{T} \\left(D \\cdot T \\cdot y_\\text{max} + T\\left(1 - D\\right) y_\\text{min}\\right)\\\\\n &= D\\cdot y_\\text{max} + \\left(1 - D\\right) y_\\text{min}\n\\end{align}" }, { "math_id": 8, "text": "y_\\text{min} = 0" }, { "math_id": 9, "text": "\\bar{y} = D \\cdot y_\\text{max}" }, { "math_id": 10, "text": "\\bar{y}" }, { "math_id": 11, "text": "\\sin x / x" } ]
https://en.wikipedia.org/wiki?curid=81242
8124425
Control point (mathematics)
Points used to define the shape of curves and surfaces In computer-aided geometric design a control point is a member of a set of points used to determine the shape of a spline curve or, more generally, a surface or higher-dimensional object. For Bézier curves, it has become customary to refer to the "d"-vectors p"i" in a parametric representation formula_0 of a curve or surface in "d"-space as control points, while the scalar-valued functions φ"i", defined over the relevant parameter domain, are the corresponding "weight" or "blending functions". Some would reasonably insist, in order to give intuitive geometric meaning to the word "control", that the blending functions form a partition of unity, i.e., that the φ"i" are nonnegative and sum to one. This property implies that the curve lies within the convex hull of its control points. This is the case for Bézier's representation of a polynomial curve as well as for the B-spline representation of a spline curve or tensor-product spline surface. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
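As an illustration, the following Python sketch evaluates a cubic Bézier curve as the control-point combination formula_0, using the Bernstein polynomials as blending functions and checking that they are nonnegative and sum to one, so the curve stays in the convex hull of its control points. The particular control points are arbitrary example data.
<syntaxhighlight lang="python">
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein blending function of degree n, index i, at parameter t in [0, 1]."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_point(control_points, t):
    n = len(control_points) - 1
    weights = np.array([bernstein(n, i, t) for i in range(n + 1)])
    assert abs(weights.sum() - 1.0) < 1e-12          # partition of unity
    return weights @ np.asarray(control_points, dtype=float)

control_points = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, bezier_point(control_points, t))
</syntaxhighlight>
Moving any single control point changes the weights' contribution smoothly along the curve, which is the intuitive sense in which the points "control" its shape.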
[ { "math_id": 0, "text": " \\sum_i \\mathbf p_i \\phi_i" } ]
https://en.wikipedia.org/wiki?curid=8124425
8124665
Igusa zeta function
Type of generating function in mathematics In mathematics, an Igusa zeta function is a type of generating function, counting the number of solutions of an equation, "modulo" "p", "p"^2, "p"^3, and so on. Definition. For a prime number "p" let "K" be a p-adic field, i.e. formula_0, "R" the valuation ring and "P" the maximal ideal. For formula_1 we denote by formula_2 the valuation of "z", formula_3, and formula_4 for a uniformizing parameter π of "R". Furthermore let formula_5 be a Schwartz–Bruhat function, i.e. a locally constant function with compact support and let formula_6 be a character of formula_7. In this situation one associates to a non-constant polynomial formula_8 the Igusa zeta function formula_9 where formula_10 and "dx" is Haar measure so normalized that formula_11 has measure 1. Igusa's theorem. Jun-Ichi Igusa (1974) showed that formula_12 is a rational function in formula_13. The proof uses Heisuke Hironaka's theorem about the resolution of singularities. Later, an entirely different proof was given by Jan Denef using p-adic cell decomposition. Little is known, however, about explicit formulas. (There are some results about Igusa zeta functions of Fermat varieties.) Congruences modulo powers of "P". Henceforth we take formula_14 to be the characteristic function of formula_11 and formula_6 to be the trivial character. Let formula_15 denote the number of solutions of the congruence formula_16. Then the Igusa zeta function formula_17 is closely related to the Poincaré series formula_18 by formula_19
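The congruence counts formula_15 appearing in the last section can be explored by brute force for small examples. The following Python sketch (an illustration only; the polynomial, the prime and the number of terms are arbitrary choices made for the example) counts the solutions of "f" ≡ 0 mod "p"^"i" and prints the first coefficients of the associated Poincaré series.
<syntaxhighlight lang="python">
from fractions import Fraction
from itertools import product

def count_solutions(f, p, i, n_vars):
    """Number of solutions of f(x_1, ..., x_n) == 0 modulo p**i."""
    mod = p ** i
    return sum(1 for xs in product(range(mod), repeat=n_vars) if f(*xs) % mod == 0)

p, n_vars = 3, 2
f = lambda x, y: x * y            # example polynomial f(x, y) = xy

for i in range(4):
    N_i = count_solutions(f, p, i, n_vars)
    coeff = Fraction(N_i, p ** (i * n_vars))   # q^(-in) * N_i, the coefficient of t^i in P(t)
    print(f"N_{i} = {N_i}, coefficient of t^{i} in P(t): {coeff}")
</syntaxhighlight>
For this example the counts come out as 1, 5, 21 and 81, and the printed coefficients are the first terms of the Poincaré series defined above.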
[ { "math_id": 0, "text": " [K: \\mathbb{Q}_p]<\\infty " }, { "math_id": 1, "text": "z \\in K" }, { "math_id": 2, "text": "\\operatorname{ord}(z)" }, { "math_id": 3, "text": "\\mid z \\mid = q^{-\\operatorname{ord}(z)}" }, { "math_id": 4, "text": "ac(z)=z \\pi^{-\\operatorname{ord}(z)}" }, { "math_id": 5, "text": "\\phi : K^n \\to \\mathbb{C}" }, { "math_id": 6, "text": "\\chi" }, { "math_id": 7, "text": "R^\\times" }, { "math_id": 8, "text": "f(x_1, \\ldots, x_n) \\in K[x_1,\\ldots,x_n]" }, { "math_id": 9, "text": " Z_\\phi(s,\\chi) = \\int_{K^n} \\phi(x_1,\\ldots,x_n) \\chi(ac(f(x_1,\\ldots,x_n))) |f(x_1,\\ldots,x_n)|^s \\, dx " }, { "math_id": 10, "text": "s \\in \\mathbb{C}, \\operatorname{Re}(s)>0," }, { "math_id": 11, "text": "R^n" }, { "math_id": 12, "text": "Z_\\phi (s,\\chi)" }, { "math_id": 13, "text": "t=q^{-s}" }, { "math_id": 14, "text": "\\phi" }, { "math_id": 15, "text": "N_i" }, { "math_id": 16, "text": "f(x_1,\\ldots,x_n) \\equiv 0 \\mod P^i" }, { "math_id": 17, "text": "Z(t)= \\int_{R^n} |f(x_1,\\ldots,x_n)|^s \\, dx " }, { "math_id": 18, "text": "P(t)= \\sum_{i=0}^{\\infty} q^{-in}N_i t^i" }, { "math_id": 19, "text": "P(t)= \\frac{1-t Z(t)}{1-t}." } ]
https://en.wikipedia.org/wiki?curid=8124665
812561
Prime quadruplet
In number theory, a prime quadruplet (sometimes called prime quadruple) is a set of four prime numbers of the form {"p", "p" + 2, "p" + 6, "p" + 8}. This represents the closest possible grouping of four primes larger than 3, and is the only prime constellation of length 4. Prime quadruplets. The first eight prime quadruplets are: {5, 7, 11, 13}, {11, 13, 17, 19}, {101, 103, 107, 109}, {191, 193, 197, 199}, {821, 823, 827, 829}, {1481, 1483, 1487, 1489}, {1871, 1873, 1877, 1879}, {2081, 2083, 2087, 2089}. All prime quadruplets except {5, 7, 11, 13} are of the form {30"n" + 11, 30"n" + 13, 30"n" + 17, 30"n" + 19} for some integer n. (This structure is necessary to ensure that none of the four primes are divisible by 2, 3 or 5). A prime quadruplet of this form is also called a prime decade. All such prime decades have centers of form 210n + 15, 210n + 105, and 210n + 195 since the centers must be -1, 0, or +1 modulo 7. The +15 form may also give rise to a (high) prime quintuplet; the +195 form can also give rise to a (low) quintuplet; while the +105 form can yield both types of quints and possibly prime sextuplets. It is no accident that each prime in a prime decade is displaced from its center by a power of 2, actually 2 or 4, since all centers are odd and divisible by both 3 and 5. A prime quadruplet can be described as a consecutive pair of twin primes, two overlapping sets of prime triplets, or two intermixed pairs of sexy primes. These "quad" primes 11 or above also form the core of prime quintuplets and prime sextuplets by adding or subtracting 8 from their respective centers. It is not known if there are infinitely many prime quadruplets. A proof that there are infinitely many would imply the twin prime conjecture, but it is consistent with current knowledge that there may be infinitely many pairs of twin primes and only finitely many prime quadruplets. The number of prime quadruplets with n digits in base 10 for n = 2, 3, 4, ... is 1, 3, 7, 27, 128, 733, 3869, 23620, 152141, 1028789, 7188960, 51672312, 381226246, 2873279651 (sequence in the OEIS). As of February 2019, the largest known prime quadruplet has 10132 digits. It starts with p = 667674063382677 × 2^33608 − 1, found by Peter Kaiser. The constant representing the sum of the reciprocals of all prime quadruplets, Brun's constant for prime quadruplets, denoted by "B"4, is the sum of the reciprocals of all prime quadruplets: formula_0 with value: "B"4 = 0.87058 83800 ± 0.00000 00005. This constant should not be confused with the Brun's constant for cousin primes, prime pairs of the form ("p", "p" + 4), which is also written as "B"4. The prime quadruplet {11, 13, 17, 19} is alleged to appear on the Ishango bone although this is disputed. Excluding the first prime quadruplet, the shortest possible distance between two quadruplets {"p", "p" + 2, "p" + 6, "p" + 8} and {"q", "q" + 2, "q" + 6, "q" + 8} is "q" - "p" = 30. The first occurrences of this are for p = 1006301, 2594951, 3919211, 9600551, 10531061, ... (OEIS: ). The Skewes number for prime quadruplets {"p", "p" + 2, "p" + 6, "p" + 8} is 1172531 (). Prime quintuplets. If {"p", "p" + 2, "p" + 6, "p" + 8} is a prime quadruplet and "p" − 4 or "p" + 12 is also prime, then the five primes form a prime quintuplet which is the closest admissible constellation of five primes. The first few prime quintuplets with "p" + 12 are: {5, 7, 11, 13, 17}, {11, 13, 17, 19, 23}, {101, 103, 107, 109, 113}, {1481, 1483, 1487, 1489, 1493}, {16061, 16063, 16067, 16069, 16073}, {19421, 19423, 19427, 19429, 19433}, {21011, 21013, 21017, 21019, 21023}, {22271, 22273, 22277, 22279, 22283}, {43781, 43783, 43787, 43789, 43793}, {55331, 55333, 55337, 55339, 55343} … OEIS: .
The first prime quintuplets with "p" − 4 are: {7, 11, 13, 17, 19}, {97, 101, 103, 107, 109}, {1867, 1871, 1873, 1877, 1879}, {3457, 3461, 3463, 3467, 3469}, {5647, 5651, 5653, 5657, 5659}, {15727, 15731, 15733, 15737, 15739}, {16057, 16061, 16063, 16067, 16069}, {19417, 19421, 19423, 19427, 19429}, {43777, 43781, 43783, 43787, 43789}, {79687, 79691, 79693, 79697, 79699}, {88807, 88811, 88813, 88817, 88819}... OEIS: . A prime quintuplet contains two close pairs of twin primes, a prime quadruplet, and three overlapping prime triplets. It is not known if there are infinitely many prime quintuplets. Once again, proving the twin prime conjecture might not necessarily prove that there are also infinitely many prime quintuplets. Also, proving that there are infinitely many prime quadruplets might not necessarily prove that there are infinitely many prime quintuplets. The Skewes number for prime quintuplets {"p", "p" + 2, "p" + 6, "p" + 8, "p" + 12} is 21432401 (). Prime sextuplets. If both "p" − 4 and "p" + 12 are prime then it becomes a prime sextuplet. The first few: {7, 11, 13, 17, 19, 23}, {97, 101, 103, 107, 109, 113}, {16057, 16061, 16063, 16067, 16069, 16073}, {19417, 19421, 19423, 19427, 19429, 19433}, {43777, 43781, 43783, 43787, 43789, 43793} OEIS:  Some sources also call {5, 7, 11, 13, 17, 19} a prime sextuplet. Our definition, all cases of primes {"p" − 4, "p", "p" + 2, "p" + 6, "p" + 8, "p" + 12}, follows from defining a prime sextuplet as the closest admissible constellation of six primes. A prime sextuplet contains two close pairs of twin primes, a prime quadruplet, four overlapping prime triplets, and two overlapping prime quintuplets. All prime sextuplets except {7, 11, 13, 17, 19, 23} are of the form formula_1 for some integer n. (This structure is necessary to ensure that none of the six primes is divisible by 2, 3, 5 or 7). It is not known if there are infinitely many prime sextuplets. Once again, proving the twin prime conjecture might not necessarily prove that there are also infinitely many prime sextuplets. Also, proving that there are infinitely many prime quintuplets might not necessarily prove that there are infinitely many prime sextuplets. The Skewes number for the tuplet {"p", "p" + 4, "p" + 6, "p" + 10, "p" + 12, "p" + 16} is 251331775687 (). Prime k-tuples. Prime quadruplets, quintuplets, and sextuplets are examples of prime constellations, and prime constellations are in turn examples of prime k-tuples. A prime constellation is a grouping of k primes, with minimum prime p and maximum prime "p" + "n", meeting the following two conditions: More generally, a prime k-tuple occurs if the first condition but not necessarily the second condition is met. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
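The small searches described above are easy to reproduce. The following short Python sketch (the trial-division primality test and the search bound are assumptions made for the example) lists the first eight prime quadruplets and confirms that every one after {5, 7, 11, 13} begins at a number of the form 30"n" + 11.
<syntaxhighlight lang="python">
def is_prime(n):
    """Simple trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

quadruplets = []
p = 5
while len(quadruplets) < 8:
    if all(is_prime(p + k) for k in (0, 2, 6, 8)):
        quadruplets.append((p, p + 2, p + 6, p + 8))
    p += 2
print(quadruplets)
# Every quadruplet except {5, 7, 11, 13} starts at 30n + 11:
print(all(q[0] % 30 == 11 for q in quadruplets[1:]))
</syntaxhighlight>
Extending the inner test with "p" − 4 or "p" + 12 turns the same loop into a search for the prime quintuplets and sextuplets discussed above.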
[ { "math_id": 0, "text": "B_4 = \\left(\\frac{1}{5} + \\frac{1}{7} + \\frac{1}{11} + \\frac{1}{13}\\right)\n+ \\left(\\frac{1}{11} + \\frac{1}{13} + \\frac{1}{17} + \\frac{1}{19}\\right)\n+ \\left(\\frac{1}{101} + \\frac{1}{103} + \\frac{1}{107} + \\frac{1}{109}\\right) + \\cdots" }, { "math_id": 1, "text": "\\{210n + 97,\\ 210n + 101,\\ 210n + 103,\\ 210n + 107,\\ 210n + 109,\\ 210n + 113\\}" } ]
https://en.wikipedia.org/wiki?curid=812561
8126958
Integrating sphere
An integrating sphere (also known as an Ulbricht sphere) is an optical component consisting of a hollow spherical cavity with its interior covered with a diffuse white reflective coating, with small holes for entrance and exit ports. Its relevant property is a uniform scattering or diffusing effect. Light rays incident on any point on the inner surface are, by multiple scattering reflections, distributed equally to all other points. The effects of the original direction of light are minimized. An integrating sphere may be thought of as a diffuser which preserves power but destroys spatial information. It is typically used with some light source and a detector for optical power measurement. A similar device is the focusing or Coblentz sphere, which differs in that it has a mirror-like (specular) inner surface rather than a diffuse inner surface. In 1892, W. E. Sumpner published an expression for the throughput of a spherical enclosure with diffusely reflecting walls. R. Ulbricht developed a practical realization of the integrating sphere, the topic of a publication in 1900. It has become a standard instrument in photometry and radiometry and has the advantage over a goniophotometer that the total power produced by a source can be obtained in a single measurement. Other shapes, such as a cubical box, have also been theoretically analyzed. Even small commercial integrating spheres cost many thousands of dollars; as a result, their use is often limited to industry and large academic institutions. However, 3D printing and homemade coatings have seen the production of experimentally accurate DIY spheres for very low cost. Theory. The theory of integrating spheres is based on these assumptions: Using these assumptions the sphere multiplier can be calculated. This number is the average number of times a photon is scattered in the sphere, before it is absorbed in the coating or escapes through a port. This number increases with the reflectivity of the sphere coating and decreases with the ratio between the total area of ports and other absorbing objects and the sphere inner area. To get high homogeneity, a recommended sphere multiplier is 10–25. The theory further states that if the above criteria are fulfilled then the irradiance on any area element on the sphere will be proportional to the total radiant flux input to the sphere. Absolute measurements of, for instance, luminous flux can then be made by measuring a known light source and determining the transfer function or calibration curve. Total exit irradiance. For a sphere with radius r, reflection coefficient ρ, and source flux Φ, the initial reflected irradiance is equal to: formula_0 Each successive reflection contributes a further factor of ρ, giving a geometric series. The resulting equation is formula_1 Since ρ &lt; 1, the geometric series converges and the total exit irradiance is: formula_2
A number of methods exist to measure the absolute reflectance of a test object mounted on an integrating sphere. In 1916, E. B. Rosa and A. H. Taylor published the first such method. Subsequent work by A. H. Taylor, Frank A. Benford, C. H. Sharpe &amp; W. F. Little, Enoch Karrer, and Leonard Hanssen &amp; Simon Kaplan expanded the number of unique methods which measure port-mounted test objects. Edwards et al., Korte &amp; Schmidt, and Van den Akker et al. developed methods which measure center-mounted test objects. Light scattered by the interior of the integrating sphere is evenly distributed over all angles. The integrating sphere is used in optical measurements. The total power (flux) of a light source can be measured without inaccuracy caused by the directional characteristics of the source or of the measurement device. Reflection and absorption of samples can be studied. The sphere creates a reference radiation source that can be used to provide a photometric standard. Since all the light incident on the input port is collected, a detector connected to an integrating sphere can accurately measure the sum of all the ambient light incident on a small circular aperture. The total power of a laser beam can be measured, free from the effects of beam shape, incident direction, and incident position, as well as polarization. Materials. The optical properties of the lining of the sphere greatly affect its accuracy. Different coatings must be used at visible, infrared and ultraviolet wavelengths. High-powered illumination sources may heat or damage the coating, so an integrating sphere will be rated for a maximum level of incident power. Various coating materials are used. For visible-spectrum light, early experimenters used a deposit of magnesium oxide, and barium sulfate also has a usefully flat reflectance over the visible spectrum. Various proprietary PTFE compounds are also used for visible light measurements. Finely deposited gold is used for infrared measurements. An important requirement for the coating material is the absence of fluorescence. Fluorescent materials absorb short-wavelength light and re-emit light at longer wavelengths. Because of the many scattering events, this effect is much more pronounced in an integrating sphere than for materials irradiated normally. Structure. The theory of the integrating sphere assumes a uniform inside surface with diffuse reflectivity approaching 100%. Openings where light can exit or enter, used for detectors and sources, are normally called ports. The total area of all ports must be small, less than about 5% of the surface area of the sphere, for the theoretical assumptions to be valid. Unused ports should therefore have matching plugs, with the interior surface of the plug coated with the same material as the rest of the sphere. Integrating spheres vary in size from a few centimeters in diameter up to a few meters in diameter. Smaller spheres are typically used to diffuse incoming radiation, while larger spheres are used to measure integrating properties like the luminous flux of a lamp or luminaire, which is then placed inside the sphere. If the entering light is incoherent (rather than a laser beam), then it typically fills the source-port, and the ratio of source-port area to detector-port area is relevant. Baffles are normally inserted in the sphere to block the direct path of light from a source-port to a detector-port, since this light will have a non-uniform distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
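To make the 5% port-area rule of thumb quoted above concrete, the short sketch below computes the fraction of the inner surface taken up by circular ports for a hypothetical design; the sphere and port diameters are made-up example values, and each port is treated as a flat circular opening.

```python
import math

def port_fraction(sphere_diameter_mm, port_diameters_mm):
    """Fraction of the sphere's inner surface taken up by circular ports."""
    sphere_area = math.pi * sphere_diameter_mm ** 2          # 4*pi*R^2 = pi*D^2
    port_area = sum(math.pi * (d / 2.0) ** 2 for d in port_diameters_mm)
    return port_area / sphere_area

# Example: a 150 mm sphere with a 25 mm entrance port and a 12.5 mm detector port.
f = port_fraction(150.0, [25.0, 12.5])
print(f"port fraction = {f:.3%} -> {'OK' if f < 0.05 else 'too large'}")
```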
[ { "math_id": 0, "text": "\nE = \\rho \\frac \\Phi {4 \\pi r^2} \\,\n" }, { "math_id": 1, "text": "\nE = \\frac \\Phi {4 \\pi r^2}\\,\\rho(1 + \\rho + \\rho^2 + ...) \n" }, { "math_id": 2, "text": "\nE = \\frac \\Phi {4 \\pi r^2}\\,\\frac \\rho {1 - \\rho}\\,\n" } ]
https://en.wikipedia.org/wiki?curid=8126958
8128722
Eye movement in music reading
Role of the eyes in reading music Eye movement in music reading is the scanning of a musical score by a musician's eyes. This usually occurs as the music is read during performance, although musicians sometimes scan music silently to study it. The phenomenon has been studied by researchers from a range of backgrounds, including cognitive psychology and music education. These studies have typically reflected a curiosity among performing musicians about a central process in their craft, and a hope that investigating eye movement might help in the development of more effective methods of training musicians' sight reading skills. A central aspect of music reading is the sequence of alternating saccades and fixations, as it is for most oculomotor tasks. Saccades are the rapid ‘flicks’ that move the eyes from location to location over a music score. Saccades are separated from each other by fixations, during which the eyes are relatively stationary on the page. It is well established that the perception of visual information occurs almost entirely during fixations and that little if any information is picked up during saccades. Fixations comprise about 90% of music reading time, typically averaging 250–400 ms in duration. Eye movement in music reading is an extremely complex phenomenon that involves a number of unresolved issues in psychology, and which requires intricate experimental conditions to produce meaningful data. Despite some 30 studies in this area over the past 70 years, little is known about the underlying patterns of eye movement in music reading. Relationship with eye movement in language reading. Eye movement in music reading may at first appear to be similar to that in language reading, since in both activities the eyes move over the page in fixations and saccades, picking up and processing coded meanings. However, it is here that the obvious similarities end. Not only is the coding system of music nonlinguistic; it involves what is apparently a unique combination of features among human activities: a strict and continuous time constraint on an output that is generated by a continuous stream of coded instructions. Even the reading of language aloud, which, like musical performance involves turning coded information into a musculoskeletal response, is relatively free of temporal constraint—the pulse in reading aloud is a fluid, improvised affair compared with its rigid presence in most Western music. It is this uniquely strict temporal requirement in musical performance that has made the observation of eye movement in music reading fraught with more difficulty than that in language reading. Another critical difference between reading music and reading language is the role of skill. Most people become reasonably efficient at language reading by adulthood, even though almost all language reading is sight reading. By contrast, some musicians regard themselves as poor sight readers of music even after years of study. Thus, the improvement of music sight reading and the differences between skilled and unskilled readers have always been of prime importance to research into eye movement in music reading, whereas research into eye movement in language reading has been more concerned with the development of a unified psychological model of the reading process. It is therefore unsurprising that most research into eye movement in music reading has aimed to compare the eye movement patterns of the skilled and the unskilled. Equipment and related methodology. 
From the start, there were basic problems with eye-tracking equipment. The five earliest studies used photographic techniques. These methods involved either training a continuous beam of visible light onto the eye to produce an unbroken line on photographic paper, or a flashing light to produce a series of white spots on photographic paper at sampling intervals around 25 ms (i.e., 40 samples a second). Because the film rolled through the device vertically, the vertical movement of the eyes in their journey across the page was either unrecorded or was recorded using a second camera and subsequently combined to provide data on both dimensions, a cumbersome and inaccurate solution. These systems were sensitive to even small movement of the head or body, which appear to have significantly contaminated the data. Some studies used devices such as a headrest and bite-plate to minimise this contamination, with limited success, and in one case a camera affixed to a motorcycle helmet—weighing nearly 3 kg—which was supported by a system of counterbalancing weights and pulleys attached to the ceiling. In addition to extraneous head movement, researchers faced other physical, bodily problems. The musculoskeletal response required to play a musical instrument involves substantial body movement, usually of the hands, arms and torso. This can upset the delicate balance of tracking equipment and confound the registration of data. Another issue that affects almost all unskilled keyboardists and a considerable proportion of skilled keyboardists is the common tendency to frequently glance down at the hands and back to the score during performance. The disadvantage of this behaviour is that it causes signal dropout in the data every time it occurs, which is sometimes up to several times per bar. When participants are prevented from looking down at their hands, typically the quality of their performance is degraded. Rayner &amp; Pollatsek (1997:49) wrote that: "even skilled musicians naturally look at their hands at times. ... [Because] accurate eye movement recording [is generally incompatible with] these head movements ... musicians often need appreciable training with the apparatus before their eye movements can be measured." Since Lang (1961), all reported studies into eye movement in music reading, aside from Smith (1988), appear to have used infrared tracking technology. However, research into the field has mostly been conducted using less than optimal equipment. This has had a pervasive negative impact on almost all research up until a few recent studies. In summary, the four main equipment problems have been that tracking devices: Not until recently has eye movement in music reading been investigated with more satisfactory equipment. Kinsler and Carpenter (1995) were able to identify eye position to within 0.25º, that is, the size of the individual musical notes, at intervals of 1 ms. Truitt et al. (1997) used a similarly accurate infrared system capable of displaying a movement window and integrated into a computer-monitored musical keyboard. Waters &amp; Underwood (1998) used a machine with an accuracy of plus or minus one character space and a sampling interval of only 4 ms. Tempo and data contamination. Most research into eye movement in music reading has primarily aimed to compare the eye movement patterns of skilled and unskilled performers. The implicit presumption appears to have been that this might lay the foundation for developing better ways of training musicians. 
However, there are significant methodological problems in attempting this comparison. Skilled and unskilled performers typically sight read the same passage at different tempos and/or levels of accuracy. At a sufficiently slow tempo, players over a large range of skill-levels are capable of accurate performance, but the skilled will have excess capacity in their perception and processing of the information on the page. There is evidence that excess capacity contaminates eye-movement data with a ‘wandering’ effect, in which the eyes tend to stray from the course of the music. Weaver (1943:15) implied the existence of the wandering effect and its confounding influence, as did Truitt et al. (1997:51), who suspected that at slow tempo their participants' eyes were "hanging around rather than extracting information". The wandering effect is undesirable, because it is an unquantifiable and possibly random distortion of normal eye movement patterns. Souter (2001:81) claimed that the ideal tempo for observing eye movement is a range lying between one that is not so fast as to produce a significant level of action slips, and one that is not so slow as to produce a significant wandering effect. The skilled and the unskilled have quite different ranges for sight reading the same music. On the other hand, a faster tempo may minimise excess capacity in the skilled, but will tend to induce inaccurate performance in the unskilled; inaccuracies rob us of the only evidence that a performer has processed the information on the page, and the danger cannot be discounted that feedback from action-slips contaminates eye movement data. Almost all studies have compared temporal variables among participants, chiefly the durations of their fixations and saccades. In these cases, it is self-evident that useful comparisons require consistency in performance tempo and accuracy within and between performances. However, most studies have accommodated their participants’ varied performance ability in the reading of the same stimulus, by allowing them to choose their own tempo or by not strictly controlling that tempo. Theoretically, there is a relatively narrow range, referred to here as the ‘optimal range’, in which capacity matches the task at hand; on either side of this range lie the two problematic tempo ranges within which a performer’s capacity is excessive or insufficient, respectively. The location of the boundaries of the optimal range depends on the skill-level of an individual performer and the relative difficulty of reading/performing the stimulus. Thus, unless participants are drawn from a narrow range of skill-levels, their optimal ranges will be mutually exclusive, and observations at a single, controlled tempo will be likely to result in significant contamination of eye movement data. Most studies "have" sought to compare the skilled and the unskilled in the hope of generating pedagogically useful data; aside from Smith (1988), in which tempo itself was an independent variable, Polanka (1995), who analysed only data from silent preparatory readings, and Souter (2001), who observed only the highly skilled, none has set out to control tempo strictly. Investigators have apparently attempted to overcome the consequences of the fallacy by making compromises, such as (1) exercising little or no control over the tempos at which participants performed in trials, and/or (2) tolerating significant disparity in the level of action slips between skilled and unskilled groups. 
This issue is part of the broader tempo/skill/action-slip fallacy, which concerns the relationship between tempo, skill and the level of action slips (performance errors). The fallacy is that it is possible to reliably compare the eye movement patterns of skilled and unskilled performers under the same conditions. Musical complexity. Many researchers have been interested in learning whether fixation durations are influenced by the complexity of the music. At least three types of complexity need to be accounted for in music reading: the visual complexity of the musical notation; the complexity of processing visual input into musculoskeletal commands; and the complexity of executing those commands. For example, visual complexity might be in the form of the density of the notational symbols on the page, or of the presence of accidentals, triplet signs, slurs and other expression markings. The complexity of processing visual input into musculoskeletal commands might involve a lack of 'chunkability' or predictability in the music. The complexity of executing musculoskeletal commands might be seen in terms of the demands of fingering and hand position. It is in isolating and accounting for the interplay between these types that the difficulty lies in making sense of musical complexity. For this reason, little useful information has emerged from investigating the relationship between musical complexity and eye movement. Jacobsen (1941:213) concluded that "the complexity of the reading material influenced the number and the duration of [fixations]"; where the texture, rhythm, key and accidentals were "more difficult", there was, on average, a slowing of tempo and an increase in both the duration and the number of fixations in his participants. However, performance tempos were uncontrolled in this study, so the data on which this conclusion was based are likely to have been contaminated by the slower tempos that were reported for the reading of the more difficult stimuli. Weaver (1943) claimed that fixation durations—which ranged from 270–530 ms—lengthened when the notation was more compact and/or complex, as Jacobsen had found, but did not disclose whether slower tempos were used. Halverson (1974), who controlled tempo more closely, observed a mild opposite effect. Schmidt's (1981) participants used longer fixation durations in reading easier melodies (consistent with Halverson); Goolsby's (1987) data mildly supported Halverson's finding, but only for skilled readers. He wrote "both Jacobsen and Weaver ... in letting participants select their own tempo found the opposite effect of notational complexity". On balance, it appears likely that under controlled temporal conditions, denser and more complex music is associated with a higher number of fixations, of shorter mean duration. This might be explained as an attempt by the music-reading process to provide more frequent 'refreshment' of the material being held in working memory, and may compensate for the need to hold more information in working memory. Reader skill. There is no disagreement among the major studies, from Jacobsen (1941) to Smith (1988), that skilled readers appear to use more and shorter fixations across all conditions than do the unskilled. Goolsby (1987) found that mean 'progressive' (forward-moving) fixation duration was significantly longer (474 versus 377 ms) and mean saccade length significantly greater for the less skilled. 
Although Goolsby did not report the total reading durations of his trials, they can be derived from the mean tempos of his 12 skilled and 12 unskilled participants for each of the four stimuli. His data appear to show that the unskilled played at 93.6% of the tempo of the skilled, and that their mean fixation durations were 25.6% longer. This raises the question as to why skilled readers should distribute more numerous and shorter fixations over a score than the unskilled. Only one plausible explanation appears in the literature. Kinsler &amp; Carpenter (1995) proposed a model for the processing of music notation, based on their data from the reading of rhythm patterns, in which an iconic representation of each fixated image is scanned by a 'processor' and interpreted to a given level of accuracy. The scan ends when this level cannot be reached, its end-point determining the position of the upcoming fixation. The time taken before this decision depends on the complexity of a note, and is presumably shorter for skilled readers, thus promoting more numerous fixations of shorter duration. This model has not been further investigated, and does not explain what "advantage" there is to using short, numerous fixations. Another possible explanation is that skilled readers maintain a larger eye–hand span and therefore hold a larger amount of information in their working memory; thus, they need to refresh that information more frequently from the music score, and may do so by refixating more frequently. Stimulus familiarity. The more familiar readers become with a musical excerpt, the less their reliance on visual input from the score and the correspondingly greater reliance on their stored memory of the music. On logical grounds, it would be expected that this shift would result in fewer and longer fixations. The data from all three studies into eye movement in the reading of increasingly familiar music support this reasoning. York's (1952) participants read each stimulus twice, with each reading preceded by a 28-second silent preview. On average, both skilled and unskilled readers used fewer and longer fixations during the second reading. Goolsby's (1987) participants were observed during three immediately successive readings of the same musical stimulus. Familiarity in these trials appeared to increase fixation duration, but not nearly as much as might have been expected. The second reading produced no significant difference in mean fixation duration (from 422 to 418 ms). On the third encounter, mean fixation duration was higher for both groups (437 ms) but by a barely significant amount, thus mildly supporting York's earlier finding. The smallness of these changes might be explained by the unchallenging reading conditions in the trials. The tempo of MM120 suggested at the start of each of Goolsby's trials appears to be slow for tackling the given melodies, which contained many semibreves and minims, and there may have simply been insufficient pressure to produce significant results. A more likely explanation is that the participants played the stimuli at faster tempos as they grew more familiar with them through the three readings. (The metronome was initially sounded, but was silent during the performances, allowing readers to vary their pace at will.) Thus, it is possible that two influences were at odds with each other: growing familiarity may have promoted low numbers of fixations, and long fixation durations, while faster tempo may have promoted low numbers and short durations. 
This might explain why mean fixation duration fell in the opposite direction to the prediction for the second encounter, and by the third encounter had risen by only 3.55% across both groups. (Smith's (1988) results, reinforced by those of Kinsler &amp; Carpenter (1995), suggest that faster tempos are likely to reduce both the number and duration of fixations in the reading of a single-line melody. If this hypothesis is correct, it may be connected with the possibility that the more familiar a stimulus, the less the workload on the reader's memory.) Top–down/bottom–up question. There was considerable debate from the 1950s to 1970s as to whether eye movement in language reading is solely or mainly influenced by (1) the pre-existing (top–down) behavioural patterns of an individual's reading technique, (2) the nature of the stimulus (bottom–up), or (3) both factors. Rayner et al. (1971) provides a review of the relevant studies. Decades before this debate, Weaver (1943) had set out to determine the (bottom–up) effects of musical texture on eye movement. He hypothesised that vertical compositional patterns in a two-stave keyboard score would promote vertical saccades, and horizontal compositional patterns horizontal saccades. Weaver's participants read a two-part polyphonic stimulus in which the musical patterns were strongly horizontal, and a four-part homophonic stimulus comprising plain, hymn-like chords, in which the compositional patterns were strongly vertical. Weaver was apparently unaware of the difficulty of proving this hypothesis in the light of the continual need to scan up and down between the staves and move forward along the score. Thus, it is unsurprising that the hypothesis was not confirmed. Four decades later, when evidence was being revealed of the bottom–up influence on eye movement in language reading, Sloboda (1985) was interested in the possibility that there might be an equivalent influence on eye movement in music reading, and appeared to assume that Weaver's hypothesis had been confirmed. "Weaver found that [the vertical] pattern was indeed used when the music was homophonic and chordal in nature. When the music was contrapuntal, however, he found fixation sequences which were grouped in horizontal sweeps along a single line, with a return to another line afterwards." To support this assertion, Sloboda quoted two one-bar fragments taken from Weaver's illustrations that do not appear to be representative of the overall examples. Although Sloboda's claim may be questionable, and despite Weaver's failure to find dimensional links between eye movement and stimulus, eye movement in music reading shows clear evidence in most studies—in particular, Truit et al. (1997) and Goolsby (1987)—of the influence of bottom–up graphical features "and" top–down global factors related to the meaning of the symbols. Peripheral visual input. The role of peripheral visual input in language reading remains the subject of much research. Peripheral input in music reading was a particular focus of Truitt et al. (1997). They used the gaze-contingency paradigm to measure the extent of peripheral perception to the right of a fixation. This paradigm involves the spontaneous manipulation of a display in direct response to where the eyes are gazing at any one point of time. Performance was degraded only slightly when four crotchets to the right were presented as the ongoing preview, but significantly when only two crotchets were presented. 
Under these conditions, peripheral input extended over a little more than a four-beat measure, on average. For the less skilled, useful peripheral perception extended from half a beat up to between two and four beats. For the more skilled, useful peripheral perception extended up to five beats. Peripheral visual input in music reading is clearly in need of more investigation, particularly now that the paradigm has become more accessible to researchers. A case could be made that Western music notation has developed in such a way as to optimise the use of peripheral input in the reading process. Noteheads, stems, beams, barlines and other notational symbols are all sufficiently bold and distinctive to be useful when picked up peripherally, even when at some distance from the fovea. The upcoming pitch contour and prevailing rhythmic values of a musical line can typically be ascertained ahead of foveal perception. For example, a run of continuous semiquavers beamed together by two thick, roughly horizontal beams, will convey potentially valuable information about rhythm and texture, whether to the right on the currently fixated stave, or above, or above or below in a neighbouring stave. This is reason enough to suspect that the peripheral preprocessing of notational information is a factor in fluent music reading, just as it has been found to be the case for language reading. This would be consistent with the findings of Smith (1988) and Kinsler &amp; Carpenter (1995), who reported that the eyes do not fixate on every note in the reading of melodies. Refixation. A refixation is a fixation on information that has already been fixated on during the same reading. In the reading of two-stave keyboard music, there are two forms of refixation: (1) up or down within a chord, after the chord has already been inspected on both staves (vertical refixation), and (2) leftward refixation to a previous chord (either back horizontally on the same stave or diagonally to the other stave). These are analogous to Pollatsek &amp; Rayner’s two categories of refixation in the reading of language: (1) “same-word rightward refixation”, i.e., on different syllables in the same word, and (2) “leftward refixation” to previously read words (also known as “regression”). Leftward refixation occurs in music reading at all skill-levels. It involves a saccade back to the previous note/chord (occasionally even back two notes/chords), followed by at least one returning saccade to the right, to regain lost ground. Weaver reported that leftward regressions run from 7% to a substantial 23% of all saccades in the sight-reading of keyboard music. Goolsby and Smith reported significant levels of leftward refixation across all skill-levels in the sight-reading of melodies. Looking at the same information more than once is, "prima facie", a costly behaviour that must be weighed against the need to keep pace with the tempo of the music. Leftward refixation involves a greater investment of time than vertical refixation, and on logical grounds is likely to be considerably less common. For the same reason, the rates of both forms of refixation are likely to be sensitive to tempo, with lower rates at faster speed to meet the demand for making swifter progress across the score. Souter confirmed both of these suppositions in the skilled sight-reading of keyboard music. 
He found that at slow tempo (one chord a second), 23.13% (SD 5.76%) of saccades were involved in vertical refixation compared with 5.05% (4.81%) in leftward refixation ("p" &lt; 0.001). At fast tempo (two chords a second), the rates were 8.15% (SD 4.41%) for vertical refixation compared with 2.41% (2.37%) for leftward refixation ("p" = 0.011). These significant differences occurred even though recovery saccades were included in the counts for leftward refixations, effectively doubling their number. The reduction in the rate of vertical refixation upon the doubling of tempo was highly significant ("p" &lt; 0.001), but the reduction for leftward refixation was not ("p" = 0.209), possibly because of the low baseline. Eye–hand span. The eye–hand span is the distance on the score between where the eyes are looking and where the hands are playing. It can be measured in two ways: in notes (the number of notes between hand and eye; the 'note index'), or in time (the length of time between fixation and performance; the 'time index'). The main findings in relation to the eye–voice span in the reading aloud of language were that (1) a larger span is associated with faster, more skilled readers, (2) a shorter span is associated with greater stimulus-difficulty, and (3) the span appears to vary according to linguistic phrasing. At least eight studies into eye movement in music reading have investigated analogous issues. For example, Jacobsen (1941) measured the average span to the right in the sight singing of melodies as up to two notes for the unskilled and between one and four notes for the skilled, whose faster average tempo in that study raises doubt as to whether skill alone was responsible for this difference. In Weaver (1943:28), the eye–hand span varied greatly, but never exceeded 'a separation of eight successive notes or chords', a figure that seems impossibly large for the reading of keyboard scores. Young (1971) found that both skilled and unskilled participants previewed about one chord ahead of their hands, an uncertain finding in view of the methodological problems in that study. Goolsby (1994) found that skilled sight singers' eyes were on average about four beats ahead of their voice, and less for the unskilled. He claimed that when sight singing, 'skilled music readers look farther ahead in the notation and then back to the point of performance' (p. 77). To put this another way, skilled music readers maintain a larger eye–hand span and are more likely to refixate within it. This association between span size and leftward refixation could arise from a greater need for the refreshment of information in working memory. Furneaux &amp; Land (1999) found that professional pianists' spans are significantly larger than those of amateurs. The time index was significantly affected by the performance tempo: when fast tempos were imposed on performance, all participants showed a reduction in the time index (to about 0.7 s), and slow tempos increased the time index (to about 1.3 s). This means that the length of time that information is stored in the buffer is related to performance tempo rather than ability, but that professionals can fit more information into their buffers. Sloboda (1974, 1977) cleverly applied Levin &amp; Kaplin's (1970) 'light-out' method in an experiment designed to measure the size of the span in music reading. Sloboda (1977) asked his participants to sight read a melody and turned the lights out at an unpredictable point during each reading. 
The participants were instructed to continue playing correctly 'without guessing' for as long as they could after visual input was effectively removed, giving an indication as to how far ahead of their hands they were perceiving at that moment. Here, the span was defined as including peripheral input. Participants were allowed to choose their own performing speed for each piece, introducing a layer of uncertainty into the interpretation of the results. Sloboda reported that there was a tendency for the span to coincide with the musical phrasing, so that 'a boundary just beyond the average span "stretches" the span, and a boundary just before the average "contracts" it' (as reported in Sloboda 1985:72). Good readers, he found, maintain a larger span size (up to seven notes) than do poor readers (up to four notes). Truitt et al. (1997) found that in sight reading melodies on the electronic keyboard, span size averaged a little over one beat and ranged from two beats behind the currently fixated point to an incredibly large 12 beats ahead. The normal range of span size was rather smaller: between one beat behind and three beats ahead of the hands for 88% of the total reading duration, and between 0 and 2 beats ahead for 68% of the duration. Such large ranges, in particular, those that extend leftwards from the point of fixation, may have been due to the 'wandering effect'. For the less skilled, the average span was about half a crotchet beat. For the skilled, the span averaged about two beats and useful peripheral perception extended up to five beats. This, in the view of Rayner &amp; Pollatsek (1997:52), suggests that: "a major constraint on tasks that require translation of complex inputs into continuous motor transcription is [the limited capacity of] short-term memory. If the encoding process gets too far ahead of the output, there is likely to be a loss of material that is stored in the queue." Rayner &amp; Pollatsek (1997:52) explained the size of the eye–hand span as a continuous tug-o-war, as it were, between two forces: (1) the need for material to be held in working memory long enough to be processed into musculoskeletal commands, and (2) the need to limit the demand on span size and therefore the workload in the memory system. They claimed that most music pedagogy supports the first aspect [in advising] the student that the eyes should be well ahead of the hands for effective sight reading. They held that despite such advice, for most readers, the second aspect prevails; that is, the need to limit the workload of the memory system. This, they contended, results in a very small span under normal conditions. Tempo. Smith (1988) found that when tempo is increased, fixations are fewer in number and shorter in mean duration, and that fixations tend to be spaced further apart on the score. Kinsler &amp; Carpenter (1995) investigated the effect of increased tempo in reading rhythmic notation, rather than real melodies. They similarly found that increased tempo causes a decrease in mean fixation duration and an increase in mean saccade amplitude (i.e., the distance on the page between successive fixations). Souter (2001) used novel theory and methodology to investigate the effects of tempo on key variables in the sight reading of highly skilled keyboardists. Eye movement studies have typically measured saccade and fixation durations as separate variables. Souter (2001) used a novel variable: pause duration. 
This is a measure of the duration between the end of one fixation and the end of the next; that is, the sum of the duration of each saccade and of the fixation it leads to. Using this composite variable brings into play a simple relationship between the number of pauses, their mean duration, and the tempo: the number of pauses multiplied by their mean duration equals the total reading duration. In other words, the time taken to read a passage equals the sum of the durations of the individual pauses, or nd = r, where n is the number of pauses, d is their mean duration, and r is the total reading time. Since the total reading duration is inversely proportional to the tempo—double the tempo and the total reading time will be halved—the relationship can be expressed as nd being proportional to 1⁄t, where t is the tempo. This study observed the effect of a change in tempo on the number and mean duration of pauses; thus, now using the letters to represent proportional changes in values, nd = 1⁄t, where n is the proportional change in pause number, d is the proportional change in their mean duration, and t is the proportional change in tempo. This expression describes a number–duration curve, in which the number and mean duration of pauses form a hyperbolic relationship (since neither n nor d ever reaches zero). The curve represents the range of possible ratios for using these variables to adapt to a change in tempo. In Souter (2001), tempo was doubled from the first to the second reading, from 60 to 120 MM; thus, t = 2, and the number–duration curve is described by nd = 0.5. In other words, multiplying the proportional changes in the number and mean duration of pauses between these readings will always give ½. Each participant’s two readings thus corresponded to a point on this curve. Irrespective of the value of t, all number–duration curves pass through three points of theoretical interest: two ‘sole-contribution’ points and one ‘equal-contribution’ point. At each sole-contribution point, a reader has relied entirely on one of the two variables to adapt to a new tempo. In Souter's study, if a participant adapted to the doubling of tempo by using the same number of pauses and halving their mean duration, the reading would fall on the sole-contribution point (1.0,0.5). Conversely, if a participant adapted by halving the number of pauses and maintaining their mean duration, the reading would fall on the other sole-contribution point (0.5,1.0). These two points represent completely one-sided behaviour. On the other hand, if a reader’s adaptation drew on both variables equally, so that multiplying them gives 0.5, they must both equal the reciprocal of the square root of t (since t = 2 in this case, 1⁄√2). The adaptation thus fell on the equal-contribution point: (formula_0, formula_0), equivalent to (0.707,0.707). Predicting where performers would fall on the curve involved considering the possible advantages and disadvantages of using these two adaptive resources. A strategy of relying entirely on altering pause duration to adapt to a new tempo—falling on (1.0,0.5)—would permit the same number of pauses to be used irrespective of tempo. Theoretically, this would enable readers to use a standardised scanpath across a score, whereas if they changed the number of their pauses to adapt to a new tempo, their scanpath would need to be redesigned, sacrificing the benefits of a standardised approach. 
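The arithmetic of the number–duration curve is easy to verify directly. The following Python sketch simply restates the relationships described above for the case t = 2, computing the two sole-contribution points and the equal-contribution point; it introduces no data beyond those relationships.

```python
import math

def duration_ratio(n_ratio, tempo_ratio):
    """Proportional change in mean pause duration implied by nd = 1/t."""
    return 1.0 / (tempo_ratio * n_ratio)

t = 2.0  # tempo doubled, as in the study described above

sole_duration_only = (1.0, duration_ratio(1.0, t))        # keep pause number: (1.0, 0.5)
sole_number_only = (1.0 / t, duration_ratio(1.0 / t, t))  # keep pause duration: (0.5, 1.0)
equal = (1.0 / math.sqrt(t), 1.0 / math.sqrt(t))          # equal contribution: (0.707, 0.707)

print(sole_duration_only, sole_number_only, equal)
```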
There is no doubt that readers are able to change their pause duration and number both from moment to moment and averaged over longer stretches of reading. Musicians typically use a large range of fixation durations within a single reading, even at a stable tempo. Indeed, successive fixation durations appear to vary considerably, and seemingly at random; one fixation might be 200 ms, the next 370 ms, and the next 240 ms. (There are no data on successive pause durations in the literature, so mean fixation duration is cited here as a near-equivalent.) In the light of this flexibility in varying fixation duration, and since the process of picking up, processing and performing the information on the page is elaborate, it might be imagined that readers prefer to use a standardised scanpath. For example, in four-part, hymn-style textures for keyboard, such as were used in Souter (2001), the information on the score is presented as a series of two-note, optically separated units—two allocated to an upper stave and two to a lower stave for each chord. A standardised scanpath might consist of a sequence of ‘saw-tooth’ movements from the upper stave to the lower stave for a chord, then diagonally across to the upper stave and down to the lower stave of the next chord, and so on. However, numerous studies have shown that scanpaths in the reading of a number of musical textures—including melody, four-part hymns, and counterpoint—are not predictable and orderly, but are inherently changeable, with a certain ragged, ad-hoc quality. Music readers appear to turn their backs on the theoretical advantage of a standardised scanpath: they are either flexible or ad hoc when it comes to the number of pauses—just as they are with respect to their pause durations—and do not scan a score in a strict, predetermined manner. Souter hypothesised that the most likely scenario is that both pause duration and number are used to adapt to tempo, and that a number–duration relationship that lies close to the equal-contribution point allows the apparatus the greatest flexibility to adapt to further changes in reading conditions. He reasoned that it may be dysfunctional to use only one of two available adaptive resources, since that would make it more difficult to subsequently use that direction for further adaptation. This hypothesis—that when tempo is increased, the mean number–duration relationship will be in the vicinity of the equal-contribution point—was confirmed by the data in terms of the mean result: when tempo doubled, both the mean number of pauses per chord and the mean pause duration overall fell such that the mean number–duration relationship was (0.705,0.709), close to the equal-contribution point of (0.707, 0.707), with standard deviations of (0.138,0.118). Thus, the stability of scanpath—tenable only when the relationship is (1.0,0.5)—was sacrificed to maintain a relatively stable mean pause duration. This challenged the notion that scanpath (largely or solely) reflects the horizontal or vertical emphasis of the musical texture, as proposed by Sloboda (1985) and Weaver (1943), since these dimensions depend significantly on tempo. 
The first imperative seems obvious: the eyes must maintain a pace across the page that is appropriate to the tempo of the music, and they do this by manipulating the number and durations of fixations, and thereby the scanpath across the score. The second imperative is to provide an appropriate rate of refreshment of the information being stored and processed in working memory by manipulating the number and duration of fixations. This workload appears to be related to tempo, stimulus complexity and stimulus familiarity, and there is strong evidence that the capacity for high workload in relation to these variables is also connected with the skill of the reader. The third imperative is to maintain a span size that is appropriate to the reading conditions. The span must not be so small that there is insufficient time to perceive visual input and process it into musculoskeletal commands; it must not be so large that the capacity of the memory system to store and process information is exceeded. Musicians appear to use oculomotor commands to address all three imperatives simultaneously; the imperatives are in effect mapped onto each other in the reading process. Eye movement thus embodies a fluid set of characteristics that are not only intimately engaged in engineering the optimal visual input to the apparatus, but also in servicing the processing of that information in the memory system. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "1/\\sqrt{2}" } ]
https://en.wikipedia.org/wiki?curid=8128722
813086
Rotational frequency
Number of rotations per unit time &lt;templatestyles src="Hlist/styles.css"/&gt; Rotational frequency, also known as rotational speed or rate of rotation (symbols "ν", lowercase Greek nu, and also "n"), is the frequency of rotation of an object around an axis. Its SI unit is the reciprocal second (s−1); other common units of measurement include the hertz (Hz), cycles per second (cps), and revolutions per minute (rpm). Rotational frequency can be obtained by dividing "angular frequency", ω, by a full turn (2π radians): "ν" = ω/(2π rad). It can also be formulated as the instantaneous rate of change of the number of rotations, "N", with respect to time, "t": "n" = d"N"/d"t" (as per International System of Quantities). Similar to ordinary period, the reciprocal of rotational frequency is the rotation period or period of rotation, "T" = "ν"−1 = "n"−1, with dimension of time (SI unit seconds). Rotational velocity is the vector quantity whose magnitude equals the scalar rotational speed. In the special cases of "spin" (around an axis internal to the body) and "revolution" (external axis), the rotation speed may be called spin speed and revolution speed, respectively. Rotational acceleration is the rate of change of rotational velocity; it has dimension of squared reciprocal time and SI units of squared reciprocal seconds (s−2); thus, it is a normalized version of "angular acceleration" and it is analogous to "chirpyness". Related quantities. Tangential speed formula_1 (Latin letter v), rotational frequency formula_0, and radial distance formula_2, are related by the following equation: formula_3 An algebraic rearrangement of this equation allows us to solve for rotational frequency: formula_4 Thus, the tangential speed will be directly proportional to formula_2 when all parts of a system simultaneously have the same formula_5, as for a wheel, disk, or rigid wand. The direct proportionality of formula_1 to formula_2 is not valid for the planets, because the planets have different rotational frequencies. Regression analysis. Rotational frequency can be used to measure, for example, how fast a motor is running. "Rotational speed" is sometimes used to mean angular frequency rather than the quantity defined in this article. Angular frequency gives the change in angle per time unit, which is given with the unit radian per second in the SI system. Since 2π radians or 360 degrees correspond to a cycle, we can convert angular frequency to rotational frequency by formula_6 where formula_7 is the rotational frequency and formula_8 is the angular frequency. For example, a stepper motor might turn exactly one complete revolution each second. Its angular frequency is 360 degrees per second (360°/s), or 2π radians per second (2π rad/s), while the rotational frequency is 60 rpm. Rotational frequency is not to be confused with tangential speed, despite some relation between the two concepts. Imagine a merry-go-round with a constant rate of rotation. No matter how close to or far from the axis of rotation you stand, your rotational frequency will remain constant. However, your tangential speed does not remain constant. If you stand two meters from the axis of rotation, your tangential speed will be double what it would be if you were standing only one meter from the axis of rotation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
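As a supplement to the relations above (ν = ω/2π, T = 1/ν, v = 2πrν), the Python sketch below performs the conversions for a simple case; the 3000 rpm figure and the 0.10 m radius are arbitrary example values.

```python
import math

def rotational_frequency(omega_rad_per_s):
    """nu = omega / (2*pi): rotational frequency in rev/s from angular frequency in rad/s."""
    return omega_rad_per_s / (2.0 * math.pi)

def tangential_speed(nu_rev_per_s, radius_m):
    """v = 2*pi*r*nu, valid when every part of the system shares the same rotation rate."""
    return 2.0 * math.pi * radius_m * nu_rev_per_s

# Example: a motor shaft turning at 3000 rpm.
nu = 3000.0 / 60.0                # rev/s
omega = 2.0 * math.pi * nu        # rad/s
T = 1.0 / nu                      # rotation period, s
print(nu, rotational_frequency(omega), T)   # 50.0 50.0 0.02
print(tangential_speed(nu, 0.10))           # ~31.4 m/s at r = 0.10 m
```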
[ { "math_id": 0, "text": "\\nu" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "\\begin{align}\nv &= 2\\pi r\\nu \\\\\nv &= r\\omega.\n\\end{align}" }, { "math_id": 4, "text": "\\begin{align}\n\\nu &= v/2\\pi r \\\\\n\\omega &= v/r.\n\\end{align}" }, { "math_id": 5, "text": "\\omega" }, { "math_id": 6, "text": "\\nu = \\omega/2\\pi ," }, { "math_id": 7, "text": "\\nu\\," }, { "math_id": 8, "text": "\\omega\\," } ]
https://en.wikipedia.org/wiki?curid=813086
8132847
Grand mean
The grand mean or pooled mean is the average of the means of several subsamples, as long as the subsamples have the same number of data points. For example, consider several lots, each containing several items. The items from each lot are sampled for a measure of some variable and the means of the measurements from each lot are computed. The mean of the measures from each lot constitutes the subsample mean. The mean of these subsample means is then the grand mean. Example. Suppose there are three groups of numbers: group A has 2, 6, 7, 11, 4; group B has 4, 6, 8, 14, 8; group C has 8, 7, 4, 1, 5. The mean of group A = (2+6+7+11+4)/5 = 6; the mean of group B = (4+6+8+14+8)/5 = 8; the mean of group C = (8+7+4+1+5)/5 = 5. Therefore, the grand mean of all numbers = (6+8+5)/3 = 6.333. Application. Suppose one wishes to determine which states in America have the tallest men. To do so, one measures the height of a suitably sized sample of men in each state. Next, one calculates the means of height for each state, and then the grand mean (the mean of the state means) as well as the corresponding standard deviation of the state means. Now, one has the necessary information for a preliminary determination of which states have abnormally tall or short men by comparing the means of each state to the grand mean ± some multiple of the standard deviation. In ANOVA, there is a similar usage of grand mean to calculate sum of squares (SSQ), a measurement of variation. The total variation is defined as the sum of squared differences between each score and the grand mean (designated as GM), given by the equation formula_0 Discussion. The term "grand mean" is used for two different concepts that should not be confused, namely, the overall mean and the mean of means. The overall mean (in a grouped data set) is equal to the sample mean, namely, formula_1. The mean of means is literally the mean of the "G" group means formula_2 ("g" = 1, ..., "G"), namely, formula_3. If the sample sizes across the "G" groups are equal, then the two statistics coincide. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
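The worked example above translates directly into code. The following sketch also computes the overall (pooled-observation) mean, to illustrate the point made in the Discussion: the two statistics coincide here only because the three groups all contain five values.

```python
def grand_mean(groups):
    """Mean of the group means."""
    group_means = [sum(g) / len(g) for g in groups]
    return sum(group_means) / len(group_means)

def overall_mean(groups):
    """Mean of all observations pooled together."""
    all_values = [x for g in groups for x in g]
    return sum(all_values) / len(all_values)

groups = [
    [2, 6, 7, 11, 4],   # group A, mean 6
    [4, 6, 8, 14, 8],   # group B, mean 8
    [8, 7, 4, 1, 5],    # group C, mean 5
]
print(grand_mean(groups))    # 6.333...
print(overall_mean(groups))  # also 6.333..., since the group sizes are equal
```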
[ { "math_id": 0, "text": "SSQ_{total} = \\sum (X-GM)^2" }, { "math_id": 1, "text": "\\frac{1}{N}\\sum_{i=1}^N x_{ig}" }, { "math_id": 2, "text": "\\bar{x}_g" }, { "math_id": 3, "text": "\\frac{1}{G}\\sum_{g=1}^G \\bar{x}_g" } ]
https://en.wikipedia.org/wiki?curid=8132847
8135659
Particle decay
Spontaneous breakdown of an unstable subatomic particle into other particles In particle physics, particle decay is the spontaneous process of one unstable subatomic particle transforming into multiple other particles. The particles created in this process (the "final state") must each be less massive than the original, although the total invariant mass of the system must be conserved. A particle is unstable if there is at least one allowed final state that it can decay into. Unstable particles will often have multiple ways of decaying, each with its own associated probability. Decays are mediated by one or several fundamental forces. The particles in the final state may themselves be unstable and subject to further decay. The term is typically distinct from radioactive decay, in which an unstable atomic nucleus is transformed into a lighter nucleus accompanied by the emission of particles or radiation, although the two are conceptually similar and are often described using the same terminology. Probability of survival and particle lifetime. Particle decay is a Poisson process, and hence the probability that a particle survives for time "t" before decaying (the survival function) is given by an exponential distribution whose time constant depends on the particle's velocity: formula_0 where formula_1 is the mean lifetime of the particle (when at rest), and formula_2 is the Lorentz factor of the particle. Table of some elementary and composite particle lifetimes. All data are from the Particle Data Group. Decay rate. This section uses natural units, where formula_3 The lifetime of a particle is given by the inverse of its decay rate, formula_4, the probability per unit time that the particle will decay. For a particle of mass "M" and four-momentum "P" decaying into particles with momenta formula_5, the differential decay rate is given by the general formula (expressing Fermi's golden rule) formula_6 where "n" is the number of particles created by the decay of the original, "S" is a combinatorial factor to account for indistinguishable final states (see below), formula_7 is the "invariant matrix element" or amplitude connecting the initial state to the final state (usually calculated using Feynman diagrams), formula_8 is an element of the phase space, and formula_9 is the four-momentum of particle "i". The factor "S" is given by formula_10 where "m" is the number of sets of indistinguishable particles in the final state, and formula_11 is the number of particles of type "j", so that formula_12. The phase space can be determined from formula_13 where formula_14 is a four-dimensional Dirac delta function, formula_15 is the (three-)momentum of particle "i", and formula_16 is the energy of particle "i". One may integrate over the phase space to obtain the total decay rate for the specified final state. If a particle has multiple decay branches or "modes" with different final states, its full decay rate is obtained by summing the decay rates for all branches. The branching ratio for each mode is given by its decay rate divided by the full decay rate. Two-body decay. This section uses natural units, where formula_3 Decay rate. Say a parent particle of mass "M" decays into two particles, labeled 1 and 2. In the rest frame of the parent particle, formula_17 which is obtained by requiring that four-momentum be conserved in the decay, i.e. 
formula_18 Also, in spherical coordinates, formula_19 Using the delta function to perform the formula_20 and formula_21 integrals in the phase-space for a two-body final state, one finds that the decay rate in the rest frame of the parent particle is formula_22 From two different frames. The angle of an emitted particle in the lab frame is related to the angle at which it is emitted in the center-of-momentum frame by the equation formula_23 Complex mass and decay rate. This section uses natural units, where formula_3 The mass of an unstable particle is formally a complex number, with the real part being its mass in the usual sense, and the imaginary part being its decay rate in natural units. When the imaginary part is large compared to the real part, the particle is usually thought of as a resonance more than a particle. This is because in quantum field theory a particle of mass M (a real number) is often exchanged between two other particles when there is not enough energy to create it, if the time to travel between these other particles is short enough, of order 1/M, according to the uncertainty principle. For a particle of mass formula_24, the particle can travel for time 1/M, but decays after a time of the order of formula_25. If formula_26 then the particle usually decays before it completes its travel. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
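Two of the closed-form results used in this article—the survival probability quoted at the start and the daughter-momentum magnitude formula_17 in a two-body decay—are straightforward to evaluate numerically. The sketch below does both; the muon lifetime and the pion/muon masses used in the examples are standard published values, quoted here purely for illustration.

```python
import math

def survival_probability(t, tau, beta):
    """P(t) = exp(-t / (gamma * tau)) for a particle with rest-frame mean lifetime
    tau, observed over lab-frame time t while moving at speed beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return math.exp(-t / (gamma * tau))

def two_body_momentum(M, m1, m2):
    """Magnitude of either daughter's momentum in the parent rest frame
    (natural units, c = 1), from conservation of four-momentum."""
    num = (M ** 2 - (m1 + m2) ** 2) * (M ** 2 - (m1 - m2) ** 2)
    if num < 0:
        raise ValueError("decay is kinematically forbidden: M < m1 + m2")
    return math.sqrt(num) / (2.0 * M)

# Example 1: a muon (tau ~ 2.197e-6 s) at beta = 0.98, watched for 10 microseconds.
print(survival_probability(10e-6, 2.197e-6, 0.98))   # about 0.40

# Example 2: pi+ -> mu+ + nu_mu, masses in MeV (c = 1); |p| is about 29.8 MeV.
print(two_body_momentum(139.57, 105.66, 0.0))
```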
[ { "math_id": 0, "text": "P(t) = \\exp\\left(-\\frac{t}{\\gamma \\tau}\\right)" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\\gamma = \\frac{1}{\\sqrt{1-\\frac{v^2}{c^2}}}" }, { "math_id": 3, "text": "c=\\hbar=1. \\," }, { "math_id": 4, "text": "\\Gamma" }, { "math_id": 5, "text": "p_i" }, { "math_id": 6, "text": "d \\Gamma_n = \\frac{S \\left|\\mathcal{M} \\right|^2}{2M} d \\Phi_n (P; p_1, p_2,\\dots, p_n) \\," }, { "math_id": 7, "text": "\\mathcal{M}\\," }, { "math_id": 8, "text": "d\\Phi_n \\," }, { "math_id": 9, "text": "p_i \\," }, { "math_id": 10, "text": "S = \\prod_{j=1}^m \\frac{1}{k_j!}\\," }, { "math_id": 11, "text": "k_j \\," }, { "math_id": 12, "text": "\\sum_{j=1}^m k_j = n \\," }, { "math_id": 13, "text": "d \\Phi_n (P; p_1, p_2,\\dots, p_n) = (2\\pi)^4 \\delta^4\\left(P - \\sum_{i=1}^n p_i\\right) \\prod_{i=1}^n \\frac{d^3 \\vec{p}_i}{2(2\\pi)^3 E_i}" }, { "math_id": 14, "text": "\\delta^4 \\," }, { "math_id": 15, "text": "\\vec{p}_i \\," }, { "math_id": 16, "text": "E_i \\," }, { "math_id": 17, "text": "|\\vec{p}_1| = |\\vec{p}_2| = \\frac{[(M^2 - (m_1 + m_2)^2)(M^2 - (m_1 - m_2)^2)]^{1/2}}{2M}, \\," }, { "math_id": 18, "text": "(M, \\vec{0}) = (E_1, \\vec{p}_1) + (E_2, \\vec{p}_2).\\," }, { "math_id": 19, "text": "d^3 \\vec{p} = |\\vec{p}\\,|^2\\, d|\\vec{p}\\,|\\, d\\phi\\, d\\left(\\cos \\theta \\right). \\," }, { "math_id": 20, "text": "d^3 \\vec{p}_2" }, { "math_id": 21, "text": "d|\\vec{p}_1|\\," }, { "math_id": 22, "text": "d\\Gamma = \\frac{ \\left| \\mathcal{M} \\right|^2}{32 \\pi^2} \\frac{|\\vec{p}_1|}{M^2}\\, d\\phi_1\\, d\\left( \\cos \\theta_1 \\right). \\," }, { "math_id": 23, "text": "\\tan{\\theta'} = \\frac{\\sin{\\theta}}{\\gamma \\left(\\beta / \\beta' + \\cos{\\theta} \\right)}" }, { "math_id": 24, "text": "\\scriptstyle M+i\\Gamma" }, { "math_id": 25, "text": "\\scriptstyle 1/\\Gamma" }, { "math_id": 26, "text": "\\scriptstyle \\Gamma > M" } ]
https://en.wikipedia.org/wiki?curid=8135659
8136395
Samuel S. Wilks
American mathematician (1906–1964) Samuel Stanley Wilks (June 17, 1906 – March 7, 1964) was an American mathematician and academic who played an important role in the development of mathematical statistics, especially in regard to practical applications. Early life and education. Wilks was born in Little Elm, Texas and raised on a farm. He studied Industrial Arts at the North Texas State Teachers College in Denton, Texas, obtaining his bachelor's degree in 1926. He received his master's degree in mathematics in 1928 from the University of Texas. He obtained his Ph.D. at the University of Iowa under Everett F. Lindquist; his thesis dealt with a problem of statistical measurement in education, and was published in the "Journal of Educational Psychology". Career. Wilks became an instructor in mathematics at Princeton University in 1933; in 1938 he assumed the editorship of the journal "Annals of Mathematical Statistics" in place of Harry C. Carver. Wilks assembled an advisory board for the journal that included major figures in statistics and probability, among them Ronald Fisher, Jerzy Neyman, and Egon Pearson. During World War II he was a consultant with the Office of Naval Research. Both during and after the War he had a profound impact on the application of statistical methods to all aspects of military planning. Wilks was named professor of mathematics and director of the Section of Mathematical Statistics at Princeton in 1944, and became chairman of the Division of Mathematics at the university in 1958. Wilks died in 1964 in Princeton. Work in mathematical statistics. He was noted for his work on multivariate statistics. He also conducted work on unit-weighted regression, proving the idea that under a wide variety of common conditions, almost all sets of weights will yield composites that are very highly correlated (Wilks, 1938), a result that has been dubbed Wilks's theorem (Ree, Carretta, &amp; Earles, 1998). Another result, also called “Wilks' theorem” occurs in the theory of likelihood ratio tests, where Wilks showed the distribution of log likelihood ratios is asymptotically formula_0. From the start of his career, Wilks favored a strong focus on practical applications for the increasingly abstract field of mathematical statistics; he also influenced other researchers, notably John Tukey, in a similar direction. Drawing upon the background of his thesis, Wilks worked with the Educational Testing Service in developing the standardized tests like the SAT that have had a profound effect on American education. He also worked with Walter Shewhart on statistical applications in quality control in manufacturing. Wilks's lambda distribution is a probability distribution related to two independent Wishart distributed variables. It is important in multivariate statistics and likelihood-ratio tests. Honors. The American Statistical Association named its Wilks Memorial Award in his honor. Wilks was elected to the American Philosophical Society in 1948 and the American Academy of Arts and Sciences in 1963. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\chi^2" } ]
https://en.wikipedia.org/wiki?curid=8136395
8136546
Thin plate spline
Thin plate splines (TPS) are a spline-based technique for data interpolation and smoothing. "A spline is a function defined by polynomials in a piecewise manner." They were introduced to geometric design by Duchon. They are an important special case of a polyharmonic spline. Robust Point Matching (RPM) is a common extension and shortly known as the TPS-RPM algorithm. Physical analogy. The name "thin plate spline" refers to a physical analogy involving the bending of a plate or thin sheet of metal. Just as the metal has rigidity, the TPS fit resists bending also, implying a penalty involving the smoothness of the fitted surface. In the physical setting, the deflection is in the formula_0 direction, orthogonal to the plane. In order to apply this idea to the problem of coordinate transformation, one interprets the lifting of the plate as a displacement of the formula_1 or formula_2 coordinates within the plane. In 2D cases, given a set of formula_3 corresponding control points (knots), the TPS warp is described by formula_4 parameters which include 6 global affine motion parameters and formula_5 coefficients for correspondences of the control points. These parameters are computed by solving a linear system, in other words, TPS has a closed-form solution. Smoothness measure. The TPS arises from consideration of the integral of the square of the second derivative—this forms its smoothness measure. In the case where formula_1 is two dimensional, for interpolation, the TPS fits a mapping function formula_6 between corresponding point-sets formula_7 and formula_8 that minimizes the following energy function: formula_9 The smoothing variant, correspondingly, uses a tuning parameter formula_10 to control the rigidity of the deformation, balancing the aforementioned criterion with the measure of goodness of fit, thus minimizing: formula_11 For this variational problem, it can be shown that there exists a unique minimizer formula_12 . The finite element discretization of this variational problem, the method of elastic maps, is used for data mining and nonlinear dimensionality reduction. In simple words, "the first term is defined as the error measurement term and the second regularisation term is a penalty on the smoothness of formula_12." It is in a general case needed to make the mapping unique. Radial basis function. The thin plate spline has a natural representation in terms of radial basis functions. Given a set of control points formula_13, a radial basis function defines a spatial mapping which maps any location formula_1 in space to a new location formula_6, represented by formula_14 where formula_15 denotes the usual Euclidean norm and formula_16 is a set of mapping coefficients. The TPS corresponds to the radial basis kernel formula_17. Spline. Suppose the points are in 2 dimensions (formula_18). One can use "homogeneous coordinates" for the point-set where a point formula_19 is represented as a vector formula_20. The unique minimizer formula_12 is parameterized by formula_21 which consists of two matrices formula_22 and formula_23 (formula_24). formula_25 where d is a formula_26 matrix representing the affine transformation (hence formula_0 is a formula_27 vector) and c is a formula_28 warping coefficient matrix representing the non-affine deformation. The kernel function formula_29 is a formula_30 vector for each point formula_0, where each entry formula_31. 
Note that for TPS, the control points formula_32 are chosen to be the same as the set of points to be warped formula_8, so we already use formula_8 in the place of the control points. If one substitutes the solution for formula_12, formula_33 becomes: formula_34 where formula_35 and formula_36 are just concatenated versions of the point coordinates formula_37 and formula_38, and formula_39 is a formula_40 matrix formed from the formula_41. Each row of each newly formed matrix comes from one of the original vectors. The matrix formula_39 represents the TPS kernel. Loosely speaking, the TPS kernel contains the information about the point-set's internal structural relationships. When it is combined with the warping coefficients formula_23, a non-rigid warping is generated. A nice property of the TPS is that it can always be decomposed into a global affine and a local non-affine component. Consequently, the TPS smoothness term is solely dependent on the non-affine components. This is a desirable property, especially when compared to other splines, since the global pose parameters included in the affine transformation are not penalized. Applications. TPS has been widely used as the non-rigid transformation model in image alignment and shape matching. An additional application is the analysis and comparisons of archaeological findings in 3D and was implemented for triangular meshes in the GigaMesh Software Framework. The thin plate spline has a number of properties which have contributed to its popularity: However, note that splines already in one dimension can cause severe "overshoots". In 2D such effects can be much more critical, because TPS are not objective.
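The closed-form fit described above can be sketched numerically. The following Python/NumPy code is an illustrative sketch only (function and variable names are not taken from the cited literature): it assembles the kernel matrix from formula_17, appends the homogeneous coordinates of the control points, and solves the usual bordered linear system for the warping coefficients formula_23 and the affine part formula_22, under the standard side conditions that the warping coefficients are orthogonal to the affine polynomials. The affine part is written here as a 3-by-2 matrix acting on homogeneous input coordinates, and setting the tuning parameter lam to a positive value gives the smoothing variant.

import numpy as np

def tps_kernel(r):
    # U(r) = r^2 log r, with U(0) = 0 by convention
    out = np.zeros_like(r)
    mask = r > 0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def fit_tps(x, y, lam=0.0):
    # Fit a 2-D thin plate spline mapping control points x (K,2) to targets y (K,2).
    # Returns the warping coefficients c (K,2) and the affine part d (3,2).
    K = x.shape[0]
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    Phi = tps_kernel(r)                      # K x K TPS kernel matrix
    P = np.hstack([np.ones((K, 1)), x])      # homogeneous coordinates (1, x1, x2)
    # Bordered linear system; lam > 0 gives the smoothing (regularised) variant
    A = np.zeros((K + 3, K + 3))
    A[:K, :K] = Phi + lam * np.eye(K)
    A[:K, K:] = P
    A[K:, :K] = P.T
    b = np.vstack([y, np.zeros((3, 2))])
    sol = np.linalg.solve(A, b)
    return sol[:K], sol[K:]                  # c, d

def tps_warp(z, x, c, d):
    # Apply the fitted spline to new points z (M,2).
    r = np.linalg.norm(z[:, None, :] - x[None, :, :], axis=-1)
    return np.hstack([np.ones((len(z), 1)), z]) @ d + tps_kernel(r) @ c

# toy usage: four corner points held fixed, one interior point displaced
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [.5, .5]])
y = x.copy(); y[4] = [.6, .4]
c, d = fit_tps(x, y)
print(tps_warp(x, x, c, d))   # reproduces y exactly (interpolation, lam = 0)

With lam = 0 the fitted map interpolates the control points exactly; increasing lam trades fitting accuracy for smoothness, as in the energy formula_11.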
[ { "math_id": 0, "text": "z" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "K" }, { "math_id": 4, "text": "2(K+3)" }, { "math_id": 5, "text": "2K" }, { "math_id": 6, "text": "f(x)" }, { "math_id": 7, "text": "\\{y_i\\}" }, { "math_id": 8, "text": "\\{x_i\\}" }, { "math_id": 9, "text": "\n\tE_{\\mathrm{tps}}(f) = \\sum_{i=1}^K \\|y_i - f(x_i) \\|^2\n" }, { "math_id": 10, "text": "\\lambda" }, { "math_id": 11, "text": "\n\tE_{\\mathrm{tps},\\mathrm{smooth}}(f) = \\sum_{i=1}^K \\|y_i - f(x_i) \\|^2 + \\lambda \\iint\\left[\\left(\\frac{\\partial^2 f}{\\partial x_1^2}\\right)^2 + 2\\left(\\frac{\\partial^2 f}{\\partial x_1 \\partial x_2}\\right)^2 + \\left(\\frac{\\partial^2 f}{\\partial x_2^2}\\right)^2 \\right] \\textrm{d} x_1 \\, \\textrm{d}x_2\n" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "\\{c_{i}, i = 1,2, \\ldots,K\\}" }, { "math_id": 14, "text": "\n\tf(x) = \\sum_{i = 1}^K w_{i}\\varphi(\\left\\| x - c_{i}\\right\\|)\n" }, { "math_id": 15, "text": "\\left\\|\\cdot\\right\\|" }, { "math_id": 16, "text": "\\{w_{i}\\}" }, { "math_id": 17, "text": "\\varphi(r) = r^2 \\log r" }, { "math_id": 18, "text": "D = 2" }, { "math_id": 19, "text": "y_{i}" }, { "math_id": 20, "text": "(1, y_{ix}, y_{iy})" }, { "math_id": 21, "text": "\\alpha" }, { "math_id": 22, "text": "d" }, { "math_id": 23, "text": "c" }, { "math_id": 24, "text": "\\alpha = \\{d,c\\}" }, { "math_id": 25, "text": "\n\tf_{tps}(z, \\alpha) = f_{tps}(z, d, c) = z\\cdot d + \\phi(z) \\cdot c = z\\cdot d + \\sum_{i = 1}^K \\phi_i(z) c_i\t\n" }, { "math_id": 26, "text": "(D+1)\\times(D+1)" }, { "math_id": 27, "text": "1\\times (D+1)" }, { "math_id": 28, "text": "K\\times (D+1)" }, { "math_id": 29, "text": "\\phi(z)" }, { "math_id": 30, "text": "1\\times K" }, { "math_id": 31, "text": "\\phi_i(z) = \\|z - x_i\\|^2 \\log \\|z - x_i\\|" }, { "math_id": 32, "text": "\\{c_i\\}" }, { "math_id": 33, "text": "E_{tps}" }, { "math_id": 34, "text": "\n\tE_{tps}(d,c) = \\|Y - Xd - \\Phi c\\|^2 + \\lambda c^T\\Phi c\n" }, { "math_id": 35, "text": "Y" }, { "math_id": 36, "text": "X" }, { "math_id": 37, "text": "y_i" }, { "math_id": 38, "text": "x_i" }, { "math_id": 39, "text": "\\Phi" }, { "math_id": 40, "text": "(K\\times K)" }, { "math_id": 41, "text": "\\phi (\\|x_i - x_j\\|)" } ]
https://en.wikipedia.org/wiki?curid=8136546
8136831
Hilbert basis (linear programming)
The Hilbert basis of a convex cone "C" is a minimal set of integer vectors in "C" such that every integer vector in "C" is a conical combination of the vectors in the Hilbert basis with integer coefficients. Definition. Given a lattice formula_0 and a convex polyhedral cone with generators formula_1 formula_2 we consider the monoid formula_3. By Gordan's lemma, this monoid is finitely generated, i.e., there exists a finite set of lattice points formula_4 such that every lattice point formula_5 is an integer conical combination of these points: formula_6 The cone "C" is called pointed if formula_7 implies formula_8. In this case there exists a unique minimal generating set of the monoid formula_3—the Hilbert basis of "C". It is given by the set of irreducible lattice points: An element formula_5 is called irreducible if it can not be written as the sum of two non-zero elements, i.e., formula_9 implies formula_10 or formula_11.
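As a worked example, consider the cone in the plane generated by (1, 0) and (1, 2); its lattice points are exactly the pairs (x, y) of non-negative integers with y ≤ 2x. The point (1, 1) lies in the cone but is not an integer conical combination of the two generators (such combinations a(1, 0) + b(1, 2) always have an even second coordinate), so it must belong to the Hilbert basis; in fact the Hilbert basis is {(1, 0), (1, 1), (1, 2)}. The following brute-force Python sketch is illustrative only: it assumes the generators lie in the non-negative orthant (so every summand in a decomposition is componentwise smaller than the point being decomposed) and that the basis elements fit in the search box; the cone-membership test via a least-squares solve is exact here because the two generators are linearly independent.

import itertools
import numpy as np

def hilbert_basis_bruteforce(generators, bound):
    # Enumerate the lattice points of the cone inside [0, bound]^2 and keep
    # the irreducible ones (not a sum of two nonzero cone lattice points).
    gens = np.array(generators, dtype=float).T   # columns are the generators

    def in_cone(p):
        lam = np.linalg.lstsq(gens, np.array(p, dtype=float), rcond=None)[0]
        return bool(np.all(lam >= -1e-9) and np.allclose(gens @ lam, p))

    points = [p for p in itertools.product(range(bound + 1), repeat=2)
              if p != (0, 0) and in_cone(p)]
    pts = set(points)
    basis = []
    for p in points:
        reducible = any((p[0] - q[0], p[1] - q[1]) in pts
                        for q in points if q != p)
        if not reducible:
            basis.append(p)
    return basis

print(hilbert_basis_bruteforce([(1, 0), (1, 2)], bound=4))
# prints [(1, 0), (1, 1), (1, 2)]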
[ { "math_id": 0, "text": "L\\subset\\mathbb{Z}^d" }, { "math_id": 1, "text": "a_1,\\ldots,a_n\\in\\mathbb{Z}^d" }, { "math_id": 2, "text": "C=\\{ \\lambda_1 a_1 + \\ldots + \\lambda_n a_n \\mid \\lambda_1,\\ldots,\\lambda_n \\geq 0, \\lambda_1,\\ldots,\\lambda_n \\in\\mathbb{R}\\}\\subset\\mathbb{R}^d," }, { "math_id": 3, "text": "C\\cap L" }, { "math_id": 4, "text": "\\{x_1,\\ldots,x_m\\}\\subset C\\cap L" }, { "math_id": 5, "text": "x\\in C\\cap L" }, { "math_id": 6, "text": " x=\\lambda_1 x_1+\\ldots+\\lambda_m x_m, \\quad\\lambda_1,\\ldots,\\lambda_m\\in\\mathbb{Z}, \\lambda_1,\\ldots,\\lambda_m\\geq0." }, { "math_id": 7, "text": "x,-x\\in C" }, { "math_id": 8, "text": "x=0" }, { "math_id": 9, "text": "x=y+z" }, { "math_id": 10, "text": "y=0" }, { "math_id": 11, "text": "z=0" } ]
https://en.wikipedia.org/wiki?curid=8136831
8139165
Tile tracking
Gaming strategy Tile tracking is a technique most commonly associated with "Scrabble" and similar word games. It refers to the practice of keeping track of letters played on the game board, typically by crossing letters off a score sheet or tracking grid as the tiles are played. Tracking tiles can be an important aid to strategy, especially during the endgame when there are no tiles left to draw, where careful tracking allows each player to deduce the remaining unseen letters on the opponent's final rack. The marking off of each letter from a pre-printed tracking grid as the tiles are played is a standard feature of tournament play. Tracking sheets come in many varieties, and are often customized by players in an attempt to make the manual process of recording, tracking and counting tiles easier, more intuitive, and less prone to error. Tile-tracking tools. Accurate tile-tracking depends upon knowing the total letter distribution and letter frequency of the game and then reproducing each tile in its correct frequency to facilitate the "accounting" of each letter as tiles are played. Pre-printed forms are popular for "Scrabble" games because they eliminate the need to continuously create such a list for each new game. While "Scrabble"-branded 'tracker sheets' are available for purchase, customized tracking sheets of all types can be found freely available for downloading online, usually in .pdf format to facilitate printing. "Scrabble" club websites are the most common source for free pre-printed tracking sheets, and players often create their own tracking sheets. Technology. The introduction of "Scrabble" for computers saw the practical application of automated score-keeping, but in terms of new tile-tracking tools, the rapid technological advances brought about by computers and the internet seemed at first only to serve the wider distribution of pre-printed tracking grids via downloads. Some websites offer online tile-tracking alternatives to paper and pencil. Users are required to manually input played tiles via the keyboard, and the input is then subtracted from a separate letter pool representing total tile distribution. The user then counts or otherwise calculates the status of the remaining tiles. Notes are made on a separate piece of paper as necessary. The user is still tracking and counting tiles manually, and the risk of receiving an inaccurate count through human error remains roughly equal to the non-technological option. The migration of "Scrabble" to mobile devices and the popularity of the digital-exclusive "Words with Friends" have seen the introduction of a dedicated tile-tracking app exclusively for games played on mobile devices that automates the process of tracking tiles and requires no manual input. Strategy. Advantages. The benefits of tracking and counting tiles are widely known among competitive "Scrabble" players, and tile tracking is considered a standard part of competitive play. By tracking played tiles, players can learn more about what tiles remain unseen (either in the bag or on their opponent's rack), and can use that information to make strategic decisions about what tiles to hold, which squares to block, and which tiles to play to create advantages. Tile tracking provides much of the data required to make many of the strategic decisions a player makes in the course of a game, and it has a key role in other strategic elements, including rack and board management.
It is considered especially critical in the end game (when there are no tiles left in the bag and seven or fewer tiles on each player's rack). At this point, ‘"Scrabble" is chess’ If both players have tracked the tiles correctly, each knows what tiles sit in the other's final rack. Assuming a close game, the win will go to the player who can plan and calculate the best move while taking into consideration the other player's possible responses—winning a close game by blocking an opponent's big play or setting up a high-scoring play the opponent cannot block. "Scrabble" champion John Holgate notes that tile-tracking is particularly important in tight "bobbing" finishes when the bag is empty, and that many games are lost through "just not knowing what tiles your opponent has on her rack. You simply cannot calculate possible permutations if you are unsure about which letters are relevant." Tracking can also tell a player if the bag is ‘vowel-heavy’ or 'consonant-heavy', how many S's or blanks are unseen, if an opponent is likely to have a bingo on their rack, or which tiles to play or conserve. For example, if Alice has the option of playing J(I)NN or J(I)LL, and four N's and zero L's remain unseen, then assuming both plays are otherwise equal (in terms of score, openings for the opponent), J(I)NN is likely the better play. Letter frequency lists for both "Scrabble" and "Words with Friends" are easily accessible (some versions of "Scrabble" have the tile distribution directly on the board) and, unlike word lists, using them is not against tournament rules. Whether done mentally, using a paper and pencil to track tiles, accessing a website program or a tile-counting app on mobile devices, every player has the same level of access to the same amount of readily available data——and even those unfamiliar with tile tracking as a studied technique are 'tracking tiles' every time they note that the 'Q' is still unseen or when they count the number of 'S's on the board before playing a word that can take an S hook to the opponent's advantage. Disadvantages. Manual tile tracking can take away game time that would otherwise be used for finding words and making decisions about where to play them. Inaccurate tile tracking can lead to mistakes, such as setting up a spot for the player's 'S' when the opponent also has an 'S', or failing to block a winning play from the opponent. John Holgate, five-time winner of the Australian Championship recalls winning a game in the 1993 World Championship because his opponent inadvertently crossed off two ‘S’s with one stroke and failed to block the last S-hook. The traditional method of manual tracking and counting tiles is generally understood to be a tedious and time-consuming practice. As a result, some players track tiles in a simplified manner, usually by mentally tracking and counting the letters considered ‘key’ in any game: the Q, J, Z, X, the esses and the blanks. Often, it can take several months to a year for players to track all 100 tiles consistently without affecting their game play. Outside the "Scrabble" community. The idea that an opponent can 'know what's in your rack' or have knowledge to what tiles remain unseen has been labeled in some discussions and reader comments related to Words with Friends as 'outside help' or 'borderline cheating.' 
While the practice of tile-tracking is considered an acceptable part of "Scrabble" and sanctioned by NSA and NASPA rules, there is much colloquial evidence to suggest that tile-tracking as both a legitimate technique and a strategic tool is not widely known outside of the "Scrabble" community. The similarity of the technique to 'card counting' is credited with contributing to some of the confusion among novice players. Zynga's Words with Friends online game includes tile-tracking as a paid feature, known as 'Peeks'. This received some criticism as being a form of "cheating" that Zynga both condones and profits from. The author of one such critique concluded by lamenting that Zynga's strategy was a threat to the integrity of the game, because of what it might cause 'loyal Scrabblers' to 'think' about the Words with Friends player: "I seriously doubt that loyal Scrabblers are going to be happy when they find out that the reason their friend has been winning lately is because he paid an extra $10 to have an advantage". Examples. The value of tracking and counting tiles becomes apparent once a player understands that the game of "Scrabble" is as much about math as it is about vocabulary. For example, if two of the ten tiles remaining unseen are letters a player needs, the chance that a single tile drawn from the bag is one of them is formula_0; the chance of picking up at least one of them when drawing two tiles is formula_1, and when drawing three tiles it rises to formula_2. Once possible plays have been discovered, the strategic decisions to be made have been described as being as "dark and complex as a forest." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{2}{10} = 20\\% " }, { "math_id": 1, "text": "1 - \\frac{\\binom{8}{2}}{\\binom{10}{2}} \\approx 37.8 \\%" }, { "math_id": 2, "text": "1 - \\frac{\\binom{8}{3}}{\\binom{10}{3}} \\approx 53.3 \\%" } ]
https://en.wikipedia.org/wiki?curid=8139165
8140539
Adherent point
Point that belongs to the closure of some given subset of a topological space In mathematics, an adherent point (also closure point or point of closure or contact point) of a subset formula_0 of a topological space formula_1 is a point formula_2 in formula_3 such that every neighbourhood of formula_2 (or equivalently, every open neighborhood of formula_2) contains at least one point of formula_4 A point formula_5 is an adherent point for formula_0 if and only if formula_2 is in the closure of formula_6 thus formula_7 if and only if for all open subsets formula_8 if formula_9 This definition differs from that of a limit point of a set, in that for a limit point it is required that every neighborhood of formula_2 contains at least one point of formula_0 different from formula_10 Thus every limit point is an adherent point, but the converse is not true. An adherent point of formula_0 is either a limit point of formula_0 or an element of formula_0 (or both). An adherent point which is not a limit point is an isolated point. Intuitively, having an open set formula_0 defined as the area within (but not including) some boundary, the adherent points of formula_0 are those of formula_0 including the boundary. Examples and sufficient conditions. If formula_11 is a non-empty subset of formula_12 which is bounded above, then the supremum formula_13 is adherent to formula_14 In the interval formula_15 formula_16 is an adherent point that is not in the interval, with usual topology of formula_17 A subset formula_11 of a metric space formula_18 contains all of its adherent points if and only if formula_11 is (sequentially) closed in formula_19 Adherent points and subspaces. Suppose formula_5 and formula_20 where formula_3 is a topological subspace of formula_21 (that is, formula_3 is endowed with the subspace topology induced on it by formula_21). Then formula_2 is an adherent point of formula_11 in formula_3 if and only if formula_2 is an adherent point of formula_11 in formula_22 Consequently, formula_2 is an adherent point of formula_11 in formula_3 if and only if this is true of formula_2 in every (or alternatively, in some) topological superspace of formula_24 Adherent points and sequences. If formula_11 is a subset of a topological space then the limit of a convergent sequence in formula_11 does not necessarily belong to formula_25 however it is always an adherent point of formula_14 Let formula_26 be such a sequence and let formula_2 be its limit. Then by definition of limit, for all neighbourhoods formula_23 of formula_2 there exists formula_27 such that formula_28 for all formula_29 In particular, formula_30 and also formula_31 so formula_2 is an adherent point of formula_14 In contrast to the previous example, the limit of a convergent sequence in formula_11 is not necessarily a limit point of formula_11; for example consider formula_32 as a subset of formula_17 Then the only sequence in formula_11 is the constant sequence formula_33 whose limit is formula_34 but formula_35 is not a limit point of formula_36 it is only an adherent point of formula_14 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "X," }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "A." }, { "math_id": 5, "text": "x \\in X" }, { "math_id": 6, "text": "A," }, { "math_id": 7, "text": "x \\in \\operatorname{Cl}_X A" }, { "math_id": 8, "text": "U \\subseteq X," }, { "math_id": 9, "text": "x \\in U \\text{ then } U \\cap A \\neq \\varnothing." }, { "math_id": 10, "text": "x." }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "\\R" }, { "math_id": 13, "text": "\\sup S" }, { "math_id": 14, "text": "S." }, { "math_id": 15, "text": "(a, b]," }, { "math_id": 16, "text": "a" }, { "math_id": 17, "text": "\\R." }, { "math_id": 18, "text": "M" }, { "math_id": 19, "text": "M." }, { "math_id": 20, "text": "S \\subseteq X \\subseteq Y," }, { "math_id": 21, "text": "Y" }, { "math_id": 22, "text": "Y." }, { "math_id": 23, "text": "U" }, { "math_id": 24, "text": "X." }, { "math_id": 25, "text": "S," }, { "math_id": 26, "text": "\\left(x_n\\right)_{n \\in \\N}" }, { "math_id": 27, "text": "n \\in \\N" }, { "math_id": 28, "text": "x_n \\in U" }, { "math_id": 29, "text": "n \\geq N." }, { "math_id": 30, "text": "x_N \\in U" }, { "math_id": 31, "text": "x_N \\in S," }, { "math_id": 32, "text": "S = \\{ 0 \\}" }, { "math_id": 33, "text": "0, 0, \\ldots" }, { "math_id": 34, "text": "0," }, { "math_id": 35, "text": "0" }, { "math_id": 36, "text": "S;" } ]
https://en.wikipedia.org/wiki?curid=8140539
8140616
Dvoretzky–Kiefer–Wolfowitz inequality
Statistical inequality In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality (DKW inequality) provides a bound on the worst case distance of an empirically determined distribution function from its associated population distribution function. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who in 1956 proved the inequality formula_0 with an unspecified multiplicative constant "C" in front of the exponent on the right-hand side. In 1990, Pascal Massart proved the inequality with the sharp constant "C" = 2, confirming a conjecture due to Birnbaum and McCarty. In 2021, Michael Naaman proved the multivariate version of the DKW inequality and generalized Massart's tightness result to the multivariate case, which results in a sharp constant of twice the dimension "k" of the space in which the observations are found: "C" = 2"k". The DKW inequality. Given a natural number "n", let "X"1, "X"2, …, "Xn" be real-valued independent and identically distributed random variables with cumulative distribution function "F"(·). Let "Fn" denote the associated empirical distribution function defined by formula_1 so formula_2 is the "probability" that a "single" random variable formula_3 is smaller than formula_4, and formula_5 is the "fraction" of random variables that are smaller than formula_4. The Dvoretzky–Kiefer–Wolfowitz inequality bounds the probability that the random function "Fn" differs from "F" by more than a given constant "ε" &gt; 0 anywhere on the real line. More precisely, there is the one-sided estimate formula_6 which also implies a two-sided estimate formula_7 This strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence as "n" tends to infinity. It also estimates the tail probability of the Kolmogorov–Smirnov statistic. The inequalities above follow from the case where "F" corresponds to be the uniform distribution on [0,1] as "Fn" has the same distributions as "Gn"("F") where "Gn" is the empirical distribution of "U"1, "U"2, …, "Un" where these are independent and Uniform(0,1), and noting that formula_8 with equality if and only if "F" is continuous. Multivariate case. In the multivariate case, "X"1, "X"2, …, "Xn" is an i.i.d. sequence of "k"-dimensional vectors. If "Fn" is the multivariate empirical cdf, then formula_9 for every "ε", "n", "k" &gt; 0. The ("n" + 1) term can be replaced with a 2 for any sufficiently large "n". Kaplan–Meier estimator. The Dvoretzky–Kiefer–Wolfowitz inequality is obtained for the Kaplan–Meier estimator which is a right-censored data analog of the empirical distribution function formula_10 for every formula_11 and for some constant formula_12, where formula_13 is the Kaplan–Meier estimator, and formula_14 is the censoring distribution function. Building CDF bands. The Dvoretzky–Kiefer–Wolfowitz inequality is one method for generating CDF-based confidence bounds and producing a confidence band, which is sometimes called the Kolmogorov–Smirnov confidence band. The purpose of this confidence interval is to contain the entire CDF at the specified confidence level, while alternative approaches attempt to only achieve the confidence level on each individual point, which can allow for a tighter bound. The DKW bounds runs parallel to, and is equally above and below, the empirical CDF. The equally spaced confidence interval around the empirical CDF allows for different rates of violations across the support of the distribution. 
In particular, it is more common for a CDF to be outside of the CDF bound estimated using the DKW inequality near the median of the distribution than near the endpoints of the distribution. The interval that contains the true CDF, formula_2, with probability formula_15 is often specified as formula_16 which is also a special case of the asymptotic procedure for the multivariate case, whereby one uses the following critical value formula_17 for the multivariate test; one may replace 2"k" with "k"("n" + 1) for a test that holds for all "n"; moreover, the multivariate test described by Naaman can be generalized to account for heterogeneity and dependence. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
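As an illustration of the band construction described above, the following Python/NumPy sketch (illustrative only; the names are not from any particular library) computes the empirical CDF of a sample together with the two-sided DKW band of formula_16, using Massart's constant C = 2, so that the half-width is ε = sqrt(ln(2/α)/(2n)).

import numpy as np

def dkw_band(samples, alpha=0.05):
    # Empirical CDF evaluated at the sorted sample points, plus a band that
    # contains the true CDF everywhere with probability at least 1 - alpha.
    x = np.sort(np.asarray(samples))
    n = x.size
    ecdf = np.arange(1, n + 1) / n
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW/Massart half-width
    return x, ecdf, np.clip(ecdf - eps, 0.0, 1.0), np.clip(ecdf + eps, 0.0, 1.0)

# toy usage with standard normal samples
rng = np.random.default_rng(0)
x, ecdf, lower, upper = dkw_band(rng.standard_normal(500))
print(upper[250] - lower[250])   # 2 * eps, about 0.12 for n = 500, alpha = 0.05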
[ { "math_id": 0, "text": "\n\\Pr\\Bigl(\\sup_{x\\in\\mathbb R} |F_n(x) - F(x)| > \\varepsilon \\Bigr) \\le Ce^{-2n\\varepsilon^2}\\qquad \\text{for every }\\varepsilon>0.\n" }, { "math_id": 1, "text": "\n F_n(x) = \\frac1n \\sum_{i=1}^n \\mathbf{1}_{\\{X_i\\leq x\\}},\\qquad x\\in\\mathbb{R}.\n " }, { "math_id": 2, "text": "F(x)" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "F_n(x)" }, { "math_id": 6, "text": "\n \\Pr\\Bigl(\\sup_{x\\in\\mathbb R} \\bigl(F_n(x) - F(x)\\bigr) > \\varepsilon \\Bigr) \\le e^{-2n\\varepsilon^2}\\qquad \\text{for every }\\varepsilon\\geq\\sqrt{\\tfrac{1}{2n}\\ln2},\n " }, { "math_id": 7, "text": "\n \\Pr\\Bigl(\\sup_{x\\in\\mathbb R} |F_n(x) - F(x)| > \\varepsilon \\Bigr) \\le 2e^{-2n\\varepsilon^2}\\qquad \\text{for every }\\varepsilon>0.\n " }, { "math_id": 8, "text": "\n \\sup_{x\\in\\mathbb R} |F_n(x) - F(x)| \\; \\stackrel{d}{=} \\; \\sup_{x \\in \\mathbb R} | G_n (F(x)) - F(x) | \\le \\sup_{0 \\le t \\le 1} | G_n (t) -t | ,\n " }, { "math_id": 9, "text": "\n \\Pr\\Bigl(\\sup_{t\\in\\mathbb R^k} |F_n(t) - F(t)| > \\varepsilon \\Bigr) \\le (n+1)ke^{-2n\\varepsilon^2} \n " }, { "math_id": 10, "text": "\n \\Pr\\Bigl(\\sqrt n\\sup_{t\\in[0,\\infty)} |(1-G(t))(F_n(t) - F(t))| > \\varepsilon \\Bigr) \\le 2.5 e^{-2\\varepsilon^2 + C\\varepsilon} \n " }, { "math_id": 11, "text": "\\varepsilon > 0" }, { "math_id": 12, "text": "C <\\infty" }, { "math_id": 13, "text": "F_n" }, { "math_id": 14, "text": "G" }, { "math_id": 15, "text": "1-\\alpha" }, { "math_id": 16, "text": "\n F_n(x) - \\varepsilon \\le F(x) \\le F_n(x) + \\varepsilon \\; \\text{ where } \\varepsilon = \\sqrt{\\frac{\\ln\\frac{2}{\\alpha}}{2n}}\n " }, { "math_id": 17, "text": "\n \\frac{d(\\alpha,k)}{\\sqrt n} = \\sqrt{\\frac{\\ln\\frac{2k}{\\alpha}}{2n}}\n " } ]
https://en.wikipedia.org/wiki?curid=8140616
814148
Partial trace
Function over linear operators In linear algebra and functional analysis, the partial trace is a generalization of the trace. Whereas the trace is a scalar valued function on operators, the partial trace is an operator-valued function. The partial trace has applications in quantum information and decoherence which is relevant for quantum measurement and thereby to the decoherent approaches to interpretations of quantum mechanics, including consistent histories and the relative state interpretation. Details. Suppose formula_0, formula_1 are finite-dimensional vector spaces over a field, with dimensions formula_2 and formula_3, respectively. For any space formula_4, let formula_5 denote the space of linear operators on formula_4. The partial trace over formula_1 is then written as formula_6, where formula_7 denotes the Kronecker product. It is defined as follows: For formula_8, let formula_9, and formula_10, be bases for "V" and "W" respectively; then "T" has a matrix representation formula_11 relative to the basis formula_12 of formula_13. Now for indices "k", "i" in the range 1, ..., "m", consider the sum formula_14 This gives a matrix "b""k","i". The associated linear operator on "V" is independent of the choice of bases and is by definition the partial trace. Among physicists, this is often called "tracing out" or "tracing over" "W" to leave only an operator on "V" in the context where "W" and "V" are Hilbert spaces associated with quantum systems (see below). Invariant definition. The partial trace operator can be defined invariantly (that is, without reference to a basis) as follows: it is the unique linear map formula_15 such that formula_16 To see that the conditions above determine the partial trace uniquely, let formula_17 form a basis for formula_0, let formula_18 form a basis for formula_1, let formula_19 be the map that sends formula_20 to formula_21 (and all other basis elements to zero), and let formula_22 be the map that sends formula_23 to formula_24. Since the vectors formula_25 form a basis for formula_26, the maps formula_27 form a basis for formula_28. From this abstract definition, the following properties follow: formula_29 formula_30 Category theoretic notion. It is the partial trace of linear transformations that is the subject of Joyal, Street, and Verity's notion of Traced monoidal category. A traced monoidal category is a monoidal category formula_31 together with, for objects "X", "Y", "U" in the category, a function of Hom-sets, formula_32 satisfying certain axioms. Another case of this abstract notion of partial trace takes place in the category of finite sets and bijections between them, in which the monoidal product is disjoint union. One can show that for any finite sets, "X","Y","U" and bijection formula_33 there exists a corresponding "partially traced" bijection formula_34. Partial trace for operators on Hilbert spaces. The partial trace generalizes to operators on infinite dimensional Hilbert spaces. Suppose "V", "W" are Hilbert spaces, and let formula_35 be an orthonormal basis for "W". Now there is an isometric isomorphism formula_36 Under this decomposition, any operator formula_37 can be regarded as an infinite matrix of operators on "V" formula_38 where formula_39. First suppose "T" is a non-negative operator. In this case, all the diagonal entries of the above matrix are non-negative operators on "V". If the sum formula_40 converges in the strong operator topology of L("V"), it is independent of the chosen basis of "W". 
The partial trace Tr"W"("T") is defined to be this operator. The partial trace of a self-adjoint operator is defined if and only if the partial traces of the positive and negative parts are defined. Computing the partial trace. Suppose "W" has an orthonormal basis, which we denote by ket vector notation as formula_41. Then formula_42 The superscripts in parentheses do not represent matrix components, but instead label the matrix itself. Partial trace and invariant integration. In the case of finite dimensional Hilbert spaces, there is a useful way of looking at partial trace involving integration with respect to a suitably normalized Haar measure μ over the unitary group U("W") of "W". Suitably normalized means that μ is taken to be a measure with total mass dim("W"). Theorem. Suppose "V", "W" are finite dimensional Hilbert spaces. Then formula_43 commutes with all operators of the form formula_44 and hence is uniquely of the form formula_45. The operator "R" is the partial trace of "T". Partial trace as a quantum operation. The partial trace can be viewed as a quantum operation. Consider a quantum mechanical system whose state space is the tensor product formula_46 of Hilbert spaces. A mixed state is described by a density matrix ρ, that is a non-negative trace-class operator of trace 1 on the tensor product formula_47 The partial trace of ρ with respect to the system "B", denoted by formula_48, is called the reduced state of ρ on system "A". In symbols, formula_49 To show that this is indeed a sensible way to assign a state on the "A" subsystem to ρ, we offer the following justification. Let "M" be an observable on the subsystem "A", then the corresponding observable on the composite system is formula_50. However one chooses to define a reduced state formula_51, there should be consistency of measurement statistics. The expectation value of "M" after the subsystem "A" is prepared in formula_48 and that of formula_50 when the composite system is prepared in ρ should be the same, i.e. the following equality should hold: formula_52 We see that this is satisfied if formula_48 is as defined above via the partial trace. Furthermore, such operation is unique. Let "T(H)" be the Banach space of trace-class operators on the Hilbert space "H". It can be easily checked that the partial trace, viewed as a map formula_53 is completely positive and trace-preserving. The density matrix ρ is Hermitian, positive semi-definite, and has a trace of 1. It has a spectral decomposition: formula_54 Its easy to see that the partial trace formula_48 also satisfies these conditions. For example, for any pure state formula_55 in formula_56, we have formula_57 Note that the term formula_58 represents the probability of finding the state formula_55 when the composite system is in the state formula_59. This proves the positive semi-definiteness of formula_48. The partial trace map as given above induces a dual map formula_60 between the C*-algebras of bounded operators on formula_61 and formula_46 given by formula_62 formula_60 maps observables to observables and is the Heisenberg picture representation of formula_63. Comparison with classical case. Suppose instead of quantum mechanical systems, the two systems "A" and "B" are classical. The space of observables for each system are then abelian C*-algebras. These are of the form "C"("X") and "C"("Y") respectively for compact spaces "X", "Y". 
The state space of the composite system is simply formula_64 A state on the composite system is a positive element ρ of the dual of C("X" × "Y"), which by the Riesz-Markov theorem corresponds to a regular Borel measure on "X" × "Y". The corresponding reduced state is obtained by projecting the measure ρ to "X". Thus the partial trace is the quantum mechanical equivalent of this operation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
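In finite dimensions the block formula given under "Computing the partial trace" is straightforward to implement. The following NumPy sketch is illustrative only (the convention that the second tensor factor is the one traced out is an assumption of the example): it reshapes a density matrix on formula_46 into a four-index tensor and sums over the diagonal of the two indices belonging to the traced-out factor.

import numpy as np

def partial_trace_B(rho, dim_A, dim_B):
    # Reshape rho on H_A (x) H_B into a (dim_A, dim_B, dim_A, dim_B) tensor and
    # sum the diagonal of the H_B indices: (Tr_B rho)[a, a'] = sum_b rho[a, b, a', b].
    rho = np.asarray(rho).reshape(dim_A, dim_B, dim_A, dim_B)
    return np.trace(rho, axis1=1, axis2=3)

# toy usage: tracing out half of a maximally entangled two-qubit state
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj())
print(partial_trace_B(rho, 2, 2))                    # the maximally mixed state I/2

The reduced state of this entangled pure state is the maximally mixed state, illustrating that the partial trace of a pure state need not be pure.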
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "W" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "L(A)" }, { "math_id": 6, "text": "\\operatorname{Tr}_W: \\operatorname{L}(V \\otimes W) \\to \\operatorname{L}(V)" }, { "math_id": 7, "text": "\\otimes" }, { "math_id": 8, "text": " T\\in \\operatorname{L}(V \\otimes W)" }, { "math_id": 9, "text": "e_1, \\ldots, e_m " }, { "math_id": 10, "text": "f_1, \\ldots, f_n " }, { "math_id": 11, "text": " \\{a_{k \\ell, i j}\\} \\quad 1 \\leq k, i \\leq m, \\quad 1 \\leq \\ell,j \\leq n " }, { "math_id": 12, "text": " e_k \\otimes f_\\ell " }, { "math_id": 13, "text": " V \\otimes W" }, { "math_id": 14, "text": " b_{k, i} = \\sum_{j=1}^n a_{k j, i j} " }, { "math_id": 15, "text": " \\operatorname{Tr}_W: \\operatorname{L}(V \\otimes W) \\rightarrow \\operatorname{L}(V) " }, { "math_id": 16, "text": " \\operatorname{Tr}_W(R \\otimes S) = \\operatorname{Tr}(S) \\, R \\quad \\forall R \\in \\operatorname{L}(V) \\quad \\forall S \\in \\operatorname{L}(W). " }, { "math_id": 17, "text": "v_1, \\ldots, v_m" }, { "math_id": 18, "text": "w_1, \\ldots, w_n" }, { "math_id": 19, "text": "E_{ij} \\colon V \\to V" }, { "math_id": 20, "text": "v_i" }, { "math_id": 21, "text": "v_j" }, { "math_id": 22, "text": "F_{kl} \\colon W \\to W" }, { "math_id": 23, "text": "w_k" }, { "math_id": 24, "text": "w_l" }, { "math_id": 25, "text": "v_i \\otimes w_k" }, { "math_id": 26, "text": "V \\otimes W" }, { "math_id": 27, "text": "E_{ij} \\otimes F_{kl}" }, { "math_id": 28, "text": "\\operatorname{L}(V \\otimes W)" }, { "math_id": 29, "text": " \\operatorname{Tr}_W (I_{V \\otimes W}) = \\dim W \\ I_{V} " }, { "math_id": 30, "text": " \\operatorname{Tr}_W (T (I_V \\otimes S)) = \\operatorname{Tr}_W ((I_V \\otimes S) T) \\quad \\forall S \\in \\operatorname{L}(W) \\quad \\forall T \\in \\operatorname{L}(V \\otimes W)." }, { "math_id": 31, "text": "(C,\\otimes,I)" }, { "math_id": 32, "text": "\\operatorname{Tr}^U_{X,Y}\\colon \\operatorname{Hom}_C(X\\otimes U, Y\\otimes U) \\to \\operatorname{Hom}_C(X,Y)" }, { "math_id": 33, "text": "X+U\\cong Y+U" }, { "math_id": 34, "text": "X\\cong Y" }, { "math_id": 35, "text": " \\{f_i\\}_{i \\in I} " }, { "math_id": 36, "text": " \\bigoplus_{\\ell \\in I} (V \\otimes \\mathbb{C} f_\\ell) \\rightarrow V \\otimes W" }, { "math_id": 37, "text": " T \\in \\operatorname{L}(V \\otimes W)" }, { "math_id": 38, "text": " \\begin{bmatrix} T_{11} & T_{12} & \\ldots & T_{1 j} & \\ldots \\\\\n T_{21} & T_{22} & \\ldots & T_{2 j} & \\ldots \\\\\n \\vdots & \\vdots & & \\vdots \\\\\n T_{k1}& T_{k2} & \\ldots & T_{k j} & \\ldots \\\\\n \\vdots & \\vdots & & \\vdots \n\\end{bmatrix}," }, { "math_id": 39, "text": " T_{k \\ell} \\in \\operatorname{L}(V) " }, { "math_id": 40, "text": " \\sum_{\\ell} T_{\\ell \\ell} " }, { "math_id": 41, "text": " \\{| \\ell \\rangle\\}_\\ell " }, { "math_id": 42, "text": " \\operatorname{Tr}_W\\left(\\sum_{k,\\ell} T^{(k \\ell)} \\, \\otimes \\, | k \\rangle \\langle \\ell |\\right) = \\sum_j T^{(j j)} ." }, { "math_id": 43, "text": " \\int_{\\operatorname{U}(W)} (I_V \\otimes U^*) T (I_V \\otimes U) \\ d \\mu(U) " }, { "math_id": 44, "text": " I_V \\otimes S " }, { "math_id": 45, "text": " R \\otimes I_W " }, { "math_id": 46, "text": "H_A \\otimes H_B" }, { "math_id": 47, "text": " H_A \\otimes H_B ." }, { "math_id": 48, "text": "\\rho ^A" }, { "math_id": 49, "text": "\\rho^A = \\operatorname{Tr}_B \\rho." 
}, { "math_id": 50, "text": "M \\otimes I" }, { "math_id": 51, "text": "\\rho^A" }, { "math_id": 52, "text": "\\operatorname{Tr}_A ( M \\cdot \\rho^A) = \\operatorname{Tr} ( M \\otimes I \\cdot \\rho)." }, { "math_id": 53, "text": "\\operatorname{Tr}_B : T(H_A \\otimes H_B) \\rightarrow T(H_A)" }, { "math_id": 54, "text": "\\rho=\\sum_{m}p_m|\\Psi_m\\rangle\\langle \\Psi_m|;\\ 0\\leq p_m\\leq 1,\\ \\sum_{m}p_m=1" }, { "math_id": 55, "text": "|\\psi_A\\rangle" }, { "math_id": 56, "text": "H_A" }, { "math_id": 57, "text": "\\langle\\psi_A|\\rho^A|\\psi_A\\rangle=\\sum_{m}p_m\\operatorname{Tr}_B[\\langle\\psi_A|\\Psi_m\\rangle\\langle \\Psi_m|\\psi_A\\rangle]\\geq 0" }, { "math_id": 58, "text": "\\operatorname{Tr}_B[\\langle\\psi_A|\\Psi_m\\rangle\\langle \\Psi_m|\\psi_A\\rangle]" }, { "math_id": 59, "text": "|\\Psi_m\\rangle" }, { "math_id": 60, "text": "\\operatorname{Tr}_B ^*" }, { "math_id": 61, "text": "\\; H_A" }, { "math_id": 62, "text": "\\operatorname{Tr}_B ^* (A) = A \\otimes I." }, { "math_id": 63, "text": "\\operatorname{Tr}_B" }, { "math_id": 64, "text": "C(X) \\otimes C(Y) = C(X \\times Y)." } ]
https://en.wikipedia.org/wiki?curid=814148
8143131
CMA-ES
Evolutionary algorithm Covariance matrix adaptation evolution strategy (CMA-ES) is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions, denoted as formula_0) are generated by variation of the current parental individuals, usually in a stochastic way. Then, some individuals are selected to become the parents in the next generation based on their fitness or objective function value formula_1. Like this, individuals with better and better formula_2-values are generated over the generation sequence. In an evolution strategy, new candidate solutions are usually sampled according to a multivariate normal distribution in formula_3. Recombination amounts to selecting a new mean value for the distribution. Mutation amounts to adding a random vector, a perturbation with zero mean. Pairwise dependencies between the variables in the distribution are represented by a covariance matrix. The covariance matrix adaptation (CMA) is a method to update the covariance matrix of this distribution. This is particularly useful if the function formula_2 is ill-conditioned. Adaptation of the covariance matrix amounts to learning a second order model of the underlying objective function similar to the approximation of the inverse Hessian matrix in the quasi-Newton method in classical optimization. In contrast to most classical methods, fewer assumptions on the underlying objective function are made. Because only a ranking (or, equivalently, sorting) of candidate solutions is exploited, neither derivatives nor even an (explicit) objective function is required by the method. For example, the ranking could come about from pairwise competitions between the candidate solutions in a Swiss-system tournament. Principles. Two main principles for the adaptation of parameters of the search distribution are exploited in the CMA-ES algorithm. First, a maximum-likelihood principle, based on the idea to increase the probability of successful candidate solutions and search steps. The mean of the distribution is updated such that the likelihood of previously successful candidate solutions is maximized. The covariance matrix of the distribution is updated (incrementally) such that the likelihood of previously successful search steps is increased. Both updates can be interpreted as a natural gradient descent. Also, in consequence, the CMA conducts an iterated principal components analysis of successful search steps while retaining "all" principal axes. Estimation of distribution algorithms and the Cross-Entropy Method are based on very similar ideas, but estimate (non-incrementally) the covariance matrix by maximizing the likelihood of successful solution "points" instead of successful search "steps". Second, two paths of the time evolution of the distribution mean of the strategy are recorded, called search or evolution paths. These paths contain significant information about the correlation between consecutive steps. Specifically, if consecutive steps are taken in a similar direction, the evolution paths become long. 
The evolution paths are exploited in two ways. One path is used for the covariance matrix adaptation procedure in place of single successful search steps and facilitates a possibly much faster variance increase of favorable directions. The other path is used to conduct an additional step-size control. This step-size control aims to make consecutive movements of the distribution mean orthogonal in expectation. The step-size control effectively prevents premature convergence while still allowing fast convergence to an optimum. Algorithm. In the following the most commonly used ("μ"/"μ""w", "λ")-CMA-ES is outlined, where in each iteration step a weighted combination of the "μ" best out of "λ" new candidate solutions is used to update the distribution parameters. The main loop consists of three main parts: 1) sampling of new solutions, 2) re-ordering of the sampled solutions based on their fitness, 3) update of the internal state variables based on the re-ordered samples. A pseudocode of the algorithm looks as follows.

set formula_4                                   // number of samples per iteration, at least two, generally > 4
initialize formula_5, formula_6, formula_7, formula_8, formula_9   // initialize state variables
while "not terminate" do                        // iterate
    for formula_10 in formula_11 do             // sample formula_4 new solutions and evaluate them
        formula_12sample_multivariate_normal(meanformula_13, covariance_matrixformula_14)
        formula_15
    formula_16 ← formula_17 with formula_18     // sort solutions
    formula_19                                  // we need later formula_20 and formula_21
    formula_5 ← update_mformula_22              // move mean to better solutions
    formula_23 ← update_psformula_24            // update isotropic evolution path
    formula_25 ← update_pcformula_26            // update anisotropic evolution path
    formula_27 ← update_Cformula_28             // update covariance matrix
    formula_6 ← update_sigmaformula_29          // update step-size using isotropic path length
return formula_5 or formula_30

The order of the five update assignments is relevant: formula_5 must be updated first, formula_23 and formula_25 must be updated before formula_27, and formula_6 must be updated last. The update equations for the five state variables are specified in the following. Given are the search space dimension formula_31 and the iteration step formula_32. The five state variables are formula_33, the distribution mean and current favorite solution to the optimization problem, formula_34, the step-size, formula_35, a symmetric and positive-definite formula_36 covariance matrix with formula_37 and formula_38, two evolution paths, initially set to the zero vector. The iteration starts with sampling formula_39 candidate solutions formula_40 from a multivariate normal distribution formula_41, i.e. for formula_42 formula_43 The second line suggests the interpretation as unbiased perturbation (mutation) of the current favorite solution vector formula_44 (the distribution mean vector). The candidate solutions formula_45 are evaluated on the objective function formula_46 to be minimized. Denoting the formula_2-sorted candidate solutions as formula_47 the new mean value is computed as formula_48 where the positive (recombination) weights formula_49 sum to one. Typically, formula_50 and the weights are chosen such that formula_51. The only feedback used from the objective function here and in the following is an ordering of the sampled candidate solutions due to the indices formula_52. The step-size formula_53 is updated using "cumulative step-size adaptation" (CSA), sometimes also denoted as "path length control".
The evolution path (or search path) formula_23 is updated first. formula_54 formula_55 where formula_56 is the backward time horizon for the evolution path formula_23 and larger than one (formula_57 is reminiscent of an exponential decay constant as formula_58 where formula_59 is the associated lifetime and formula_60 the half-life), formula_61 is the variance effective selection mass and formula_62 by definition of formula_63, formula_64 is the unique symmetric square root of the inverse of formula_65, and formula_66 is the damping parameter usually close to one. For formula_67 or formula_68 the step-size remains unchanged. The step-size formula_53 is increased if and only if formula_69 is larger than the expected value formula_70 and decreased if it is smaller. For this reason, the step-size update tends to make consecutive steps formula_71-conjugate, in that after the adaptation has been successful formula_72. Finally, the covariance matrix is updated, where again the respective evolution path is updated first. formula_73 formula_74 where formula_75 denotes the transpose and formula_76 is the backward time horizon for the evolution path formula_25 and larger than one, formula_77 and the indicator function formula_78 evaluates to one iff formula_79 or, in other words, formula_80, which is usually the case, formula_81 makes partly up for the small variance loss in case the indicator is zero, formula_82 is the learning rate for the rank-one update of the covariance matrix and formula_83 is the learning rate for the rank-formula_84 update of the covariance matrix and must not exceed formula_85. The covariance matrix update tends to increase the likelihood for formula_25 and for formula_86 to be sampled from formula_87. This completes the iteration step. The number of candidate samples per iteration, formula_4, is not determined a priori and can vary in a wide range. Smaller values, for example formula_88, lead to more local search behavior. Larger values, for example formula_89 with default value formula_90, render the search more global. Sometimes the algorithm is repeatedly restarted with increasing formula_4 by a factor of two for each restart. Besides of setting formula_4 (or possibly formula_84 instead, if for example formula_4 is predetermined by the number of available processors), the above introduced parameters are not specific to the given objective function and therefore not meant to be modified by the user. Example code in MATLAB/Octave. 
function xmin=purecmaes   % (mu/mu_w, lambda)-CMA-ES
  % -------------------- Initialization --------------------------------
  % User defined input parameters (need to be edited)
  strfitnessfct = 'frosenbrock';  % name of objective/fitness function
  N = 20;               % number of objective variables/problem dimension
  xmean = rand(N,1);    % objective variables initial point
  sigma = 0.3;          % coordinate wise standard deviation (step size)
  stopfitness = 1e-10;  % stop if fitness < stopfitness (minimization)
  stopeval = 1e3*N^2;   % stop after stopeval number of function evaluations

  % Strategy parameter setting: Selection
  lambda = 4+floor(3*log(N));  % population size, offspring number
  mu = lambda/2;               % number of parents/points for recombination
  weights = log(mu+1/2)-log(1:mu)'; % muXone array for weighted recombination
  mu = floor(mu);
  weights = weights/sum(weights);       % normalize recombination weights array
  mueff=sum(weights)^2/sum(weights.^2); % variance-effectiveness of sum w_i x_i

  % Strategy parameter setting: Adaptation
  cc = (4+mueff/N) / (N+4 + 2*mueff/N); % time constant for cumulation for C
  cs = (mueff+2) / (N+mueff+5);         % t-const for cumulation for sigma control
  c1 = 2 / ((N+1.3)^2+mueff);           % learning rate for rank-one update of C
  cmu = min(1-c1, 2 * (mueff-2+1/mueff) / ((N+2)^2+mueff)); % and for rank-mu update
  damps = 1 + 2*max(0, sqrt((mueff-1)/(N+1))-1) + cs; % damping for sigma
                                                      % usually close to 1
  % Initialize dynamic (internal) strategy parameters and constants
  pc = zeros(N,1); ps = zeros(N,1);  % evolution paths for C and sigma
  B = eye(N,N);                      % B defines the coordinate system
  D = ones(N,1);                     % diagonal D defines the scaling
  C = B * diag(D.^2) * B';           % covariance matrix C
  invsqrtC = B * diag(D.^-1) * B';   % C^-1/2
  eigeneval = 0;                     % track update of B and D
  chiN=N^0.5*(1-1/(4*N)+1/(21*N^2)); % expectation of
                                     %   ||N(0,I)|| == norm(randn(N,1))

  % -------------------- Generation Loop --------------------------------
  counteval = 0;  % the next 40 lines contain the 20 lines of interesting code
  while counteval < stopeval

    % Generate and evaluate lambda offspring
    for k=1:lambda
      arx(:,k) = xmean + sigma * B * (D .* randn(N,1)); % m + sig * Normal(0,C)
      arfitness(k) = feval(strfitnessfct, arx(:,k));    % objective function call
      counteval = counteval+1;
    end

    % Sort by fitness and compute weighted mean into xmean
    [arfitness, arindex] = sort(arfitness);  % minimization
    xold = xmean;
    xmean = arx(:,arindex(1:mu))*weights;    % recombination, new mean value

    % Cumulation: Update evolution paths
    ps = (1-cs)*ps ...
          + sqrt(cs*(2-cs)*mueff) * invsqrtC * (xmean-xold) / sigma;
    hsig = norm(ps)/sqrt(1-(1-cs)^(2*counteval/lambda))/chiN < 1.4 + 2/(N+1);
    pc = (1-cc)*pc ...
          + hsig * sqrt(cc*(2-cc)*mueff) * (xmean-xold) / sigma;

    % Adapt covariance matrix C
    artmp = (1/sigma) * (arx(:,arindex(1:mu))-repmat(xold,1,mu));
    C = (1-c1-cmu) * C ...                   % regard old matrix
         + c1 * (pc*pc' ...                  % plus rank one update
                 + (1-hsig) * cc*(2-cc) * C) ... % minor correction if hsig==0
         + cmu * artmp * diag(weights) * artmp'; % plus rank mu update

    % Adapt step size sigma
    sigma = sigma * exp((cs/damps)*(norm(ps)/chiN - 1));

    % Decomposition of C into B*diag(D.^2)*B' (diagonalization)
    if counteval - eigeneval > lambda/(c1+cmu)/N/10  % to achieve O(N^2)
      eigeneval = counteval;
      C = triu(C) + triu(C,1)'; % enforce symmetry
      [B,D] = eig(C);           % eigen decomposition, B==normalized eigenvectors
      D = sqrt(diag(D));        % D is a vector of standard deviations now
      invsqrtC = B * diag(D.^-1) * B';
    end

    % Break, if fitness is good enough or condition exceeds 1e14, better termination methods are advisable
    if arfitness(1) <= stopfitness || max(D) > 1e7 * min(D)
      break;
    end

  end % while, end generation loop

  xmin = arx(:, arindex(1)); % Return best point of last iteration.
                             % Notice that xmean is expected to be even
                             % better.
end

function f=frosenbrock(x)
  if size(x,1) < 2 error('dimension must be greater one'); end
  f = 100*sum((x(1:end-1).^2 - x(2:end)).^2) + sum((x(1:end-1)-1).^2);
end

Theoretical foundations. Given the distribution parameters—mean, variances and covariances—the normal probability distribution for sampling new candidate solutions is the maximum entropy probability distribution over formula_3, that is, the sample distribution with the minimal amount of prior information built into the distribution. More considerations on the update equations of CMA-ES are made in the following. Variable metric. The CMA-ES implements a stochastic variable-metric method. In the very particular case of a convex-quadratic objective function formula_91 the covariance matrix formula_65 adapts to the inverse of the Hessian matrix formula_92, up to a scalar factor and small random fluctuations. More generally, also on the function formula_93, where formula_94 is strictly increasing and therefore order preserving, the covariance matrix formula_65 adapts to formula_95, up to a scalar factor and small random fluctuations. For selection ratio formula_96 (and hence population size formula_97), the formula_84 selected solutions yield an empirical covariance matrix reflective of the inverse-Hessian even in evolution strategies without adaptation of the covariance matrix. This result has been proven for formula_98 on a static model, relying on the quadratic approximation. Maximum-likelihood updates. The update equations for mean and covariance matrix maximize a likelihood while resembling an expectation–maximization algorithm. The update of the mean vector formula_5 maximizes a log-likelihood, such that formula_99 where formula_100 denotes the log-likelihood of formula_0 from a multivariate normal distribution with mean formula_5 and any positive definite covariance matrix formula_27. To see that formula_101 is independent of formula_27, remark first that this is the case for any diagonal matrix formula_27, because the coordinate-wise maximizer is independent of a scaling factor. Then, rotation of the data points or choosing formula_27 non-diagonal are equivalent. The rank-formula_84 update of the covariance matrix, that is, the rightmost summand in the update equation of formula_65, maximizes a log-likelihood in that formula_102 for formula_103 (otherwise formula_27 is singular, but substantially the same result holds for formula_104). Here, formula_105 denotes the likelihood of formula_0 from a multivariate normal distribution with zero mean and covariance matrix formula_27. Therefore, for formula_106 and formula_107, formula_108 is the above maximum-likelihood estimator.
See estimation of covariance matrices for details on the derivation. Natural gradient descent in the space of sample distributions. Akimoto "et al." and Glasmachers "et al." discovered independently that the update of the distribution parameters resembles the descent in direction of a sampled natural gradient of the expected objective function value formula_109 (to be minimized), where the expectation is taken under the sample distribution. With the parameter setting of formula_68 and formula_106, i.e. without step-size control and rank-one update, CMA-ES can thus be viewed as an instantiation of Natural Evolution Strategies (NES). The "natural" gradient is independent of the parameterization of the distribution. Taken with respect to the parameters θ of the sample distribution p, the gradient of formula_109 can be expressed as formula_110 where formula_111 depends on the parameter vector formula_112. The so-called score function, formula_113, indicates the relative sensitivity of p w.r.t. θ, and the expectation is taken with respect to the distribution p. The "natural" gradient of formula_109, complying with the Fisher information metric (an informational distance measure between probability distributions and the curvature of the relative entropy), now reads formula_114 where the Fisher information matrix formula_115 is the expectation of the Hessian of −lnp and renders the expression independent of the chosen parameterization. Combining the previous equalities we get formula_116 A Monte Carlo approximation of the latter expectation takes the average over λ samples from p formula_117 where the notation formula_52 from above is used and therefore formula_63 are monotonically decreasing in formula_10. Ollivier "et al." finally found a rigorous derivation for the weights, formula_63, as they are defined in the CMA-ES. The weights are an asymptotically consistent estimator of the CDF of formula_118 at the points of the formula_10th order statistic formula_119, as defined above, where formula_120, composed with a fixed monotonically decreasing transformation formula_121, that is, formula_122. These weights make the algorithm insensitive to the specific formula_2-values. More concisely, using the CDF estimator of formula_2 instead of formula_2 itself let the algorithm only depend on the ranking of formula_2-values but not on their underlying distribution. This renders the algorithm invariant to strictly increasing formula_2-transformations. Now we define formula_123 such that formula_124 is the density of the multivariate normal distribution formula_125. Then, we have an explicit expression for the inverse of the Fisher information matrix where formula_53 is fixed formula_126 and for formula_127 and, after some calculations, the updates in the CMA-ES turn out as formula_128 and formula_129 where mat forms the proper matrix from the respective natural gradient sub-vector. That means, setting formula_130, the CMA-ES updates descend in direction of the approximation formula_131 of the natural gradient while using different step-sizes (learning rates 1 and formula_132) for the orthogonal parameters formula_5 and formula_27 respectively. More recent versions allow a different learning rate for the mean formula_5 as well. The most recent version of CMA-ES also use a different function formula_121 for formula_5 and formula_27 with negative values only for the latter (so-called active CMA). Stationarity or unbiasedness. 
It is comparatively easy to see that the update equations of CMA-ES satisfy some stationarity conditions, in that they are essentially unbiased. Under neutral selection, where formula_133, we find that formula_134 and under some mild additional assumptions on the initial conditions formula_135 and with an additional minor correction in the covariance matrix update for the case where the indicator function evaluates to zero, we find formula_136 Invariance. Invariance properties imply uniform performance on a class of objective functions. They have been argued to be an advantage, because they allow to generalize and predict the behavior of the algorithm and therefore strengthen the meaning of empirical results obtained on single functions. The following invariance properties have been established for CMA-ES. Any serious parameter optimization method should be translation invariant, but most methods do not exhibit all the above described invariance properties. A prominent example with the same invariance properties is the Nelder–Mead method, where the initial simplex must be chosen respectively. Convergence. Conceptual considerations like the scale-invariance property of the algorithm, the analysis of simpler evolution strategies, and overwhelming empirical evidence suggest that the algorithm converges on a large class of functions fast to the global optimum, denoted as formula_151. On some functions, convergence occurs independently of the initial conditions with probability one. On some functions the probability is smaller than one and typically depends on the initial formula_152 and formula_153. Empirically, the fastest possible convergence rate in formula_32 for rank-based direct search methods can often be observed (depending on the context denoted as "linear convergence" or "log-linear" or "exponential" convergence). Informally, we can write formula_154 for some formula_155, and more rigorously formula_156 or similarly, formula_157 This means that on average the distance to the optimum decreases in each iteration by a "constant" factor, namely by formula_158. The convergence rate formula_159 is roughly formula_160, given formula_4 is not much larger than the dimension formula_31. Even with optimal formula_6 and formula_27, the convergence rate formula_159 cannot largely exceed formula_161, given the above recombination weights formula_63 are all non-negative. The actual linear dependencies in formula_4 and formula_31 are remarkable and they are in both cases the best one can hope for in this kind of algorithm. Yet, a rigorous proof of convergence is missing. Interpretation as coordinate-system transformation. Using a non-identity covariance matrix for the multivariate normal distribution in evolution strategies is equivalent to a coordinate system transformation of the solution vectors, mainly because the sampling equation formula_162 can be equivalently expressed in an "encoded space" as formula_163 The covariance matrix defines a bijective transformation (encoding) for all solution vectors into a space, where the sampling takes place with identity covariance matrix. Because the update equations in the CMA-ES are invariant under linear coordinate system transformations, the CMA-ES can be re-written as an adaptive encoding procedure applied to a simple evolution strategy with identity covariance matrix. 
This adaptive encoding procedure is not confined to algorithms that sample from a multivariate normal distribution (like evolution strategies), but can in principle be applied to any iterative search method. Performance in practice. In contrast to most other evolutionary algorithms, the CMA-ES is, from the user's perspective, quasi-parameter-free. The user has to choose an initial solution point, formula_164, and the initial step-size, formula_165. Optionally, the number of candidate samples λ (population size) can be modified by the user in order to change the characteristic search behavior (see above) and termination conditions can or should be adjusted to the problem at hand. The CMA-ES has been empirically successful in hundreds of applications and is considered to be useful in particular on non-convex, non-separable, ill-conditioned, multi-modal or noisy objective functions. One survey of Black-Box optimizations found it outranked 31 other optimization algorithms, performing especially strongly on "difficult functions" or larger-dimensional search spaces. The search space dimension ranges typically between two and a few hundred. Assuming a black-box optimization scenario, where gradients are not available (or not useful) and function evaluations are the only considered cost of search, the CMA-ES method is likely to be outperformed by other methods in the following conditions: On separable functions, the performance disadvantage is likely to be most significant in that CMA-ES might not be able to find at all comparable solutions. On the other hand, on non-separable functions that are ill-conditioned or rugged or can only be solved with more than formula_168 function evaluations, the CMA-ES shows most often superior performance. Variations and extensions. The (1+1)-CMA-ES generates only one candidate solution per iteration step which becomes the new distribution mean if it is better than the current mean. For formula_169 the (1+1)-CMA-ES is a close variant of Gaussian adaptation. Some Natural Evolution Strategies are close variants of the CMA-ES with specific parameter settings. Natural Evolution Strategies do not utilize evolution paths (that means in CMA-ES setting formula_170) and they formalize the update of variances and covariances on a Cholesky factor instead of a covariance matrix. The CMA-ES has also been extended to multiobjective optimization as MO-CMA-ES. Another remarkable extension has been the addition of a negative update of the covariance matrix with the so-called active CMA. Using the additional active CMA update is considered as the default variant nowadays. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
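As a closing illustration of the sampling step and of the coordinate-system interpretation discussed above, the following Python/NumPy sketch (not part of the original MATLAB listing; the dimension, step size and covariance matrix are arbitrary examples) draws candidate solutions both directly from the multivariate normal search distribution and through the encoded space with identity covariance, and checks that both routes reproduce the same covariance:
 import numpy as np

 rng = np.random.default_rng(0)
 n = 5
 m = np.zeros(n)                          # distribution mean
 sigma = 0.7                              # step size
 A = rng.standard_normal((n, n))
 C = A @ A.T + n * np.eye(n)              # some positive definite covariance matrix

 # factor C = B * diag(D^2) * B', as done with [B,D] = eig(C) in the listing above
 eigvals, B = np.linalg.eigh(C)
 D = np.sqrt(eigvals)

 samples = 200000
 Z = rng.standard_normal((samples, n))    # N(0, I) samples in the encoded space
 X_encoded = m + sigma * (Z * D) @ B.T    # decode: x = m + sigma * B * (D .* z)
 X_direct = rng.multivariate_normal(m, sigma**2 * C, size=samples)

 print(np.allclose(np.cov(X_encoded, rowvar=False), sigma**2 * C, atol=0.1))  # True
 print(np.allclose(np.cov(X_direct, rowvar=False), sigma**2 * C, atol=0.1))   # True
Both sets of samples have mean m and covariance sigma^2 * C; only the representation of the sampling differs, which is the sense in which the covariance matrix acts as a change of the coordinate system.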
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "f(x)" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "\\mathbb{R}^n" }, { "math_id": 4, "text": "\\lambda" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "\\sigma" }, { "math_id": 7, "text": "C=I" }, { "math_id": 8, "text": "p_\\sigma=0" }, { "math_id": 9, "text": "p_c=0" }, { "math_id": 10, "text": "i" }, { "math_id": 11, "text": "\\{1\\ldots\\lambda\\}" }, { "math_id": 12, "text": "x_i ={}" }, { "math_id": 13, "text": "{}=m" }, { "math_id": 14, "text": "{}=\\sigma^2 C " }, { "math_id": 15, "text": "f_i = \\operatorname{fitness}(x_i)" }, { "math_id": 16, "text": "x_{1\\ldots\\lambda}" }, { "math_id": 17, "text": "x_{s(1)\\ldots s(\\lambda)}" }, { "math_id": 18, "text": "s(i) = \\operatorname{argsort}(f_{1\\ldots\\lambda}, i)" }, { "math_id": 19, "text": "m' = m" }, { "math_id": 20, "text": "m - m'" }, { "math_id": 21, "text": "x_i - m'" }, { "math_id": 22, "text": "(x_1, \\ldots ,x_\\lambda)" }, { "math_id": 23, "text": "p_\\sigma" }, { "math_id": 24, "text": "(p_\\sigma,\\sigma^{-1} C^{-1/2} (m - m'))" }, { "math_id": 25, "text": "p_c" }, { "math_id": 26, "text": "(p_c,\\sigma^{-1}(m - m'),\\|p_\\sigma\\|)" }, { "math_id": 27, "text": "C" }, { "math_id": 28, "text": "(C,p_c,(x_1 - m')/\\sigma,\\ldots ,(x_\\lambda - m')/\\sigma)" }, { "math_id": 29, "text": "(\\sigma,\\|p_\\sigma\\|)" }, { "math_id": 30, "text": "x_1" }, { "math_id": 31, "text": "n" }, { "math_id": 32, "text": "k" }, { "math_id": 33, "text": "m_k\\in\\mathbb{R}^n" }, { "math_id": 34, "text": "\\sigma_k>0" }, { "math_id": 35, "text": " C_k" }, { "math_id": 36, "text": "n\\times n" }, { "math_id": 37, "text": " C_0 = I" }, { "math_id": 38, "text": " p_\\sigma\\in\\mathbb{R}^n, p_c\\in\\mathbb{R}^n" }, { "math_id": 39, "text": "\\lambda>1" }, { "math_id": 40, "text": "x_i\\in\\mathbb{R}^n " }, { "math_id": 41, "text": "\\textstyle \\mathcal{N}(m_k,\\sigma_k^2 C_k)" }, { "math_id": 42, "text": "i=1,\\ldots,\\lambda" }, { "math_id": 43, "text": "\n\\begin{align}\n x_i \\ &\\sim\\ \\mathcal{N}(m_k,\\sigma_k^2 C_k) \n \\\\&\\sim\\ m_k + \\sigma_k\\times\\mathcal{N}(0,C_k) \n\\end{align}\n " }, { "math_id": 44, "text": "m_k" }, { "math_id": 45, "text": " x_i" }, { "math_id": 46, "text": "f:\\mathbb{R}^n\\to\\mathbb{R}" }, { "math_id": 47, "text": "\n \\{x_{i:\\lambda} \\mid i=1\\dots\\lambda\\} = \\{x_i\\mid i=1\\dots\\lambda\\} \\text{ and }\n f(x_{1:\\lambda})\\le\\dots\\le f(x_{\\mu:\\lambda})\\le f(x_{\\mu+1:\\lambda}) \\le \\cdots, \n " }, { "math_id": 48, "text": "\n\\begin{align}\n m_{k+1} &= \\sum_{i=1}^\\mu w_i\\, x_{i:\\lambda} \n \\\\ &= m_k + \\sum_{i=1}^\\mu w_i\\, (x_{i:\\lambda} - m_k) \n\\end{align}\n " }, { "math_id": 49, "text": " w_1 \\ge w_2 \\ge \\dots \\ge w_\\mu > 0" }, { "math_id": 50, "text": "\\mu \\le \\lambda/2" }, { "math_id": 51, "text": "\\textstyle \\mu_w := 1 / \\sum_{i=1}^\\mu w_i^2 \\approx \\lambda/4" }, { "math_id": 52, "text": "i:\\lambda" }, { "math_id": 53, "text": "\\sigma_k" }, { "math_id": 54, "text": "\n p_\\sigma \\gets \\underbrace{(1-c_\\sigma)}_{\\!\\!\\!\\!\\!\\text{discount factor}\\!\\!\\!\\!\\!}\\, p_\\sigma \n + \\overbrace{\\sqrt{1 - (1-c_\\sigma)^2}}^{\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\text{complements for discounted variance}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!} \\underbrace{\\sqrt{\\mu_w} \n \\,C_k^{\\;-1/2} \\, \\frac{\\overbrace{m_{k+1} - m_k}^{\\!\\!\\!\\text{displacement 
of } m\\!\\!\\!}}{\\sigma_k}}_{\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\text{distributed as } \\mathcal{N}(0,I) \\text{ under neutral selection}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!}\n " }, { "math_id": 55, "text": "\n \\sigma_{k+1} = \\sigma_k \\times \\exp\\bigg(\\frac{c_\\sigma}{d_\\sigma}\n \\underbrace{\\left(\\frac{\\|p_\\sigma\\|}{\\operatorname E\\|\\mathcal{N}(0,I)\\|} - 1\\right)}_{\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\text{unbiased about 0 under neutral selection}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n}\\bigg)\n " }, { "math_id": 56, "text": "c_\\sigma^{-1}\\approx n/3" }, { "math_id": 57, "text": "c_\\sigma \\ll 1" }, { "math_id": 58, "text": "(1-c_\\sigma)^k\\approx\\exp(-c_\\sigma k)" }, { "math_id": 59, "text": "c_\\sigma^{-1}" }, { "math_id": 60, "text": "c_\\sigma^{-1}\\ln(2)\\approx0.7c_\\sigma^{-1}" }, { "math_id": 61, "text": "\\mu_w=\\left(\\sum_{i=1}^\\mu w_i^2\\right)^{-1}" }, { "math_id": 62, "text": "1 \\le \\mu_w \\le \\mu" }, { "math_id": 63, "text": "w_i" }, { "math_id": 64, "text": "C_k^{\\;-1/2} = \\sqrt{C_k}^{\\;-1} = \\sqrt{C_k^{\\;-1}}" }, { "math_id": 65, "text": "C_k" }, { "math_id": 66, "text": "d_\\sigma" }, { "math_id": 67, "text": "d_\\sigma=\\infty" }, { "math_id": 68, "text": "c_\\sigma=0" }, { "math_id": 69, "text": "\\|p_\\sigma\\|" }, { "math_id": 70, "text": "\\begin{align} \\operatorname E\\|\\mathcal{N}(0,I)\\| &= \\sqrt{2}\\,\\Gamma((n+1)/2)/\\Gamma(n/2) \n \\\\&\\approx \\sqrt{n}\\,(1-1/(4\\,n)+1/(21\\,n^2)) \\end{align}" }, { "math_id": 71, "text": "C_k^{-1}" }, { "math_id": 72, "text": "\\textstyle\\left(\\frac{m_{k+2}-m_{k+1}}{\\sigma_{k+1}}\\right)^T\\! 
C_k^{-1} \\frac{m_{k+1}-m_{k}}{\\sigma_k} \\approx 0" }, { "math_id": 73, "text": "\n p_c \\gets \\underbrace{(1-c_c)}_{\\!\\!\\!\\!\\!\\text{discount factor}\\!\\!\\!\\!\\!}\\, \n p_c + \n \\underbrace{\\mathbf{1}_{[0,\\alpha\\sqrt{n}]}(\\|p_\\sigma\\|)}_{\\text{indicator function}} \n \\overbrace{\\sqrt{1 - (1-c_c)^2}}^{\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\text{complements for discounted variance}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!}\n \\underbrace{\\sqrt{\\mu_w} \n \\, \\frac{m_{k+1} - m_k}{\\sigma_k}}_{\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\text{distributed as}\\; \\mathcal{N}(0,C_k)\\;\\text{under neutral selection}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!}\n " }, { "math_id": 74, "text": "\n C_{k+1} = \\underbrace{(1 - c_1 - c_\\mu + c_s)}_{\\!\\!\\!\\!\\!\\text{discount factor}\\!\\!\\!\\!\\!}\n \\, C_k + c_1 \\underbrace{p_c p_c^T}_{\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\text{rank one matrix}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!} \n + \\,c_\\mu \\underbrace{\\sum_{i=1}^\\mu w_i \\frac{x_{i:\\lambda} - m_k}{\\sigma_k} \n \\left( \\frac{x_{i:\\lambda} - m_k}{\\sigma_k} \\right)^T}_{\n \\operatorname{rank} \\min(\\mu,n) \\text{ matrix}}\n " }, { "math_id": 75, "text": "T" }, { "math_id": 76, "text": "c_c^{-1}\\approx n/4" }, { "math_id": 77, "text": "\\alpha\\approx 1.5" }, { "math_id": 78, "text": "\\mathbf{1}_{[0,\\alpha\\sqrt{n}]}(\\|p_\\sigma\\|)" }, { "math_id": 79, "text": "\\|p_\\sigma\\|\\in[0,\\alpha\\sqrt{n}]" }, { "math_id": 80, "text": "\\|p_\\sigma\\|\\le\\alpha\\sqrt{n}" }, { "math_id": 81, "text": "c_s = (1 - \\mathbf{1}_{[0,\\alpha\\sqrt{n}]}(\\|p_\\sigma\\|)^2) \\,c_1 c_c (2-c_c) " }, { "math_id": 82, "text": "c_1 \\approx 2 / n^2" }, { "math_id": 83, "text": "c_\\mu \\approx \\mu_w / n^2 " }, { "math_id": 84, "text": "\\mu" }, { "math_id": 85, "text": "1 - c_1" }, { "math_id": 86, "text": "(x_{i:\\lambda} - m_k)/\\sigma_k" }, { "math_id": 87, "text": "\\mathcal{N}(0,C_{k+1})" }, { "math_id": 88, "text": "\\lambda=10" }, { "math_id": 89, "text": "\\lambda=10n" }, { "math_id": 90, "text": "\\mu_w \\approx \\lambda/4" }, { "math_id": 91, "text": " f(x) = {\\textstyle\\frac{1}{2}}(x-x^*)^T H (x-x^*)" }, { "math_id": 92, "text": "H" }, { "math_id": 93, "text": "g \\circ f" }, { "math_id": 94, "text": " g " }, { "math_id": 95, "text": "H^{-1}" }, { "math_id": 96, "text": "\\lambda/\\mu\\to\\infty" }, { "math_id": 97, "text": "\\lambda\\to\\infty" }, { "math_id": 98, "text": "\\mu=1" }, { "math_id": 99, "text": " m_{k+1} = \\arg\\max_m \\sum_{i=1}^\\mu w_i \\log p_\\mathcal{N}(x_{i:\\lambda} \\mid m) " }, { "math_id": 100, "text": " \\log p_\\mathcal{N}(x) = \n - \\frac{1}{2} \\log\\det(2\\pi C) - \\frac{1}{2} (x-m)^T C^{-1} (x-m) " }, { "math_id": 101, "text": "m_{k+1}" }, { "math_id": 102, "text": " \\sum_{i=1}^\\mu w_i \\frac{x_{i:\\lambda} - m_k}{\\sigma_k} \n \\left( \\frac{x_{i:\\lambda} - m_k}{\\sigma_k} \\right)^T \n = \\arg\\max_{C} \\sum_{i=1}^\\mu w_i \\log p_\\mathcal{N}\\left(\\left.\\frac{x_{i:\\lambda} - m_k}{\\sigma_k} \\right| C\\right) " }, { "math_id": 103, "text": "\\mu\\ge n" }, { "math_id": 104, "text": "\\mu < n" }, { "math_id": 105, "text": " p_\\mathcal{N}(x | C) " }, { "math_id": 106, "text": "c_1=0" }, { "math_id": 107, "text": "c_\\mu=1" }, { "math_id": 
108, "text": "C_{k+1}" }, { "math_id": 109, "text": "E f(x)" }, { "math_id": 110, "text": " \\begin{align} \n {\\nabla}_{\\!\\theta} E(f(x) \\mid \\theta) \n &= \\nabla_{\\!\\theta} \\int_{\\mathbb R^n}f(x) p(x) \\, \\mathrm{d}x\n \\\\ &= \\int_{\\mathbb R^n}f(x) \\nabla_{\\!\\theta} p(x) \\, \\mathrm{d}x\n \\\\ &= \\int_{\\mathbb R^n}f(x) p(x) \\nabla_{\\!\\theta} \\ln p(x) \\, \\mathrm{d}x\n \\\\ &= \\operatorname E(f(x) \\nabla_{\\!\\theta} \\ln p(x\\mid\\theta))\n\\end{align}" }, { "math_id": 111, "text": "p(x)=p(x\\mid\\theta)" }, { "math_id": 112, "text": "\\theta" }, { "math_id": 113, "text": "\\nabla_{\\!\\theta} \\ln p(x\\mid\\theta) = \\frac{\\nabla_{\\!\\theta} p(x)}{p(x)} " }, { "math_id": 114, "text": " \\begin{align} \n \\tilde{\\nabla} \\operatorname E(f(x) \\mid \\theta) \n &= F^{-1}_\\theta \\nabla_{\\!\\theta} \\operatorname E(f(x) \\mid \\theta) \n\\end{align}" }, { "math_id": 115, "text": " F_\\theta " }, { "math_id": 116, "text": " \\begin{align} \n \\tilde{\\nabla} \\operatorname E(f(x) \\mid \\theta) \n &= F^{-1}_\\theta \\operatorname E(f(x) \\nabla_{\\!\\theta} \\ln p(x\\mid\\theta))\n \\\\ &= \\operatorname E(f(x) F^{-1}_\\theta \\nabla_{\\!\\theta} \\ln p(x\\mid\\theta))\n\\end{align}" }, { "math_id": 117, "text": " \\tilde{\\nabla} \\widehat{E}_\\theta(f) := -\\sum_{i=1}^\\lambda \\overbrace{w_i}^{\\!\\!\\!\\!\\text{preference weight}\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!} \\underbrace{F^{-1}_\\theta \\nabla_{\\!\\theta} \\ln p(x_{i:\\lambda}\\mid \\theta)}_{\\!\\!\\!\\!\\!\\text{candidate direction from }x_{i:\\lambda}\\!\\!\\!\\!\\!}\n\\quad\\text{with } w_i = -f(x_{i:\\lambda})/\\lambda" }, { "math_id": 118, "text": "f(X)" }, { "math_id": 119, "text": "f(x_{i:\\lambda})" }, { "math_id": 120, "text": "X\\sim p(.|\\theta)" }, { "math_id": 121, "text": "w" }, { "math_id": 122, "text": "w_i = w\\left(\\frac{\\mathsf{rank}(f(x_{i:\\lambda})) - 1/2}{\\lambda}\\right)" }, { "math_id": 123, "text": "\\theta = [m_k^T \\operatorname{vec}(C_k)^T \\sigma_k]^T \\in \\mathbb{R}^{n+n^2+1}" }, { "math_id": 124, "text": " p(\\cdot\\mid\\theta) " }, { "math_id": 125, "text": "\\mathcal N(m_k,\\sigma_k^2 C_k)" }, { "math_id": 126, "text": "F^{-1}_{\\theta \\mid \\sigma_k} = \\left[\\begin{array}{cc}\\sigma_k^2 C_k&0\\\\ 0&2 C_k\\otimes C_k\\end{array}\\right]" }, { "math_id": 127, "text": "\\ln p(x\\mid\\theta) = \\ln p(x\\mid m_k,\\sigma_k^2 C_k) = -\\frac{1}{2}(x-m_k)^T \\sigma_k^{-2} C_k^{-1} (x-m_k) - \\frac{1}{2}\\ln\\det(2\\pi\\sigma_k^2 C_k)" }, { "math_id": 128, "text": " \\begin{align} \n m_{k+1} \n &= m_k - \\underbrace{[\\tilde{\\nabla} \\widehat{E}_\\theta(f)]_{1,\\dots, n}}_{\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\text{natural gradient for mean}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n } \n \\\\\n &= m_k + \\sum_{i=1}^\\lambda w_i (x_{i:\\lambda} - m_k) \n \\end{align} " }, { "math_id": 129, "text": " \\begin{align} \n C_{k+1} \n &= C_k + c_1(p_c p_c^T - C_k) \n - c_\\mu\\operatorname{mat}(\\overbrace{[\\tilde{\\nabla} \\widehat{E}_\\theta(f)]_{n+1,\\dots,n+n^2}}^{\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\text{natural gradient for covariance matrix} \n 
\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n })\\\\\n &= C_k + c_1(p_c p_c^T - C_k) \n + c_\\mu \\sum_{i=1}^\\lambda w_i \\left(\\frac{x_{i:\\lambda} - m_k}{\\sigma_k} \\left(\\frac{x_{i:\\lambda} - m_k}{\\sigma_k}\\right)^T - C_k\\right) \n \\end{align}" }, { "math_id": 130, "text": "c_1=c_\\sigma=0" }, { "math_id": 131, "text": " \\tilde{\\nabla} \\widehat{E}_\\theta(f)" }, { "math_id": 132, "text": "c_\\mu" }, { "math_id": 133, "text": "x_{i:\\lambda} \\sim \\mathcal N(m_k,\\sigma_k^2 C_k)" }, { "math_id": 134, "text": " \\operatorname E(m_{k+1}\\mid m_k) = m_k " }, { "math_id": 135, "text": " \\operatorname E(\\log \\sigma_{k+1} \\mid \\sigma_k) = \\log \\sigma_k " }, { "math_id": 136, "text": " \\operatorname E(C_{k+1} \\mid C_k) = C_k " }, { "math_id": 137, "text": "h:\\mathbb{R}^n\\to\\mathbb{R}" }, { "math_id": 138, "text": "f:x\\mapsto g(h(x))" }, { "math_id": 139, "text": "g:\\mathbb{R}\\to\\mathbb{R}" }, { "math_id": 140, "text": "g" }, { "math_id": 141, "text": "h:\\mathbb{R}^n\\to \\mathbb{R}" }, { "math_id": 142, "text": "\\alpha>0" }, { "math_id": 143, "text": "f:x\\mapsto h(\\alpha x)" }, { "math_id": 144, "text": "\\sigma_0\\propto1/\\alpha" }, { "math_id": 145, "text": "m_0\\propto1/\\alpha" }, { "math_id": 146, "text": "z\\in\\mathbb{R}^n" }, { "math_id": 147, "text": "f:x\\mapsto h(R x)" }, { "math_id": 148, "text": "R" }, { "math_id": 149, "text": "m_0=R^{-1} z" }, { "math_id": 150, "text": "R^{-1}{R^{-1}}^T" }, { "math_id": 151, "text": "x^*" }, { "math_id": 152, "text": "m_0" }, { "math_id": 153, "text": "\\sigma_0" }, { "math_id": 154, "text": "\\|m_k - x^*\\| \\;\\approx\\; \\|m_0 - x^*\\| \\times e^{-ck}\n " }, { "math_id": 155, "text": "c>0" }, { "math_id": 156, "text": "\\frac{1}{k}\\sum_{i=1}^k\\log\\frac{\\|m_i - x^*\\|}{\\|m_{i-1} - x^*\\|} \n \\;=\\; \\frac{1}{k}\\log\\frac{\\|m_k - x^*\\|}{\\|m_{0} - x^*\\|}\n \\;\\to\\; -c < 0 \\quad\\text{for } k\\to\\infty\\;, \n " }, { "math_id": 157, "text": "\\operatorname E\\log\\frac{\\|m_k - x^*\\|}{\\|m_{k-1} - x^*\\|}\n \\;\\to\\; -c < 0 \\quad\\text{for } k\\to\\infty\\;. \n " }, { "math_id": 158, "text": "\\exp(-c)" }, { "math_id": 159, "text": "c" }, { "math_id": 160, "text": "0.1\\lambda/n" }, { "math_id": 161, "text": "0.25\\lambda/n" }, { "math_id": 162, "text": "\n\\begin{align}\n x_i &\\sim\\ m_k + \\sigma_k\\times\\mathcal{N}(0,C_k)\n \\\\\n &\\sim\\ m_k + \\sigma_k \\times C_k^{1/2}\\mathcal{N}(0,I) \n\\end{align}\n" }, { "math_id": 163, "text": "\n \\underbrace{C_k^{-1/2}x_i}_{\\text{represented in the encode space}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!} \n \\sim\\ \\underbrace{C_k^{-1/2} m_k} {} + \\sigma_k \\times\\mathcal{N}(0,I)\n" }, { "math_id": 164, "text": "m_0\\in\\mathbb{R}^n" }, { "math_id": 165, "text": "\\sigma_0>0" }, { "math_id": 166, "text": "n < 5 " }, { "math_id": 167, "text": "10 n" }, { "math_id": 168, "text": "100 n" }, { "math_id": 169, "text": "c_c=1" }, { "math_id": 170, "text": "c_c=c_\\sigma=1" } ]
https://en.wikipedia.org/wiki?curid=8143131
8143463
Inverse-Wishart distribution
Probability distribution In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution. We say formula_1 follows an inverse Wishart distribution, denoted as formula_2, if its inverse formula_3 has a Wishart distribution formula_4. Important identities have been derived for the inverse-Wishart distribution. Density. The probability density function of the inverse Wishart is: formula_5 where formula_1 and formula_6 are formula_0 positive definite matrices, formula_7 is the determinant, and Γ"p"(·) is the multivariate gamma function. Theorems. Distribution of the inverse of a Wishart-distributed matrix. If formula_8 and formula_9 is of size formula_10, then formula_11 has an inverse Wishart distribution formula_12 . Marginal and conditional distributions from an inverse Wishart-distributed matrix. Suppose formula_13 has an inverse Wishart distribution. Partition the matrices formula_14 and formula_15 conformably with each other formula_16 where formula_17 and formula_18 are formula_19 matrices, then we have Conjugate distribution. Suppose we wish to make inference about a covariance matrix formula_30 whose prior formula_31 has a formula_32 distribution. If the observations formula_33 are independent p-variate Gaussian variables drawn from a formula_34 distribution, then the conditional distribution formula_35 has a formula_36 distribution, where formula_37. Because the prior and posterior distributions are the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian. Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter formula_38, using the formula formula_39 and the linear algebra identity formula_40: formula_41 (this is useful because the variance matrix formula_38 is not known in practice, but because formula_6 is known "a priori", and formula_42 can be obtained from the data, the right hand side can be evaluated directly). The inverse-Wishart distribution as a prior can be constructed via existing transferred prior knowledge. Moments. The following is based on Press, S. J. (1982) "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degree of freedom to be consistent with the p.d.f. definition above. Let formula_43 with formula_44 and formula_45, so that formula_46. The mean: formula_47 The variance of each element of formula_1: formula_48 The variance of the diagonal uses the same formula as above with formula_49, which simplifies to: formula_50 The covariance of elements of formula_1 are given by: formula_51 The same results are expressed in Kronecker product form by von Rosen as follows: formula_52 where formula_53 formula_54 commutation matrix formula_55 There appears to be a typo in the paper whereby the coefficient of formula_56 is given as formula_57 rather than formula_58, and that the expression for the mean square inverse Wishart, corollary 3.1, should read formula_59 To show how the interacting terms become sparse when the covariance is diagonal, let formula_60 and introduce some arbitrary parameters formula_61: formula_62 where formula_63 denotes the matrix vectorization operator. 
Then the second moment matrix becomes formula_64 which is non-zero only when involving the correlations of diagonal elements of formula_65, all other elements are mutually uncorrelated, though not necessarily statistically independent. The variances of the Wishart product are also obtained by Cook et al. in the singular case and, by extension, to the full rank case. Muirhead shows in Theorem 3.2.8 that if formula_66 is distributed as formula_67 and formula_68 is an arbitrary vector, independent of formula_69 then formula_70 and formula_71, one degree of freedom being relinquished by estimation of the sample mean in the latter. Similarly, Bodnar et.al. further find that formula_72 and setting formula_73 the marginal distribution of the leading diagonal element is thus formula_74 and by rotating formula_68 end-around a similar result applies to all diagonal elements formula_75. A corresponding result in the complex Wishart case was shown by Brennan and Reed and the uncorrelated inverse complex Wishart formula_76 was shown by Shaman to have diagonal statistical structure in which the leading diagonal elements are correlated, while all other element are uncorrelated. formula_81 i.e., the inverse-gamma distribution, where formula_82 is the ordinary Gamma function. Thus, an arbitrary "p-vector" formula_68 with formula_101 length formula_102 can be rotated into the vector formula_103 without changing the pdf of formula_104, moreover formula_105 can be a permutation matrix which exchanges diagonal elements. It follows that the diagonal elements of formula_99 are identically inverse chi squared distributed, with pdf formula_106 in the previous section though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al, where it is expressed in the inverse form formula_107. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
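As a numerical illustration of the mean and of the conjugacy property described above, the following Python sketch can be used. It assumes that the (df, scale) parameterization of SciPy's invwishart corresponds to the (ν, Ψ) convention of this article, and all concrete numbers (dimension, degrees of freedom, scale matrix, sample size) are arbitrary examples:
 import numpy as np
 from scipy.stats import invwishart

 rng = np.random.default_rng(1)
 p, nu = 3, 10
 Psi = np.array([[4.0, 1.0, 0.0],
                 [1.0, 3.0, 0.5],
                 [0.0, 0.5, 2.0]])               # positive definite scale matrix

 # Monte Carlo check of the mean E[X] = Psi / (nu - p - 1)
 iw = invwishart(df=nu, scale=Psi)
 draws = iw.rvs(size=50000, random_state=rng)
 print(np.round(draws.mean(axis=0), 2))          # empirical mean of the draws
 print(np.round(Psi / (nu - p - 1), 2))          # theoretical mean

 # Conjugacy: for observations X_i ~ N(0, Sigma) with prior Sigma ~ IW(Psi, nu),
 # the posterior is IW(Psi + A, nu + n) with A = sum_i x_i x_i^T
 n = 20
 Sigma_true = iw.rvs(random_state=rng)
 X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
 A = X.T @ X
 print(np.round((Psi + A) / (nu + n - p - 1), 2))  # posterior mean of Sigma
The posterior mean combines the prior scale matrix with the scatter matrix of the data, which is what makes the inverse Wishart convenient as a prior for a covariance matrix.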
[ { "math_id": 0, "text": "p\\times p" }, { "math_id": 1, "text": "\\mathbf{X}" }, { "math_id": 2, "text": " \\mathbf{X}\\sim \\mathcal{W}^{-1}(\\mathbf\\Psi,\\nu)" }, { "math_id": 3, "text": " \\mathbf{X}^{-1}" }, { "math_id": 4, "text": " \\mathcal{W}(\\mathbf \\Psi^{-1}, \\nu) " }, { "math_id": 5, "text": "\nf_{\\mathbf X}({\\mathbf X}; {\\mathbf \\Psi}, \\nu) = \\frac{\\left|{\\mathbf\\Psi}\\right|^{\\nu/2}}{2^{\\nu p/2}\\Gamma_p(\\frac \\nu 2)} \\left|\\mathbf{X}\\right|^{-(\\nu+p+1)/2} e^{-\\frac{1}{2}\\operatorname{tr}(\\mathbf\\Psi\\mathbf{X}^{-1})}\n" }, { "math_id": 6, "text": "{\\mathbf\\Psi}" }, { "math_id": 7, "text": "| \\cdot |" }, { "math_id": 8, "text": "{\\mathbf X}\\sim \\mathcal{W}({\\mathbf\\Sigma},\\nu)" }, { "math_id": 9, "text": "{\\mathbf\\Sigma}" }, { "math_id": 10, "text": "p \\times p" }, { "math_id": 11, "text": "\\mathbf{A}={\\mathbf X}^{-1}" }, { "math_id": 12, "text": "\\mathbf{A}\\sim \\mathcal{W}^{-1}({\\mathbf\\Sigma}^{-1},\\nu)" }, { "math_id": 13, "text": "{\\mathbf A}\\sim \\mathcal{W}^{-1}({\\mathbf\\Psi},\\nu)" }, { "math_id": 14, "text": " {\\mathbf A} " }, { "math_id": 15, "text": " {\\mathbf\\Psi} " }, { "math_id": 16, "text": "\n {\\mathbf{A}} = \\begin{bmatrix} \\mathbf{A}_{11} & \\mathbf{A}_{12} \\\\ \\mathbf{A}_{21} & \\mathbf{A}_{22} \\end{bmatrix}, \\;\n {\\mathbf{\\Psi}} = \\begin{bmatrix} \\mathbf{\\Psi}_{11} & \\mathbf{\\Psi}_{12} \\\\ \\mathbf{\\Psi}_{21} & \\mathbf{\\Psi}_{22} \\end{bmatrix}\n" }, { "math_id": 17, "text": "{\\mathbf A_{ij}}" }, { "math_id": 18, "text": "{\\mathbf \\Psi_{ij}} " }, { "math_id": 19, "text": " p_{i}\\times p_{j}" }, { "math_id": 20, "text": " \\mathbf A_{11} " }, { "math_id": 21, "text": " \\mathbf A_{11}^{-1} \\mathbf A_{12} " }, { "math_id": 22, "text": " {\\mathbf A}_{22\\cdot 1} " }, { "math_id": 23, "text": "{\\mathbf A_{22\\cdot 1}} = {\\mathbf A}_{22} - {\\mathbf A}_{21}{\\mathbf A}_{11}^{-1}{\\mathbf A}_{12}" }, { "math_id": 24, "text": " {\\mathbf A_{11} } " }, { "math_id": 25, "text": " {\\mathbf A_{11} } \\sim \\mathcal{W}^{-1}({\\mathbf \\Psi_{11} }, \\nu-p_{2}) " }, { "math_id": 26, "text": " {\\mathbf A}_{11}^{-1} {\\mathbf A}_{12} \\mid {\\mathbf A}_{22\\cdot 1} \\sim MN_{p_{1}\\times p_{2}}\n( {\\mathbf \\Psi}_{11}^{-1} {\\mathbf \\Psi}_{12}, {\\mathbf A}_{22\\cdot 1} \\otimes {\\mathbf \\Psi}_{11}^{-1}) " }, { "math_id": 27, "text": " MN_{p\\times q}(\\cdot,\\cdot) " }, { "math_id": 28, "text": " {\\mathbf A}_{22\\cdot 1} \\sim \\mathcal{W}^{-1}({\\mathbf \\Psi}_{22\\cdot 1}, \\nu) " }, { "math_id": 29, "text": "{\\mathbf \\Psi_{22\\cdot 1}} = {\\mathbf \\Psi}_{22} - {\\mathbf \\Psi}_{21}{\\mathbf \\Psi}_{11}^{-1}{\\mathbf \\Psi}_{12}" }, { "math_id": 30, "text": "{\\mathbf{\\Sigma}}" }, { "math_id": 31, "text": "{p(\\mathbf{\\Sigma})}" }, { "math_id": 32, "text": "\\mathcal{W}^{-1}({\\mathbf\\Psi},\\nu)" }, { "math_id": 33, "text": "\\mathbf{X}=[\\mathbf{x}_1,\\ldots,\\mathbf{x}_n]" }, { "math_id": 34, "text": "N(\\mathbf{0},{\\mathbf \\Sigma})" }, { "math_id": 35, "text": "{p(\\mathbf{\\Sigma}\\mid\\mathbf{X})}" }, { "math_id": 36, "text": "\\mathcal{W}^{-1}({\\mathbf A}+{\\mathbf\\Psi},n+\\nu)" }, { "math_id": 37, "text": "{\\mathbf{A}}=\\mathbf{X}\\mathbf{X}^T" }, { "math_id": 38, "text": "\\mathbf{\\Sigma}" }, { "math_id": 39, "text": " p(x) = \\frac{ p(x | \\Sigma) p(\\Sigma)}{p(\\Sigma | x)} " }, { "math_id": 40, "text": " v^T \\Omega v = \\text{tr}( \\Omega v v^T) " }, { "math_id": 41, "text": "f_{\\mathbf X\\,\\mid\\,\\Psi,\\nu} (\\mathbf x) = \\int f_{\\mathbf 
X\\,\\mid\\,\\mathbf\\Sigma\\,=\\,\\sigma}(\\mathbf x) f_{\\mathbf\\Sigma\\,\\mid\\,\\mathbf\\Psi,\\nu} (\\sigma)\\,d\\sigma = \\frac{|\\mathbf{\\Psi}|^{\\nu/2} \\Gamma_p\\left(\\frac{\\nu+n}{2}\\right)}{\\pi^{np/2}|\\mathbf{\\Psi}+\\mathbf{A}|^{(\\nu+n)/2} \\Gamma_p(\\frac{\\nu}{2})}" }, { "math_id": 42, "text": "{\\mathbf A}" }, { "math_id": 43, "text": " W \\sim \\mathcal{W}(\\mathbf \\Psi^{-1}, \\nu) " }, { "math_id": 44, "text": " \\nu \\ge p " }, { "math_id": 45, "text": " X \\doteq W^{-1}" }, { "math_id": 46, "text": " X \\sim \\mathcal{W}^{-1}(\\mathbf \\Psi, \\nu)" }, { "math_id": 47, "text": " \\operatorname E(\\mathbf X) = \\frac{\\mathbf\\Psi}{\\nu-p-1}." }, { "math_id": 48, "text": "\n\\operatorname{Var}(x_{ij}) = \\frac{(\\nu-p+1)\\psi_{ij}^2 + (\\nu-p-1)\\psi_{ii}\\psi_{jj}}\n{(\\nu-p)(\\nu-p-1)^2(\\nu-p-3)}\n" }, { "math_id": 49, "text": "i=j" }, { "math_id": 50, "text": "\n\\operatorname{Var}(x_{ii}) = \\frac{2\\psi_{ii}^2}{(\\nu-p-1)^2(\\nu-p-3)}.\n" }, { "math_id": 51, "text": "\n\\operatorname{Cov}(x_{ij},x_{k\\ell}) = \\frac{2\\psi_{ij}\\psi_{k\\ell} + (\\nu-p-1) (\\psi_{ik}\\psi_{j\\ell} + \\psi_{i\\ell} \\psi_{kj})}{(\\nu-p)(\\nu-p-1)^2(\\nu-p-3)}\n" }, { "math_id": 52, "text": "\n\\begin{align}\n\\mathbf{E} \\left ( W^{-1} \\otimes W^{-1} \\right ) & = c_1 \\Psi \\otimes \\Psi \n + c_2 Vec (\\Psi) Vec (\\Psi)^T + c_2 K_{pp} \\Psi \\otimes \\Psi \\\\\n\\mathbf{Cov}_\\otimes \\left ( W^{-1} ,W^{-1} \\right ) & = (c_1 - c_3 ) \\Psi \\otimes \\Psi \n + c_2 Vec (\\Psi) Vec (\\Psi)^T + c_2 K_{pp} \\Psi \\otimes \\Psi\n\\end{align}\n" }, { "math_id": 53, "text": "\n\\begin{align}\nc_2 & = \\left [ (\\nu-p)(\\nu-p-1)(\\nu-p-3) \\right ]^{-1} \\\\\nc_1 & = (\\nu-p-2) c_2 \\\\\nc_3 & = (\\nu -p-1)^{-2},\n\\end{align}\n" }, { "math_id": 54, "text": "K_{pp} \\text{ is a } p^2 \\times p^2 " }, { "math_id": 55, "text": "\n\\mathbf{Cov}_\\otimes \\left ( W^{-1},W^{-1} \\right ) = \\mathbf{E} \\left ( W^{-1} \\otimes W^{-1} \\right ) - \\mathbf{E} \\left ( W^{-1} \\right ) \\otimes \\mathbf{E} \\left ( W^{-1} \\right ).\n" }, { "math_id": 56, "text": " K_{pp} \\Psi \\otimes \\Psi " }, { "math_id": 57, "text": " c_1 " }, { "math_id": 58, "text": " c_2" }, { "math_id": 59, "text": "\n\\mathbf{E} \\left [ W^{-1} W^{-1} \\right ] = (c_1+c_2) \\Sigma^{-1} \\Sigma^{-1} + c_2 \\Sigma^{-1} \\mathbf{tr}(\\Sigma^{-1}).\n" }, { "math_id": 60, "text": " \\Psi = \\mathbf I_{3 \\times 3} " }, { "math_id": 61, "text": " u, v, w " }, { "math_id": 62, "text": "\n\\mathbf{E} \\left ( W^{-1} \\otimes W^{-1} \\right ) = u \\Psi \\otimes \\Psi \n + v \\, \\mathrm{vec}(\\Psi) \\, \\mathrm{vec}(\\Psi)^T + w K_{pp} \\Psi \\otimes \\Psi.\n" }, { "math_id": 63, "text": "\\mathrm{vec}" }, { "math_id": 64, "text": "\n\\mathbf{E} \\left ( W^{-1} \\otimes W^{-1} \\right ) = \\begin{bmatrix}\n u+v+w & \\cdot & \\cdot & \\cdot & v & \\cdot & \\cdot & \\cdot & v \\\\\n \\cdot & u & \\cdot & w & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & u & \\cdot & \\cdot & \\cdot & w & \\cdot & \\cdot \\\\\n \\cdot & w & \\cdot & u & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n v & \\cdot & \\cdot & \\cdot & u+v+w & \\cdot & \\cdot & \\cdot & v \\\\\n \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & u & \\cdot & w & \\cdot \\\\\n \\cdot & \\cdot & w & \\cdot & \\cdot & \\cdot & u & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & w & \\cdot & u & \\cdot \\\\\n v & \\cdot & \\cdot & \\cdot & v & \\cdot & \\cdot & \\cdot & u+v+w \\\\\n \\end{bmatrix}\n " }, { "math_id": 65, "text": " 
W^{-1} " }, { "math_id": 66, "text": " A^{p \\times p} " }, { "math_id": 67, "text": " \\mathcal{W}_p (\\nu,\\Sigma ) " }, { "math_id": 68, "text": " V " }, { "math_id": 69, "text": " A " }, { "math_id": 70, "text": " V^T A V \\sim \\mathcal{ W }_1(\\nu, A^T \\Sigma A) " }, { "math_id": 71, "text": " \\frac { V^T A V }{ V^T \\Sigma V } \\sim \\chi^2_{\\nu-1} " }, { "math_id": 72, "text": " \\frac { V^T A^{-1} V }{ V^T \\Sigma^{-1} V } \\sim \\text{Inv-}\\chi^2_{\\nu - p + 1} " }, { "math_id": 73, "text": " V= (1,\\,0, \\cdots ,0)^T " }, { "math_id": 74, "text": " \\frac { [ A^{-1} ]_{1,1} }{ [ \\Sigma^{-1}]_{1,1} } \\sim \\frac{2^{-k/2}}{\\Gamma(k/2)} x^{-k/2-1} e^{-1/(2 x)}, \\;\\; k = \\nu - p + 1 " }, { "math_id": 75, "text": " [ A^{-1} ]_{i,i} " }, { "math_id": 76, "text": " \\mathcal{W_C}^{-1}(\\mathbf{I},\\nu,p) " }, { "math_id": 77, "text": "p=1" }, { "math_id": 78, "text": "\\alpha = \\nu/2" }, { "math_id": 79, "text": "\\beta = \\mathbf{\\Psi}/2" }, { "math_id": 80, "text": "x=\\mathbf{X}" }, { "math_id": 81, "text": "\np(x\\mid\\alpha, \\beta) = \\frac{\\beta^\\alpha\\, x^{-\\alpha-1} \\exp(-\\beta/x)}{\\Gamma_1(\\alpha)}.\n" }, { "math_id": 82, "text": "\\Gamma_1(\\cdot)" }, { "math_id": 83, "text": " \\alpha = \\frac{\\nu}{2} " }, { "math_id": 84, "text": " \\beta =2 " }, { "math_id": 85, "text": "\\mathcal{GW}^{-1}" }, { "math_id": 86, "text": " p \\times p" }, { "math_id": 87, "text": "\\mathcal{GW}^{-1}(\\mathbf{\\Psi},\\nu,\\mathbf{S})" }, { "math_id": 88, "text": "\\mathbf{Y} = \\mathbf{X}^{1/2}\\mathbf{S}^{-1}\\mathbf{X}^{1/2}" }, { "math_id": 89, "text": "\\mathcal{W}^{-1}(\\mathbf{\\Psi},\\nu)" }, { "math_id": 90, "text": "\\mathbf{X}^{1/2}" }, { "math_id": 91, "text": "\\mathbf{\\Psi},\\mathbf{S}" }, { "math_id": 92, "text": "\\nu" }, { "math_id": 93, "text": "2p" }, { "math_id": 94, "text": "\\mathbf{S}" }, { "math_id": 95, "text": "\\mathcal{GW}^{-1}(\\mathbf{\\Psi},\\nu,\\mathbf{S}) = \\mathcal{W}^{-1}(\\mathbf{\\Psi},\\nu)" }, { "math_id": 96, "text": " \\mathcal{\\Psi} = \\mathbf{I}, \\text{ and } \\mathcal{\\Phi} " }, { "math_id": 97, "text": " \\mathbf{X} " }, { "math_id": 98, "text": " {\\Phi} \\mathbf{X} \\mathcal{\\Phi}^T " }, { "math_id": 99, "text": " \\mathbf{X} " }, { "math_id": 100, "text": " \\mathcal{W}^{-1}(\\mathbf{I},\\nu,p) " }, { "math_id": 101, "text": "l_2" }, { "math_id": 102, "text": "V^TV = 1" }, { "math_id": 103, "text": " \\mathbf{\\Phi}V = [1 \\; 0 \\; 0 \\cdots]^T " }, { "math_id": 104, "text": " V^T \\mathbf{X} V " }, { "math_id": 105, "text": " \\mathbf{\\Phi} " }, { "math_id": 106, "text": " f_{x_{11}} " }, { "math_id": 107, "text": " \\frac{V^T \\mathbf{\\Psi} V}{V^T \\mathbf {X} V} \\sim \\chi^2_{\\nu-p+1} " }, { "math_id": 108, "text": " \\mathbf{X^{p \\times p }} \\sim \\mathcal{W}^{-1}_p\\left({\\mathbf \\Psi}, \\nu \\right)." }, { "math_id": 109, "text": " {\\mathbf \\Theta}^{p \\times p} " }, { "math_id": 110, "text": "\\mathbf{\\Theta}\\mathbf{X}{\\mathbf \\Theta}^T \\sim \\mathcal{W}^{-1}_p\\left({\\mathbf \\Theta}{\\mathbf \\Psi } {\\mathbf \\Theta}^T, \\nu \\right)." }, { "math_id": 111, "text": " {\\mathbf \\Theta} ^{ m \\times p } " }, { "math_id": 112, "text": " m \\times p , \\; \\; m < p " }, { "math_id": 113, "text": " m " }, { "math_id": 114, "text": "\\mathbf{\\Theta}\\mathbf{X}{\\mathbf \\Theta}^T \\sim \\mathcal{W}^{-1}_m \\left({\\mathbf \\Theta}{\\mathbf \\Psi } {\\mathbf \\Theta}^T, \\nu \\right)." } ]
https://en.wikipedia.org/wiki?curid=8143463
8144485
Modified due-date scheduling heuristic
The modified due-date (MDD) scheduling heuristic is a greedy heuristic used to solve the single-machine total weighted tardiness problem (SMTWTP). Presentation. The modified due date scheduling is a scheduling heuristic created in 1982 by Baker and Bertrand, used to solve the NP-hard single machine total-weighted tardiness problem. This problem is centered on reducing the global tardiness of a list of tasks, each characterized by its processing time, due date and weight, by re-ordering them. Algorithm. Principle. This heuristic works the same way as other greedy algorithms. At each iteration, it finds the next job to schedule and adds it to the list. This operation is repeated until no jobs are left unscheduled. MDD is similar to the earliest due date (EDD) heuristic except that MDD takes into account the partial sequence of jobs that has already been constructed, whereas EDD only looks at the jobs' due dates. Implementation. Here is an implementation of the MDD algorithm in pseudo-code. It takes in an unsorted list of tasks and returns the list sorted by increasing modified due date:
 function mdd(processed, task)
 return max(processed + task.processTime, task.dueDate)

 function mddSort(tasks)
 unsortedTasks = copy(tasks)
 sortedTasks = empty list
 processed = 0
 while unsortedTasks is not empty
 bestTask = unsortedTasks.getFirst()
 bestMdd = mdd(processed, bestTask)
 for task in unsortedTasks
 mdd = mdd(processed, task)
 if mdd < bestMdd then
 bestMdd = mdd
 bestTask = task
 sortedTasks.pushBack(bestTask)
 unsortedTasks.remove(bestTask)
 processed += bestTask.processTime
 return sortedTasks
Practical example. In this example we will schedule flight departures. Each flight is characterized by: a processing time, a due date and a weight. We need to find an order for the flights to take off that will result in the smallest total weighted tardiness. For this example we will use the following values: In the default order, the total weighted tardiness is 136. The first step is to compute the modified due date for each flight. Since the current time is 0 and, in our example, we don’t have any flight whose due date is smaller than its processing time, the mdd of each flight is equal to its due date: The flight with the smallest MDD (Flight n° 3) is then processed, and the new modified due date is computed. The current time is now 5. The operation is repeated until no more flights are left unscheduled. We obtain the following results: In this order, the total weighted tardiness is 92. This example can be generalized to schedule any list of jobs characterized by a due date and a processing time. Performance. Applying this heuristic will result in a sorted list of tasks whose tardiness cannot be reduced by adjacent pair-wise interchange. MDD’s complexity is formula_0. Variations. There is a version of MDD called weighted modified due date (WMDD) which takes into account the weights. In such a case, the evaluation function is replaced by: 
 function wmdd(processed, task) 
 return (1 / task.weight) * max(task.processTime, task.dueDate - processed) 
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
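The pseudo-code above translates directly into the following runnable Python sketch; the Task structure, the helper for the objective value and the numbers in the usage example are illustrative and are not the flight data of the example above:
 from dataclasses import dataclass

 @dataclass
 class Task:
     process_time: int
     due_date: int
     weight: int = 1

 def mdd(processed, task):
     # modified due date of a task, given the work already scheduled
     return max(processed + task.process_time, task.due_date)

 def wmdd(processed, task):
     # evaluation function of the weighted variant (WMDD)
     return (1 / task.weight) * max(task.process_time, task.due_date - processed)

 def mdd_sort(tasks, evaluate=mdd):
     unsorted_tasks = list(tasks)
     sorted_tasks = []
     processed = 0
     while unsorted_tasks:
         # greedily pick the task with the smallest (weighted) modified due date
         best_task = min(unsorted_tasks, key=lambda t: evaluate(processed, t))
         sorted_tasks.append(best_task)
         unsorted_tasks.remove(best_task)
         processed += best_task.process_time
     return sorted_tasks

 def total_weighted_tardiness(tasks):
     t = total = 0
     for task in tasks:
         t += task.process_time
         total += task.weight * max(0, t - task.due_date)
     return total

 # arbitrary usage example (not the flight data above)
 jobs = [Task(3, 4), Task(2, 10), Task(5, 5)]
 print(total_weighted_tardiness(mdd_sort(jobs)))        # 3
 print(total_weighted_tardiness(mdd_sort(jobs, wmdd)))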
[ { "math_id": 0, "text": "O(n)" } ]
https://en.wikipedia.org/wiki?curid=8144485
8145581
Whitening transformation
Decorrelation method that converts a covariance matrix of a set of samples into an identity matrix A whitening transformation or sphering transformation is a linear transformation that transforms a vector of random variables with a known covariance matrix into a set of new variables whose covariance is the identity matrix, meaning that they are uncorrelated and each have variance 1. The transformation is called "whitening" because it changes the input vector into a white noise vector. Several other transformations are closely related to whitening: Definition. Suppose formula_0 is a random (column) vector with non-singular covariance matrix formula_1 and mean formula_2. Then the transformation formula_3 with a whitening matrix formula_4 satisfying the condition formula_5 yields the whitened random vector formula_6 with unit diagonal covariance. There are infinitely many possible whitening matrices formula_4 that all satisfy the above condition. Commonly used choices are formula_7 (Mahalanobis or ZCA whitening), formula_8 where formula_9 is the Cholesky decomposition of formula_10 (Cholesky whitening), or the eigen-system of formula_1 (PCA whitening). Optimal whitening transforms can be singled out by investigating the cross-covariance and cross-correlation of formula_0 and formula_6. For example, the unique optimal whitening transformation achieving maximal component-wise correlation between original formula_0 and whitened formula_6 is produced by the whitening matrix formula_11 where formula_12 is the correlation matrix and formula_13 the variance matrix. Whitening a data matrix. Whitening a data matrix follows the same transformation as for random variables. An empirical whitening transform is obtained by estimating the covariance (e.g. by maximum likelihood) and subsequently constructing a corresponding estimated whitening matrix (e.g. by Cholesky decomposition). High-dimensional whitening. This modality is a generalization of the pre-whitening procedure extended to more general spaces where formula_0 is usually assumed to be a random function or other random objects in a Hilbert space formula_14. One of the main issues of extending whitening to infinite dimensions is that the covariance operator has an unbounded inverse in formula_14. Nevertheless, if one assumes that Picard condition holds for formula_0 in the range space of the covariance operator, whitening becomes possible. A whitening operator can be then defined from the factorization of the Moore–Penrose inverse of the covariance operator, which has effective mapping on Karhunen–Loève type expansions of formula_0. The advantage of these whitening transformations is that they can be optimized according to the underlying topological properties of the data, thus producing more robust whitening representations. High-dimensional features of the data can be exploited through kernel regressors or basis function systems. R implementation. An implementation of several whitening procedures in R, including ZCA-whitening and PCA whitening but also CCA whitening, is available in the "whitening" R package published on CRAN. The R package "pfica" allows the computation of high-dimensional whitening representations using basis function systems (B-splines, Fourier basis, etc.). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
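As a minimal numerical sketch of the definitions above (only NumPy is assumed; the covariance matrix and sample size are arbitrary examples), the following code builds a ZCA/Mahalanobis whitening matrix and a Cholesky whitening matrix from an estimated covariance and verifies that the whitened data have unit covariance:
 import numpy as np

 rng = np.random.default_rng(0)

 Sigma = np.array([[2.0, 0.8],
                   [0.8, 1.0]])                       # covariance used to generate the data
 X = rng.multivariate_normal([0.0, 0.0], Sigma, size=100000)

 S = np.cov(X, rowvar=False)                          # estimated covariance (rows = observations)

 # ZCA / Mahalanobis whitening: W = S^(-1/2), via the eigendecomposition of S
 evals, U = np.linalg.eigh(S)
 W_zca = U @ np.diag(evals ** -0.5) @ U.T

 # Cholesky whitening: W = L^T, where L is the Cholesky factor of S^(-1)
 L = np.linalg.cholesky(np.linalg.inv(S))
 W_chol = L.T

 for W in (W_zca, W_chol):
     Y = X @ W.T                                      # y = W x, applied to every row x
     print(np.round(np.cov(Y, rowvar=False), 3))      # identity matrix, up to rounding
Both matrices satisfy W^T W = S^(-1) and differ by an orthogonal transformation, which is one way to see why infinitely many whitening matrices satisfy the condition.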
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "0" }, { "math_id": 3, "text": "Y = W X" }, { "math_id": 4, "text": "W" }, { "math_id": 5, "text": "W^\\mathrm{T} W = \\Sigma^{-1}" }, { "math_id": 6, "text": "Y" }, { "math_id": 7, "text": "W = \\Sigma^{-1/2}" }, { "math_id": 8, "text": "W = L^T" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": " \\Sigma^{-1}" }, { "math_id": 11, "text": "W = P^{-1/2} V^{-1/2}" }, { "math_id": 12, "text": "P" }, { "math_id": 13, "text": "V" }, { "math_id": 14, "text": "H" } ]
https://en.wikipedia.org/wiki?curid=8145581
814816
Newtonian potential
Green's function for Laplacian In mathematics, the Newtonian potential or Newton potential is an operator in vector calculus that acts as the inverse to the negative Laplacian, on functions that are smooth and decay rapidly enough at infinity. As such, it is a fundamental object of study in potential theory. In its general nature, it is a singular integral operator, defined by convolution with a function having a mathematical singularity at the origin, the Newtonian kernel formula_0 which is the fundamental solution of the Laplace equation. It is named for Isaac Newton, who first discovered it and proved that it was a harmonic function in the special case of three variables, where it served as the fundamental gravitational potential in Newton's law of universal gravitation. In modern potential theory, the Newtonian potential is instead thought of as an electrostatic potential. The Newtonian potential of a compactly supported integrable function formula_1 is defined as the convolution formula_2 where the Newtonian kernel formula_0 in dimension formula_3 is defined by formula_4 Here "ω""d" is the volume of the unit "d"-ball (sometimes sign conventions may vary; compare and ). For example, for formula_5 we have formula_6 The Newtonian potential "w" of "f" is a solution of the Poisson equation formula_7 which is to say that the operation of taking the Newtonian potential of a function is a partial inverse to the Laplace operator. Then "w" will be a classical solution, that is twice differentiable, if "f" is bounded and locally Hölder continuous as shown by Otto Hölder. It was an open question whether continuity alone is also sufficient. This was shown to be wrong by Henrik Petrini who gave an example of a continuous "f" for which "w" is not twice differentiable. The solution is not unique, since addition of any harmonic function to "w" will not affect the equation. This fact can be used to prove existence and uniqueness of solutions to the Dirichlet problem for the Poisson equation in suitably regular domains, and for suitably well-behaved functions "f": one first applies a Newtonian potential to obtain a solution, and then adjusts by adding a harmonic function to get the correct boundary data. The Newtonian potential is defined more broadly as the convolution formula_8 when "μ" is a compactly supported Radon measure. It satisfies the Poisson equation formula_9 in the sense of distributions. Moreover, when the measure is positive, the Newtonian potential is subharmonic on R"d". If "f" is a compactly supported continuous function (or, more generally, a finite measure) that is rotationally invariant, then the convolution of "f" with Γ satisfies for "x" outside the support of "f" formula_10 In dimension "d" = 3, this reduces to Newton's theorem that the potential energy of a small mass outside a much larger spherically symmetric mass distribution is the same as if all of the mass of the larger object were concentrated at its center. When the measure "μ" is associated to a mass distribution on a sufficiently smooth hypersurface "S" (a Lyapunov surface of Hölder class "C"1,α) that divides R"d" into two regions "D"+ and "D"−, then the Newtonian potential of "μ" is referred to as a simple layer potential. Simple layer potentials are continuous and solve the Laplace equation except on "S". They appear naturally in the study of electrostatics in the context of the electrostatic potential associated to a charge distribution on a closed surface. 
If d"μ" = "f" d"H" is the product of a continuous function on "S" with the ("d" − 1)-dimensional Hausdorff measure, then at a point "y" of "S", the normal derivative undergoes a jump discontinuity "f"("y") when crossing the layer. Furthermore, the normal derivative of "w" is a well-defined continuous function on "S". This makes simple layers particularly suited to the study of the Neumann problem for the Laplace equation.
[ { "math_id": 0, "text": "\\Gamma" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "u(x) = \\Gamma * f(x) = \\int_{\\mathbb{R}^d} \\Gamma(x-y)f(y)\\,dy" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "\\Gamma(x) = \\begin{cases} \n\\frac{1}{2\\pi} \\log{ | x | }, & d=2, \\\\\n\\frac{1}{d(2-d)\\omega_d} | x | ^{2-d}, & d \\neq 2.\n\\end{cases} " }, { "math_id": 5, "text": "d = 3" }, { "math_id": 6, "text": "\\Gamma(x) = -1/(4\\pi |x|). " }, { "math_id": 7, "text": "\\Delta w = f, " }, { "math_id": 8, "text": "\\Gamma*\\mu(x) = \\int_{\\mathbb{R}^d}\\Gamma(x-y) \\, d\\mu(y)" }, { "math_id": 9, "text": "\\Delta w = \\mu " }, { "math_id": 10, "text": "f*\\Gamma(x) =\\lambda \\Gamma(x),\\quad \\lambda=\\int_{\\mathbb{R}^d} f(y)\\,dy." } ]
https://en.wikipedia.org/wiki?curid=814816
8149170
Graph embedding
Embedding a graph in a topological space, often Euclidean In topological graph theory, an embedding (also spelled imbedding) of a graph formula_0 on a surface formula_1 is a representation of formula_0 on formula_1 in which points of formula_1 are associated with vertices and simple arcs (homeomorphic images of formula_2) are associated with edges in such a way that: Here a surface is a connected formula_5-manifold. Informally, an embedding of a graph into a surface is a drawing of the graph on the surface in such a way that its edges may intersect only at their endpoints. It is well known that any finite graph can be embedded in 3-dimensional Euclidean space formula_6. A planar graph is one that can be embedded in 2-dimensional Euclidean space formula_7 Often, an embedding is regarded as an equivalence class (under homeomorphisms of formula_1) of representations of the kind just described. Some authors define a weaker version of the definition of "graph embedding" by omitting the non-intersection condition for edges. In such contexts the stricter definition is described as "non-crossing graph embedding". This article deals only with the strict definition of graph embedding. The weaker definition is discussed in the articles "graph drawing" and "crossing number". Terminology. If a graph formula_0 is embedded on a closed surface formula_1, the complement of the union of the points and arcs associated with the vertices and edges of formula_0 is a family of regions (or faces). A 2-cell embedding, cellular embedding or map is an embedding in which every face is homeomorphic to an open disk. A closed 2-cell embedding is an embedding in which the closure of every face is homeomorphic to a closed disk. The genus of a graph is the minimal integer formula_8 such that the graph can be embedded in a surface of genus formula_8. In particular, a planar graph has genus formula_9, because it can be drawn on a sphere without self-crossing. A graph that can be embedded on a torus is called a toroidal graph. The non-orientable genus of a graph is the minimal integer formula_8 such that the graph can be embedded in a non-orientable surface of (non-orientable) genus formula_8. The Euler genus of a graph is the minimal integer formula_8 such that the graph can be embedded in an orientable surface of (orientable) genus formula_10 or in a non-orientable surface of (non-orientable) genus formula_8. A graph is orientably simple if its Euler genus is smaller than its non-orientable genus. The maximum genus of a graph is the maximal integer formula_8 such that the graph can be formula_5-cell embedded in an orientable surface of genus formula_8. Combinatorial embedding. An embedded graph uniquely defines cyclic orders of edges incident to the same vertex. The set of all these cyclic orders is called a rotation system. Embeddings with the same rotation system are considered to be equivalent and the corresponding equivalence class of embeddings is called combinatorial embedding (as opposed to the term topological embedding, which refers to the previous definition in terms of points and curves). Sometimes, the rotation system itself is called a "combinatorial embedding". An embedded graph also defines natural cyclic orders of edges which constitutes the boundaries of the faces of the embedding. However handling these face-based orders is less straightforward, since in some cases some edges may be traversed twice along a face boundary. For example this is always the case for embeddings of trees, which have a single face. 
To overcome this combinatorial nuisance, one may consider that every edge is "split" lengthwise in two "half-edges", or "sides". Under this convention in all face boundary traversals each half-edge is traversed only once and the two half-edges of the same edge are always traversed in opposite directions. Other equivalent representations for cellular embeddings include the ribbon graph, a topological space formed by gluing together topological disks for the vertices and edges of an embedded graph, and the graph-encoded map, an edge-colored cubic graph with four vertices for each edge of the embedded graph. Computational complexity. The problem of finding the graph genus is NP-hard (the problem of determining whether an formula_8-vertex graph has genus formula_11 is NP-complete). At the same time, the graph genus problem is fixed-parameter tractable, i.e., polynomial time algorithms are known to check whether a graph can be embedded into a surface of a given fixed genus as well as to find the embedding. The first breakthrough in this respect happened in 1979, when algorithms of time complexity "O"("n""O"("g")) were independently submitted to the Annual ACM Symposium on Theory of Computing: one by I. Filotti and G.L. Miller and another one by John Reif. Their approaches were quite different, but upon the suggestion of the program committee they presented a joint paper. However, Wendy Myrvold and William Kocay proved in 2011 that the algorithm given by Filotti, Miller and Reif was incorrect. In 1999 it was reported that the fixed-genus case can be solved in time linear in the graph size and doubly exponential in the genus. Embeddings of graphs into higher-dimensional spaces. It is known that any finite graph can be embedded into a three-dimensional space. One method for doing this is to place the points on any line in space and to draw the edges as curves each of which lies in a distinct halfplane, with all halfplanes having that line as their common boundary. An embedding like this in which the edges are drawn on halfplanes is called a book embedding of the graph. This metaphor comes from imagining that each of the planes where an edge is drawn is like a page of a book. It was observed that in fact several edges may be drawn in the same "page"; the "book thickness" of the graph is the minimum number of halfplanes needed for such a drawing. Alternatively, any finite graph can be drawn with straight-line edges in three dimensions without crossings by placing its vertices in general position so that no four are coplanar. For instance, this may be achieved by placing the "i"th vertex at the point ("i","i"2,"i"3) of the moment curve. An embedding of a graph into three-dimensional space in which no two of the cycles are topologically linked is called a linkless embedding. A graph has a linkless embedding if and only if it does not have one of the seven graphs of the Petersen family as a minor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
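A rotation system determines the faces of a cellular embedding, and therefore its genus, through the Euler formula "V" − "E" + "F" = 2 − 2"g" for cellular embeddings in orientable surfaces. The following Python sketch (illustrative; the two rotation systems for the complete graph "K"4 are examples chosen here, not taken from the text above) traces the faces of a combinatorial embedding and computes the genus:
 # a rotation system: for every vertex, the cyclic order of its neighbours
 def faces(rotation):
     # successor of neighbour u in the cyclic order around vertex v
     succ = {}
     for v, neighbours in rotation.items():
         for i, u in enumerate(neighbours):
             succ[(v, u)] = neighbours[(i + 1) % len(neighbours)]
     darts = set(succ)                       # every directed edge (dart) exactly once
     all_faces = []
     while darts:
         start = dart = next(iter(darts))
         face = []
         while True:
             face.append(dart)
             darts.remove(dart)
             u, v = dart
             dart = (v, succ[(v, u)])        # next dart along the same face boundary
             if dart == start:
                 break
         all_faces.append(face)
     return all_faces

 def genus(rotation):
     V = len(rotation)
     E = sum(len(nbrs) for nbrs in rotation.values()) // 2
     F = len(faces(rotation))
     return (2 - V + E - F) // 2             # from V - E + F = 2 - 2g

 # K4 with a planar rotation system (counter-clockwise neighbour orders)
 print(genus({0: [1, 3, 2], 1: [2, 3, 0], 2: [0, 3, 1], 3: [2, 0, 1]}))  # 0
 # the same graph with a different rotation system embeds on the torus
 print(genus({0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}))  # 1
Changing only the cyclic orders at the vertices changes the set of faces, and with it the surface on which the embedding lives, which is exactly the information captured by a combinatorial embedding.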
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "[0,1]" }, { "math_id": 3, "text": "e" }, { "math_id": 4, "text": "e," }, { "math_id": 5, "text": "2" }, { "math_id": 6, "text": "\\mathbb{R}^3" }, { "math_id": 7, "text": "\\mathbb{R}^2." }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "0" }, { "math_id": 10, "text": "n/2" }, { "math_id": 11, "text": "g" } ]
https://en.wikipedia.org/wiki?curid=8149170
81511
Stall (fluid dynamics)
Abrupt reduction in lift due to flow separation In fluid dynamics, a stall is a reduction in the lift coefficient generated by a foil as angle of attack exceeds its critical value. The critical angle of attack is typically about 15°, but it may vary significantly depending on the fluid, foil – including its shape, size, and finish – and Reynolds number. Stalls in fixed-wing aircraft are often experienced as a sudden reduction in lift. It may be caused either by the pilot increasing the wing's angle of attack or by a decrease in the critical angle of attack. The latter may be due to slowing down (below stall speed) or the accretion of ice on the wings (especially if the ice is rough). A stall does not mean that the engine(s) have stopped working, or that the aircraft has stopped moving—the effect is the same even in an unpowered glider aircraft. Vectored thrust in aircraft is used to maintain altitude or controlled flight with wings stalled by replacing lost wing lift with engine or propeller thrust, thereby giving rise to post-stall technology. Because stalls are most commonly discussed in connection with aviation, this article discusses stalls as they relate mainly to aircraft, in particular fixed-wing aircraft. The principles of stall discussed here translate to foils in other fluids as well. Formal definition. A stall is a condition in aerodynamics and aviation such that if the angle of attack on an aircraft increases beyond a certain point, then lift begins to decrease. The angle at which this occurs is called the "critical angle of attack". If the angle of attack increases beyond the critical value, the lift decreases and the aircraft descends, further increasing the angle of attack and causing further loss of lift. The critical angle of attack is dependent upon the airfoil section or profile of the wing, its planform, its aspect ratio, and other factors, but is typically in the range of 8 to 20 degrees relative to the incoming wind (relative wind) for most subsonic airfoils. The critical angle of attack is the angle of attack on the lift coefficient versus angle-of-attack (Cl~alpha) curve at which the maximum lift coefficient occurs. Stalling is caused by flow separation which, in turn, is caused by the air flowing against a rising pressure. Whitford describes three types of stall: trailing-edge, leading-edge and thin-aerofoil, each with distinctive Cl~alpha features. For the trailing-edge stall, separation begins at small angles of attack near the trailing edge of the wing while the rest of the flow over the wing remains attached. As angle of attack increases, the separated regions on the top of the wing increase in size as the flow separation moves forward, and this hinders the ability of the wing to create lift. This is shown by the reduction in lift-slope on a Cl~alpha curve as the lift nears its maximum value. The separated flow usually causes buffeting. Beyond the critical angle of attack, separated flow is so dominant that additional increases in angle of attack cause the lift to fall from its peak value. Piston-engined and early jet transports had very good stall behaviour with pre-stall buffet warning and, if ignored, a straight nose-drop for a natural recovery. Wing developments that came with the introduction of turbo-prop engines introduced unacceptable stall behaviour. Leading-edge developments on high-lift wings, and the introduction of rear-mounted engines and high-set tailplanes on the next generation of jet transports, also introduced unacceptable stall behaviour. 
The probability of achieving the stall speed inadvertently, a potentially hazardous event, had been calculated, in 1965, at about once in every 100,000 flights, often enough to justify the cost of development of warning devices, such as stick shakers, and devices to automatically provide an adequate nose-down pitch, such as stick pushers. When the mean angle of attack of the wings is beyond the stall, a spin, which is an autorotation of a stalled wing, may develop. A spin follows departures in roll, yaw and pitch from balanced flight. For example, a roll is naturally damped with an unstalled wing, but with wings stalled the damping moment is replaced with a propelling moment. Variation of lift with angle of attack. The graph shows that the greatest amount of lift is produced as the critical angle of attack is reached (which in early-20th century aviation was called the "burble point"). This angle is 17.5 degrees in this case, but it varies from airfoil to airfoil. In particular, for aerodynamically thick airfoils (thickness to chord ratios of around 10%), the critical angle is higher than with a thin airfoil of the same camber. Symmetric airfoils have lower critical angles (but also work efficiently in inverted flight). The graph shows that, as the angle of attack exceeds the critical angle, the lift produced by the airfoil decreases. The information in a graph of this kind is gathered using a model of the airfoil in a wind tunnel. Because aircraft models are normally used, rather than full-size machines, special care is needed to make sure that data is taken in the same Reynolds number regime (or scale speed) as in free flight. The separation of flow from the upper wing surface at high angles of attack is quite different at low Reynolds number from that at the high Reynolds numbers of real aircraft. In particular, at high Reynolds numbers the flow tends to stay attached to the airfoil for longer because the inertial forces are dominant with respect to the viscous forces, which are responsible for the flow separation that ultimately leads to the aerodynamic stall. For this reason, wind tunnel tests carried out at lower speeds and on smaller-scale models of the real-life counterparts often tend to underestimate the aerodynamic stall angle of attack. High-pressure wind tunnels are one solution to this problem. In general, steady operation of an aircraft at an angle of attack above the critical angle is not possible because, after exceeding the critical angle, the loss of lift from the wing causes the nose of the aircraft to fall, reducing the angle of attack again. This nose drop, independent of control inputs, indicates the pilot has actually stalled the aircraft. This graph shows the stall angle, yet in practice most pilot operating handbooks (POH) or generic flight manuals describe stalling in terms of airspeed. This is because all aircraft are equipped with an airspeed indicator, but fewer aircraft have an angle of attack indicator. An aircraft's stalling speed is published by the manufacturer (and is required for certification by flight testing) for a range of weights and flap positions, but the stalling angle of attack is not published. As speed reduces, angle of attack has to increase to keep lift constant until the critical angle is reached. The airspeed at which this angle is reached is the (1g, unaccelerated) stalling speed of the aircraft in that particular configuration. Deploying flaps/slats decreases the stall speed to allow the aircraft to take off and land at a lower speed.
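The dependence of the published stalling speed on weight and configuration follows from the standard lift equation, L = ½ρV²SCL (a textbook relation, not quoted in this article): at the stall, lift equals weight with the lift coefficient at its maximum for the configuration. The Python sketch below illustrates this; the wing area, masses and lift coefficients are illustrative assumptions, not figures from the article.

import math

RHO = 1.225  # sea-level air density, kg/m^3

def stall_speed(weight_n, wing_area_m2, cl_max):
    # 1g stall speed from L = 0.5 * rho * V^2 * S * CL, solved with L = W.
    return math.sqrt(2.0 * weight_n / (RHO * wing_area_m2 * cl_max))

# Illustrative light-aircraft numbers (assumptions, not data from the article).
wing_area = 16.2             # m^2
cl_clean, cl_flaps = 1.5, 2.0

for mass_kg in (900, 1100):
    weight = mass_kg * 9.81
    v_clean = stall_speed(weight, wing_area, cl_clean)   # flaps retracted
    v_flaps = stall_speed(weight, wing_area, cl_flaps)   # flaps deployed
    print(f"{mass_kg} kg: clean {v_clean:4.1f} m/s, flaps {v_flaps:4.1f} m/s")

The higher weight raises the computed stall speed, while deploying flaps (a larger usable maximum lift coefficient) lowers it, consistent with the behaviour described above.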
Aerodynamic description. Fixed-wing aircraft. A fixed-wing aircraft can be made to stall in any pitch attitude or bank angle or at any airspeed but deliberate stalling is commonly practiced by reducing the speed to the unaccelerated stall speed, at a safe altitude. Unaccelerated (1g) stall speed varies on different fixed-wing aircraft and is represented by colour codes on the airspeed indicator. As the plane flies at this speed, the angle of attack must be increased to prevent any loss of altitude or gain in airspeed (which corresponds to the stall angle described above). The pilot will notice the flight controls have become less responsive and may also notice some buffeting, a result of the turbulent air separated from the wing hitting the tail of the aircraft. In most light aircraft, as the stall is reached, the aircraft will start to descend (because the wing is no longer producing enough lift to support the aircraft's weight) and the nose will pitch down. Recovery from the stall involves lowering the aircraft nose, to decrease the angle of attack and increase the air speed, until smooth air-flow over the wing is restored. Normal flight can be resumed once recovery is complete. The maneuver is normally quite safe, and, if correctly handled, leads to only a small loss in altitude. It is taught and practised in order for pilots to recognize, avoid, and recover from stalling the aircraft. A pilot is required to demonstrate competency in controlling an aircraft during and after a stall for certification in the United States, and it is a routine maneuver for pilots when getting to know the handling of an unfamiliar aircraft type. The only dangerous aspect of a stall is a lack of altitude for recovery. A special form of asymmetric stall in which the aircraft also rotates about its yaw axis is called a spin. A spin can occur if an aircraft is stalled and there is an asymmetric yawing moment applied to it. This yawing moment can be aerodynamic (sideslip angle, rudder, adverse yaw from the ailerons), thrust related (p-factor, one engine inoperative on a multi-engine non-centreline thrust aircraft), or from less likely sources such as severe turbulence. The net effect is that one wing is stalled before the other and the aircraft descends rapidly while rotating, and some aircraft cannot recover from this condition without correct pilot control inputs (which must stop yaw) and loading. A new solution to the problem of difficult (or impossible) stall-spin recovery is provided by the ballistic parachute recovery system. The most common stall-spin scenarios occur on takeoff (departure stall) and during landing (base to final turn) because of insufficient airspeed during these maneuvers. Stalls also occur during a go-around manoeuvre if the pilot does not properly respond to the out-of-trim situation resulting from the transition from low power setting to high power setting at low speed. Stall speed is increased when the wing surfaces are contaminated with ice or frost, which creates a rougher surface and a heavier airframe due to ice accumulation. Stalls occur not only at slow airspeed, but at any speed when the wings exceed their critical angle of attack. Attempting to increase the angle of attack at 1g by moving the control column back normally causes the aircraft to climb. However, aircraft often experience higher g-forces, such as when turning steeply or pulling out of a dive.
In these cases, the wings are already operating at a higher angle of attack to create the necessary force (derived from lift) to accelerate in the desired direction. Increasing the g-loading still further, by pulling back on the controls, can cause the stalling angle to be exceeded, even though the aircraft is flying at a high speed. These "high-speed stalls" produce the same buffeting characteristics as 1g stalls and can also initiate a spin if there is also any yawing. Characteristics. Different aircraft types have different stalling characteristics but they only have to be good enough to satisfy their particular Airworthiness authority. For example, the Short Belfast heavy freighter had a marginal nose drop which was acceptable to the Royal Air Force. When the aircraft were sold to a civil operator they had to be fitted with a stick pusher to meet the civil requirements. Some aircraft may naturally have very good behaviour well beyond what is required. For example, first generation jet transports have been described as having an immaculate nose drop at the stall. Loss of lift on one wing is acceptable as long as the roll, including during stall recovery, doesn't exceed about 20 degrees, or in turning flight the roll shall not exceed 90 degrees bank. If pre-stall warning followed by nose drop and limited wing drop are naturally not present or are deemed to be unacceptably marginal by an Airworthiness authority, the stalling behaviour has to be made good enough with airframe modifications or devices such as a stick shaker and pusher. These are described in "Warning and safety devices". Stall speeds. Stalls depend only on angle of attack, not airspeed. However, the slower an aircraft flies, the greater the angle of attack it needs to produce lift equal to the aircraft's weight. As the speed decreases further, at some point this angle will be equal to the critical (stall) angle of attack. This speed is called the "stall speed". An aircraft flying at its stall speed cannot climb, and an aircraft flying below its stall speed cannot stop descending. Any attempt to do so by increasing angle of attack, without first increasing airspeed, will result in a stall. The actual stall speed will vary depending on the airplane's weight, altitude, configuration, and vertical and lateral acceleration. Propeller slipstream reduces the stall speed by energizing the flow over the wings. Speed definitions vary and include VS, the stall speed or minimum steady flight speed at which the aircraft is still controllable; VS0, the stall speed or minimum steady flight speed in the landing configuration; and VS1, the stall speed or minimum steady flight speed in a specific configuration (typically with flaps retracted). An airspeed indicator, for the purpose of flight-testing, may have the following markings: the bottom of the white arc indicates VS0 at maximum weight, while the bottom of the green arc indicates VS1 at maximum weight. While an aircraft's VS speed is computed by design, its VS0 and VS1 speeds must be demonstrated empirically by flight testing. In accelerated and turning flight. The normal stall speed, specified by the VS values above, always refers to straight and level flight, where the load factor is equal to 1g. However, if the aircraft is turning or pulling up from a dive, additional lift is required to provide the vertical or lateral acceleration, and so the stall speed is higher. An accelerated stall is a stall that occurs under such conditions.
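The relation is quantified in the passage that follows: in a level banked turn the load factor is the secant of the bank angle, and the stall speed rises with the square root of the load factor. The short Python sketch below works through a few bank angles; the 1g stall speed used is an illustrative value, not one taken from the article.

import math

def load_factor(bank_deg):
    # Level-turn load factor n = 1 / cos(bank angle), i.e. the secant relation.
    return 1.0 / math.cos(math.radians(bank_deg))

def accelerated_stall_speed(vs_1g, bank_deg):
    # Stall speed in a level banked turn: Vst = Vs * sqrt(n).
    return vs_1g * math.sqrt(load_factor(bank_deg))

vs = 50.0  # illustrative 1g stall speed, knots (assumed value)
for bank in (0, 30, 45, 60):
    n = load_factor(bank)
    vst = accelerated_stall_speed(vs, bank)
    print(f"bank {bank:2d} deg: n = {n:.2f}, sqrt(n) = {math.sqrt(n):.2f}, Vst = {vst:4.1f} kt")

At 45° of bank the computed factor is about 1.19, matching the statement below that Vst is 19% higher than Vs; at 60° of bank the load factor is 2 and the stall speed is about 41% higher.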
In a banked turn, the lift required is equal to the weight of the aircraft plus extra lift to provide the centripetal force necessary to perform the turn: formula_0 where: formula_1 = lift formula_2 = load factor (greater than 1 in a turn) formula_3 = weight of the aircraft To achieve the extra lift, the lift coefficient, and so the angle of attack, will have to be higher than it would be in straight and level flight at the same speed. Therefore, given that the stall always occurs at the same critical angle of attack, by increasing the load factor (e.g. by tightening the turn) the critical angle will be reached at a higher airspeed: formula_4 where: formula_5 = stall speed formula_6 = stall speed of the aircraft in straight, level flight formula_2 = load factor The relation between the angle of bank and the square root of the load factor derives from the trigonometric relation (secant) between formula_1 and formula_3. For example, in a turn with a bank angle of 45°, Vst is 19% higher than Vs. According to Federal Aviation Administration (FAA) terminology, the above example illustrates a so-called turning flight stall, while the term "accelerated" is used to indicate an "accelerated turning stall" only, that is, a turning flight stall where the airspeed decreases at a given rate. The tendency of powerful propeller aircraft to roll in reaction to engine torque creates a risk of accelerated stalls. When an aircraft such as a Mitsubishi MU-2 is flying close to its stall speed, the sudden application of full power may cause it to roll, creating the same aerodynamic conditions that induce an accelerated stall in turning flight even if the pilot did not deliberately initiate a turn. Pilots of such aircraft are trained to avoid sudden and drastic increases in power at low altitude and low airspeed, as an accelerated stall under these conditions is very difficult to safely recover from. A notable example of an air accident involving a low-altitude turning flight stall is the 1994 Fairchild Air Force Base B-52 crash. Types. Dynamic stall. Dynamic stall is a non-linear unsteady aerodynamic effect that occurs when airfoils rapidly change the angle of attack. The rapid change can cause a strong vortex to be shed from the leading edge of the aerofoil, and travel backwards above the wing. The vortex, containing high-velocity airflows, briefly increases the lift produced by the wing. As soon as it passes behind the trailing edge, however, the lift reduces dramatically, and the wing is in normal stall. Dynamic stall is an effect most associated with helicopters and flapping wings, though it also occurs in wind turbines and in gusting airflow. During forward flight, some regions of a helicopter blade may incur flow that reverses (compared to the direction of blade movement), and thus includes rapidly changing angles of attack. Oscillating (flapping) wings, such as those of insects like the bumblebee, may rely almost entirely on dynamic stall for lift production, provided the oscillations are fast compared to the speed of flight, and the angle of the wing changes rapidly compared to airflow direction. Stall delay can occur on airfoils subject to a high angle of attack and a three-dimensional flow. When the angle of attack on an airfoil is increasing rapidly, the flow will remain substantially attached to the airfoil to a significantly higher angle of attack than can be achieved in steady-state conditions.
As a result, the stall is delayed momentarily and a lift coefficient significantly higher than the steady-state maximum is achieved. The effect was first noticed on propellers. Deep stall. A "deep stall" (or "super-stall") is a dangerous type of stall that affects certain aircraft designs, notably jet aircraft with a T-tail configuration and rear-mounted engines. In these designs, the turbulent wake of a stalled main wing, nacelle-pylon wakes and the wake from the fuselage "blanket" the horizontal stabilizer, rendering the elevators ineffective and preventing the aircraft from recovering from the stall. Aircraft with rear-mounted nacelles may also exhibit a loss of thrust. T-tail propeller aircraft are generally resistant to deep stalls, because the prop wash increases airflow over the wing root, but may be fitted with a precautionary vertical tail booster during flight testing, as happened with the A400M. Trubshaw gives a broad definition of deep stall as penetrating to such angles of attack formula_7 that pitch control effectiveness is reduced by the wing and nacelle wakes. He also gives a definition that relates deep stall to a locked-in condition where recovery is impossible. This is a single value of formula_7, for a given aircraft configuration, where there is no pitching moment, i.e. a trim point. Typical values both for the range of deep stall, as defined above, and the locked-in trim point are given for the Douglas DC-9 Series 10 by Schaufele. These values are from wind-tunnel tests for an early design. The final design had no locked-in trim point, so recovery from the deep stall region was possible, as required to meet certification rules. Normal stall beginning at the "g break" (sudden decrease of the vertical load factor) was at formula_8, deep stall started at about 30°, and the locked-in unrecoverable trim point was at 47°. The very high formula_7 for a deep stall locked-in condition occurs well beyond the normal stall but can be attained very rapidly, as the aircraft is unstable beyond the normal stall and requires immediate action to arrest it. The loss of lift causes high sink rates, which, together with the low forward speed at the normal stall, give a high formula_7 with little or no rotation of the aircraft. BAC 1-11 G-ASHG, during stall flight tests before the type was modified to prevent a locked-in deep-stall condition, descended at a very high rate and struck the ground in a flat attitude, moving only forward after the initial impact. Sketches showing how the wing wake blankets the tail may be misleading if they imply that deep stall requires a high body angle. Taylor and Ray show how the aircraft attitude in the deep stall is relatively flat, even less than during the normal stall, with very high negative flight-path angles. Effects similar to deep stall had been known to occur on some aircraft designs before the term was coined. A prototype Gloster Javelin (serial "WD808") was lost in a crash on 11 June 1953 to a "locked-in" stall. However, Waterton states that the trimming tailplane was found to be the wrong way for recovery. Low-speed handling tests were being done to assess a new wing. Handley Page Victor "XL159" was lost to a "stable stall" on 23 March 1962. It had been clearing the fixed droop leading edge with the test being stall approach, landing configuration, C of G aft. The brake parachute had not been streamed, as it may have hindered rear crew escape.
The name "deep stall" first came into widespread use after the crash of the prototype BAC 1-11 G-ASHG on 22 October 1963, which killed its crew. This led to changes to the aircraft, including the installation of a stick shaker (see below) to clearly warn the pilot of an impending stall. Stick shakers are now a standard part of commercial airliners. Nevertheless, the problem continues to cause accidents; on 3 June 1966, a Hawker Siddeley Trident (G-ARPY), was lost to deep stall; deep stall is suspected to be cause of another Trident (the British European Airways Flight 548 "G-ARPI") crash – known as the "Staines Disaster" – on 18 June 1972, when the crew failed to notice the conditions and had disabled the stall-recovery system. On 3 April 1980, a prototype of the Canadair Challenger business jet crashed after initially entering a deep stall from 17,000 ft and having both engines flame-out. It recovered from the deep stall after deploying the anti-spin parachute but crashed after being unable to jettison the chute or relight the engines. One of the test pilots was unable to escape from the aircraft in time and was killed. On 26 July 1993, a Canadair CRJ-100 was lost in flight testing due to a deep stall. It has been reported that a Boeing 727 entered a deep stall in a flight test, but the pilot was able to rock the airplane to increasingly higher bank angles until the nose finally fell through and normal control response was recovered. The crash of West Caribbean Airways Flight 708 in 2005 was also attributed to a deep stall. Deep stalls can occur at apparently normal pitch attitudes, if the aircraft is descending quickly enough. The airflow is coming from below, so the angle of attack is increased. Early speculation on reasons for the crash of Air France Flight 447 blamed an unrecoverable deep stall, since it descended in an almost flat attitude (15°) at an angle of attack of 35° or more. However, it was held in a stalled glide by the pilots, who held the nose up amid all the confusion of what was actually happening to the aircraft. Canard-configured aircraft are also at risk of getting into a deep stall. Two Velocity aircraft crashed due to locked-in deep stalls. Testing revealed that the addition of leading-edge cuffs to the outboard wing prevented the aircraft from getting into a deep stall. The Piper Advanced Technologies PAT-1, N15PT, another canard-configured aircraft, also crashed in an accident attributed to a deep stall. Wind-tunnel testing of the design at the NASA Langley Research Center showed that it was vulnerable to a deep stall. In the early 1980s, a Schweizer SGS 1-36 sailplane was modified for NASA's controlled deep-stall flight program. Tip stall. Wing sweep and taper cause stalling at the tip of a wing before the root. The position of a swept wing along the fuselage has to be such that the lift from the wing root, well forward of the aircraft center of gravity (c.g.), must be balanced by the wing tip, well aft of the c.g. If the tip stalls first the balance of the aircraft is upset causing dangerous nose pitch up. Swept wings have to incorporate features which prevent pitch-up caused by premature tip stall. A swept wing has a higher lift coefficient on its outer panels than on the inner wing, causing them to reach their maximum lift capability first and to stall first. This is caused by the downwash pattern associated with swept/tapered wings. To delay tip stall the outboard wing is given washout to reduce its angle of attack. 
The root can also be modified with a suitable leading-edge and airfoil section to make sure it stalls before the tip. However, when taken beyond stalling incidence the tips may still become fully stalled before the inner wing despite initial separation occurring inboard. This causes pitch-up after the stall and entry to a super-stall on those aircraft with super-stall characteristics. Span-wise flow of the boundary layer is also present on swept wings and causes tip stall. The amount of boundary layer air flowing outboard can be reduced by generating vortices with a leading-edge device such as a fence, notch, saw tooth or a set of vortex generators behind the leading edge. Warning and safety devices. Fixed-wing aircraft can be equipped with devices to prevent or postpone a stall or to make it less (or in some cases more) severe, or to make recovery easier. Stall warning systems often involve inputs from a broad range of sensors and systems to include a dedicated angle of attack sensor. Blockage, damage, or inoperation of stall and angle of attack (AOA) probes can lead to unreliability of the stall warning and cause the stick pusher, overspeed warning, autopilot, and yaw damper to malfunction. If a forward canard is used for pitch control, rather than an aft tail, the canard is designed to meet the airflow at a slightly greater angle of attack than the wing. Therefore, when the aircraft pitch increases abnormally, the canard will usually stall first, causing the nose to drop and so preventing the wing from reaching its critical AOA. Thus, the risk of main-wing stalling is greatly reduced. However, if the main wing stalls, recovery becomes difficult, as the canard is more deeply stalled, and angle of attack increases rapidly. If an aft tail is used, the wing is designed to stall before the tail. In this case, the wing can be flown at higher lift coefficient (closer to stall) to produce more overall lift. Most military combat aircraft have an angle of attack indicator among the pilot's instruments, which lets the pilot know precisely how close to the stall point the aircraft is. Modern airliner instrumentation may also measure angle of attack, although this information may not be directly displayed on the pilot's display, instead driving a stall warning indicator or giving performance information to the flight computer (for fly-by-wire systems). Flight beyond the stall. As a wing stalls, aileron effectiveness is reduced, rendering the plane difficult to control and increasing the risk of a spin. Post stall, steady flight beyond the stalling angle (where the coefficient of lift is largest) requires engine thrust to replace lift, as well as alternative controls to replace the loss of effectiveness of the ailerons. Short-term stalls at 90–120° (e.g. Pugachev's cobra) are sometimes performed at airshows. The highest angle of attack in sustained flight so far demonstrated was 70° in the X-31 at the Dryden Flight Research Center. Sustained post-stall flight is a type of supermaneuverability. Spoilers. Except for flight training, airplane testing, and aerobatics, a stall is usually an undesirable event. Spoilers (sometimes called lift dumpers), however, are devices that are intentionally deployed to create a carefully controlled flow separation over part of an aircraft's wing to reduce the lift it generates, increase the drag, and allow the aircraft to descend more rapidly without gaining speed. Spoilers are also deployed asymmetrically (one wing only) to enhance roll control. 
Spoilers can also be used on aborted take-offs and after main wheel contact on landing to increase the aircraft's weight on its wheels for better braking action. Unlike powered airplanes, which can control descent by increasing or decreasing thrust, gliders have to increase drag to increase the rate of descent. In high-performance gliders, spoiler deployment is extensively used to control the approach to landing. Spoilers can also be thought of as "lift reducers" because they reduce the lift of the wing in which the spoiler resides. For example, an uncommanded roll to the left could be reversed by raising the right wing spoiler (or only a few of the spoilers present in large airliner wings). This has the advantage of avoiding the need to increase lift in the wing that is dropping (which may bring that wing closer to stalling). History. German aviator Otto Lilienthal died while flying in 1896 as the result of a stall. Wilbur Wright encountered stalls for the first time in 1901, while flying his second glider. Awareness of Lilienthal's accident and Wilbur's experience motivated the Wright Brothers to design their plane in "canard" configuration. This purportedly made recoveries from stalls easier and more gentle. The design allegedly saved the brothers' lives more than once. However, canard configurations, without careful design, can actually make a stall unrecoverable. The aircraft engineer Juan de la Cierva worked on his "Autogiro" project to develop a rotary wing aircraft which, he hoped, would be unable to stall and which therefore would be safer than aeroplanes. In developing the resulting "autogyro" aircraft, he solved many engineering problems which made the helicopter possible. Notes.
[ { "math_id": 0, "text": "L = nW" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "W" }, { "math_id": 4, "text": "V_\\text{st} = V_\\text{s} \\sqrt n" }, { "math_id": 5, "text": "V_\\text{st}" }, { "math_id": 6, "text": "V_\\text{s}" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "\\alpha = 18^\\circ" } ]
https://en.wikipedia.org/wiki?curid=81511
8151109
Polywell
Fusion reactor design The polywell is a proposed design for a fusion reactor using an electric and magnetic field to heat ions to fusion conditions. The design is related to the fusor, the high beta fusion reactor, the magnetic mirror, and the biconic cusp. A set of electromagnets generates a magnetic field that traps electrons. This creates a negative voltage, which attracts positive ions. As the ions accelerate towards the negative center, their kinetic energy rises. Ions that collide at high enough energies can fuse. Mechanism. Fusor Heating. A Farnsworth-Hirsch fusor consists of two wire cages, one inside the other, often referred to as grids, that are placed inside a vacuum chamber. The outer cage has a positive voltage versus the inner cage. A fuel, typically deuterium gas, is injected into this chamber. It is heated past its ionization temperature, making positive ions. The ions are positive and move towards the negative inner cage. Those that miss the wires of the inner cage fly through the center of the device at high speeds and can fly out the other side of the inner cage. As the ions move outward, a Coulomb force impels them back towards the center. Over time, a core of ionized gas can form inside the inner cage. Ions pass back and forth through the core until they strike either the grid or another nucleus. Most nucleus strikes do not result in fusion. Grid strikes can raise the temperature of the grid as well as erode it. These strikes conduct mass and energy away from the plasma, as well as spall off metal ions into the gas, which cools it. In fusors, the potential well is made with a wire cage. Because most of the ions and electrons fall onto the cage, fusors suffer from high conduction losses. Hence, no fusor has come close to energy break-even. Diamagnetic Plasma Trapping. The Polywell attempts to hold a diamagnetic plasma, a material which rejects the outside magnetic fields created by the electromagnets. This kind of behavior is not normal for fusing plasmas. Both the Polywell and the high beta fusion reactor presuppose that the plasma's self-generated field is so strong that it will reject the outside field. Bussard later called this type of confinement the "Wiffle-Ball". This analogy was used to describe electron trapping inside the field. Marbles can be trapped inside a Wiffle ball, a hollow, perforated sphere; if marbles are put inside, they can roll and sometimes escape through the holes in the sphere. The magnetic topology of a high-beta polywell acts similarly with electrons. In June 2014 EMC2 published a preprint providing (1) x-ray and (2) flux loop measurements indicating that the diamagnetic effect impacts the external field. According to Bussard, typical cusp leakage rate is such that an electron makes 5 to 8 passes before escaping through a cusp in a standard mirror confinement biconic cusp; 10 to 60 passes in a polywell under mirror confinement (low beta) that he called cusp confinement; and several thousand passes in Wiffle-Ball confinement (high beta). In February 2013, Lockheed Martin Skunk Works announced a new compact fusion machine, the high beta fusion reactor, that may be related to the biconic cusp and the polywell, and operating at "β" = 1. Other Trapping Mechanisms. Magnetic mirror. Magnetic mirror confinement dominates in low beta designs. Both ions and electrons moving from regions of low field strength into regions of high field strength can be reflected back. This is known as the magnetic mirror effect.
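In a simple magnetic mirror the fraction of particles that escapes is set by the mirror ratio: particles whose velocity pitch angle satisfies sin^2(theta) < B_min/B_max lie in the "loss cone" and are not reflected. The Python sketch below uses this textbook mirror relation (background material, not a formula given in this article) with assumed mirror ratios to show how a stronger mirror shrinks the escaping fraction of an isotropic population.

import math

def loss_cone_angle(mirror_ratio):
    # Loss-cone half-angle: sin^2(theta_c) = B_min / B_max = 1 / R_m.
    return math.degrees(math.asin(math.sqrt(1.0 / mirror_ratio)))

def escaping_fraction(mirror_ratio):
    # Fraction of an isotropic population inside the two loss cones: 1 - cos(theta_c).
    theta_c = math.asin(math.sqrt(1.0 / mirror_ratio))
    return 1.0 - math.cos(theta_c)

# Illustrative mirror ratios (assumed values, not measurements from a polywell).
for r_m in (2, 5, 10):
    print(f"R_m = {r_m:2d}: loss cone = {loss_cone_angle(r_m):5.1f} deg, "
          f"escaping fraction = {escaping_fraction(r_m):.2f}")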
The polywell's rings are arranged so the densest fields are on the outside, trapping electrons in the center. This can trap particles at low beta values. Cusp confinement. In high beta conditions, the machine may operate with cusp confinement. This is an improvement over the simpler magnetic mirror. The MaGrid has six point cusps, each located in the middle of a ring; and two highly modified line cusps, linking the eight corner cusps located at cube vertices. The key is that these two line cusps are much narrower than the single line cusp in magnetic mirror machines, so the net losses are less. The losses from the two line cusps are similar to or lower than those from the six face-centered point cusps. In 1955, Harold Grad theorized that a high-beta plasma pressure combined with a cusped magnetic field would improve plasma confinement. A diamagnetic plasma rejects the external fields and plugs the cusps. This system would be a much better trap. Cusped confinement was explored theoretically and experimentally. However, most cusped experiments failed and disappeared from national programs by 1980. Beta in Magnetic Traps. Magnetic fields exert a pressure on the plasma. Beta is the ratio of the plasma pressure to the magnetic pressure. It can be defined separately for electrons and ions. The polywell concerns itself only with the electron beta, whereas the ion beta is of greater interest within Tokamak and other neutral-plasma machines. The two vary by a very large ratio, because of the enormous difference in mass between an electron and any ion. Typically, in other devices the electron beta is neglected, as the ion beta determines more important plasma parameters. This is a significant point of confusion for scientists more familiar with more 'conventional' fusion plasma physics. Note that for the electron beta, only the electron number density and temperature are used, as both of these, but especially the latter, can vary significantly from the ion parameters at the same location. formula_0 Most experiments on polywells involve low-beta plasma regimes (where "β" < 1), where the plasma pressure is weak compared to the magnetic pressure. Several models describe magnetic trapping in polywells. Tests indicated that plasma confinement is enhanced in a magnetic cusp configuration when β (plasma pressure/magnetic field pressure) is of order unity. This enhancement is required for a fusion power reactor based on cusp confinement to be feasible. Design. The main problem with the fusor is that the inner cage conducts away too much energy and mass. The solution, suggested by Robert Bussard and Oleg Lavrentiev, was to replace the negative cage with a "virtual cathode" made of a cloud of electrons. A polywell consists of several parts, including the MaGrid (a set of ring electromagnets), electron guns that inject electrons, and a source of gas or ions to supply fuel. These are put inside a vacuum chamber. The magnetic energy density required to confine electrons is far smaller than that required to directly confine ions, as is done in other fusion projects such as ITER. Other behavior. Single-electron motion. As an electron enters a magnetic field, it feels a Lorentz force and corkscrews. The radius of this motion is the gyroradius. As it moves, it loses some energy as x-rays whenever its velocity changes. The electron spins faster and tighter in denser fields, as it enters the MaGrid. Inside the MaGrid, single electrons travel straight through the null point, due to their infinite gyroradius in regions of no magnetic field. Next, they head towards the edges of the MaGrid field and corkscrew tighter along the denser magnetic field lines.
This is typical electron cyclotron resonance motion. Their gyroradius shrinks, and when they hit a dense magnetic field they can be reflected by the magnetic mirror effect. Electron trapping has been measured in polywells with Langmuir probes. The polywell attempts to confine the ions and electrons through two different means, borrowed from fusors and magnetic mirrors. The electrons are easier to confine magnetically because they have so much less mass than the ions. The machine confines ions using an electric field in the same way a fusor confines the ions: in the polywell, the ions are attracted to the negative electron cloud in the center. In the fusor, they are attracted to a negative wire cage in the center. Plasma recirculation. Plasma recirculation would significantly improve the function of these machines. It has been argued that efficient recirculation is the only way they can be viable. Electrons or ions move through the device without striking a surface, reducing conduction losses. Bussard stressed this, specifically emphasizing that electrons need to move through all cusps of the machine. Models of energy distribution. As of 2015, it had not been determined conclusively what the ion or electron energy distribution is. The energy distribution of the plasma can be measured using a Langmuir probe. This probe absorbs charge from the plasma as its voltage changes, making an I-V curve. From this signal, the energy distribution can be calculated. The energy distribution both drives and is driven by several physical rates: the electron and ion loss rate, the rate of energy loss by radiation, the fusion rate and the rate of non-fusion collisions. The collision rate may vary greatly across the system. Critics claimed that both the electron and ion populations have bell curve distributions; that is, that the plasma is thermalized. The justification given is that the longer the electrons and ions move inside the polywell, the more interactions they undergo, leading to thermalization. This model for the ion distribution is shown in Figure 5. Supporters modeled a nonthermal plasma. The justification is the high amount of scattering in the device center. Without a magnetic field, electrons scatter in this region. They claimed that this scattering leads to a monoenergetic distribution, like the one shown in Figure 6. This argument is supported by two-dimensional particle-in-cell simulations. Bussard argued that constant electron injection would have the same effect. Such a distribution would help maintain a negative voltage in the center, improving performance. Considerations for net power. Fuel type. Nuclear fusion refers to nuclear reactions that combine lighter nuclei to become heavier nuclei. All chemical elements can be fused; for elements with fewer protons than iron, this process changes mass into energy that can potentially be captured to provide fusion power. The probability of a fusion reaction occurring is controlled by the cross section of the fuel, which is in turn a function of its temperature. The easiest nuclei to fuse are deuterium and tritium. Their fusion occurs when the ions reach 4 keV (kiloelectronvolts), or about 45 million kelvins. The Polywell would achieve this by accelerating an ion with a charge of 1 down a 4,000 volt electric field. The high cost, short half-life and radioactivity of tritium make it difficult to work with. The second easiest reaction is to fuse deuterium with itself. Because of its low cost, deuterium is commonly used by Fusor amateurs.
Bussard's polywell experiments were performed using this fuel. Fusion of deuterium or tritium produces a fast neutron, and therefore produces radioactive waste. Bussard's choice was to fuse boron-11 with protons; this reaction is aneutronic (does not produce neutrons). An advantage of p-11B as a fusion fuel is that the primary reactor output would be energetic alpha particles, which can be directly converted to electricity at high efficiency using direct energy conversion. Direct conversion has achieved a 48% power efficiency against 80–90% theoretical efficiency. Lawson criterion. The energy generated by fusion inside a hot plasma cloud can be found with the following equation: formula_1 where: Energy varies with temperature, density, collision speed and fuel. To reach net power production, reactions must occur rapidly enough to make up for energy losses. Plasma clouds lose energy through conduction and radiation. Conduction is when ions, electrons or neutrals touch a surface and escape. Energy is lost with the particle. Radiation is when energy escapes as light. Radiation increases with temperature. To get net power from fusion, these losses must be overcome. This leads to an equation for power output. Net Power = Efficiency × (Fusion − Radiation Loss − Conduction Loss) Lawson used this equation to estimate conditions for net power based on a Maxwellian cloud. However, the Lawson criterion does not apply for Polywells if Bussard's conjecture that the plasma is nonthermal is correct. Lawson stated in his founding report: "It is of course easy to postulate systems in which the velocity distribution of the particle is not Maxwellian. These systems are outside the scope of this report." He also ruled out the possibility of a nonthermal plasma to ignite: "Nothing may be gained by using a system in which electrons are at a lower temperature [than ions]. The energy loss in such a system by transfer to the electrons will always be greater than the energy which would be radiated by the electrons if they were the [same] temperature." Criticism. There are several general criticisms of the Polywell: Rider Critique. Todd Rider (a biological engineer and former student of plasma physics) calculated that X-ray radiation losses with this fuel would exceed fusion power production by at least 20%. Rider's model used the following assumptions: Based on these assumptions, Rider used general equations to estimate the rates of different physical effects. These included the loss of ions to up-scattering, the ion thermalization rate, the energy loss due to X-ray radiation and the fusion rate. His conclusions were that the device suffered from "fundamental flaws". By contrast, Bussard argued that the plasma had a different structure, temperature distribution and well profile. These characteristics have not been fully measured and are central to the device's feasibility. Bussard's calculations indicated that the bremsstrahlung losses would be much smaller. According to Bussard the high speed and therefore low cross section for Coulomb collisions of the ions in the core makes thermalizing collisions very unlikely, while the low speed at the rim means that thermalization there has almost no impact on ion velocity in the core. Bussard calculated that a polywell reactor with a radius of 1.5 meters would produce net power fusing deuterium. 
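The Lawson-style power balance quoted earlier in this section, Net Power = Efficiency × (Fusion − Radiation Loss − Conduction Loss), can be illustrated numerically. The Python sketch below uses the standard volumetric source term n_D · n_T · <σv> · E for deuterium-tritium fuel with an approximate textbook reactivity; the densities, volume, efficiency and loss terms are placeholder assumptions for illustration, not polywell data or figures from this article.

# All plasma parameters below are illustrative placeholders, not polywell data.
E_FUSION_J = 17.6e6 * 1.602e-19   # energy released per D-T fusion event, J (17.6 MeV)

def fusion_power_density(n_d, n_t, sigma_v):
    # Volumetric fusion power P = n_D * n_T * <sigma v> * E, in W/m^3.
    return n_d * n_t * sigma_v * E_FUSION_J

n_d = n_t = 1.0e20      # fuel densities, m^-3 (assumed)
sigma_v = 1.1e-22       # approximate D-T reactivity near 10 keV, m^3/s (textbook value)
volume = 1.0            # plasma volume, m^3 (assumed)
efficiency = 0.4        # energy conversion efficiency (assumed)

p_fusion = fusion_power_density(n_d, n_t, sigma_v) * volume
p_radiation = 0.2e6     # W, placeholder radiation (bremsstrahlung) loss
p_conduction = 0.5e6    # W, placeholder conduction (particle) loss

net = efficiency * (p_fusion - p_radiation - p_conduction)
print(f"fusion {p_fusion/1e6:.2f} MW, net {net/1e6:.2f} MW")

Net power is positive only when the fusion term outweighs the radiation and conduction losses, which is the balance Lawson's analysis formalizes for a Maxwellian plasma.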
Other studies disproved some of the assumptions made by Rider and Nevins, arguing the real fusion rate and the associated recirculating power (needed to overcome the thermalizing effect and sustain the non-Maxwellian ion profile) could be estimated only with a self-consistent collisional treatment of the ion distribution function, lacking in Rider's work. Energy capture. It has been proposed that energy may be extracted from polywells using heat capture or, in the case of aneutronic fusion like D-3He or "p"-11B, direct energy conversion, though that scheme faces challenges. The energetic alpha particles (up to a few MeV) generated by the aneutronic fusion reaction would exit the MaGrid through the six axial cusps as cones (spread ion beams). Direct conversion collectors inside the vacuum chamber would convert the alpha particles' kinetic energy to a high-voltage direct current. The alpha particles must slow down before they contact the collector plates to realize high conversion efficiency. In experiments, direct conversion has demonstrated a conversion efficiency of 48%. History. In the late 1960s several investigations studied polyhedral magnetic fields as a possibility to confine a fusion plasma. The first proposal to combine this configuration with an electrostatic potential well in order to improve electron confinement was made by Oleg Lavrentiev in 1975. The idea was picked up by Robert Bussard in 1983. His 1989 patent application cited Lavrentiev, although in 2006 he appears to claim to have (re)discovered the idea independently. HEPS. Research was funded first by the Defense Threat Reduction Agency beginning in 1987 and later by DARPA. This funding resulted in a machine known as the high energy power source (HEPS) experiment. It was built by Directed Technologies Inc. This machine was a large (1.9 m across) machine, with the rings outside the vacuum chamber. This machine performed poorly because the magnetic fields sent electrons into the walls, driving up conduction losses. These losses were attributed to poor electron injection. The US Navy began providing low-level funding to the project in 1992. Krall published results in 1994. Bussard, who had been an advocate for Tokamak research, turned to advocate for this concept, so that the idea became associated with his name. In 1995 he sent a letter to the US Congress stating that he had only supported Tokamaks in order to get fusion research sponsored by the government, but he now believed that there were better alternatives. EMC2, Inc.. Bussard founded Energy/Matter Conversion Corporation, Inc. (aka EMC2) in 1985 and after the HEPS program ended, the company continued its research. Successive machines were made, evolving from WB-1 to WB-8. The company won an SBIR I grant in 1992–93 and an SBIR II grant in 1994–95, both from the US Navy. In 1993, it received a grant from the Electric Power Research Institute. In 1994, The company received small grants from NASA and LANL. Starting in 1999, the company was primarily funded by the US Navy. WB-1 had six conventional magnets in a cube. This device was 10 cm across. WB-2 used coils of wires to generate the magnetic field. Each electromagnet had a square cross section that created problems. The magnetic fields drove electrons into the metal rings, raising conduction losses and electron trapping. This design also suffered from "funny cusp" losses at the joints between magnets. WB-6 attempted to address these problems, by using circular rings and spacing further apart. 
The next device, PXL-1, was built in 1996 and 1997. This machine was 26 cm across and used flatter rings to generate the field. From 1998 to 2005 the company built a succession of six machines: WB-3, MPG-1,2, WB-4, PZLx-1, MPG-4 and WB-5. All of these reactors were six magnet designs built as a cube or truncated cube. They ranged from 3 to 40 cm in radius. Initial difficulties in spherical electron confinement led to the 2005 research project's termination. However, Bussard reported a fusion rate of 10^9 per second running D-D fusion reactions at only 12.5 kV (based on detecting nine neutrons in five tests, giving a wide confidence interval). He stated that the fusion rate achieved by WB-6 was roughly 100,000 times greater than what Farnsworth achieved at similar well depth and drive conditions. By comparison, researchers at University of Wisconsin–Madison reported a neutron rate of up to 5×10^9 per second at voltages of 120 kV from an electrostatic fusor without magnetic fields. Bussard asserted that, with superconducting coils, the only significant energy loss channel is through electron losses proportional to the surface area. He also stated that the density would scale with the square of the field (constant beta conditions), and the maximum attainable magnetic field would scale with the radius. Under those conditions, the fusion power produced would scale with the seventh power of the radius, and the energy gain would scale with the fifth power. While Bussard did not publicly document the reasoning underlying this estimate, if true, it would enable a model only ten times larger to be useful as a fusion power plant. WB-6. Funding became tighter and tighter. According to Bussard, "The funds were clearly needed for the more important War in Iraq." An extra $900k of Office of Naval Research funding allowed the program to continue long enough to reach WB-6 testing in November 2005. WB-6 had rings with circular cross sections that were spaced apart at the joints. This reduced the metal surface area unprotected by magnetic fields. These changes dramatically improved system performance, leading to more electron recirculation and better electron confinement, in a progressively tighter core. This machine produced a fusion rate of 10^9 per second. This is based on a total of nine neutrons in five tests, giving a wide confidence interval. Drive voltage on the WB-6 tests was about 12.5 kV, with a resulting potential well depth of about 10 kV. Thus deuterium ions could have a maximum of 10 keV of kinetic energy in the center. By comparison, a Fusor running deuterium fusion at 10 kV would produce a fusion rate almost too small to detect. Hirsch reported a fusion rate this high only by driving his machine with a 150 kV drop between the inside and outside cages. Hirsch also used deuterium and tritium, a much easier fuel to fuse, because it has a higher nuclear cross section. While the WB-6 pulses were sub-millisecond, Bussard felt the physics should represent steady state. A last-minute test of WB-6 ended prematurely when the insulation on one of the hand-wound electromagnets burned through, destroying the device. Efforts to restart funding. With no more funding during 2006, the project was stalled. This ended the US Navy's 11-year embargo on publication and publicizing between 1994 and 2005. The company's military-owned equipment was transferred to SpaceDev, which hired three of the team's researchers.
After the transfer, Bussard tried to attract new investors, giving talks trying to raise interest in his design. He gave a talk at Google entitled, "Should Google Go Nuclear?" He also presented and published an overview at the 57th International Astronautical Congress in October 2006. He presented at an internal Yahoo! Tech Talk on April 10, 2007. and spoke on the internet talk radio show "The Space Show" on May 8, 2007. Bussard had plans for WB-8 that was a higher-order polyhedron, with 12 electromagnets. However, this design was not used in the actual WB-8 machine. Bussard believed that the WB-6 machine had demonstrated progress and that no intermediate-scale models would be needed. He noted, "We are probably the only people on the planet who know how to make a real net power clean fusion system" He proposed to rebuild WB-6 more robustly to verify its performance. After publishing the results, he planned to convene a conference of experts in the field in an attempt to get them behind his design. The first step in that plan was to design and build two more small scale designs (WB-7 and WB-8) to determine which full scale machine would be best. He wrote "The only small scale machine work remaining, which can yet give further improvements in performance, is test of one or two WB-6-scale devices but with "square" or polygonal coils aligned approximately (but slightly offset on the main faces) along the edges of the vertices of the polyhedron. If this is built around a truncated dodecahedron, near-optimum performance is expected; about 3–5 times better than WB-6." Bussard died on October 6, 2007, from multiple myeloma at age 79. In 2007, Steven Chu, Nobel laureate and former United States Secretary of Energy, answered a question about polywell at a tech talk at Google. He said: "So far, there's not enough information so [that] I can give an evaluation of the probability that it might work or not...But I'm trying to get more information." Bridge funding 2007–09. Reassembling team. In August 2007, EMC2 received a $1.8M U.S. Navy contract. Before Bussard's death in October, 2007, Dolly Gray, who co-founded EMC2 with Bussard and served as its president and CEO, helped assemble scientists in Santa Fe to carry on. The group was led by Richard Nebel and included Princeton trained physicist Jaeyoung Park. Both physicists were on leave from LANL. The group also included Mike Wray, the physicist who ran the key 2005 tests; and Kevin Wray, the computer specialist for the operation. WB-7. WB-7 was constructed in San Diego and shipped to the EMC2 testing facility. The device was termed WB-7 and like prior editions, was designed by engineer Mike Skillicorn. This machine has a design similar to WB-6. WB-7 achieved "1st plasma" in early January, 2008. In August 2008, the team finished the first phase of their experiment and submitted the results to a peer review board. Based on this review, federal funders agreed the team should proceed to the next phase. Nebel said "we have had some success", referring to the team's effort to reproduce the promising results obtained by Bussard. "It's kind of a mix", Nebel reported. "We're generally happy with what we've been getting out of it, and we've learned a tremendous amount" he also said. 2008. In September 2008 the Naval Air Warfare Center publicly pre-solicited a contract for research on an Electrostatic "Wiffle Ball" Fusion Device. In October 2008 the US Navy publicly pre-solicited two more contracts with EMC2 the preferred supplier. 
These two tasks were to develop better instrumentation and to develop an ion injection gun. In December 2008, following many months of review by the expert review panel of the submission of the final WB-7 results, Nebel commented that "There's nothing in [the research] that suggests this will not work", but "That's a very different statement from saying that it will work." 2009 to 2014. 2009. In January 2009 the Naval Air Warfare Center pre-solicited another contract for "modification and testing of plasma wiffleball 7" that appeared to be funding to install the instrumentation developed in a prior contract, install a new design for the connector (joint) between coils, and operate the modified device. The modified unit was called WB-7.1. This pre-solicitation started as a $200k contract but the final award was for $300k. In April 2009, DoD published a plan to provide EMC2 a further $2 million as part of the American Recovery and Reinvestment Act of 2009. The citation in the legislation was labelled as "Plasma Fusion (Polywell) – Demonstrate fusion plasma confinement system for shore and shipboard applications; Joint OSD/USN project." The Recovery Act funded the Navy for $7.86M to construct and test a WB-8. The Navy contract had an option for an additional $4.46M. The new device increased the magnetic field strength eightfold over WB-6. 2010. The team built WB-8 and the computational tools to analyze and understand the data from it. The team relocated to San Diego. 2011. Jaeyoung Park became president. In a May interview, Park commented that "This machine [WB8] should be able to generate 1,000 times more nuclear activity than WB-7, with about eight times more magnetic field" The first WB-8 plasma was generated on November 1, 2010. By the third quarter over 500 high power plasma shots had been conducted. 2012. As of August 15, the Navy agreed to fund EMC2 with an additional $5.3 million over 2 years to work on pumping electrons into the wiffleball. They planned to integrate a pulsed power supply to support the electron guns (100+A, 10kV). WB-8 operated at 0.8 Tesla. Review of the work produced the recommendation to continue and expand the effort, stating: "The experimental results to date were consistent with the underlying theoretical framework of the polywell fusion concept and, in the opinion of the committee, merited continuation and expansion." Going public. 2014. In June EMC2 demonstrated for the first time that the electron cloud becomes diamagnetic in the center of a magnetic cusp configuration when beta is high, resolving an earlier conjecture. Whether the plasma is thermalized remains to be demonstrated experimentally. Park presented these findings at various universities, the Annual 2014 Fusion Power Associates meeting and the 2014 IEC conference. 2015. On January 22, EMC2 presented at Microsoft Research. EMC2 planned a three-year, $30 million commercial research program to prove that the Polywell can work. On March 11, the company filed a patent application that refined the ideas in Bussard's 1985 patent. The article "High-Energy Electron Confinement in a Magnetic Cusp Configuration" was published in Physical Review X. 2016. On April 13, Next Big Future published an article on information of the Wiffle Ball reactor dated to 2013 through the Freedom of Information Act. 
On May 2, Jaeyoung Park delivered a lecture at Khon Kaen University in Thailand, claiming that the world has so underestimated the timetable and impact of practical, economic fusion power that its ultimate arrival will be highly disruptive. Park stated that he expected to present "final scientific proof of principle for the polywell technology around 2019-2020", and expected "a first generation commercial fusion reactor being developed by 2030 and then mass production and commercialisation of the technology in the 2030s. This is approximately 30 years faster than expected by the International Thermonuclear Energy Reactor (ITER) project. It would also be tens of billions of dollars cheaper." 2018. In May 2018 Park and Nicholas Krall filed WIPO Patent WO/2018/208953, "Generating nuclear fusion reactions with the use of ion beam injection in high pressure magnetic cusp devices," which described the polywell device in detail. University of Sydney experiments. In June 2019, the results of long-running experiments at the University of Sydney (USyd) were published in PhD thesis form by Richard Bowden-Reid. Using an experimental machine built at the university, the team probed the formation of the virtual electrodes. Their work demonstrated that little or no trace of virtual electrode formation could be found. This left a mystery; both their machine and previous experiments showed clear and consistent evidence of the formation of a potential well that was trapping ions, which was previously ascribed to the formation of the electrodes. Exploring this problem, Bowden-Reid developed new field equations for the device that explained the potential well without electrode formation, and demonstrated that this matched both their results and those of previous experiments. Further, exploring the overall mechanism of the virtual electrode concept demonstrated that its interactions with the ions and itself would make it "leak" at a furious rate. Assuming plasma densities and energies required for net energy production, it was calculated that new electrons would have to be supplied at an unfeasible rate of 200,000 amps. Related projects. Prometheus Fusion Perfection. Mark Suppes built a polywell in Brooklyn. He was the first amateur to detect electron trapping using a Langmuir probe inside a polywell. He presented at the 2012 LIFT conference and the 2012 WIRED conference. The project officially ended in July 2013 due to a lack of funding. University of Sydney. The University of Sydney in Australia conducted polywell experiments, leading to five papers in "Physics of Plasmas". They also published two PhD theses and presented their work at IEC Fusion conferences. A May 2010 paper discussed a small device's ability to capture electrons. The paper posited that the machine had an ideal magnetic field strength that maximized its ability to catch electrons. The paper analyzed polywell magnetic confinement using analytical solutions and simulations. The work linked the polywell magnetic confinement to magnetic mirror theory. The 2011 work used Particle-in-cell simulations to model particle motion in polywells with a small electron population. Electrons behaved in a similar manner to particles in the biconic cusp. A 2013 paper measured a negative voltage inside a 4-inch aluminum polywell. 
Tests included measuring an internal beam of electrons, comparing the machine with and without a magnetic field, measuring the voltage at different locations and comparing voltage changes to the magnetic and electric field strength. A 2015 paper entitled "Fusion in a magnetically-shielded-grid inertial electrostatic confinement device" presented a theory for a gridded inertial electrostatic confinement (IEC) fusion system that shows a net energy gain is possible if the grid is magnetically shielded from ion impact. The analysis indicated that better than break-even performance is possible even in a deuterium-deuterium system at bench-top scales. The proposed device had the unusual property that it can avoid both the cusp losses of traditional magnetic fusion systems and the grid losses of traditional IEC configurations. Iranian Nuclear Science and Technology Research Institute. In November 2012, "Trend News Agency" reported that the Atomic Energy Organization of Iran had allocated "$8 million" to inertial electrostatic confinement research and about half had been spent. The funded group published a paper in the "Journal of Fusion Energy", stating that particle-in-cell simulations of a polywell had been conducted. The study suggested that well depths and ion focus control can be achieved by variations of field strength, and referenced older research with traditional fusors. The group had run a fusor in continuous mode at −140 kV and 70 mA of current, with D-D fuel, producing 2×107 neutrons per second. University of Wisconsin. Researchers performed Vlasov–Poisson, particle-in-cell simulation work on the polywell. This was funded through the National Defense Science and Engineering Graduate Fellowship and was presented at the 2013 American Physical Society conference. Convergent Scientific, Inc.. Convergent Scientific, Inc. (CSI) is an American company founded in December 2010 and based in Huntington Beach, California. They tested their first polywell design, the "Model 1," on steady-state operations from January to late summer 2012. The MaGrid was made of a unique diamond shaped hollow wire, into which an electric current and a liquid coolant flowed. They are making an effort to build a small-scale polywell fusing deuterium. The company filed several patents and in the Fall of 2013, did a series of web-based investor pitches. The presentations mention encountering plasma instabilities including the Diocotron, two stream and Weibel instabilities. The company wants to make and sell Nitrogen-13 for PET scans. Radiant Matter Research. Radiant Matter is a Dutch organization that has built fusors and has plans to build a polywell. ProtonBoron. ProtonBoron is an organization that plans to build a proton-boron polywell. Progressive Fusion Solutions. Progressive Fusion Solutions is an IEC fusion research startup who are researching Fusor and Polywell type devices. Fusion One Corporation. Fusion One Corporation was a US organization founded by Dr. Paul Sieck (former Lead Physicist of EMC2), Dr. Scott Cornish of the University of Sydney, and Randall Volberg. It ran from 2015 to 2017. They developed a magneto-electrostatic reactor named "F1" that was based in-part on the polywell. It introduced a system of externally mounted electromagnet coils with internally mounted cathode repeller surfaces to provide a means of preserving energy and particle losses that would otherwise be lost through the magnetic cusps. 
In response to Todd Rider's 1995 power balance conclusions, a new analytical model was developed based on this recovery function as well as a more accurate quantum relativistic treatment of the bremsstrahlung losses that was not present in Rider's analysis. Version 1 of the analytical model was developed by Senior Theoretical Physicist Dr Vladimir Mirnov and demonstrated ample multiples of net gain with D-T and sufficient multiples with D-D to be used for generating electricity. These preliminary results were presented at the ARPA-E ALPHA 2017 Annual Review Meeting. Phase 2 of the model removed key assumptions in the Rider analysis by incorporating a self-consistent treatment of the ion energy distribution (Rider assumed a purely Maxwellian distribution) and the power required to maintain the distribution and ion population. The results yielded an energy distribution that was non-thermal but more Maxwellian than monoenergetic. The input power required to maintain the distribution was calculated to be excessive and ion-ion thermalization was a dominant loss channel. With these additions, a pathway to commercial electricity generation was no longer feasible. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\beta_e = \\frac{p}{p_{mag}} = \\frac{n_e k_B T_e}{(B^2/2\\mu_0)}" }, { "math_id": 1, "text": "P_\\text{fusion} = n_A n_B \\langle \\sigma v_{A,B} \\rangle E_\\text{fusion}" }, { "math_id": 2, "text": "P_\\text{fusion}" }, { "math_id": 3, "text": "\\langle \\sigma v_{A,B} \\rangle" } ]
https://en.wikipedia.org/wiki?curid=8151109
8155146
Hedetniemi's conjecture
Conjecture in graph theory In graph theory, Hedetniemi's conjecture, formulated by Stephen T. Hedetniemi in 1966, concerns the connection between graph coloring and the tensor product of graphs. This conjecture states that formula_0 Here formula_1 denotes the chromatic number of an undirected finite graph formula_2. The inequality χ("G" × "H") ≤ min {χ("G"), χ("H")} is easy: if "G" is "k"-colored, one can "k"-color "G" × "H" by using the same coloring for each copy of "G" in the product; symmetrically if "H" is "k"-colored. Thus, Hedetniemi's conjecture amounts to the assertion that tensor products cannot be colored with an unexpectedly small number of colors. A counterexample to the conjecture was discovered by Yaroslav Shitov in 2019, thus disproving the conjecture in general. Known cases. Any graph with a nonempty set of edges requires at least two colors; if "G" and "H" are not 1-colorable, that is, they both contain an edge, then their product also contains an edge, and is hence not 1-colorable either. In particular, the conjecture is true when "G" or "H" is a bipartite graph, since then its chromatic number is either 1 or 2. Similarly, if two graphs "G" and "H" are not 2-colorable, that is, not bipartite, then both contain a cycle of odd length. Since the product of two odd cycle graphs contains an odd cycle, the product "G" × "H" is not 2-colorable either. In other words, if "G" × "H" is 2-colorable, then at least one of "G" and "H" must be 2-colorable as well. The next case was proved long after the conjecture's statement: if the product "G" × "H" is 3-colorable, then one of "G" or "H" must also be 3-colorable. In particular, the conjecture is true whenever "G" or "H" is 4-colorable (since then the inequality χ("G" × "H") ≤ min {χ("G"), χ("H")} can only be strict when "G" × "H" is 3-colorable). In the remaining cases, both graphs in the tensor product are at least 5-chromatic and progress has only been made for very restricted situations. Weak Hedetniemi Conjecture. The following function (known as the "Poljak-Rödl function") measures how low the chromatic number of products of "n"-chromatic graphs can be. formula_3 Hedetniemi's conjecture is then equivalent to saying that "f"("n") = "n". The Weak Hedetniemi Conjecture instead states merely that the function "f"("n") is unbounded. In other words, if the tensor product of two graphs can be colored with few colors, this should imply some bound on the chromatic number of one of the factors. The main result in this direction, independently improved by Poljak, James H. Schmerl, and Zhu, states that if the function "f"("n") is bounded, then it is bounded by at most 9. Thus a proof of Hedetniemi's conjecture for 10-chromatic graphs would already imply the Weak Hedetniemi Conjecture for all graphs. Multiplicative graphs. The conjecture is studied in the more general context of graph homomorphisms, especially because of interesting relations to the category of graphs (with graphs as objects and homomorphisms as arrows). For any fixed graph "K", one considers graphs "G" that admit a homomorphism to "K", written "G" → "K". These are also called "K"-colorable graphs. This generalizes the usual notion of graph coloring, since it follows from definitions that a "k"-coloring is the same as a "Kk"-coloring (a homomorphism into the complete graph on "k" vertices). A graph "K" is called multiplicative if for any graphs "G", "H", the fact that "G" × "H" → "K" holds implies that "G" → "K" or "H" → "K" holds. 
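The "easy" direction of the inequality above can be checked computationally. The following Python sketch is illustrative only (the graph representation and helper names are ad hoc, not from any graph library): it builds the tensor product of two small graphs and reuses a proper coloring of the first factor to properly color the product.

```python
def tensor_product(edges1, edges2):
    """Edges of G x H: {(u1,u2),(v1,v2)} whenever {u1,v1} is an edge of G and {u2,v2} of H."""
    prod_edges = set()
    for u1, v1 in edges1:
        for u2, v2 in edges2:
            prod_edges.add(frozenset({(u1, u2), (v1, v2)}))
            prod_edges.add(frozenset({(u1, v2), (v1, u2)}))
    return prod_edges

def is_proper(coloring, edges):
    """A coloring is proper if adjacent vertices always receive different colors."""
    return all(coloring[a] != coloring[b] for a, b in (tuple(e) for e in edges))

# G = K3 (3-chromatic), H = C5 (3-chromatic), given as edge lists.
G = [(0, 1), (1, 2), (0, 2)]
H = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

# A 3-coloring of K3; reuse it on the first coordinate to color G x H.
col_G = {0: 0, 1: 1, 2: 2}
GxH = tensor_product(G, H)
col_prod = {v: col_G[v[0]] for e in GxH for v in e}
print(is_proper(col_prod, GxH))  # True: chi(G x H) <= chi(G)
```

Running it with "G" = "K3" and "H" = "C5" confirms that the coloring inherited from "K3" is proper on "K3" × "C5", so the product needs at most three colors, as the inequality predicts.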
As with classical colorings, the reverse implication always holds: if "G" (or "H", symmetrically) is "K"-colorable, then "G" × "H" is easily "K"-colored by using the same values independently of "H". Hedetniemi's conjecture is then equivalent to the statement that each complete graph is multiplicative. The above known cases are equivalent to saying that "K1", "K2", and "K3" are multiplicative. The case of "K4" is widely open. On the other hand, the proof that "K3" is multiplicative has been generalized to show that all cycle graphs are multiplicative. It was later proved more generally that all circular cliques "Kn/k" with "n/k &lt; 4" are multiplicative. In terms of the circular chromatic number "χ"c, this means that if "χ"c("G"×"H") &lt; 4, then "χ"c("G"×"H") = min { "χ"c("G"), "χ"c("H")} . Square-free graphs have also been shown to be multiplicative. Examples of non-multiplicative graphs can be constructed from two graphs "G" and "H" that are not comparable in the homomorphism order (that is, neither "G"→"H" nor "H"→"G" holds). In this case, letting "K"="G"×"H", we trivially have "G"×"H"→"K", but neither "G" nor "H" can admit a homomorphism into "K", since composed with the projection "K"→"H" or "K"→"G" it would give a contradiction. Exponential graph. Since the tensor product of graphs is the category-theoretic product in the category of graphs (with graphs as objects and homomorphisms as arrows), the conjecture can be rephrased in terms of the following construction on graphs "K" and "G". The exponential graph "KG" is the graph with all functions "V(G)" → "V(K)" as vertices (not only homomorphisms) and two functions "f","g" adjacent when "f(v)" is adjacent to "g(v')" in "K", for all adjacent vertices "v","v ' " of "G". In particular, there is a loop at a function "f" (it is adjacent to itself) if and only if the function gives a homomorphism from "G" to "K". Seen differently, there is an edge between "f" and "g" whenever the two functions define a homomorphism from "G" × "K"2 (the bipartite double cover of "G") to "K". The exponential graph is the exponential object in the category of graphs. This means homomorphisms from "G" × "H" to a graph "K" correspond to homomorphisms from "H" to "KG". Moreover, there is a homomorphism eval : "G" × "KG" → "K" given by eval("v","f") = "f"("v"). These properties allow one to conclude that the multiplicativity of "K" is equivalent to the statement: either "G" or "KG" is "K"-colorable, for every graph "G". In other words, Hedetniemi's conjecture can be seen as a statement on exponential graphs: for every integer "k", the graph "KkG" is either "k"-colorable, or it contains a loop (meaning "G" is "k"-colorable). One can also see the homomorphisms eval : "G" × "KkG" → "Kk" as the "hardest" instances of Hedetniemi's conjecture: if the product "G" × "H" was a counterexample, then "G" × "KkG" would also be a counterexample. Generalizations. Generalized to directed graphs, the conjecture has simple counterexamples. Here, the chromatic number of a directed graph is just the chromatic number of the underlying graph, but the tensor product has exactly half the number of edges (for directed edges "g→g' " in "G" and "h→h' " in "H", the tensor product "G" × "H" has only one edge, from "(g,h)" to "(g',h')", while the product of the underlying undirected graphs would have an edge between "(g,h')" and "(g',h)" as well). However, the Weak Hedetniemi Conjecture turns out to be equivalent in the directed and undirected settings. 
The problem cannot be generalized to infinite graphs: an example is known of two infinite graphs, each requiring an uncountable number of colors, such that their product can be colored with only countably many colors. It has also been proved that in the constructible universe, for every infinite cardinal formula_4, there exists a pair of graphs of chromatic number greater than formula_4, such that their product can still be colored with only countably many colors. Related problems. A similar equality for the cartesian product of graphs has been proven and rediscovered several times afterwards. An exact formula is also known for the lexicographic product of graphs. Two stronger conjectures involving unique colorability have also been introduced.
[ { "math_id": 0, "text": "\\chi (G \\times H ) = \\min\\{\\chi (G) , \\chi (H)\\}." }, { "math_id": 1, "text": "\\chi(G)" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "f(n) = \\min\\{ \\chi (G \\times H) \\colon \\chi (G) = \\chi (H) = n \\}" }, { "math_id": 4, "text": "\\kappa" } ]
https://en.wikipedia.org/wiki?curid=8155146
81560
Zeros and poles
Concept in complex analysis In complex analysis (a branch of mathematics), a pole is a certain type of singularity of a complex-valued function of a complex variable. It is the simplest type of non-removable singularity of such a function (see essential singularity). Technically, a point "z"0 is a pole of a function f if it is a zero of the function 1/"f" and 1/"f" is holomorphic (i.e. complex differentiable) in some neighbourhood of "z"0. A function f is meromorphic in an open set U if for every point z of U there is a neighborhood of z in which at least one of f and 1/"f" is holomorphic. If f is meromorphic in U, then a zero of f is a pole of 1/"f", and a pole of f is a zero of 1/"f". This induces a duality between "zeros" and "poles", that is fundamental for the study of meromorphic functions. For example, if a function is meromorphic on the whole complex plane plus the point at infinity, then the sum of the multiplicities of its poles equals the sum of the multiplicities of its zeros. Definitions. A function of a complex variable z is holomorphic in an open domain U if it is differentiable with respect to z at every point of U. Equivalently, it is holomorphic if it is analytic, that is, if its Taylor series exists at every point of U, and converges to the function in some neighbourhood of the point. A function is meromorphic in U if every point of U has a neighbourhood such that at least one of f and 1/"f" is holomorphic in it. A zero of a meromorphic function f is a complex number z such that "f"("z") = 0. A pole of f is a zero of 1/"f". If f is a function that is meromorphic in a neighbourhood of a point formula_0 of the complex plane, then there exists an integer n such that formula_1 is holomorphic and nonzero in a neighbourhood of formula_0 (this is a consequence of the analytic property). If "n" &gt; 0, then formula_0 is a "pole" of order (or multiplicity) n of f. If "n" &lt; 0, then formula_0 is a zero of order formula_2 of f. "Simple zero" and "simple pole" are terms used for zeroes and poles of order formula_3 "Degree" is sometimes used synonymously to order. This characterization of zeros and poles implies that zeros and poles are isolated, that is, every zero or pole has a neighbourhood that does not contain any other zero and pole. Because of the "order" of zeros and poles being defined as a non-negative number n and the symmetry between them, it is often useful to consider a pole of order n as a zero of order –"n" and a zero of order n as a pole of order –"n". In this case a point that is neither a pole nor a zero is viewed as a pole (or zero) of order 0. A meromorphic function may have infinitely many zeros and poles. This is the case for the gamma function (see the image in the infobox), which is meromorphic in the whole complex plane, and has a simple pole at every non-positive integer. The Riemann zeta function is also meromorphic in the whole complex plane, with a single pole of order 1 at "z" = 1. Its zeros in the left halfplane are all the negative even integers, and the Riemann hypothesis is the conjecture that all other zeros are along Re("z") = 1/2. 
In a neighbourhood of a point formula_4 a nonzero meromorphic function f is the sum of a Laurent series with at most a finite "principal part" (the terms with negative index values): formula_5 where n is an integer, and formula_6 Again, if "n" &gt; 0 (the sum starts with formula_7, the principal part has n terms), one has a pole of order n, and if "n" ≤ 0 (the sum starts with formula_8, there is no principal part), one has a zero of order formula_2. At infinity. A function formula_9 is "meromorphic at infinity" if it is meromorphic in some neighbourhood of infinity (that is outside some disk), and there is an integer n such that formula_10 exists and is a nonzero complex number. In this case, the point at infinity is a pole of order n if "n" &gt; 0, and a zero of order formula_2 if "n" &lt; 0. For example, a polynomial of degree n has a pole of degree n at infinity. The complex plane extended by a point at infinity is called the Riemann sphere. If f is a function that is meromorphic on the whole Riemann sphere, then it has a finite number of zeros and poles, and the sum of the orders of its poles equals the sum of the orders of its zeros. Every rational function is meromorphic on the whole Riemann sphere, and, in this case, the sum of orders of the zeros or of the poles is the maximum of the degrees of the numerator and the denominator. Examples. formula_11 is meromorphic on the whole Riemann sphere. It has a pole of order 1 or simple pole at formula_12 and a simple zero at infinity. formula_13 is meromorphic on the whole Riemann sphere. It has a pole of order 2 at formula_14 and a pole of order 3 at formula_15. It has a simple zero at formula_16 and a quadruple zero at infinity. formula_17 is meromorphic in the whole complex plane, but not at infinity. It has poles of order 1 at formula_18. This can be seen by writing the Taylor series of formula_19 around the origin. formula_20 has a single pole at infinity of order 1, and a single zero at the origin. All above examples except for the third are rational functions. Function on a curve. The concept of zeros and poles extends naturally to functions on a "complex curve", that is, a complex analytic manifold of dimension one (over the complex numbers). The simplest examples of such curves are the complex plane and the Riemann sphere. This extension is done by transferring structures and properties through charts, which are analytic isomorphisms. More precisely, let f be a function from a complex curve M to the complex numbers. This function is holomorphic (resp. meromorphic) in a neighbourhood of a point z of M if there is a chart formula_21 such that formula_22 is holomorphic (resp. meromorphic) in a neighbourhood of formula_23 Then, z is a pole or a zero of order n if the same is true for formula_23 If the curve is compact, and the function f is meromorphic on the whole curve, then the number of zeros and poles is finite, and the sum of the orders of the poles equals the sum of the orders of the zeros. This is one of the basic facts that are involved in the Riemann–Roch theorem.
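The orders of the zeros and poles in the rational examples above can be verified symbolically. The following sketch assumes the SymPy library is available; the helper function order_at is hypothetical (not part of SymPy) and simply searches for the exponent that makes (z − z0)^n · f(z) finite and nonzero at z0, which by the characterization above is the pole order (positive n) or minus the zero order (negative n).

```python
import sympy as sp

z = sp.symbols('z')
f = (z + 2) / ((z - 5)**2 * (z + 7)**3)   # second example from the list above

def order_at(f, z0, max_order=10):
    """Return n > 0 for a pole of order n, n < 0 for a zero of order |n|, 0 otherwise.
    Searches for the integer n with (z - z0)**n * f finite and nonzero at z0."""
    for n in range(-max_order, max_order + 1):
        val = sp.limit((z - z0)**n * f, z, z0)
        if val.is_finite and val != 0:
            return n
    raise ValueError("order outside search range")

print(order_at(f, 5))    # 2  -> pole of order 2 at z = 5
print(order_at(f, -7))   # 3  -> pole of order 3 at z = -7
print(order_at(f, -2))   # -1 -> simple zero at z = -2
```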
[ { "math_id": 0, "text": "z_0" }, { "math_id": 1, "text": "(z-z_0)^n f(z)" }, { "math_id": 2, "text": "|n|" }, { "math_id": 3, "text": "|n|=1." }, { "math_id": 4, "text": "z_0," }, { "math_id": 5, "text": "f(z) = \\sum_{k\\geq -n} a_k (z - z_0)^k," }, { "math_id": 6, "text": "a_{-n}\\neq 0." }, { "math_id": 7, "text": "a_{-|n|} (z - z_0)^{-|n|}" }, { "math_id": 8, "text": "a_{|n|} (z - z_0)^{|n|}" }, { "math_id": 9, "text": " z \\mapsto f(z)" }, { "math_id": 10, "text": "\\lim_{z\\to \\infty}\\frac{f(z)}{z^n}" }, { "math_id": 11, "text": "f(z) = \\frac{3}{z}" }, { "math_id": 12, "text": " z= 0," }, { "math_id": 13, "text": "f(z) = \\frac{z+2}{(z-5)^2(z+7)^3}" }, { "math_id": 14, "text": " z=5," }, { "math_id": 15, "text": " z = -7" }, { "math_id": 16, "text": " z=-2," }, { "math_id": 17, "text": "f(z) = \\frac{z-4}{e^z-1}" }, { "math_id": 18, "text": " z=2\\pi ni\\text{ for } n\\in\\mathbb Z" }, { "math_id": 19, "text": " e^z" }, { "math_id": 20, "text": "f(z) = z" }, { "math_id": 21, "text": "\\phi" }, { "math_id": 22, "text": " f \\circ \\phi^{-1}" }, { "math_id": 23, "text": "\\phi(z)." } ]
https://en.wikipedia.org/wiki?curid=81560
81575
Parlophone
German–British record label Parlophone Records Limited (also known as Parlophone Records and Parlophone) is a record label founded in Germany in 1896 by the Carl Lindström Company as Parlophon. The British branch of the label was founded on 8 August 1923 as the Parlophone Company Limited (the Parlophone Co. Ltd.), which developed a reputation in the 1920s as a jazz record label. On 5 October 1926, the Columbia Graphophone Company acquired Parlophone's business, name, logo, and release library, and merged with the Gramophone Company on 31 March 1931 to become Electric &amp; Musical Industries Limited (EMI). George Martin joined Parlophone in 1950 as assistant to Oscar Preuss (who had set up the London branch of the company in 1923), the label manager, taking over as manager in 1955. Martin produced and released a mix of recordings, including by comedian Peter Sellers, pianist Mrs Mills, and teen idol Adam Faith. In 1962, Martin signed the Beatles, a beat group from Liverpool who earlier that year had been rejected by Decca Records. During the 1960s, when Cilla Black, Billy J. Kramer, the Fourmost, and the Hollies also signed, Parlophone became one of the world's most famous labels. For several years, Parlophone claimed the best-selling UK single, "She Loves You", and the best-selling UK album, "Sgt. Pepper's Lonely Hearts Club Band", both by the Beatles. The label placed seven singles at number 1 during 1964, when it claimed top spot on the UK Albums Chart for 40 weeks. Parlophone continued as a division of EMI until it was merged into the Gramophone Co. on 1 July 1965. On 1 July 1973, the Gramophone Co. was renamed EMI Records Limited. On 28 September 2012, regulators approved Universal Music Group's (UMG) planned acquisition of EMI on condition that its EMI Records group would be divested from the combined group. EMI Records Ltd included Parlophone (except the Beatles' catalogue) and the other labels to be divested; these were for a short time operated as a single entity known as the Parlophone Label Group (PLG) while their sale by UMG was pending. Warner Music Group (WMG) acquired Parlophone and PLG on 7 February 2013, making Parlophone its third flagship label alongside Warner and Atlantic. PLG was renamed Parlophone Records Limited in May 2013. Parlophone is the oldest of WMG's "flagship" record labels. History. Early years. Parlophone was founded as "Parlophon" by the Carl Lindström Company in 1896. The name Parlophon was used for gramophones before the company began making records of its own. The label's ₤ trademark is a stylised blackletter "L" (formula_0) that stands for Lindström. (Its resemblance to the British pound sign "£" and the Italian lira sign "₤" is coincidental: both derive from the letter "L" used as an abbreviation for the Ancient Roman unit of measurement, the "libra".) On 8 August 1923, the British branch of "Parlophone" (with the "e" added) was established, led by A&amp;R manager Oscar Preuss. In its early years, Parlophone established itself as a leading jazz label in Britain. EMI years and initial success. In 1927, the Columbia Graphophone Company acquired a controlling interest in the Carl Lindström Company, including Parlophone. Parlophone became a subsidiary of Electric &amp; Musical Industries (EMI), after Columbia Graphophone merged with the Gramophone Company in 1931. In 1950, Oscar Preuss hired producer George Martin as his assistant. When Preuss retired in 1955, Martin succeeded him as Parlophone's manager. 
Parlophone specialized in mainly classical music, cast recordings, and regional British music, but Martin also expanded the reach into novelty and comedy records. One notable example is "The Best of Sellers", a collection of sketches and comic songs by Peter Sellers undertaken in the guise of a variety of comic characters. It reached number three in the UK Albums Chart in 1958. Others include the albums of the comedy music double act Flanders and Swann. Musicians signed to the label included Humphrey Lyttelton and the Vipers Skiffle Group. A consistently successful act for Parlophone was teen idol Adam Faith, who was signed to the label in 1959. The label gained significant popularity in 1962 when Martin signed Liverpool band the Beatles. Parlophone gained more attention after signing the Hollies, Ella Fitzgerald, and Gerry and the Pacemakers in the 1960s. Martin left EMI/Parlophone to form Associated Independent Recording (AIR) Studios in 1965. Norman Smith took over as Parlophone director, though EMI chairman Sir Joseph Lockwood unsuccessfully attempted to recruit Joe Meek for the job. Parlophone became dormant (except for Beatles reissues) in 1973 when most of EMI's heritage labels were phased out in favour of EMI Records, only to be revived in 1980. The first single released on the revived label was by British group The Cheaters (Parlophone – R6041). During the next decades the label signed Pet Shop Boys, Duran Duran, Roxette, Radiohead, Supergrass, Guy Berryman, the Chemical Brothers, Blur, Coldplay, Kylie Minogue, Damon Albarn, Conor Maynard, Gabrielle Aplin, and Gorillaz. On 23 April 2008, Miles Leonard was confirmed as the label's president. Acquisition by Warner Music Group. On 28 September 2012, regulators approved Universal Music Group's planned acquisition of Parlophone's parent group EMI for £1.2 billion, subject to conditions imposed by the European Commission requiring that UMG sell off a number of labels, including Parlophone itself (aside from the Beatles' catalogue, which was kept by UMG and moved to Universal's newly formed Calderstone Productions), Chrysalis (aside from Robbie Williams' catalogue), Ensign, Virgin Classics, EMI Classics, worldwide rights to Roulette Records (and its sublabels), and EMI's operations in Portugal, Spain, France, Belgium, Denmark, Norway, Sweden, Czech Republic, Slovakia, and Poland. These labels and catalogues were operated independently from Universal as Parlophone Label Group until a buyer was found. UMG received several offers for PLG, including those from Island founder Chris Blackwell, Simon Fuller, a Sony/BMG consortium, Warner Music Group, and MacAndrews &amp; Forbes. On 7 February 2013, it was confirmed that Warner Music Group would acquire Parlophone Label Group for US$765 million. The deal was approved in May 2013 by the European Union, which saw no concerns about the deal because of WMG's smaller reach compared to the merged UMG and Sony. Warner Music closed the deal on 1 July. Parlophone Label Group was the old EMI Records label that included both the Parlophone and the eponymous EMI labels. The EMI trademark was retained by Universal (as Virgin EMI Records) while the "old" EMI Records became defunct and was renamed "Parlophone Records Ltd." Soon after acquiring Parlophone, WMG signed an agreement with IMPALA and the Merlin Network (two groups which opposed the EMI/Universal deal) to divest $200 million worth of catalogues to independent labels in order to help offset the consolidation triggered by the merger. 
In April 2016, the back catalogue of British rock band Radiohead, who had sued Parlophone and EMI over a dispute in music royalties, was transferred to XL Recordings. WMG treats Parlophone as its third "frontline" label group alongside Atlantic and Warner. In the US, most of Parlophone's artists are now distributed under Warner Records except Dinosaur Pile-Up, distributed by 300 Elektra Entertainment's Roadrunner Records, Coldplay and Tinie Tempah, both distributed by Atlantic Records, and David Guetta, distributed by Atlantic's electronic music imprint Big Beat Records. Roster. Parlophone's roster includes many popular music artists. Its contemporary HMV was more of a classical music label and ceased issuing popular music recordings in 1967; later known as EMI Classics, it was absorbed into Warner Classics in 2013; English Columbia was replaced by the EMI pop label. Parlophone also operates Regal, a contemporary revival of the historic Columbia Graphophone budget/reissue label founded in 1914. The list records those who achieved notability. The Beatles. The Beatles' albums in the U.K. up to "Sgt. Pepper's Lonely Hearts Club Band" were issued on the Parlophone label. Subsequent releases – "The Beatles" (also known as the "White Album"), "Yellow Submarine", "Abbey Road" and "Let It Be" – were issued on the Beatles' own Apple record label, manufactured and distributed by EMI and bearing Parlophone catalogue numbers. On 6 June 1962, producer George Martin signed the Beatles to Parlophone, in turn, making the Beatles' deal one of the cheapest by Parlophone. Despite the separation of Parlophone from EMI as a condition of EMI's acquisition by UMG, Universal was allowed to keep the Beatles' recorded music catalogue, which is now managed by the subsidiary Calderstone Productions. Parlophone record labels. The labels shown here include those used for 78s and LPs. The label design for 7-inch singles had the same standard template as several other EMI labels, with the large "45" insignia to the right. In recent years, design uniformity has relaxed from release to release. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathfrak{L}" } ]
https://en.wikipedia.org/wiki?curid=81575
815799
Bump function
Smooth and compactly supported function In mathematics, a bump function (also called a test function) is a function formula_0 on a Euclidean space formula_1 which is both smooth (in the sense of having continuous derivatives of all orders) and compactly supported. The set of all bump functions with domain formula_1 forms a vector space, denoted formula_2 or formula_3 The dual space of this space endowed with a suitable topology is the space of distributions. Examples. The function formula_4 given by formula_5 is an example of a bump function in one dimension. It is clear from the construction that this function has compact support, since a function of the real line has compact support if and only if it has bounded closed support. The proof of smoothness follows along the same lines as for the related function discussed in the Non-analytic smooth function article. This function can be interpreted as the Gaussian function formula_6 scaled to fit into the unit disc: the substitution formula_7 corresponds to sending formula_8 to formula_9 A simple example of a (square) bump function in formula_10 variables is obtained by taking the product of formula_10 copies of the above bump function in one variable, so formula_11 A radially symmetric bump function in formula_10 variables can be formed by taking the function formula_12 defined by formula_13. This function is supported on the unit ball centered at the origin. For another example, take an formula_14 that is positive on formula_15 and zero elsewhere, for example formula_16. Smooth transition functions. Consider the function formula_17 defined for every real number "x". The function formula_18 has a strictly positive denominator everywhere on the real line, hence "g" is also smooth. Furthermore, "g"("x") = 0 for "x" ≤ 0 and "g"("x") = 1 for "x" ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 in the unit interval [0, 1]. To obtain a smooth transition on the real interval ["a", "b"] with "a" &lt; "b", consider the function formula_19 For real numbers "a" &lt; "b" &lt; "c" &lt; "d", the smooth function formula_20 equals 1 on the closed interval ["b", "c"] and vanishes outside the open interval ("a", "d"), hence it can serve as a bump function. Caution must be taken: for example, taking formula_21 leads to formula_22 which is not an infinitely differentiable function (so it is not "smooth"), so the constraints "a" &lt; "b" &lt; "c" &lt; "d" must be strictly fulfilled. An interesting property of the function formula_23 is that formula_24 makes smooth transition curves with "almost" constant slope edges (it behaves like an inclined straight line over a non-zero measure interval). A proper example of a smooth bump function would be formula_25 A proper example of a smooth transition function would be formula_26 which can also be represented through hyperbolic functions: formula_27 Existence of bump functions. It is possible to construct bump functions "to specifications". Stated formally, if formula_28 is an arbitrary compact set in formula_10 dimensions and formula_29 is an open set containing formula_30 there exists a bump function formula_31 which is formula_32 on formula_28 and formula_33 outside of formula_34 Since formula_29 can be taken to be a very small neighborhood of formula_30 this amounts to being able to construct a function that is formula_32 on formula_28 and falls off rapidly to formula_33 outside of formula_30 while still being smooth. 
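For readers who want to experiment, the one-dimensional bump function formula_5 and the smooth step "g" constructed above are straightforward to evaluate numerically. The following Python sketch assumes NumPy is available; the function names are illustrative, and the general constructions below do not depend on this particular example.

```python
import numpy as np

def bump(x):
    """Psi(x) = exp(-1/(1 - x^2)) for |x| < 1, and 0 otherwise."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def f(x):
    """f(x) = exp(-1/x) for x > 0, and 0 for x <= 0 (smooth but not analytic at 0)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = np.exp(-1.0 / x[pos])
    return out

def smooth_step(x, a=0.0, b=1.0):
    """g((x - a)/(b - a)): equals 0 for x <= a, 1 for x >= b, smooth in between."""
    t = (np.asarray(x, dtype=float) - a) / (b - a)
    return f(t) / (f(t) + f(1.0 - t))

xs = np.linspace(-2, 2, 9)
print(bump(xs))          # zero outside (-1, 1), maximal value exp(-1) at x = 0
print(smooth_step(xs))   # 0 for x <= 0, 1 for x >= 1, smooth transition between
```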
Bump functions defined in terms of convolution The construction proceeds as follows. One considers a compact neighborhood formula_35 of formula_28 contained in formula_36 so formula_37 The characteristic function formula_38 of formula_35 will be equal to formula_32 on formula_35 and formula_33 outside of formula_39 so in particular, it will be formula_32 on formula_28 and formula_33 outside of formula_34 This function is not smooth however. The key idea is to smooth formula_38 a bit, by taking the convolution of formula_38 with a mollifier. The latter is just a bump function with a very small support and whose integral is formula_40 Such a mollifier can be obtained, for example, by taking the bump function formula_41 from the previous section and performing appropriate scalings. Bump functions defined in terms of a function formula_42 with support formula_43 An alternative construction that does not involve convolution is now detailed. It begins by constructing a smooth function formula_0 that is positive on a given open subset formula_44 and vanishes off of formula_34 This function's support is equal to the closure formula_45 of formula_29 in formula_46 so if formula_45 is compact, then formula_47 is a bump function. Start with any smooth function formula_48 that vanishes on the negative reals and is positive on the positive reals (that is, formula_49 on formula_50 and formula_51 on formula_52 where continuity from the left necessitates formula_53); an example of such a function is formula_54 for formula_55 and formula_56 otherwise. Fix an open subset formula_29 of formula_1 and denote the usual Euclidean norm by formula_57 (so formula_1 is endowed with the usual Euclidean metric). The following construction defines a smooth function formula_0 that is positive on formula_29 and vanishes outside of formula_34 So in particular, if formula_29 is relatively compact then this function formula_47 will be a bump function. If formula_58 then let formula_59 while if formula_60 then let formula_61; so assume formula_29 is neither of these. Let formula_62 be an open cover of formula_29 by open balls where the open ball formula_63 has radius formula_64 and center formula_65 Then the map formula_66 defined by formula_67 is a smooth function that is positive on formula_63 and vanishes off of formula_68 For every formula_69 let formula_70 where this supremum is not equal to formula_71 (so formula_72 is a non-negative real number) because formula_73 the partial derivatives all vanish (equal formula_33) at any formula_74 outside of formula_75 while on the compact set formula_76 the values of each of the (finitely many) partial derivatives are (uniformly) bounded above by some non-negative real number. The series formula_78 converges uniformly on formula_1 to a smooth function formula_0 that is positive on formula_29 and vanishes off of formula_34 Moreover, for any non-negative integers formula_79 formula_80 where this series also converges uniformly on formula_1 (because whenever formula_81 then the formula_77th term's absolute value is formula_82). This completes the construction. 
As a corollary, given two disjoint closed subsets formula_83 of formula_46 the above construction guarantees the existence of smooth non-negative functions formula_84 such that for any formula_85 formula_86 if and only if formula_87 and similarly, formula_88 if and only if formula_89 then the function formula_90 is smooth and for any formula_85 formula_91 if and only if formula_87 formula_92 if and only if formula_89 and formula_93 if and only if formula_94 In particular, formula_95 if and only if formula_96 so if in addition formula_97 is relatively compact in formula_1 (where formula_98 implies formula_99) then formula_14 will be a smooth bump function with support in formula_100 Properties and uses. While bump functions are smooth, the identity theorem prohibits their being analytic unless they vanish identically. Bump functions are often used as mollifiers, as smooth cutoff functions, and to form smooth partitions of unity. They are the most common class of test functions used in analysis. The space of bump functions is closed under many operations. For instance, the sum, product, or convolution of two bump functions is again a bump function, and any differential operator with smooth coefficients, when applied to a bump function, will produce another bump function. If the boundary of the bump function's domain is formula_101 then, to fulfill the requirement of "smoothness", the function has to preserve the continuity of all of its derivatives, which leads to the following requirement at the boundary of its domain: formula_102 The Fourier transform of a bump function is a (real) analytic function, and it can be extended to the whole complex plane: hence it cannot be compactly supported unless it is zero, since the only entire analytic bump function is the zero function (see Paley–Wiener theorem and Liouville's theorem). Because the bump function is infinitely differentiable, its Fourier transform must decay faster than any finite power of formula_103 for a large angular frequency formula_104 The Fourier transform of the particular bump function formula_105 from above can be analyzed by a saddle-point method, and decays asymptotically as formula_106 for large formula_104 Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f : \\Reals^n \\to \\Reals" }, { "math_id": 1, "text": "\\Reals^n" }, { "math_id": 2, "text": "\\mathrm{C}^\\infty_0(\\Reals^n)" }, { "math_id": 3, "text": "\\mathrm{C}^\\infty_\\mathrm{c}(\\Reals^n)." }, { "math_id": 4, "text": "\\Psi : \\Reals \\to \\Reals" }, { "math_id": 5, "text": "\\Psi(x) = \n\\begin{cases}\n\\exp\\left( -\\frac{1}{1 - x^2}\\right), & \\text{ if } x \\in (-1,1) \\\\\n0, & \\text{ if } x\\in \\mathbb{R}\\setminus \\{(-1,1)\\} \n\\end{cases}" }, { "math_id": 6, "text": "\\exp\\left(-y^2\\right)" }, { "math_id": 7, "text": "y^2 = {1} / {\\left(1 - x^2\\right)}" }, { "math_id": 8, "text": "x = \\pm 1" }, { "math_id": 9, "text": "y = \\infty." }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\Phi(x_1, x_2, \\dots, x_n) = \\Psi(x_1) \\Psi(x_2) \\cdots \\Psi(x_n)." }, { "math_id": 12, "text": "\\Psi_n : \\Reals^n \\to \\Reals" }, { "math_id": 13, "text": "\\Psi_n(\\mathbf{x})=\\Psi(|\\mathbf{x}|)" }, { "math_id": 14, "text": "h" }, { "math_id": 15, "text": "(c, d)" }, { "math_id": 16, "text": "h(x) = \\begin{cases}\n\\exp\\left(-\\frac{1}{(x-c)(d-x)}\\right),& c < x < d \\\\\n0,& \\mathrm{otherwise}\n\\end{cases}" }, { "math_id": 17, "text": "f(x)=\\begin{cases}e^{-\\frac{1}{x}}&\\text{if }x>0,\\\\ 0&\\text{if }x\\le0,\\end{cases}" }, { "math_id": 18, "text": "g(x)=\\frac{f(x)}{f(x)+f(1-x)},\\qquad x\\in\\mathbb{R}," }, { "math_id": 19, "text": "\\mathbb{R}\\ni x\\mapsto g\\Bigl(\\frac{x-a}{b-a}\\Bigr)." }, { "math_id": 20, "text": "\\mathbb{R}\\ni x\\mapsto g\\Bigl(\\frac{x-a}{b-a}\\Bigr)\\,g\\Bigl(\\frac{d-x}{d-c}\\Bigr)" }, { "math_id": 21, "text": "\\{a =-1\\} < \\{b = c =0\\} < \\{d=1\\}" }, { "math_id": 22, "text": "q(x)=\\frac{1}{1+e^{\\frac{1-2|x|}{x^2-|x|}}}" }, { "math_id": 23, "text": "q(x,a)=\\frac{1}{1+e^{\\frac{a(1-2|x|)}{x^2-|x|}}}" }, { "math_id": 24, "text": "q\\left(x,\\frac{\\sqrt{3}}{2}\\right)" }, { "math_id": 25, "text": "u(x)=\\begin{cases} 1,\\text{if } x=0, \\\\ 0, \\text{if } |x|\\geq 1, \\\\ \\frac{1}{1+e^{\\frac{1-2|x|}{x^2-|x|}}}, \\text{otherwise}, \\end{cases}" }, { "math_id": 26, "text": "w(x)=\\begin{cases}\\frac{1}{1+e^{\\frac{2x-1}{x^2-x}}}&\\text{if }0<x<1,\\\\ 0&\\text{if } x\\leq 0,\\\\ 1&\\text{if } x\\geq 1,\\end{cases}" }, { "math_id": 27, "text": "\\frac{1}{1+e^{\\frac{2x-1}{x^2-x}}} = \\frac{1}{2}\\left( 1-\\tanh\\left(\\frac{2x-1}{2(x^2-x)} \\right) \\right)" }, { "math_id": 28, "text": "K" }, { "math_id": 29, "text": "U" }, { "math_id": 30, "text": "K," }, { "math_id": 31, "text": "\\phi" }, { "math_id": 32, "text": "1" }, { "math_id": 33, "text": "0" }, { "math_id": 34, "text": "U." }, { "math_id": 35, "text": "V" }, { "math_id": 36, "text": "U," }, { "math_id": 37, "text": "K \\subseteq V^\\circ\\subseteq V \\subseteq U." }, { "math_id": 38, "text": "\\chi_V" }, { "math_id": 39, "text": "V," }, { "math_id": 40, "text": "1." 
}, { "math_id": 41, "text": "\\Phi" }, { "math_id": 42, "text": "c : \\Reals \\to [0, \\infty)" }, { "math_id": 43, "text": "(-\\infty, 0]" }, { "math_id": 44, "text": "U \\subseteq \\Reals^n" }, { "math_id": 45, "text": "\\overline{U}" }, { "math_id": 46, "text": "\\Reals^n," }, { "math_id": 47, "text": "f" }, { "math_id": 48, "text": "c : \\Reals \\to \\Reals" }, { "math_id": 49, "text": "c = 0" }, { "math_id": 50, "text": "(-\\infty, 0)" }, { "math_id": 51, "text": "c > 0" }, { "math_id": 52, "text": "(0, \\infty)," }, { "math_id": 53, "text": "c(0) = 0" }, { "math_id": 54, "text": "c(x) := e^{-1/x}" }, { "math_id": 55, "text": "x > 0" }, { "math_id": 56, "text": "c(x) := 0" }, { "math_id": 57, "text": "\\|\\cdot\\|" }, { "math_id": 58, "text": "U = \\Reals^n" }, { "math_id": 59, "text": "f = 1" }, { "math_id": 60, "text": "U = \\varnothing" }, { "math_id": 61, "text": "f = 0" }, { "math_id": 62, "text": "\\left(U_k\\right)_{k=1}^\\infty" }, { "math_id": 63, "text": "U_k" }, { "math_id": 64, "text": "r_k > 0" }, { "math_id": 65, "text": "a_k \\in U." }, { "math_id": 66, "text": "f_k : \\Reals^n \\to \\Reals" }, { "math_id": 67, "text": "f_k(x) = c\\left(r_k^2 - \\left\\|x - a_k\\right\\|^2\\right)" }, { "math_id": 68, "text": "U_k." }, { "math_id": 69, "text": "k \\in \\mathbb{N}," }, { "math_id": 70, "text": "M_k = \\sup \\left\\{\\left|\\frac{\\partial^p f_k}{\\partial^{p_1} x_1 \\cdots \\partial^{p_n} x_n}(x)\\right| ~:~ x \\in \\Reals^n \\text{ and } p_1, \\ldots, p_n \\in \\Z \\text{ satisfy } 0 \\leq p_i \\leq k \\text{ and } p = \\sum_i p_i\\right\\}," }, { "math_id": 71, "text": "+\\infty" }, { "math_id": 72, "text": "M_k" }, { "math_id": 73, "text": "\\left(\\Reals^n \\setminus U_k\\right) \\cup \\overline{U_k} = \\Reals^n," }, { "math_id": 74, "text": "x" }, { "math_id": 75, "text": "U_k," }, { "math_id": 76, "text": "\\overline{U_k}," }, { "math_id": 77, "text": "k" }, { "math_id": 78, "text": "f ~:=~ \\sum_{k=1}^{\\infty} \\frac{f_k}{2^k M_k}" }, { "math_id": 79, "text": "p_1, \\ldots, p_n \\in \\Z," }, { "math_id": 80, "text": "\\frac{\\partial^{p_1+\\cdots+p_n}}{\\partial^{p_1} x_1 \\cdots \\partial^{p_n} x_n} f ~=~ \\sum_{k=1}^{\\infty} \\frac{1}{2^k M_k} \\frac{\\partial^{p_1+\\cdots+p_n} f_k}{\\partial^{p_1} x_1 \\cdots \\partial^{p_n} x_n}" }, { "math_id": 81, "text": "k \\geq p_1 + \\cdots + p_n" }, { "math_id": 82, "text": "\\leq \\tfrac{M_k}{2^k M_k} = \\tfrac{1}{2^k}" }, { "math_id": 83, "text": "A, B" }, { "math_id": 84, "text": "f_A, f_B : \\Reals^n \\to [0, \\infty)" }, { "math_id": 85, "text": "x \\in \\Reals^n," }, { "math_id": 86, "text": "f_A(x) = 0" }, { "math_id": 87, "text": "x \\in A," }, { "math_id": 88, "text": "f_B(x) = 0" }, { "math_id": 89, "text": "x \\in B," }, { "math_id": 90, "text": "h ~:=~ \\frac{f_A}{f_A + f_B} : \\Reals^n \\to [0, 1]" }, { "math_id": 91, "text": "h(x) = 0" }, { "math_id": 92, "text": "h(x) = 1" }, { "math_id": 93, "text": "0 < h(x) < 1" }, { "math_id": 94, "text": "x \\not\\in A \\cup B." }, { "math_id": 95, "text": "h(x) \\neq 0" }, { "math_id": 96, "text": "x \\in \\Reals^n \\smallsetminus A," }, { "math_id": 97, "text": "U := \\Reals^n \\smallsetminus A" }, { "math_id": 98, "text": "A \\cap B = \\varnothing" }, { "math_id": 99, "text": "B \\subseteq U" }, { "math_id": 100, "text": "\\overline{U}." 
}, { "math_id": 101, "text": "\\partial x," }, { "math_id": 102, "text": "\\lim_{x \\to \\partial x^\\pm} \\frac{d^n}{dx^n} f(x) = 0,\\,\\text { for all } n \\geq 0, \\,n \\in \\Z" }, { "math_id": 103, "text": "1/k" }, { "math_id": 104, "text": "|k|." }, { "math_id": 105, "text": "\\Psi(x) = e^{-1/(1-x^2)} \\mathbf{1}_{\\{|x|<1\\}}" }, { "math_id": 106, "text": "|k|^{-3/4} e^{-\\sqrt{|k|}}" } ]
https://en.wikipedia.org/wiki?curid=815799
8158341
Near point
In visual perception, the near point is the closest point at which an object can be placed and still form a focused image on the retina, within the eye's accommodation range. The other limit to the eye's accommodation range is the far point. A normal eye is considered to have a near point at about 11 cm for a thirty-year-old. The near point is highly age-dependent (see accommodation). A person with hyperopia or presbyopia would have a near point that is farther than normal. Sometimes, the near point is given in diopters, which refers to the inverse of the distance. For example, a normal eye would have a near point of formula_0. Vision correction. A person with hyperopia has a near point that is further away than the typical near point for someone their age, and hence the person is unable to bring an object at the typical near point distance into sharp focus. A corrective lens can be used to correct hyperopia by imaging an object at the typical near point distance D onto a virtual image at the patient's actual near point, at distance NP. From the thin lens formula, the required lens will have optical power P given by formula_1 The calculation can be further improved by taking into account the distance between the spectacle lens and the human eye, which is usually about 1.5 cm: formula_2 For example, if a person has "NP" = 1 m and the typical near point distance at their age is "D" = 25 cm, then the optical power needed is "P" = +3.24 diopters, where one diopter is the reciprocal of one meter. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
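The worked example above is easy to reproduce. The following Python sketch (the function name is illustrative) evaluates the lens-power formula with and without the 1.5 cm spectacle–eye separation.

```python
def lens_power(typical_near_m, patient_near_m, spectacle_gap_m=0.0):
    """Optical power in diopters: P = 1/(D - d) - 1/(NP - d),
    where d is the distance between the spectacle lens and the eye."""
    D = typical_near_m - spectacle_gap_m
    NP = patient_near_m - spectacle_gap_m
    return 1.0 / D - 1.0 / NP

# Worked example from the text: NP = 1 m, typical near point D = 0.25 m.
print(round(lens_power(0.25, 1.0), 2))          # 3.0  (thin-lens approximation)
print(round(lens_power(0.25, 1.0, 0.015), 2))   # 3.24 (with 1.5 cm lens-eye gap)
```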
[ { "math_id": 0, "text": "\\frac{1}{11\\ \\text{cm}} = 9\\ \\text{diopters}" }, { "math_id": 1, "text": "P \\approx \\frac{1}{D}-\\frac{1}{\\mathit{NP}}." }, { "math_id": 2, "text": "P = \\frac{1}{D-0.015\\;\\text{m}}-\\frac{1}{\\mathit{NP}-0.015\\;\\text{m}}." } ]
https://en.wikipedia.org/wiki?curid=8158341
815969
Buckling
Sudden change in shape of a structural component under load In structural engineering, buckling is the sudden change in shape (deformation) of a structural component under load, such as the bowing of a column under compression or the wrinkling of a plate under shear. If a structure is subjected to a gradually increasing load, when the load reaches a critical level, a member may suddenly change shape and the structure and component are said to have "buckled". Euler's critical load and Johnson's parabolic formula are used to determine the buckling stress of a column. Buckling may occur even though the stresses that develop in the structure are well below those needed to cause failure in the material of which the structure is composed. Further loading may cause significant and somewhat unpredictable deformations, possibly leading to complete loss of the member's load-carrying capacity. However, if the deformations that occur after buckling do not cause the complete collapse of that member, the member will continue to support the load that caused it to buckle. If the buckled member is part of a larger assemblage of components such as a building, any load applied to the buckled part of the structure beyond that which caused the member to buckle will be redistributed within the structure. Some aircraft are designed for thin skin panels to continue carrying load even in the buckled state. Forms of buckling. Columns. The ratio of the effective length of a column to the least radius of gyration of its cross section is called the slenderness ratio (sometimes expressed with the Greek letter lambda, λ). This ratio affords a means of classifying columns and their failure mode. The slenderness ratio is important for design considerations. If the load on a column is applied through the center of gravity (centroid) of its cross section, it is called an axial load. A load at any other point in the cross section is known as an eccentric load. A short column under the action of an axial load will fail by direct compression before it buckles, but a long column loaded in the same manner will fail by springing suddenly outward laterally (buckling) in a bending mode. The buckling mode of deflection is considered a failure mode, and it generally occurs before the axial compression stresses (direct compression) can cause failure of the material by yielding or fracture of that compression member. However, intermediate-length columns will fail by a combination of direct compressive stress and bending. The theory of the behavior of columns was investigated in 1757 by mathematician Leonhard Euler. He derived the formula, termed Euler's critical load, that gives the maximum axial load that a long, slender, ideal column can carry without buckling. An ideal column is one that is perfectly straight, made of a homogeneous material, and free from initial stress. When the applied load reaches the Euler load, sometimes called the critical load, the column comes to be in a state of unstable equilibrium. At that load, the introduction of the slightest lateral force will cause the column to fail by suddenly "jumping" to a new configuration, and the column is said to have buckled. This is what happens when a person stands on an empty aluminum can and then taps the sides briefly, causing it to become instantly crushed (the vertical sides of the can may be understood as an infinite series of extremely thin columns). 
The formula derived by Euler for long slender columns is formula_0 where the critical load is proportional to the modulus of elasticity formula_2 and to the smallest area moment of inertia formula_3 of the cross section, and inversely proportional to the square of the column's effective length. Examination of this formula reveals several facts with regard to the load-bearing ability of slender columns. A conclusion from the above is that the buckling load of a column may be increased by changing its material to one with a higher modulus of elasticity (E), or changing the design of the column's cross section so as to increase its moment of inertia. The latter can be done without increasing the weight of the column by distributing the material as far from the principal axis of the column's cross section as possible. For most purposes, the most effective use of the material of a column is that of a tubular section. Another insight that may be gleaned from this equation is the effect of length on critical load. Doubling the unsupported length of the column quarters the allowable load. The restraint offered by the end connections of a column also affects its critical load. If the connections are perfectly rigid (not allowing rotation of its ends), the critical load will be four times that for a similar column where the ends are pinned (allowing rotation of its ends). Since the radius of gyration is defined as the square root of the ratio of the column's moment of inertia about an axis to its cross sectional area, the above Euler formula may be reformatted by substituting the radius of gyration formula_11 for formula_3: formula_12 where formula_13 is the stress that causes buckling in the column, and formula_14 is the slenderness ratio. Since structural columns are commonly of intermediate length, the Euler formula has little practical application for ordinary design. Issues that cause deviation from the pure Euler column behaviour include imperfections in geometry of the column in combination with plasticity/non-linear stress-strain behaviour of the column's material. Consequently, a number of empirical column formulae have been developed that agree with test data, all of which embody the slenderness ratio. Due to the uncertainty in the behavior of columns, for design, appropriate safety factors are introduced into these formulae. One such formula is the Perry–Robertson formula, which estimates the critical buckling load based on an assumed small initial curvature, hence an eccentricity of the axial load. The Rankine–Gordon formula, named for William John Macquorn Rankine and Perry Hugesworth Gordon (1899 – 1966), is also based on experimental results and suggests that a column will buckle at a load "F"max given by: formula_15 where formula_16 is the Euler maximum load and formula_1 is the maximum compressive load. This formula typically produces a conservative estimate of formula_17. Self-buckling. A free-standing, vertical column, with density formula_18, Young's modulus formula_2, and cross-sectional area formula_19, will buckle under its own weight if its height exceeds a certain critical value: formula_20 where formula_21 is the acceleration due to gravity, formula_3 is the second moment of area of the beam cross section, and formula_22 is the first zero of the Bessel function of the first kind of order −1/3, which is equal to 1.86635086... Plate buckling. A plate is a 3-dimensional structure defined as having a width of comparable size to its length, with a thickness that is very small in comparison to its other two dimensions. 
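As a numerical illustration of the column formulas above, the following Python sketch evaluates Euler's critical load and a Rankine–Gordon-style estimate for a hypothetical pinned steel column. All input values are invented for the example, and the standard textbook forms of the formulas (Euler load with an effective length factor, and the reciprocal combination of the Euler and squash loads) are assumed.

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler critical load: pi^2 * E * I / (K * L)^2 for a slender column."""
    return math.pi ** 2 * E * I / (K * L) ** 2

def rankine_gordon(F_euler, F_compressive):
    """Rankine-Gordon estimate: 1/F_max = 1/F_euler + 1/F_compressive."""
    return 1.0 / (1.0 / F_euler + 1.0 / F_compressive)

# Hypothetical pinned-pinned steel column (K = 1): E = 200 GPa,
# solid circular section d = 50 mm, length 3 m, yield strength 250 MPa.
E = 200e9                       # Pa
d = 0.05                        # m
I = math.pi * d**4 / 64         # second moment of area, m^4
A = math.pi * d**2 / 4          # cross-sectional area, m^2
L = 3.0                         # m

F_e = euler_critical_load(E, I, L)        # ~67 kN
F_c = 250e6 * A                           # squash (compressive) load, ~491 kN
print(F_e, rankine_gordon(F_e, F_c))      # the Rankine estimate is below both
```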
Similar to columns, thin plates experience out-of-plane buckling deformations when subjected to critical loads; however, contrasted to column buckling, plates under buckling loads can continue to carry loads, called local buckling. This phenomenon is incredibly useful in numerous systems, as it allows systems to be engineered to provide greater loading capacities. For a rectangular plate, supported along every edge, loaded with a uniform compressive force per unit length, the derived governing equation can be stated by: formula_23 where The solution to the deflection can be expanded into two harmonic functions shown: formula_28 where The previous equation can be substituted into the earlier differential equation where formula_30 equals 1. formula_33 can be separated providing the equation for the critical compressive loading of a plate: formula_34 where the buckling coefficient formula_35, is given by: formula_36 The buckling coefficient is influenced by the aspect of the specimen, formula_31 / formula_37, and the number of lengthwise curvatures. For an increasing number of such curvatures, the aspect ratio produces a varying buckling coefficient; but each relation provides a minimum value for each formula_29. This minimum value can then be used as a constant, independent from both the aspect ratio and formula_29. Given stress is found by the load per unit area, the following expression is found for the critical stress: formula_38 From the derived equations, it can be seen the close similarities between the critical stress for a column and for a plate. As the width formula_32 shrinks, the plate acts more like a column as it increases the resistance to buckling along the plate's width. The increase of formula_31 allows for an increase of the number of sine waves produced by buckling along the length, but also increases the resistance from the buckling along the width. This creates the preference of the plate to buckle in such a way to equal the number of curvatures both along the width and length. Due to boundary conditions, when a plate is loaded with a critical stress and buckles, the edges perpendicular to the load cannot deform out-of-plane and will therefore continue to carry the stresses. This creates a non-uniform compressive loading along the ends, where the stresses are imposed on half of the effective width on either side of the specimen, given by the following: formula_39 where As the loaded stress increases, the effective width continues to shrink; if the stresses on the ends ever reach the yield stress, the plate will fail. This is what allows the buckled structure to continue supporting loadings. When the axial load over the critical load is plotted against the displacement, the fundamental path is shown. It demonstrates the plate's similarity to a column under buckling; however, past the buckling load, the fundamental path bifurcates into a secondary path that curves upward, providing the ability to be subjected to higher loads past the critical load. Flexural-torsional buckling. Flexural-torsional buckling can be described as a combination of bending and twisting response of a member in compression. Such a deflection mode must be considered for design purposes. This mostly occurs in columns with "open" cross-sections and hence have a low torsional stiffness, such as channels, structural tees, double-angle shapes, and equal-leg single angles. Circular cross sections do not experience such a mode of buckling. Lateral-torsional buckling. 
When a simply supported beam is loaded in bending, the top side is in compression, and the bottom side is in tension. If the beam is not supported in the lateral direction (i.e., perpendicular to the plane of bending), and the flexural load increases to a critical limit, the beam will experience a lateral deflection of the compression flange as it buckles locally. The lateral deflection of the compression flange is restrained by the beam web and tension flange, but for an open section the twisting mode is more flexible, hence the beam both twists and deflects laterally in a failure mode known as "lateral-torsional buckling". In wide-flange sections (with high lateral bending stiffness), the deflection mode will be mostly twisting in torsion. In narrow-flange sections, the bending stiffness is lower and the column's deflection will be closer to that of lateral bucking deflection mode. The use of closed sections such as square hollow section will mitigate the effects of lateral-torsional buckling by virtue of their high torsional stiffness. "C""b" is a modification factor used in the equation for nominal flexural strength when determining lateral-torsional buckling. The reason for this factor is to allow for non-uniform moment diagrams when the ends of a beam segment are braced. The conservative value for "C""b" can be taken as 1, regardless of beam configuration or loading, but in some cases it may be excessively conservative. "C""b" is always equal to or greater than 1, never less. For cantilevers or overhangs where the free end is unbraced, Cb is equal to 1. Tables of values of "C""b" for simply supported beams exist. If an appropriate value of "C""b" is not given in tables, it can be obtained via the following formula: formula_42 where The result is the same for all unit systems. Plastic buckling. The buckling strength of a member is less than the elastic buckling strength of a structure if the material of the member is stressed beyond the elastic material range and into the non-linear (plastic) material behavior range. When the compression load is near the buckling load, the structure will bend significantly and the material of the column will diverge from a linear stress-strain behavior. The stress-strain behavior of materials is not strictly linear even below the yield point, hence the modulus of elasticity decreases as stress increases, and significantly so as the stresses approach the material's yield strength. This reduced material rigidity reduces the buckling strength of the structure and results in a buckling load less than that predicted by the assumption of linear elastic behavior. A more accurate approximation of the buckling load can be had by the use of the tangent modulus of elasticity, Et, which is less than the elastic modulus, in place of the elastic modulus of elasticity. The tangent is equal to the elastic modulus and then decreases beyond the proportional limit. The tangent modulus is a line drawn tangent to the stress-strain curve at a particular value of strain (in the elastic section of the stress-strain curve, the tangent modulus is equal to the elastic modulus). Plots of the tangent modulus of elasticity for a variety of materials are available in standard references. Crippling. Sections that are made up of flanged plates such as a channel, can still carry load in the corners after the flanges have locally buckled. Crippling is failure of the complete section. Diagonal tension. 
Because of the thin skins typically used in aerospace applications, skins may buckle at low load levels. However, once buckled, instead of being able to transmit shear forces, they are still able to carry load through "diagonal tension" (DT) stresses in the web. This results in a non-linear behaviour in the load carrying behaviour of these details. The ratio of the actual load to the load at which buckling occurs is known as the "buckling ratio" of a sheet. High buckling ratios may lead to excessive wrinkling of the sheets which may then fail through yielding of the wrinkles. Although they may buckle, thin sheets are designed to not permanently deform and return to an unbuckled state when the applied loading is removed. Repeated buckling may lead to fatigue failures. Sheets under diagonal tension are supported by stiffeners that as a result of sheet buckling carry a distributed load along their length, and may in turn result in these structural members failing under buckling. Thicker plates may only partially form a diagonal tension field and may continue to carry some of the load through shear. This is known as "incomplete diagonal tension" (IDT). This behavior was studied by Wagner and these beams are sometimes known as Wagner beams. Diagonal tension may also result in a pulling force on any fasteners such as rivets that are used to fasten the web to the supporting members. Fasteners and sheets must be designed to resist being pulled off their supports. Dynamic buckling. If a column is loaded suddenly and then the load released, the column can sustain a much higher load than its static (slowly applied) buckling load. This can happen in a long, unsupported column used as a drop hammer. The duration of compression at the impact end is the time required for a stress wave to travel along the column to the other (free) end and back down as a relief wave. Maximum buckling occurs near the impact end at a wavelength much shorter than the length of the rod, and at a stress many times the buckling stress of a statically loaded column. The critical condition for buckling amplitude to remain less than about 25 times the effective rod straightness imperfection at the buckle wavelength is formula_47 where formula_48 is the impact stress, formula_4 is the length of the rod, formula_49 is the elastic wave speed, and formula_50 is the smaller lateral dimension of a rectangular rod. Because the buckle wavelength depends only on formula_48 and formula_50, this same formula holds for thin cylindrical shells of thickness formula_50. Theory. Energy method. Often it is very difficult to determine the exact buckling load in complex structures using the Euler formula, due to the difficulty in determining the constant K. Therefore, maximum buckling load is often approximated using energy conservation and referred to as an energy method in structural analysis. The first step in this method is to assume a displacement mode and a function that represents that displacement. This function must satisfy the most important boundary conditions, such as displacement and rotation. The more accurate the displacement function, the more accurate the result. The method assumes that the system (the column) is a conservative system in which energy is not dissipated as heat, hence the energy added to the column by the applied external forces is stored in the column in the form of strain energy. 
formula_51 In this method, there are two equations used (for small deformations) to approximate the "strain" energy (the potential energy stored as elastic deformation of the structure) and "applied" energy (the work done on the system by external forces). formula_52 where formula_53 is the displacement function and the subscripts formula_54 and formula_55 refer to the first and second derivatives of the displacement. Single-degree-of-freedom models. Using the concept of "total potential energy", formula_56, it is possible to identify four fundamental forms of buckling found in structural models with one degree of freedom. We start by expressing formula_57 where formula_58 is the strain energy stored in the structure, formula_59 is the applied "conservative" load and formula_60 is the distance moved by formula_59 in its direction. Using the axioms of elastic instability theory, namely that equilibrium is any point where formula_56 is stationary with respect to the coordinate measuring the degree(s) of freedom and that these points are only stable if formula_56 is a local minimum and unstable if otherwise (e.g. maximum or a point of inflection). These four forms of elastic buckling are the "saddle-node bifurcation" or "limit point"; the "supercritical" or "stable-symmetric" bifurcation; the "subcritical" or "unstable-symmetric" bifurcation; and the "transcritical" or "asymmetric" bifurcation. All but the first of these examples is a form of "pitchfork bifurcation". Simple models for each of these types of buckling behaviour are shown in the figures below, along with the associated bifurcation diagrams. Engineering examples. Bicycle wheels. A conventional bicycle wheel consists of a thin rim kept under high compressive stress by the (roughly normal) inward pull of a large number of spokes. It can be considered as a loaded column that has been bent into a circle. If spoke tension is increased beyond a safe level or if part of the rim is subject to a certain lateral force, the wheel spontaneously fails into a characteristic saddle shape (sometimes called a "taco" or a "pringle") like a three-dimensional Euler column. If this is a purely elastic deformation the rim will resume its proper plane shape if spoke tension is reduced or a lateral force from the opposite direction is applied. Roads. Buckling is a failure mode in pavement materials, primarily with concrete, since asphalt is more flexible. Radiant heat from the sun is absorbed in the road surface, causing it to expand, forcing adjacent pieces to push against each other. If the stress is sufficient, the pavement can lift and crack without warning. Traversing a buckled section can be jarring to automobile drivers, described as running over a speed hump at highway speeds. Rail tracks. Similarly, rail tracks also expand when heated, and can fail by buckling, a phenomenon called sun kink. It is more common for rails to move laterally, often pulling the underlying ties (sleepers) along. Sun kink can lead to railroads drastically reducing the speed of trains, leading to delays and cancellations. This is done to avoid derailment. Intensifying heat waves due to climate change doubled the number of hours of heat related delays in 2023, compared to 2018. These accidents were deemed to be sun kink-related ("more information available at List of rail accidents (2000–2009)"): The Federal Railroad Administration issued a Safety Advisory on July 11, 2012 alerting railroad operators to inspect tracks for "buckling-prone conditions." 
The Advisory included a brief summary of four derailments that had occurred between June 23 and July 4 and appeared to be "heat related incidents." Pipes and pressure vessels. Pipes and pressure vessels subject to external overpressure, caused for example by steam cooling within the pipe and condensing into water with subsequent massive pressure drop, risk buckling due to compressive hoop stresses. Design rules for calculation of the required wall thickness or reinforcement rings are given in various piping and pressure vessel codes. Super- and hypersonic aerospace vehicles. Aerothermal heating can lead to buckling of surface panels on super- and hypersonic aerospace vehicles such as high-speed aircraft, rockets and reentry vehicles. If buckling is caused by aerothermal loads, the situation can be further complicated by enhanced heat transfer in areas where the structure deforms towards the flow-field. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " F_c = \\frac{\\pi^2 EI}{(KL)^2} " }, { "math_id": 1, "text": "F_c" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "L" }, { "math_id": 5, "text": "K" }, { "math_id": 6, "text": "K = 1.0" }, { "math_id": 7, "text": "K = 0.50" }, { "math_id": 8, "text": "K \\approx 0.699" }, { "math_id": 9, "text": "K = 2.0" }, { "math_id": 10, "text": "K L" }, { "math_id": 11, "text": "A r^2" }, { "math_id": 12, "text": "\\sigma = \\frac{F}{A} = \\frac{\\pi^2 E}{(l/r)^2}" }, { "math_id": 13, "text": "\\sigma = F/A" }, { "math_id": 14, "text": "l/r" }, { "math_id": 15, "text": "\\frac{1}{F_\\max} = \\frac{1}{F_e} + \\frac{1}{F_c}" }, { "math_id": 16, "text": "F_e" }, { "math_id": 17, "text": "F_\\max" }, { "math_id": 18, "text": "\\rho" }, { "math_id": 19, "text": "A" }, { "math_id": 20, "text": "h_\\text{crit} = \\left(\\frac{9B^2}{4}\\,\\frac{EI }{\\rho gA}\\right)^\\frac{1}{3}" }, { "math_id": 21, "text": "g" }, { "math_id": 22, "text": "B" }, { "math_id": 23, "text": "\\frac{\\partial^4 w}{\\partial x^4} + 2\\frac{\\partial^4 w}{\\partial x^2 \\partial y^2} + \\frac{\\partial^4 w}{\\partial y^4} = \\frac{12\\left(1 - \\nu^2\\right)}{E t^3}\\left(-N_x \\frac{\\partial^2 w}{\\partial x^2}\\right)" }, { "math_id": 24, "text": "w" }, { "math_id": 25, "text": "N_{x}" }, { "math_id": 26, "text": "\\nu" }, { "math_id": 27, "text": "t" }, { "math_id": 28, "text": "w = \\sum_{m=1}^\\infty \\sum_{n=1}^\\infty w_{mn}\\sin\\left(\\frac{m\\pi x}{a}\\right)\\sin\\left(\\frac{n\\pi y}{b}\\right)" }, { "math_id": 29, "text": "m" }, { "math_id": 30, "text": "n" }, { "math_id": 31, "text": "a" }, { "math_id": 32, "text": "b" }, { "math_id": 33, "text": "N_x" }, { "math_id": 34, "text": "N_{x, cr} = k_{cr} \\frac{\\pi^2 Et^3}{12\\left(1 - \\nu^2\\right)b^2}" }, { "math_id": 35, "text": "k_{cr}" }, { "math_id": 36, "text": "k_{cr} = \\left(\\frac{mb}{a} + \\frac{a}{mb}\\right)^2" }, { "math_id": 37, "text": "{b}" }, { "math_id": 38, "text": "\\sigma_{cr} = k_{cr}\\frac{\\pi^2 E}{12\\left(1 - \\nu^2\\right)\\left(\\frac{b}{t}\\right)^2}" }, { "math_id": 39, "text": "\\frac{b_\\text{eff}}{b} \\approx \\sqrt{\\frac{\\sigma_{cr}}{\\sigma_{y}} \\left(1 - 1.022 \\sqrt{\\frac{\\sigma_{cr}}{\\sigma_y}}\\right)}" }, { "math_id": 40, "text": "b_\\text{eff}" }, { "math_id": 41, "text": "\\sigma_y" }, { "math_id": 42, "text": "C_b = \\frac{12.5M_\\max}{2.5M_\\max + 3M_A + 4M_B + 3M_C}" }, { "math_id": 43, "text": "M_\\max" }, { "math_id": 44, "text": "M_A" }, { "math_id": 45, "text": "M_B" }, { "math_id": 46, "text": "M_C" }, { "math_id": 47, "text": "\\sigma L = \\rho c^2 h" }, { "math_id": 48, "text": "\\sigma" }, { "math_id": 49, "text": "c" }, { "math_id": 50, "text": "h" }, { "math_id": 51, "text": "U_\\text{applied} = U_\\text{strain}" }, { "math_id": 52, "text": "\\begin{align}\n U_\\text{strain} &= \\frac{E}{2} \\int I(x)(w_{xx}(x))^2 \\, \\mathrm{d}x \\\\\n U_\\text{applied} &= \\frac{P_\\text{crit}}{2} \\int (w_{x}(x))^2 \\, \\mathrm{d}x\n\\end{align}" }, { "math_id": 53, "text": "w(x)" }, { "math_id": 54, "text": "x" }, { "math_id": 55, "text": "xx" }, { "math_id": 56, "text": "V" }, { "math_id": 57, "text": "V = U - P\\Delta" }, { "math_id": 58, "text": "U" }, { "math_id": 59, "text": "P" }, { "math_id": 60, "text": "\\Delta" } ]
https://en.wikipedia.org/wiki?curid=815969
8160211
Cerebellar model articulation controller
The cerebellar model arithmetic computer (CMAC) is a type of neural network based on a model of the mammalian cerebellum. It is also known as the cerebellar model articulation controller. It is a type of associative memory. The CMAC was first proposed as a function modeler for robotic controllers by James Albus in 1975 (hence the name), but has been extensively used in reinforcement learning and also for automated classification in the machine learning community. The CMAC is an extension of the perceptron model. It computes a function of formula_0 input dimensions. The input space is divided up into hyper-rectangles, each of which is associated with a memory cell. The contents of the memory cells are the weights, which are adjusted during training. Usually, more than one quantisation of input space is used, so that any point in input space is associated with a number of hyper-rectangles, and therefore with a number of memory cells. The output of a CMAC is the algebraic sum of the weights in all the memory cells activated by the input point. A change of value of the input point results in a change in the set of activated hyper-rectangles, and therefore a change in the set of memory cells participating in the CMAC output. The CMAC output is therefore stored in a distributed fashion, such that the output corresponding to any point in input space is derived from the values stored in a number of memory cells (hence the name associative memory). This provides generalisation. Building blocks. In the adjacent image, there are two inputs to the CMAC, represented as a 2D space. Two quantising functions have been used to divide this space with two overlapping grids (one shown in heavier lines). A single input is shown near the middle, and this has activated two memory cells, corresponding to the shaded area. If another point occurs close to the one shown, it will share some of the same memory cells, providing generalisation. The CMAC is trained by presenting pairs of input points and output values, and adjusting the weights in the activated cells by a proportion of the error observed at the output. This simple training algorithm has a proof of convergence. It is normal to add a kernel function to the hyper-rectangle, so that points falling towards the edge of a hyper-rectangle have a smaller activation than those falling near the centre. One of the major problems cited in practical use of the CMAC is the memory size required, which is directly related to the number of cells used. This is usually ameliorated by using a hash function, and only providing memory storage for the actual cells that are activated by inputs. One-step convergent algorithm. Initially, the least mean square (LMS) method was employed to update the weights of the CMAC. LMS training of the CMAC is sensitive to the learning rate, and a poor choice can lead to divergence. In 2004, a recursive least squares (RLS) algorithm was introduced to train the CMAC online; it does not require tuning a learning rate. Its convergence has been proved theoretically, and it is guaranteed to converge in one step. The computational complexity of this RLS algorithm is O(N³). Hardware implementation infrastructure. Based on QR decomposition, an algorithm (QRLS) has been further simplified to have O(N) complexity. Consequently, this reduces memory usage and time cost significantly. A parallel pipeline array structure for implementing this algorithm has been introduced.
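Returning to the table-lookup behaviour described under "Building blocks", the following sketch (Python) illustrates a minimal two-input CMAC with several offset grid quantisations, hashed cell storage, and the simple proportional-error (LMS) update. All names, the number of tilings, the learning rate and the toy target function are illustrative assumptions for demonstration, not details taken from Albus's formulation or from the RLS/QRLS work discussed above.

import numpy as np

class TinyCMAC:
    # Minimal 2-input CMAC: each of several offset grids ("tilings") activates one
    # cell per input point; the output is the sum of the activated cells' weights.
    def __init__(self, n_tilings=8, resolution=0.1, table_size=32768, seed=0):
        self.n_tilings, self.res, self.table_size = n_tilings, resolution, table_size
        self.w = np.zeros(table_size)
        self.offsets = np.random.default_rng(seed).uniform(0, resolution, (n_tilings, 2))

    def _cells(self, x):
        # Hash each activated hyper-rectangle into a fixed-size weight table.
        out = []
        for t in range(self.n_tilings):
            q = np.floor((np.asarray(x) + self.offsets[t]) / self.res).astype(int)
            out.append(hash((t, int(q[0]), int(q[1]))) % self.table_size)
        return out

    def predict(self, x):
        return sum(self.w[i] for i in self._cells(x))

    def train(self, x, target, beta=0.3):
        # LMS rule: spread a proportion of the output error over the activated cells.
        err = target - self.predict(x)
        for i in self._cells(x):
            self.w[i] += beta * err / self.n_tilings

cmac = TinyCMAC()
rng = np.random.default_rng(1)
for _ in range(20000):                      # learn f(x, y) = sin(x) + cos(y)
    x = rng.uniform(0.0, 2 * np.pi, 2)
    cmac.train(x, np.sin(x[0]) + np.cos(x[1]))
print(cmac.predict([1.0, 2.0]), np.sin(1.0) + np.cos(2.0))   # approximation vs. true value

Nearby inputs share activated cells across the overlapping grids, which is what gives the scheme its local generalisation; the hash table simply avoids allocating storage for cells that are never visited.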
Overall, by utilizing the QRLS algorithm described above, the convergence of the CMAC neural network can be guaranteed, and the weights of the nodes can be updated in one step of training. Its parallel pipeline array structure offers great potential for hardware implementation in large-scale industrial use. Continuous CMAC. Since the rectangular shape of the CMAC receptive field functions produces a discontinuous, staircase-like function approximation, integrating the CMAC with B-spline functions yields the continuous CMAC, which offers the capability of obtaining derivatives of the approximated functions of any order. Deep CMAC. In recent years, numerous studies have confirmed that by stacking several shallow structures into a single deep structure, the overall system can achieve better data representation and thus deal more effectively with nonlinear and highly complex tasks. In 2018, a deep CMAC (DCMAC) framework was proposed and a backpropagation algorithm was derived to estimate the DCMAC parameters. Experimental results on an adaptive noise cancellation task showed that the proposed DCMAC achieves better noise cancellation performance than the conventional single-layer CMAC. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=8160211
816070
Length of a module
In algebra, integer associated to a module In algebra, the length of a module over a ring formula_0 is a generalization of the dimension of a vector space which measures its size. page 153 It is defined to be the length of the longest chain of submodules. For vector spaces (modules over a field), the length equals the dimension. If formula_0 is an algebra over a field formula_1, the length of a module is at most its dimension as a formula_1-vector space. In commutative algebra and algebraic geometry, a module over a Noetherian commutative ring formula_0 can have finite length only when the module has Krull dimension zero. Modules of finite length are finitely generated modules, but most finitely generated modules have infinite length. Modules of finite length are called Artinian modules and are fundamental to the theory of Artinian rings. The degree of an algebraic variety inside an affine or projective space is the length of the coordinate ring of the zero-dimensional intersection of the variety with a generic linear subspace of complementary dimension. More generally, the intersection multiplicity of several varieties is defined as the length of the coordinate ring of the zero-dimensional intersection. Definition. Length of a module. Let formula_2 be a (left or right) module over some ring formula_0. Given a chain of submodules of formula_2 of the form formula_3 one says that formula_4 is the "length" of the chain. The "length" of formula_2 is the largest length of any of its chains. If no such largest length exists, we say that formula_2 has "infinite length". Clearly, if the length of a chain equals the length of the module, one has formula_5 and formula_6 Length of a ring. The length of a ring formula_0 is the length of the longest chain of ideals; that is, the length of formula_0 considered as a module over itself by left multiplication. By contrast, the Krull dimension of formula_0 is the length of the longest chain of "prime" ideals. Properties. Finite length and finite modules. If an formula_0-module formula_2 has finite length, then it is finitely generated. If "R" is a field, then the converse is also true. Relation to Artinian and Noetherian modules. An formula_0-module formula_2 has finite length if and only if it is both a Noetherian module and an Artinian module (cf. Hopkins' theorem). Since all Artinian rings are Noetherian, this implies that a ring has finite length if and only if it is Artinian. Behavior with respect to short exact sequences. Supposeformula_7is a short exact sequence of formula_0-modules. Then M has finite length if and only if "L" and "N" have finite length, and we have formula_8 In particular, it implies the following two properties Jordan–Hölder theorem. A composition series of the module "M" is a chain of the form formula_9 such that formula_10 A module "M" has finite length if and only if it has a (finite) composition series, and the length of every such composition series is equal to the length of "M". Examples. Finite dimensional vector spaces. Any finite dimensional vector space formula_11 over a field formula_1 has a finite length. Given a basis formula_12 there is the chainformula_13which is of length formula_4. It is maximal because given any chain,formula_14the dimension of each inclusion will increase by at least formula_15. Therefore, its length and dimension coincide. Artinian modules. Over a base ring formula_0, Artinian modules form a class of examples of finite modules. 
In fact, these examples serve as the basic tools for defining the order of vanishing in intersection theory. Zero module. The zero module is the only one with length 0. Simple modules. Modules with length 1 are precisely the simple modules. Artinian modules over Z. The length of the cyclic group formula_16 (viewed as a module over the integers Z) is equal to the number of prime factors of formula_4, with multiple prime factors counted multiple times. This follows from the fact that the submodules of formula_16 are in one to one correspondence with the positive divisors of formula_4, this correspondence resulting itself from the fact that formula_17 is a principal ideal ring. Use in multiplicity theory. For the needs of intersection theory, Jean-Pierre Serre introduced a general notion of the multiplicity of a point, as the length of an Artinian local ring related to this point. The first application was a complete definition of the intersection multiplicity, and, in particular, a statement of Bézout's theorem that asserts that the sum of the multiplicities of the intersection points of n algebraic hypersurfaces in a n-dimensional projective space is either infinite or is "exactly" the product of the degrees of the hypersurfaces. This definition of multiplicity is quite general, and contains as special cases most of previous notions of algebraic multiplicity. Order of vanishing of zeros and poles. A special case of this general definition of a multiplicity is the order of vanishing of a non-zero algebraic function formula_18 on an algebraic variety. Given an algebraic variety formula_19 and a subvariety formula_11 of codimension 1 the order of vanishing for a polynomial formula_20 is defined asformula_21where formula_22 is the local ring defined by the stalk of formula_23 along the subvariety formula_11 pages 426-227, or, equivalently, the stalk of formula_23 at the generic point of formula_11 page 22. If formula_19 is an affine variety, and formula_11 is defined the by vanishing locus formula_24, then there is the isomorphismformula_25This idea can then be extended to rational functions formula_26 on the variety formula_19 where the order is defined asformula_27 which is similar to defining the order of zeros and poles in complex analysis. Example on a projective variety. For example, consider a projective surface formula_28 defined by a polynomial formula_29, then the order of vanishing of a rational functionformula_30is given byformula_31whereformula_32For example, if formula_33 and formula_34 and formula_35 thenformula_36since formula_37 is a unit in the local ring formula_38. In the other case, formula_39 is a unit, so the quotient module is isomorphic toformula_40so it has length formula_41. This can be found using the maximal proper sequenceformula_42 Zero and poles of an analytic function. The order of vanishing is a generalization of the order of zeros and poles for meromorphic functions in complex analysis. For example, the functionformula_43has zeros of order 2 and 1 at formula_44 and a pole of order formula_15 at formula_45. This kind of information can be encoded using the length of modules. For example, setting formula_46 and formula_47, there is the associated local ring formula_22 is formula_48 and the quotient module formula_49Note that formula_50 is a unit, so this is isomorphic to the quotient moduleformula_51Its length is formula_41 since there is the maximal chainformula_52of submodules. 
More generally, using the Weierstrass factorization theorem a meromorphic function factors asformula_30which is a (possibly infinite) product of linear polynomials in both the numerator and denominator. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "M_0 \\subsetneq M_1 \\subsetneq \\cdots \\subsetneq M_n," }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "M_0=0" }, { "math_id": 6, "text": "M_n=M." }, { "math_id": 7, "text": "0\\rarr L \\rarr M \\rarr N \\rarr 0" }, { "math_id": 8, "text": "\\text{length}_R(M) = \\text{length}_R(L) + \\text{length}_R(N)" }, { "math_id": 9, "text": "0=N_0\\subsetneq N_1 \\subsetneq \\cdots \\subsetneq N_n=M" }, { "math_id": 10, "text": "N_{i+1}/N_i \\text{ is simple for }i=0,\\dots,n-1" }, { "math_id": 11, "text": "V" }, { "math_id": 12, "text": "v_1,\\ldots,v_n" }, { "math_id": 13, "text": "0 \\subset \\text{Span}_k(v_1) \\subset \\text{Span}_k(v_1,v_2) \\subset \\cdots \\subset \\text{Span}_k(v_1,\\ldots, v_n) = V" }, { "math_id": 14, "text": "V_0 \\subset \\cdots \\subset V_m" }, { "math_id": 15, "text": "1" }, { "math_id": 16, "text": "\\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 17, "text": "\\Z" }, { "math_id": 18, "text": "f \\in R(X)^*" }, { "math_id": 19, "text": "X" }, { "math_id": 20, "text": "f \\in R(X)" }, { "math_id": 21, "text": "\\operatorname{ord}_V(f) = \\text{length}_{\\mathcal{O}_{V,X}}\\left( \\frac{\\mathcal{O}_{V,X}}{(f)} \\right)" }, { "math_id": 22, "text": "\\mathcal{O}_{V,X}" }, { "math_id": 23, "text": "\\mathcal{O}_X" }, { "math_id": 24, "text": "V(f)" }, { "math_id": 25, "text": "\\mathcal{O}_{V,X} \\cong R(X)_{(f)}" }, { "math_id": 26, "text": "F = f/g" }, { "math_id": 27, "text": "\\operatorname{ord}_V(F) := \\operatorname{ord}_V(f) - \\operatorname{ord}_V(g) " }, { "math_id": 28, "text": "Z(h) \\subset \\mathbb{P}^3" }, { "math_id": 29, "text": "h \\in k[x_0,x_1,x_2,x_3]" }, { "math_id": 30, "text": "F = \\frac{f}{g}" }, { "math_id": 31, "text": "\\operatorname{ord}_{Z(h)}(F) = \\operatorname{ord}_{Z(h)}(f) - \\operatorname{ord}_{Z(h)}(g) " }, { "math_id": 32, "text": "\\operatorname{ord}_{Z(h)}(f) = \\text{length}_{\\mathcal{O}_{Z(h),\\mathbb{P}^3}}\\left( \\frac{\\mathcal{O}_{Z(h),\\mathbb{P}^3}}{(f)} \\right)" }, { "math_id": 33, "text": "h = x_0^3 + x_1^3 + x_2^3 + x_2^3" }, { "math_id": 34, "text": "f = x^2 + y^2" }, { "math_id": 35, "text": "g = h^2(x_0 + x_1 - x_3)" }, { "math_id": 36, "text": "\\operatorname{ord}_{Z(h)}(f) = \\text{length}_{\\mathcal{O}_{Z(h),\\mathbb{P}^3}}\\left( \\frac{\\mathcal{O}_{Z(h),\\mathbb{P}^3}}{(x^2 + y^2)} \\right) = 0" }, { "math_id": 37, "text": "x^2 + y^2" }, { "math_id": 38, "text": "\\mathcal{O}_{Z(h),\\mathbb{P}^3}" }, { "math_id": 39, "text": "x_0 + x_1 - x_3" }, { "math_id": 40, "text": "\\frac{\\mathcal{O}_{Z(h), \\mathbb{P}^3}}{(h^2)}" }, { "math_id": 41, "text": "2" }, { "math_id": 42, "text": "(0) \\subset \\frac{\\mathcal{O}_{Z(h), \\mathbb{P}^3}}{(h)} \\subset \\frac{\\mathcal{O}_{Z(h), \\mathbb{P}^3}}{(h^2)}" }, { "math_id": 43, "text": "\\frac{(z-1)^3(z-2)}{(z-1)(z-4i)}" }, { "math_id": 44, "text": "1, 2 \\in \\mathbb{C}" }, { "math_id": 45, "text": "4i \\in \\mathbb{C}" }, { "math_id": 46, "text": "R(X) = \\mathbb{C}[z]" }, { "math_id": 47, "text": "V = V(z-1)" }, { "math_id": 48, "text": "\\mathbb{C}[z]_{(z-1)}" }, { "math_id": 49, "text": "\\frac{\\mathbb{C}[z]_{(z-1)}}{((z-4i)(z-1)^2)}" }, { "math_id": 50, "text": "z-4i" }, { "math_id": 51, "text": "\\frac{\\mathbb{C}[z]_{(z-1)}}{((z-1)^2)}" }, { "math_id": 52, "text": "(0) \\subset \\frac{\\mathbb{C}[z]_{(z-1)}}{((z-1))} \\subset {\\displaystyle {\\frac {\\mathbb {C} [z]_{(z-1)}}{((z-1)^{2})}}}" } ]
https://en.wikipedia.org/wiki?curid=816070
81609
Pinhole camera
Type of camera A pinhole camera is a simple camera without a lens but with a tiny aperture (the so-called "pinhole")—effectively a light-proof box with a small hole in one side. Light from a scene passes through the aperture and projects an inverted image on the opposite side of the box, which is known as the camera obscura effect. The size of the images depends on the distance between the object and the pinhole. A Worldwide Pinhole Photography Day is observed on the last Sunday of April, every year. History. Camera obscura. The camera obscura or pinhole image is a natural optical phenomenon. Early known descriptions are found in the Chinese Mozi writings (circa 500 BCE) and the Aristotelian "Problems" (circa 300 BCE – 600 CE). Ibn al-Haytham (965–1039), an Arab physicist also known as Alhazen, described the camera obscura effect. Over the centuries others started to experiment with it, mainly in dark rooms with a small opening in shutters, mostly to study the nature of light and to safely watch solar eclipses. Giambattista Della Porta wrote in 1558 in his Magia Naturalis about using a concave mirror to project the image onto paper and to use this as a drawing aid. However, at about the same time, the use of a lens instead of a pinhole was introduced. In the 17th century, the camera obscura with a lens became a popular drawing aid that was further developed into a mobile device, first in a little tent and later in a box. The photographic camera, as developed early in the 19th century, was basically an adaptation of the box-type camera obscura with a lens. The term "pin-hole" in the context of optics was found in James Ferguson's 1764 book "Lectures on select subjects in mechanics, hydrostatics, pneumatics, and optics". Early pinhole photography. The first known description of pinhole photography is found in the 1856 book "The Stereoscope" by Scottish inventor David Brewster, including the description of the idea as "a camera without lenses, and with only a pin-hole". Sir William Crookes and William de Wiveleslie Abney were other early photographers to try the pinhole technique. Film and integral photography experiments. According to inventor William Kennedy Dickson, the first experiments directed at moving pictures by Thomas Edison and his researchers took place around 1887 and involved "microscopic pin-point photographs, placed on a cylindrical shell". The size of the cylinder corresponded with their phonograph cylinder as they wanted to combine the moving images with sound recordings. Problems arose in recording clear pictures "with phenomenal speed" and the "coarseness" of the photographic emulsion when the pictures were enlarged. The microscopic pin-point photographs were soon abandoned. In 1893 the Kinetoscope was finally introduced with moving pictures on celluloid film strips. The camera that recorded the images, dubbed "Kinetograph", was fitted with a lens. Eugène Estanave experimented with integral photography, exhibiting a result in 1925 and publishing his findings in "La Nature". After 1930 he chose to continue his experiments with pinholes replacing the lenticular screen. Usage. The image of a pinhole camera may be projected onto a translucent screen for a real-time viewing (used for safe observation of solar eclipses) or to trace the image on paper. But it is more often used without a translucent screen for pinhole photography with photographic film or photographic paper placed on the surface opposite to the pinhole aperture. 
A common use of pinhole photography is to capture the movement of the sun over a long period of time. This type of photography is called solarigraphy. Pinhole photography is used for artistic reasons, but also for educational purposes to let pupils learn about, and experiment with, the basics of photography. Pinhole cameras with CCDs (charge-coupled devices) are sometimes used for surveillance because they are difficult to detect. Related cameras, image forming devices, or developments from it include Franke's widefield pinhole camera, the pinspeck camera, and the pinhead mirror. Modern manufacturing has enabled the production of high quality pinhole lenses that can be applied to digital cameras. Construction. Pinhole cameras can be handmade by the photographer for a particular purpose. In its simplest form, the photographic pinhole camera can consist of a light-tight box with a pinhole in one end, and a piece of film or photographic paper wedged or taped into the other end. A flap of cardboard with a tape hinge can be used as a shutter. The pinhole may be punched or drilled using a sewing needle or small diameter bit through a piece of tinfoil or thin aluminum or brass sheet. This piece is then taped to the inside of the light-tight box behind a hole cut through the box. A cylindrical oatmeal container may be made into a pinhole camera. The interior of an effective pinhole camera is black to avoid any reflection of the entering light onto the photographic material or viewing screen. Pinhole cameras can be constructed with a sliding film holder or back so the distance between the film and the pinhole can be adjusted. This allows the angle of view of the camera to be changed and also the effective f-stop ratio of the camera. Moving the film closer to the pinhole will result in a wide angle field of view and shorter exposure time. Moving the film farther away from the pinhole will result in a telephoto or narrow-angle view and longer exposure time. Pinhole cameras can also be constructed by replacing the lens assembly in a conventional camera with a pinhole. In particular, compact 35 mm cameras whose lens and focusing assembly have been damaged can be reused as pinhole cameras—maintaining the use of the shutter and film winding mechanisms. As a result of the enormous increase in f-number, while maintaining the same exposure time, one must use a fast film in direct sunshine. Pinholes (homemade or commercial) can be used in place of the lens on an SLR. Use with a digital SLR allows metering and composition by trial and error, and is effectively free, so is a popular way to try pinhole photography. Selection of pinhole size. Up to a certain point, the smaller the hole, the sharper the image, but the dimmer the projected image. Optimally, the size of the aperture should be 1/100 or less of the distance between it and the projected image. Within limits, a small pinhole through a thin surface will result in a sharper image resolution because the projected circle of confusion at the image plane is practically the same size as the pinhole. An extremely small hole, however, can produce significant diffraction effects and a less clear image due to the wave properties of light. Additionally, vignetting occurs as the diameter of the hole approaches the thickness of the material in which it is punched, because the sides of the hole obstruct the light entering at anything other than 90 degrees. 
The best pinhole is perfectly round (since irregularities cause higher-order diffraction effects) and in an extremely thin piece of material. Industrially produced pinholes benefit from laser etching, but a hobbyist can still produce pinholes of sufficiently high quality for photographic work. A method of calculating the optimal pinhole diameter was first published by Joseph Petzval in 1857. The smallest possible diameter of the image point and therefore the highest possible image resolution and the sharpest image are given when: formula_0 (Where d is the pinhole diameter, f is the distance from pinhole to image plane or “focal length” and λ is the wavelength of light.) The first to apply wave theory to the problem was Lord Rayleigh in 1891. But due to some incorrect and arbitrary deductions he arrived at: formula_1 So his optimal pinhole was approximatively 1/3 bigger than Petzval’s. The correct optimum can be found with Fraunhofer approximation of the diffraction pattern behind a circular aperture at: formula_2 This may be shortened to: formula_3 (When d and f in millimetres and λ = 550 nm = 0.00055 mm, corresponding to yellow-green.) For a pinhole-to-film distance of 1 inch or 25.4 mm, this works out to a pinhole of 0.185 mm (185 microns) in diameter. For f= 50 mm the optimal diameter is 0.259 mm. The depth of field is basically infinite, but this does not mean that no optical blurring occurs. The infinite depth of field means that image blur depends not on object distance but on other factors, such as the distance from the aperture to the film plane, the aperture size, the wavelength(s) of the light source, and motion of the subject or canvas. Additionally, pinhole photography can not avoid the effects of haze. In the 1970s, Young measured the resolution limit of the pinhole camera as a function of pinhole diameter and later published a tutorial in "The Physics Teacher". Partly to enable a variety of diameters and focal lengths, he defined two normalized variables: resolution limit divided by the pinhole radius, and focal length divided by the quantity "s"2/λ, where "s" is the radius of the pinhole and λ is the wavelength of the light, typically about 550 nm. His results are plotted in the figure. On the left-side of the graph, the pinhole is large, and geometric optics applies; the resolution limit is about 1.5 times the radius of the pinhole. (Spurious resolution is also seen in the geometric-optics limit.) On the right-side, the pinhole is small, and Fraunhofer diffraction applies; the resolution limit is given by the far-field diffraction formula shown in the graph and now increases as the pinhole is made smaller. In this formula, the radius of the pinhole is used instead of its diameter, that's why the constant is 0.61 instead of the more usual 1.22. In the region of near-field diffraction (or Fresnel diffraction), the pinhole focuses the light slightly, and the resolution limit is minimized when the focal length "f" (the distance between the pinhole and the film plane) is given by "f" = "s"2/λ. At this focal length, the pinhole focuses the light slightly, and the resolution limit is about 2/3 of the radius of the pinhole. The pinhole, in this case, is equivalent to a Fresnel zone plate with a single zone. The value "s"2/λ is in a sense the natural focal length of the pinhole. The relation "f" = "s"2/"λ" yields an optimum pinhole diameter d = 2√"fλ", so the experimental value differs slightly from the estimate of Petzval, above. Calculating the f-number and required exposure. 
The f-number of the camera may be calculated by dividing the distance from the pinhole to the imaging plane (the focal length) by the diameter of the pinhole. For example, a camera with a 0.5 mm diameter pinhole, and a 50 mm focal length would have an f-number of 50/0.5, or 100 ("f"/100 in conventional notation). Due to the large f-number of a pinhole camera, exposures will often encounter reciprocity failure. Once exposure time has exceeded about 1 second for film or 30 seconds for paper, one must compensate for the breakdown in linear response of the film/paper to intensity of illumination by using longer exposures. Exposures projected on to modern light-sensitive photographic film can typically range from five seconds up to as much as several hours, with smaller pinholes requiring longer exposures to produce the same size image. Because a pinhole camera requires a lengthy exposure, its shutter may be manually operated, as with a flap made of opaque material to cover and uncover the pinhole. Natural pinhole phenomenon. A pinhole camera effect can sometimes occur naturally. Small "pinholes" formed by the gaps between overlapping tree leaves will create replica images of the sun on flat surfaces. During an eclipse, this produces small crescents in the case of a partial eclipse, or hollow rings in the case of an annular eclipse. Disco balls can also function as natural reflective pinhole cameras (also known as a pinhead mirror). References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. Media related to at Wikimedia Commons
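As a worked illustration of the sizing and exposure relations above, the following sketch (Python) combines the Fraunhofer optimum for the pinhole diameter with the resulting f-number and a square-law exposure scaling. The metered reference exposure of 1/125 s at "f"/16 is an assumed starting point for the comparison, and reciprocity failure is ignored.

import math

def optimal_pinhole_diameter_mm(focal_length_mm, wavelength_mm=0.00055):
    # d = sqrt(2.44 * f * lambda), the optimum quoted above for yellow-green light
    return math.sqrt(2.44 * focal_length_mm * wavelength_mm)

f = 50.0                              # pinhole-to-film distance in mm
d = optimal_pinhole_diameter_mm(f)    # ~0.259 mm
n = f / d                             # f-number, roughly f/193
t_ref, n_ref = 1 / 125, 16            # assumed metered exposure at f/16
t = t_ref * (n / n_ref) ** 2          # exposure time scales with the square of the f-number
print(f"d = {d:.3f} mm, f/{n:.0f}, exposure ~ {t:.1f} s")

In practice the computed exposure of around a second would itself need to be lengthened further to compensate for reciprocity failure, as discussed above.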
[ { "math_id": 0, "text": "d=\\sqrt{2}\\sqrt{f\\lambda}=1.41\\sqrt{f\\lambda}" }, { "math_id": 1, "text": "d=2\\sqrt{f\\lambda}" }, { "math_id": 2, "text": "d=\\sqrt{2.44}\\sqrt{f\\lambda}=1.562\\sqrt{f\\lambda}" }, { "math_id": 3, "text": "d=0.0366\\sqrt{f}" } ]
https://en.wikipedia.org/wiki?curid=81609
8161285
High frequency content measure
In signal processing, the high frequency content measure is a simple measure, taken across a signal spectrum (usually an STFT spectrum), that can be used to characterize the amount of high-frequency content in the signal. The magnitudes of the spectral bins are added together, with each magnitude weighted by the bin "position" (proportional to the frequency). Thus if "X"("k") is a discrete spectrum with "N" unique points, its high frequency content measure is: formula_0 In contrast to perceptual measures, it is not based on any evidence about its relevance to human hearing. Despite that, it can be useful for some applications, such as onset detection. The measure has close similarities to the spectral centroid measure, being essentially the same calculation but without normalization according to overall magnitude.
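A direct transcription of the definition into code makes the behaviour easy to see: a broadband, onset-like frame scores much higher than a tonal frame of the same total energy concentrated in low bins. The sketch below (Python with NumPy) uses synthetic frames purely for illustration.

import numpy as np

def high_frequency_content(frame):
    # HFC of one spectral frame: sum of bin magnitudes weighted by bin index
    mags = np.abs(np.asarray(frame))
    return float(np.sum(np.arange(len(mags)) * mags))

tonal = np.zeros(512)
tonal[3] = 10.0                                # all energy in a low bin
broadband = np.full(512, 10.0 / np.sqrt(512))  # same total energy, spread across bins
print(high_frequency_content(tonal), round(high_frequency_content(broadband)))

This weighting towards high bins, without any normalization by the overall magnitude, is what makes the measure respond strongly to percussive onsets.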
[ { "math_id": 0, "text": " \n\\mathrm{HFC} = \\sum_{i=0}^{N-1} i|X(i)|\n" } ]
https://en.wikipedia.org/wiki?curid=8161285
8163506
Effective number of bits
A dynamic-range metric for digital systems Effective number of bits (ENOB) is a measure of the dynamic range of an analog-to-digital converter (ADC), digital-to-analog converter, or their associated circuitry. The resolution of an ADC is specified by the number of bits used to represent the analog value. Ideally, a 12-bit ADC will have an effective number of bits of almost 12. However, real signals have noise, and real circuits are imperfect and introduce additional noise and distortion. Those imperfections reduce the number of bits of accuracy in the ADC. The ENOB describes the effective resolution of the system in bits. An ADC may have a 12-bit resolution, but the effective number of bits, when used in a system, may be 9.5. ENOB is also used as a quality measure for other blocks such as sample-and-hold amplifiers. Thus analog blocks may be included in signal-chain calculations. The total ENOB of a chain of blocks is usually less than the ENOB of the worst block. The frequency band of a signal converter where ENOB is still guaranteed is called the effective resolution bandwidth and is limited by dynamic quantization problems. For example, an ADC has some aperture uncertainty. The instant a real ADC samples, its input varies from sample to sample. Because the input signal changes, that time variation translates to an output variation. For example, an ADC may sample 1 ns late. If the input signal is a 1 V sinewave at 1,000,000 radians/second (roughly 160 kHz), the input voltage may change by as much as 1 MV/s. A sampling time error of 1 ns would cause a sampling error of about 1 mV (an error in the 10th bit). If the frequency were 100 times faster (about 16 MHz), then the maximum error would be 100 times greater: about 100 mV on a 1 V signal (an error in the third or fourth bit). Definition. An often used definition for ENOB is formula_0 where This definition compares the SINAD of an ideal ADC or DAC with a word length of ENOB bits with the SINAD of the ADC or DAC being tested. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
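The definition converts directly between a measured SINAD figure and an effective bit count. A short sketch (Python; the example SINAD value is illustrative):

def enob_from_sinad(sinad_db):
    # effective number of bits from measured SINAD in dB
    return (sinad_db - 1.76) / 6.02

def sinad_from_enob(enob):
    # SINAD of an ideal converter with the given word length
    return 6.02 * enob + 1.76

print(round(enob_from_sinad(62.0), 2))   # a converter measuring 62 dB SINAD: ~10.01 bits
print(round(sinad_from_enob(12), 2))     # ideal 12-bit converter: 74.0 dB

So an ADC specified as 12-bit but measuring 62 dB SINAD in-circuit delivers roughly 10 effective bits, the kind of gap between nominal resolution and ENOB described above.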
[ { "math_id": 0, "text": "\\mathrm{ENOB} = \\frac{\\mathrm{SINAD} - 1.76}{6.02}," } ]
https://en.wikipedia.org/wiki?curid=8163506
8163824
Rose tree
Tree data structure with a variable and unbounded number of branches per node In computing, a rose tree is a term for the value of a tree data structure with a variable and unbounded number of branches per node. The term is mostly used in the functional programming community, e.g., in the context of the Bird–Meertens formalism. Apart from the multi-branching property, the most essential characteristic of rose trees is the coincidence of bisimilarity with identity: two distinct rose trees are never bisimilar. Naming. The name "rose tree" was coined by Lambert Meertens to evoke the similarly named, and similarly structured, common rhododendron. We shall call such trees "rose trees", a literal translation of "rhododendron" (Greek = rose, = tree), because of resemblance to the habitus of this shrub, except that the latter does not grow upside-down on the Northern hemisphere. Recursive definition. Well-founded rose trees can be defined by a recursive construction of entities of the following types: Typically, only some combinations of entity types are used for the construction. The original paper only considers 1+2b ("sequence-forking" rose trees) and 1+2a ("set-forking" rose trees). In later literature, the 1+2b variant is usually introduced by the following definition: &lt;q&gt;A rose tree [...] is either a leaf containing a value, or a node that can have an arbitrary list of subtrees&lt;/q&gt;. The most common definition used in functional programming (particularly in Haskell) combines 3+2b: "&lt;q&gt;An element of Rose α consists of a labelled node together with a list of subtrees&lt;/q&gt;". That is, a rose tree is a pairing entity (type 3) whose branching entity is a sequence (thus of type 2b) of rose trees. Sometimes even the combination 1+3b is considered. The following table provides a summary of the most established combinations of entities. "Notes:" &lt;templatestyles src="Reflist/styles.css" /&gt; General definition. General rose trees can be defined via bisimilarity of accessible pointed multidigraphs with appropriate labelling of nodes and arrows. These structures are generalization of the notion of accessible pointed graph (abbreviated as "apg") from non-well-founded set theory. We will use the "apq" acronym for the below described multidigraph structures. This is meant as an abbreviation of "accessible pointed quiver" where "quiver" is an established synonym for "multidigraph". In a correspondence to the types of entities used in the recursive definition, each node of an apq is assigned a type (1), (2a), (2b), (2c) or (3). The apqs are subject to conditions that mimic the properties of recursively constructed entities. A "bisimilarity" between apqs 𝒳 ("X", ...) and 𝒴 ("Y", ...) is a relation "R" ⊆ "X" × "Y" between nodes such that the roots of 𝒳 and 𝒴 are R-related and for every pair ("x","y") of R-related nodes, the following are satisfied: Two apqs 𝒳 and 𝒴 are said to be "bisimilar" if there exists a bisimilarity relation R for them. This establishes an equivalence relation on the class of all apqs. A "rose tree" is then some fixed representation of the class 𝒞 of apqs that are bisimilar to some given apq 𝒳. If the root node of 𝒳 is of type (1) then 𝒞 {𝒳}, thus 𝒞 can be represented by this root node. Otherwise, 𝒞 is a proper class – in this case the representation can be provided by Scott's trick to be the set of those elements of 𝒞 that have the lowest rank. 
As a result of the above set-theoretic construction, the class ℛ of all rose trees is defined, depending on the sets V (ground values), Σ (arrow names) and L (node labels) as the definitory constituents. Subsequently, the structure of apqs can be carried over to a labelled multidigraph structure over ℛ. That is, elements of ℛ can themselves be considered as "nodes" with induced type assignment, node labelling and arrows. The class 𝒜 of arrows is a subclass of (ℛ × ℛ) ∪ (ℛ × (formula_0 ∪ "Σ") × ℛ), that is, arrows are either source-target couples or source-label-target triples according to the type of the source. For every element r of ℛ there is an induced apq 𝒳 ("X", "A", "r", ...) such that r is the root node of 𝒳 and the respective sets X and A of nodes and arrows of 𝒳 are formed by those elements of ℛ and 𝒜 that are accessible via a path of arrows starting at r. The induced apq 𝒳 is bisimilar to apqs used for the construction of r. Pathname maps. Rose trees that do not contain set-branching nodes (type 2a) can be represented by pathname maps. A "pathname" is just a finite sequence of arrow labels. For an arrow path "a" ["a"1, ..., "a"n] (a finite sequence of consecutive arrows), the pathname of p is the corresponding sequence "σ"("a") ["σ"("a"1), ..., "σ"("a"n)] of arrow labels. Here it is assumed that each arrow is labelled (σ denotes the labelling function). In general, each arrow path needs to be first reduced by removing all its arrows sourced at pairing nodes (type 3). A pathname p is "resolvable" iff there exists a root-originating arrow path "a" whose pathname is p. Such "a" is uniquely given up to a possible unlabelled last arrow (sourced at a pairing node). The "target node" of a non-empty resolvable path is the target node of the last arrow of the correspondent root-originating arrow path that does not end with an unlabelled arrow. The target of the empty path is the root node. Given a rose tree r that does not contain set-branching nodes, the "pathname map" of r is a map t that assigns each resolvable pathname p its "value" "t"("p") according to the following general scheme: "t"———→ ("V" ∪ {⊥} ∪ "L") × "T" Recall that formula_0 ∪ "Σ" is the set of arrow labels (formula_0 is the set of natural numbers and Σ is the set of arrow names) L is the set of node labels, and V is the set of ground values. The additional symbols ⊥ and T respectively mean an indicator of a resolvable pathname and the set of type tags, "T" {'1', '2b', '2c', '3b', '3c'}. The t map is defined by the following prescription (x denotes the target of p): It can be shown that different rose trees have different pathname maps. For "homogeneous" rose trees there is no need for type tagging, and their pathname map t can be defined as summarized below: In each case, there is a simple axiomatization in terms of pathnames: In particular, a rose tree in the most common "Haskell" sense is just a map from a non-empty prefix-closed and left-sibling-closed set of finite sequences of natural numbers to a set L. Such a definition is mostly used outside the branch of functional programming, see Tree (automata theory). Typically, documents that use this definition do not mention the term "rose tree" at all. "Notes:" &lt;templatestyles src="Reflist/styles.css" /&gt; Examples. The diagrams below show two examples of rose trees together with the correspondent Haskell code. In both cases, the Data.Tree module is used as it is provided by the Haskell containers package. 
The module introduces rose trees as pairing entities by the following definition: data Tree a = Node { rootLabel :: a, -- ^ label value subForest :: [Tree a] -- ^ zero or more child trees Both examples are contrived so as to demonstrate the concept of "sharing of substructures" which is a distinguished feature of rose trees. In both cases, the labelling function is injective (so that the labels codice_0, codice_1, codice_2 or codice_3 uniquely identify a subtree / node) which does not need to be satisfied in general. The natural numbers (0,1,2 or 3) along the arrows indicate the zero-based position in which a tree appears in the codice_4 sequence of a particular "super-tree". As a consequence of possible repetitions in codice_4, there can be multiple arrows between nodes. In each of the examples, the rose tree in question is labelled by codice_0 and equals the value of the codice_7 variable in the code. In both diagrams, the tree is pointed to by a source-less arrow. Well-founded rose tree import Data.Tree main :: IO () main = do print a Non-well-founded rose tree import Data.Tree main :: IO () main = do let root x = case x of 'a' -&gt; (x,[x,'b']) 'b' -&gt; (x,[x,'c']) 'c' -&gt; (x,[x,'a']) let a = unfoldTree root 'a' putStrLn (take 900 (show a) ++ " ... (and so on)") The first example presents a well-founded rose tree codice_7 obtained by an incremental construction. First codice_9 is constructed, then codice_10 then codice_11 and finally codice_7. The rose tree can be represented by the pathname map shown on the left. The second example presents a non-well-founded rose tree codice_7 built by a breadth-first constructor codice_14. The rose tree is a Moore machine, see notes above. Its pathname map is defined by "t"("p") be respectively equal to 'a' or 'b' or 'c' according to "n" mod 3 where n is the number of occurrences of 1 in p. Relation to tree data structures. The general definition provides a connection to tree data structures: "Rose trees are tree structures modulo bisimilarity." Mapping tree data structures to their values The "tree structures" are those apqs (labelled multidigraphs from the general definition) in which each node is accessible by a "unique" arrow path. Every rose tree is bisimilar to such a tree structure (since every apq is bisimilar to its unfolding) and every such tree structure is bisimilar to "exactly one" rose tree which can therefore be regarded as the "value" of the tree structure. The diagram on the right shows an example of such a structure-to-value mapping. In the upper part of the diagram, a node-labelled ordered tree T is displayed, containing 23 nodes. In the lower part, a rose tree R is shown that is the value of T. There is an induced subtree-to-subvalue mapping which is partially displayed by blue arrows. Observe that the mapping is many-to-one: distinct tree data structures can have the same value. As a particular consequence, a rose tree in general is not a tree in terms of "subvalue" relationship between its subvalues, see #Terminological_controversy. Tree data type. The value mapping described above can be used to clarify the difference between the terms "tree data structure" and "tree data type": "A tree data type is a set of values of tree data structures". Note that there are 2 degrees of discrepancy between the terms. This becomes apparent when one compares a "single" tree data type with a "single" tree data structure. 
A single tree data type contains (infinitely) many values each of which is represented by (infinitely) many tree data structures. For example, given a set "L" {'a','b','c','d'} of labels, the set of rose trees in the Haskell sense (3b) with labels taken from L is a single tree data type. All the above examples of rose trees belong to this data type. "Notes:" &lt;templatestyles src="Reflist/styles.css" /&gt; Terminological controversy. As it can be observed in the above text and diagrams, the term "rose tree" is controversial. There are two interrelated issues: Interestingly, the term "node" does not appear in the original paper except for a single occurrence of "nodes" in an informal paragraph on page 20. In later literature the word is used abundantly. This can already be observed in the quoted comments to the definitions: In particular, the definition of rose trees in the most common Haskell sense suggests that (within the context of discourse) "node" and "tree" are synonyms. Does it mean that every rose tree is coincident with its root node? If so, is such a property considered specific to rose trees or does it also apply to other trees? Such questions are left unanswered. The (B) problem becomes apparent when looking at the diagrams of the above examples. Both diagrams are faithful in the sense that each node is drawn "exactly once". One can immediately see that the underlying graphs are not trees. Using a quotation from Tree (graph theory) "&lt;q&gt;The various kinds of data structures referred to as trees in computer science have underlying graphs that are trees in graph theory [...]&lt;/q&gt;" one can conclude that rose trees in general are not trees in usual meaning known from computer science. Bayesian rose tree. There is at least one adoption of the term "rose tree" in computer science in which "sharing of substructures" is precluded. The concept of a "Bayesian rose tree" is based on the following definition of rose trees: T is a rose tree if either "T" {x} for some data point x or "T" {"T"1, ... ,"T""n""T"} where "T""i"'s are rose trees over disjoint sets of data points. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
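As a concrete illustration of the pathname-map reading of the non-well-founded example above, the following Haskell sketch can be placed beside the code already shown. It is only a hedged illustration: the helpers valueAt and followPath are ad-hoc names, not part of the Data.Tree interface, and paths are taken to be lists of zero-based child positions.
import Data.Tree (Tree(..), unfoldTree)
-- Pathname-map value of the non-well-founded example at path p:
-- 'a', 'b' or 'c' according to the number of occurrences of 1 in p, modulo 3.
valueAt :: [Int] -> Char
valueAt p = "abc" !! (length (filter (== 1) p) `mod` 3)
-- The same value read off the (infinite) tree by following child positions.
followPath :: Tree a -> [Int] -> a
followPath t []       = rootLabel t
followPath t (i : is) = followPath (subForest t !! i) is
-- The non-well-founded rose tree from the second example.
a :: Tree Char
a = unfoldTree step 'a'
  where
    step 'a' = ('a', "ab")
    step 'b' = ('b', "bc")
    step _   = ('c', "ca")
main :: IO ()
main = print (all (\p -> followPath a p == valueAt p)
                  [[], [0], [1], [1,1], [1,0,1], [0,1,1,1]])
Laziness is what makes this work: only the finite part of the infinite tree that the sampled paths visit is ever forced.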
[ { "math_id": 0, "text": "\\mathbb{N}" } ]
https://en.wikipedia.org/wiki?curid=8163824
81644
Removable singularity
Undefined point on a holomorphic function which can be made regular In complex analysis, a removable singularity of a holomorphic function is a point at which the function is undefined, but it is possible to redefine the function at that point in such a way that the resulting function is regular in a neighbourhood of that point. For instance, the (unnormalized) sinc function, as defined by formula_0 has a singularity at "z" = 0. This singularity can be removed by defining formula_1 which is the limit of sinc as z tends to 0. The resulting function is holomorphic. In this case the problem was caused by sinc being given an indeterminate form. Taking a power series expansion for formula_2 around the singular point shows that formula_3 Formally, if formula_4 is an open subset of the complex plane formula_5, formula_6 a point of formula_7, and formula_8 is a holomorphic function, then formula_9 is called a removable singularity for formula_10 if there exists a holomorphic function formula_11 which coincides with formula_10 on formula_12. We say formula_10 is holomorphically extendable over formula_7 if such a formula_13 exists. Riemann's theorem. Riemann's theorem on removable singularities is as follows: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem —  Let formula_14 be an open subset of the complex plane, formula_15 a point of formula_16 and formula_10 a holomorphic function defined on the set formula_17. The following are equivalent: The implications 1 ⇒ 2 ⇒ 3 ⇒ 4 are trivial. To prove 4 ⇒ 1, we first recall that the holomorphy of a function at formula_9 is equivalent to it being analytic at formula_9 (proof), i.e. having a power series representation. Define formula_19 Clearly, "h" is holomorphic on formula_20, and there exists formula_21 by 4, hence "h" is holomorphic on "D" and has a Taylor series about "a": formula_22 We have "c"0 = "h"("a") = 0 and "c"1 = "h'"("a") = 0; therefore formula_23 Hence, where formula_24, we have: formula_25 However, formula_26 is holomorphic on "D", thus an extension of formula_27. Other kinds of singularities. Unlike functions of a real variable, holomorphic functions are sufficiently rigid that their isolated singularities can be completely classified. A holomorphic function's singularity is either not really a singularity at all, i.e. a removable singularity, or one of the following two types:
[ { "math_id": 0, "text": " \\text{sinc}(z) = \\frac{\\sin z}{z} " }, { "math_id": 1, "text": "\\text{sinc}(0) := 1," }, { "math_id": 2, "text": "\\frac{\\sin(z)}{z}" }, { "math_id": 3, "text": " \\text{sinc}(z) = \\frac{1}{z}\\left(\\sum_{k=0}^{\\infty} \\frac{(-1)^kz^{2k+1}}{(2k+1)!} \\right) = \\sum_{k=0}^{\\infty} \\frac{(-1)^kz^{2k}}{(2k+1)!} = 1 - \\frac{z^2}{3!} + \\frac{z^4}{5!} - \\frac{z^6}{7!} + \\cdots. " }, { "math_id": 4, "text": "U \\subset \\mathbb C" }, { "math_id": 5, "text": "\\mathbb C" }, { "math_id": 6, "text": "a \\in U" }, { "math_id": 7, "text": "U" }, { "math_id": 8, "text": "f: U\\setminus \\{a\\} \\rightarrow \\mathbb C" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "f" }, { "math_id": 11, "text": "g: U \\rightarrow \\mathbb C" }, { "math_id": 12, "text": "U\\setminus \\{a\\}" }, { "math_id": 13, "text": "g" }, { "math_id": 14, "text": "D \\subset \\mathbb C" }, { "math_id": 15, "text": "a \\in D" }, { "math_id": 16, "text": "D" }, { "math_id": 17, "text": "D \\setminus \\{a\\}" }, { "math_id": 18, "text": "\\lim_{z\\to a}(z - a) f(z) = 0" }, { "math_id": 19, "text": "\n h(z) = \\begin{cases}\n (z - a)^2 f(z) & z \\ne a ,\\\\\n 0 & z = a .\n \\end{cases}\n" }, { "math_id": 20, "text": " D \\setminus \\{a\\}" }, { "math_id": 21, "text": "h'(a)=\\lim_{z\\to a}\\frac{(z - a)^2f(z)-0}{z-a}=\\lim_{z\\to a}(z - a) f(z)=0" }, { "math_id": 22, "text": "h(z) = c_0 + c_1(z-a) + c_2 (z - a)^2 + c_3 (z - a)^3 + \\cdots \\, ." }, { "math_id": 23, "text": "h(z) = c_2 (z - a)^2 + c_3 (z - a)^3 + \\cdots \\, ." }, { "math_id": 24, "text": "z \\ne a" }, { "math_id": 25, "text": "f(z) = \\frac{h(z)}{(z - a)^2} = c_2 + c_3 (z - a) + \\cdots \\, ." }, { "math_id": 26, "text": "g(z) = c_2 + c_3 (z - a) + \\cdots \\, ." }, { "math_id": 27, "text": " f " }, { "math_id": 28, "text": "m" }, { "math_id": 29, "text": "\\lim_{z \\rightarrow a}(z-a)^{m+1}f(z)=0" }, { "math_id": 30, "text": "U \\setminus \\{a\\}" } ]
https://en.wikipedia.org/wiki?curid=81644
8169758
Many-sorted logic
Hierarchical typed logic Many-sorted logic can reflect formally our intention not to handle the universe as a homogeneous collection of objects, but to partition it in a way that is similar to types in typeful programming. Both functional and assertive "parts of speech" in the language of the logic reflect this typeful partitioning of the universe, even on the syntax level: substitution and argument passing can be done only accordingly, respecting the "sorts". There are various ways to formalize the intention mentioned above; a "many-sorted logic" is any package of information which fulfils it. In most cases, the following are given: The domain of discourse of any structure of that signature is then fragmented into disjoint subsets, one for every sort. Example. When reasoning about biological organisms, it is useful to distinguish two sorts: formula_0 and formula_1. While a function formula_2 makes sense, a similar function formula_3 usually does not. Many-sorted logic allows one to have terms like formula_4, but to discard terms like formula_5 as syntactically ill-formed. Algebraization. The algebraization of many-sorted logic is explained in an article by Caleiro and Gonçalves, which generalizes abstract algebraic logic to the many-sorted case, but can also be used as introductory material. Order-sorted logic. While "many-sorted" logic requires two distinct sorts to have disjoint universe sets, "order-sorted" logic allows one sort formula_6 to be declared a subsort of another sort formula_7, usually by writing formula_8 or similar syntax. In the above biology example, it is desirable to declare formula_9, formula_10, formula_11, formula_12, formula_13, formula_14, and so on; cf. picture. Wherever a term of some sort formula_15 is required, a term of any subsort of formula_15 may be supplied instead ("Liskov substitution principle"). For example, assuming a function declaration formula_16, and a constant declaration formula_17, the term formula_18 is perfectly valid and has the sort formula_19. In order to supply the information that the mother of a dog is a dog in turn, another declaration formula_20 may be issued; this is called "function overloading", similar to overloading in programming languages. Order-sorted logic can be translated into unsorted logic, using a unary predicate formula_21 for each sort formula_22, and an axiom formula_23 for each subsort declaration formula_24. The reverse approach was successful in automated theorem proving: in 1985, Christoph Walther could solve a then benchmark problem by translating it into order-sorted logic, thereby boiling it down an order of magnitude, as many unary predicates turned into sorts. In order to incorporate order-sorted logic into a clause-based automated theorem prover, a corresponding "order-sorted unification" algorithm is necessary, which requires for any two declared sorts formula_25 their intersection formula_26 to be declared, too: if formula_27 and formula_28 are variables of sort formula_6 and formula_7, respectively, the equation formula_29 has the solution formula_30, where formula_31. Smolka generalized order-sorted logic to allow for parametric polymorphism. In his framework, subsort declarations are propagated to complex type expressions. 
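Before the parametric example that follows, the plain subsort machinery described above can be sketched in a few lines of Haskell. This is only a hedged illustration: the sort names repeat the biology example, and functions such as isSubsortOf are ad-hoc names rather than part of any established library.
import Data.List (nub)
type Sort = String
-- Declared subsort edges of the biology example.
declared :: [(Sort, Sort)]
declared =
  [ ("dog", "carnivore"), ("dog", "mammal")
  , ("carnivore", "animal"), ("mammal", "animal")
  , ("animal", "organism"), ("plant", "organism") ]
-- Reflexive-transitive closure of the declared edges.
isSubsortOf :: Sort -> Sort -> Bool
isSubsortOf s1 s2 = s2 `elem` reachable [s1]
  where
    reachable seen =
      let next = nub [ b | a <- seen, (a', b) <- declared, a' == a, b `notElem` seen ]
      in if null next then seen else reachable (seen ++ next)
main :: IO ()
main = do
  -- mother : animal -> animal applied to lassie : dog is well-sorted ...
  print (isSubsortOf "dog" "animal")    -- True
  -- ... whereas a plant argument would be rejected.
  print (isSubsortOf "plant" "animal")  -- False
The parametric list example in the next paragraph extends the same relation from sort constants to complex type expressions.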
As a programming example, a parametric sort formula_32 may be declared (with formula_33 being a type parameter as in a C++ template), and from a subsort declaration formula_34 the relation formula_35 is automatically inferred, meaning that each list of integers is also a list of floats. Schmidt-Schauß generalized order-sorted logic to allow for term declarations. As an example, assuming subsort declarations formula_36 and formula_37, a term declaration like formula_38 allows to declare a property of integer addition that could not be expressed by ordinary overloading. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Early papers on many-sorted logic include:
[ { "math_id": 0, "text": "\\mathrm{plant}" }, { "math_id": 1, "text": "\\mathrm{animal}" }, { "math_id": 2, "text": "\\mathrm{mother}\\colon \\mathrm{animal} \\to \\mathrm{animal}" }, { "math_id": 3, "text": "\\mathrm{mother}\\colon \\mathrm{plant} \\to \\mathrm{plant}" }, { "math_id": 4, "text": "\\mathrm{mother}(\\mathrm{lassie})" }, { "math_id": 5, "text": "\\mathrm{mother}(\\mathrm{my\\_favorite\\_oak})" }, { "math_id": 6, "text": "s_1" }, { "math_id": 7, "text": "s_2" }, { "math_id": 8, "text": "s_1 \\subseteq s_2" }, { "math_id": 9, "text": "\\text{dog} \\subseteq \\text{carnivore}" }, { "math_id": 10, "text": "\\text{dog} \\subseteq \\text{mammal}" }, { "math_id": 11, "text": "\\text{carnivore} \\subseteq \\text{animal}" }, { "math_id": 12, "text": "\\text{mammal} \\subseteq \\text{animal}" }, { "math_id": 13, "text": "\\text{animal} \\subseteq \\text{organism}" }, { "math_id": 14, "text": "\\text{plant} \\subseteq \\text{organism}" }, { "math_id": 15, "text": "s" }, { "math_id": 16, "text": "\\text{mother}: \\text{animal} \\longrightarrow \\text{animal}" }, { "math_id": 17, "text": "\\text{lassie}: \\text{dog}" }, { "math_id": 18, "text": "\\text{mother}(\\text{lassie})" }, { "math_id": 19, "text": "\\text{animal}" }, { "math_id": 20, "text": "\\text{mother}: \\text{dog} \\longrightarrow \\text{dog}" }, { "math_id": 21, "text": "p_i(x)" }, { "math_id": 22, "text": "s_i" }, { "math_id": 23, "text": "\\forall x (p_i(x) \\rightarrow p_j(x))" }, { "math_id": 24, "text": "s_i \\subseteq s_j" }, { "math_id": 25, "text": "s_1, s_2" }, { "math_id": 26, "text": "s_1 \\cap s_2" }, { "math_id": 27, "text": "x_1" }, { "math_id": 28, "text": "x_2" }, { "math_id": 29, "text": "x_1 \\stackrel{?}{=}\\,x_2" }, { "math_id": 30, "text": "\\{ x_1 = x, \\; x_2 = x\\}" }, { "math_id": 31, "text": "x: s_1 \\cap s_2" }, { "math_id": 32, "text": "\\text{list}(X)" }, { "math_id": 33, "text": "X" }, { "math_id": 34, "text": "\\text{int} \\subseteq \\text{float}" }, { "math_id": 35, "text": "\\text{list}(\\text{int}) \\subseteq \\text{list}(\\text{float})" }, { "math_id": 36, "text": "\\text{even} \\subseteq \\text{int}" }, { "math_id": 37, "text": "\\text{odd} \\subseteq \\text{int}" }, { "math_id": 38, "text": "\\forall i:\\text{int}. \\; (i+i):\\text{even}" } ]
https://en.wikipedia.org/wiki?curid=8169758
8175877
Vizing's conjecture
Proposition on the domination number of Cartesian products of graphs In graph theory, Vizing's conjecture concerns a relation between the domination number and the cartesian product of graphs. This conjecture was first stated by Vadim G. Vizing (1968), and states that, if γ("G") denotes the minimum number of vertices in a dominating set for the graph G, then formula_0 conjectured a similar bound for the domination number of the tensor product of graphs; however, a counterexample was found by . Since Vizing proposed his conjecture, many mathematicians have worked on it, with partial results described below. For a more detailed overview of these results, see . Examples. A 4-cycle "C"4 has domination number two: any single vertex only dominates itself and its two neighbors, but any pair of vertices dominates the whole graph. The product "C"4 □ "C"4 is a four-dimensional hypercube graph; it has 16 vertices, and any single vertex can only dominate itself and four neighbors, so three vertices could only dominate 15 of the 16 vertices. Therefore, at least four vertices are required to dominate the entire graph, the bound given by Vizing's conjecture. It is possible for the domination number of a product to be much larger than the bound given by Vizing's conjecture. For instance, for a star "K"1,"n", its domination number γ(K1,"n") is one: it is possible to dominate the entire star with a single vertex at its hub. Therefore, for the graph "G" = "K"1,"n" □ "K"1,"n" formed as the product of two stars, Vizing's conjecture states only that the domination number should be at least 1 × 1 = 1. However, the domination number of this graph is actually much higher. It has "n"2 + 2"n" + 1 vertices: "n"2 formed from the product of a leaf in both factors, 2"n" from the product of a leaf in one factor and the hub in the other factor, and one remaining vertex formed from the product of the two hubs. Each leaf-hub product vertex in G dominates exactly n of the leaf-leaf vertices, so n leaf-hub vertices are needed to dominate all of the leaf-leaf vertices. However, no leaf-hub vertex dominates any other such vertex, so even after n leaf-hub vertices are chosen to be included in the dominating set, there remain n more undominated leaf-hub vertices, which can be dominated by the single hub-hub vertex. Thus, the domination number of this graph is γ("K"1,"n" □ "K"1,"n") = "n" + 1 far higher than the trivial bound of one given by Vizing's conjecture. There exist infinite families of graph products for which the bound of Vizing's conjecture is exactly met. For instance, if G and H are both connected graphs, each having at least four vertices and having exactly twice as many total vertices as their domination numbers, then γ("G" □ "H") = γ("G") γ("H"). The graphs G and H with this property consist of the four-vertex cycle "C"4 together with the rooted products of a connected graph and a single edge. Partial results. Clearly, the conjecture holds when either G or H has domination number one: for, the product contains an isomorphic copy of the other factor, dominating which requires at least γ("G")γ("H") vertices. Vizing's conjecture is also known to hold for cycles and for graphs with domination number two. proved that the domination number of the product is at least half as large as the conjectured bound, for all G and H. Upper bounds. observed that formula_1 A dominating set meeting this bound may be formed as the cartesian product of a dominating set in one of "G" or "H" with the set of all vertices in the other graph. 
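The numbers quoted in these examples can be checked mechanically. The following Haskell sketch is a hedged illustration (Graph, box and dominationNumber are ad-hoc names, not a standard library); it computes domination numbers of small graphs by brute force and reproduces γ(C4) = 2, γ(C4 □ C4) = 4 and γ(K1,3 □ K1,3) = 4 = n + 1.
import Data.List (subsequences)
-- A finite graph: a vertex list and a symmetric adjacency predicate.
data Graph v = Graph { vertices :: [v], adjacent :: v -> v -> Bool }
-- Cartesian (box) product of two graphs.
box :: (Eq a, Eq b) => Graph a -> Graph b -> Graph (a, b)
box g h = Graph
  { vertices = [ (x, y) | x <- vertices g, y <- vertices h ]
  , adjacent = \(x1, y1) (x2, y2) ->
      (x1 == x2 && adjacent h y1 y2) || (y1 == y2 && adjacent g x1 x2) }
-- Smallest size of a dominating set, by brute force over all vertex subsets.
dominationNumber :: Eq v => Graph v -> Int
dominationNumber g = minimum [ length s | s <- subsequences (vertices g), dominates s ]
  where
    dominates s = all (\v -> any (\u -> u == v || adjacent g u v) s) (vertices g)
-- The cycle C_n and the star K_{1,n}, with 0 as the hub of the star.
cycleGraph, star :: Int -> Graph Int
cycleGraph n = Graph [0 .. n - 1] (\u v -> (u - v) `mod` n == 1 || (v - u) `mod` n == 1)
star n       = Graph [0 .. n]     (\u v -> u /= v && (u == 0 || v == 0))
main :: IO ()
main = do
  let c4 = cycleGraph 4
      s3 = star 3
  print (dominationNumber c4)            -- 2
  print (dominationNumber (box c4 c4))   -- 4 = gamma(C4) * gamma(C4)
  print (dominationNumber (box s3 s3))   -- 4 = n + 1 for n = 3
Brute force is exponential in the number of vertices, which is acceptable here only because both products have just 16 vertices.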
Notes. References.
[ { "math_id": 0, "text": " \\gamma(G\\,\\Box\\,H) \\ge \\gamma(G)\\gamma(H). \\, " }, { "math_id": 1, "text": " \\gamma(G \\,\\Box\\, H) \\le \\min\\{\\gamma(G) |V(H)|,\\gamma(H)|V(G)|\\}. " } ]
https://en.wikipedia.org/wiki?curid=8175877
81761
Diesel fuel
Liquid fuel used in diesel engines Diesel fuel, also called diesel oil, heavy oil (historically) or simply diesel, is any liquid fuel specifically designed for use in a diesel engine, a type of internal combustion engine in which fuel ignition takes place without a spark as a result of compression of the inlet air and then injection of fuel. Therefore, diesel fuel needs good compression ignition characteristics. The most common type of diesel fuel is a specific fractional distillate of petroleum fuel oil, but alternatives that are not derived from petroleum, such as biodiesel, biomass to liquid (BTL) or gas to liquid (GTL) diesel are increasingly being developed and adopted. To distinguish these types, petroleum-derived diesel is sometimes called petrodiesel in some academic circles. Petrodiesel is a high-volume profitable product produced in crude oil refineries. In many countries, diesel fuel is standardized. For example, in the European Union, the standard for diesel fuel is EN 590. Ultra-low-sulfur diesel (ULSD) is a diesel fuel with substantially lowered sulfur contents. As of 2016, almost all of the petroleum-based diesel fuel available in the United Kingdom, mainland Europe, and North America is of a ULSD type. Before diesel fuel had been standardized, the majority of diesel engines typically ran on cheap fuel oils. These fuel oils are still used in watercraft diesel engines. Despite being specifically designed for diesel engines, diesel fuel can also be used as fuel for several non-diesel engines, for example the Akroyd engine, the Stirling engine, or boilers for steam engines. Diesel often fuels heavy trucks, but diesel exhaust, especially from older engines, can damage health. Names. Diesel fuel has many colloquial names; most commonly, it is simply referred to as "diesel". In the United Kingdom, diesel fuel for road use is commonly called "diesel" or sometimes "white diesel" if required to differentiate it from a reduced-tax agricultural-only product containing an identifying coloured dye known as "red diesel". The official term for white diesel is "DERV", standing for "diesel-engine road vehicle". In Australia, diesel fuel is also known as distillate (not to be confused with "distillate" in an older sense referring to a different motor fuel), and in Indonesia (as well in Israel), it is known as Solar, a trademarked name from the country's national petroleum company Pertamina. The term gas oil (French: "gazole") is sometimes also used to refer to diesel fuel. History. Origins. Diesel fuel originated from experiments conducted by German scientist and inventor Rudolf Diesel for his compression-ignition engine which he invented around 1892. Originally, Diesel did not consider using any specific type of fuel. Instead, he claimed that the operating principle of his rational heat motor would work with any kind of fuel in any state of matter. The first diesel engine prototype and the first functional Diesel engine were only designed for liquid fuels. At first, Diesel tested crude oil from Pechelbronn, but soon replaced it with petrol and kerosene, because crude oil proved to be too viscous, with the main testing fuel for the Diesel engine being kerosene (paraffin). Diesel experimented with types of lamp oil from various sources, as well as types of petrol and ligroin, which all worked well as Diesel engine fuels. Later, Diesel tested coal tar creosote, paraffin oil, crude oil, gasoline and fuel oil, which eventually worked as well. 
In Scotland and France, shale oil was used as fuel for the first 1898 production Diesel engines because other fuels were too expensive. In 1900, the French Otto society built a Diesel engine for the use with crude oil, which was exhibited at the 1900 Paris Exposition and the 1911 World's Fair in Paris. The engine actually ran on peanut oil instead of crude oil, and no modifications were necessary for peanut oil operation. During his first Diesel engine tests, Diesel also used illuminating gas as fuel, and managed to build functional designs, both with and without pilot injection. According to Diesel, neither was a coal-dust–producing industry existent, nor was fine, high-quality coal-dust commercially available in the late 1890s. This is the reason why the Diesel engine was never designed or planned as a coal-dust engine. Only in December 1899, did Diesel test a coal-dust prototype, which used external mixture formation and liquid fuel pilot injection. This engine proved to be functional, but suffered from piston ring failure after a few minutes due to coal dust deposition. Since the 20th century. Before diesel fuel was standardised, diesel engines typically ran on cheap fuel oils. In the United States, these were distilled from petroleum, whereas in Europe, coal-tar creosote oil was used. Some diesel engines were fuelled with mixtures of fuels, such as petrol, kerosene, rapeseed oil, or lubricating oil which were cheaper because, at the time, they were not being taxed. The introduction of motor-vehicle diesel engines, such as the Mercedes-Benz OM 138, in the 1930s meant that higher-quality fuels with proper ignition characteristics were needed. At first no improvements were made to motor-vehicle diesel fuel quality. After World War II, the first modern high-quality diesel fuels were standardised. These standards were, for instance, the DIN 51601, VTL 9140–001, and NATO F 54 standards. In 1993, the DIN 51601 was rendered obsolete by the new EN 590 standard, which has been used in the European Union ever since. In sea-going watercraft, where diesel propulsion had gained prevalence by the late 1970s due to increasing fuel costs caused by the 1970s energy crisis, cheap heavy fuel oils are still used instead of conventional motor-vehicle diesel fuel. These heavy fuel oils (often called Bunker C) can be used in diesel-powered and steam-powered vessels. Types. Diesel fuel is produced from various sources, the most common being petroleum. Other sources include biomass, animal fat, biogas, natural gas, and coal liquefaction. Petroleum diesel. Petroleum diesel, also called petrodiesel, fossil diesel, or mineral diesel, is the most common type of diesel fuel. It is produced from the fractional distillation of crude oil between at atmospheric pressure, resulting in a mixture of carbon chains that typically contain between 9 and 25 carbon atoms per molecule. Synthetic diesel. Synthetic diesel can be produced from any carbonaceous material, including biomass, biogas, natural gas, coal and many others. The raw material is gasified into synthesis gas, which after purification is converted by the Fischer–Tropsch process to a synthetic diesel. The process is typically referred to as biomass-to-liquid (BTL), gas-to-liquid (GTL) or coal-to-liquid (CTL), depending on the raw material used. Paraffinic synthetic diesel generally has a near-zero content of sulfur and very low aromatics content, reducing unregulated emissions of toxic hydrocarbons, nitrous oxides and particulate matter (PM). Biodiesel. 
Biodiesel is obtained from vegetable oil or animal fats (biolipids) which are mainly fatty acid methyl esters (FAME), and transesterified with methanol. It can be produced from many types of oils, the most common being rapeseed oil (rapeseed methyl ester, RME) in Europe and soybean oil (soy methyl ester, SME) in the US. Methanol can also be replaced with ethanol for the transesterification process, which results in the production of ethyl esters. The transesterification processes use catalysts, such as sodium or potassium hydroxide, to convert vegetable oil and methanol into biodiesel and the undesirable byproducts glycerine and water, which will need to be removed from the fuel along with methanol traces. Biodiesel can be used pure (B100) in engines where the manufacturer approves such use, but it is more often used as a mix with diesel, BXX where XX is the biodiesel content in percent. FAME used as fuel is specified in DIN EN 14214 and ASTM D6751 standards. Fuel Injection Equipment (FIE) manufacturers have raised several concerns regarding biodiesel, identifying FAME as being the cause of the following problems: corrosion of fuel injection components, low-pressure fuel system blockage, increased dilution and polymerization of engine sump oil, pump seizures due to high fuel viscosity at low temperature, increased injection pressure, elastomeric seal failures and fuel injector spray blockage. Pure biodiesel has an energy content about 5–10% lower than petroleum diesel. The loss in power when using pure biodiesel is 5–7%. Unsaturated fatty acids are the source for the lower oxidation stability. They react with oxygen and form peroxides and result in degradation byproducts, which can cause sludge and lacquer in the fuel system. As biodiesel contains low levels of sulfur, the emissions of sulfur oxides and sulfates, major components of acid rain, are low. Use of biodiesel also results in reductions of unburned hydrocarbons, carbon monoxide (CO), and particulate matter. CO emissions using biodiesel are substantially reduced, on the order of 50% compared to most petrodiesel fuels. The exhaust emissions of particulate matter from biodiesel have been found to be 30% lower than overall particulate matter emissions from petrodiesel. The exhaust emissions of total hydrocarbons (a contributing factor in the localized formation of smog and ozone) are up to 93% lower for biodiesel than diesel fuel. Biodiesel also may reduce health risks associated with petroleum diesel. Biodiesel emissions showed decreased levels of polycyclic aromatic hydrocarbon (PAH) and nitrated PAH compounds, which have been identified as potential carcinogens. In recent testing, PAH compounds were reduced by 75–85%, except for benz(a)anthracene, which was reduced by roughly 50%. Targeted nPAH compounds were also reduced dramatically with biodiesel fuel, with 2-nitrofluorene and 1-nitropyrene reduced by 90%, and the rest of the nPAH compounds reduced to only trace levels. Hydrogenated oils and fats. This category of diesel fuels involves converting the triglycerides in vegetable oil and animal fats into alkanes by refining and hydrogenation, such as Neste Renewable Diesel or H-Bio. The produced fuel has many properties that are similar to synthetic diesel, and are free from the many disadvantages of FAME. DME. Dimethyl ether, DME, is a synthetic, gaseous diesel fuel that results in clean combustion with very little soot and reduced emissions. Storage. 
In the US, diesel is recommended to be stored in a yellow container to differentiate it from kerosene, which is typically kept in blue containers, and gasoline (petrol), which is typically kept in red containers. In the UK, diesel is normally stored in a black container to differentiate it from unleaded or leaded petrol, which are stored in green and red containers, respectively. Standards. The diesel engine is a multifuel engine and can run on a huge variety of fuels. However, development of high-performance, high-speed diesel engines for cars and lorries in the 1930s meant that a proper fuel specifically designed for such engines was needed: diesel fuel. In order to ensure consistent quality, diesel fuel is standardised; the first standards were introduced after World War II. Typically, a standard defines certain properties of the fuel, such as cetane number, density, flash point, sulphur content, or biodiesel content. Diesel fuel standards include: Diesel fuel Biodiesel fuel Measurements and pricing. Cetane number. The principal measure of diesel fuel quality is its cetane number. A cetane number is a measure of the delay of ignition of a diesel fuel. A higher cetane number indicates that the fuel ignites more readily when sprayed into hot compressed air. European (EN 590 standard) road diesel has a minimum cetane number of 51. Fuels with higher cetane numbers, normally "premium" diesel fuels with additional cleaning agents and some synthetic content, are available in some markets. Fuel value and price. About 86.1% of diesel fuel mass is carbon, and when burned, it offers a net heating value of 43.1 MJ/kg as opposed to 43.2 MJ/kg for gasoline. Due to the higher density, diesel fuel offers a higher volumetric energy density: the density of EN 590 diesel fuel is defined as at , about 9.0-13.9% more than EN 228 gasoline (petrol)'s at 15 °C, which should be put into consideration when comparing volumetric fuel prices. The CO2 emissions from diesel are 73.25 g/MJ, just slightly lower than for gasoline at 73.38 g/MJ. Diesel fuel is generally simpler to refine from petroleum than gasoline, and contains hydrocarbons having a boiling point in the range of . Additional refining is required to remove sulfur, which contributes to a sometimes higher cost. In many parts of the United States and throughout the United Kingdom and Australia, diesel fuel may be priced higher than petrol per gallon or litre. Reasons for higher-priced diesel include the shutdown of some refineries in the Gulf of Mexico, diversion of mass refining capacity to gasoline production, and a recent transfer to ultra-low-sulfur diesel (ULSD), which causes infrastructural complications. In Sweden, a diesel fuel designated as MK-1 (class 1 environmental diesel) is also being sold. This is a ULSD that also has a lower aromatics content, with a limit of 5%. This fuel is slightly more expensive to produce than regular ULSD. In Germany, the fuel tax on diesel fuel is about 28% lower than the petrol fuel tax. Taxation. Diesel fuel is similar to heating oil, which is used in central heating. In Europe, the United States, and Canada, taxes on diesel fuel are higher than on heating oil due to the fuel tax, and in those areas, heating oil is marked with fuel dyes and trace chemicals to prevent and detect tax fraud. 
"Untaxed" diesel (sometimes called "off-road diesel" or "red diesel" due to its red dye) is available in some countries for use primarily in agricultural applications, such as fuel for tractors, recreational and utility vehicles or other noncommercial vehicles that do not use public roads. This fuel may have sulfur levels that exceed the limits for road use in some countries (e.g. US). This untaxed diesel is dyed red for identification, and using this untaxed diesel fuel for a typically taxed purpose (such as driving use), the user can be fined (e.g. US$10,000 in the US). In the United Kingdom, Belgium and the Netherlands, it is known as red diesel (or gas oil), and is also used in agricultural vehicles, home heating tanks, refrigeration units on vans/trucks which contain perishable items such as food and medicine and for marine craft. Diesel fuel, or marked gas oil is dyed green in the Republic of Ireland and Norway. The term "diesel-engined road vehicle" (DERV) is used in the UK as a synonym for unmarked road diesel fuel. In India, taxes on diesel fuel are lower than on petrol, as the majority of the transportation for grain and other essential commodities across the country runs on diesel. Taxes on biodiesel in the US vary between states. Some states (Texas, for example) have no tax on biodiesel and a reduced tax on biodiesel blends equivalent to the amount of biodiesel in the blend, so that B20 fuel is taxed 20% less than pure petrodiesel. Other states, such as North Carolina, tax biodiesel (in any blended configuration) the same as petrodiesel, although they have introduced new incentives to producers and users of all biofuels. Uses. Diesel fuel is mostly used in high-speed diesel engines, especially motor-vehicle (e.g. car, lorry) diesel engines, but not all diesel engines run on diesel fuel. For example, large two-stroke watercraft engines typically use heavy fuel oils instead of diesel fuel, and certain types of diesel engines, such as MAN M-System engines, are designed to run on petrol with knock resistances of up to 86 RON. On the other hand, gas turbine and some other types of internal combustion engines, and external combustion engines, can also be designed to take diesel fuel. The viscosity requirement of diesel fuel is usually specified at 40 °C. A disadvantage of diesel fuel in cold climates is that its viscosity increases as the temperature decreases, changing it into a gel (see Compression Ignition – Gelling) that cannot flow in fuel systems. Special low-temperature diesel contains additives to keep it liquid at lower temperatures. On-road vehicles. Trucks and buses, which were often otto-powered in the 1920s through 1950s, are now almost exclusively diesel-powered. Due to its ignition characteristics, diesel fuel is thus widely used in these vehicles. Since diesel fuel is not well-suited for otto engines, passenger cars, which often use otto or otto-derived engines, typically run on petrol instead of diesel fuel. However, especially in Europe and India, many passenger cars have, due to better engine efficiency, diesel engines, and thus run on regular diesel fuel. Railroad. Diesel displaced coal and fuel oil for steam-powered vehicles in the latter half of the 20th century, and is now used almost exclusively for the combustion engines of self-powered rail vehicles (locomotives and railcars). Aircraft. In general, diesel engines are not well-suited for planes and helicopters. 
This is because of the diesel engine's comparatively low power-to-mass ratio, meaning that diesel engines are typically rather heavy, which is a disadvantage in aircraft. Therefore, there is little need for using diesel fuel in aircraft, and diesel fuel is not commercially used as aviation fuel. Instead, petrol (Avgas), and jet fuel (e. g. Jet A-1) are used. However, especially in the 1920s and 1930s, numerous series-production aircraft diesel engines that ran on fuel oils were made, because they had several advantages: their fuel consumption was low, they were reliable, not prone to catching fire, and required minimal maintenance. The introduction of petrol direct injection in the 1930s outweighed these advantages, and aircraft diesel engines quickly fell out of use. With improvements in power-to-mass ratios of diesel engines, several on-road diesel engines have been converted to and certified for aircraft use since the early 21st century. These engines typically run on Jet A-1 aircraft fuel (but can also run on diesel fuel). Jet A-1 has ignition characteristics similar to diesel fuel, and is thus suited for certain (but not all) diesel engines. Military vehicles. Until World War II, several military vehicles, especially those that required high engine performance (armored fighting vehicles, for example the M26 Pershing or Panther tanks), used conventional otto engines and ran on petrol. Ever since World War II, several military vehicles with diesel engines have been made, capable of running on diesel fuel. This is because diesel engines are more fuel efficient, and diesel fuel is less prone to catching fire. Some of these diesel-powered vehicles (such as the Leopard 1 or MAN 630) still ran on petrol, and some military vehicles were still made with otto engines (e. g. Ural-375 or Unimog 404), incapable of running on diesel fuel. Tractors and heavy equipment. Today's tractors and heavy equipment are mostly diesel-powered. Among tractors, only the smaller classes may also offer gasoline-fuelled engines. The dieselization of tractors and heavy equipment began in Germany before World War II but was unusual in the United States until after that war. During the 1950s and 1960s, it progressed in the US as well. Diesel fuel is commonly used in oil and gas extracting equipment, although some locales use electric or natural gas powered equipment. Tractors and heavy equipment were often multifuel in the 1920s through 1940s, running either spark-ignition and low-compression engines, akryod engines, or diesel engines. Thus many farm tractors of the era could burn gasoline, alcohol, kerosene, and any light grade of fuel oil such as heating oil, or tractor vaporising oil, according to whichever was most affordable in a region at any given time. On US farms during this era, the name "distillate" often referred to any of the aforementioned light fuel oils. Spark ignition engines did not start as well on distillate, so typically a small auxiliary gasoline tank was used for cold starting, and the fuel valves were adjusted several minutes later, after warm-up, to transition to distillate. Engine accessories such as vaporizers and radiator shrouds were also used, both with the aim of capturing heat, because when such an engine was run on distillate, it ran better when both it and the air it inhaled were warmer rather than at ambient temperature. 
Dieselization with dedicated diesel engines (high-compression with mechanical fuel injection and compression ignition) replaced such systems and made more efficient use of the diesel fuel being burned. Other uses. Poor quality diesel fuel has been used as an extraction agent for liquid–liquid extraction of palladium from nitric acid mixtures. Such use has been proposed as a means of separating the fission product palladium from PUREX raffinate which comes from used nuclear fuel. In this system of solvent extraction, the hydrocarbons of the diesel act as the diluent while the dialkyl sulfides act as the extractant. This extraction operates by a solvation mechanism. So far, neither a pilot plant nor full scale plant has been constructed to recover palladium, rhodium or ruthenium from nuclear wastes created by the use of nuclear fuel. Diesel fuel is often used as the main ingredient in oil-base mud drilling fluid. The advantage of using diesel is its low cost and its ability to drill a wide variety of difficult strata, including shale, salt and gypsum formations. Diesel-oil mud is typically mixed with up to 40% brine water. Due to health, safety and environmental concerns, Diesel-oil mud is often replaced with vegetable, mineral, or synthetic food-grade oil-base drilling fluids, although diesel-oil mud is still in widespread use in certain regions. During development of rocket engines in Germany during World War II J-2 Diesel fuel was used as the fuel component in several engines including the BMW 109-718. J-2 diesel fuel was also used as a fuel for gas turbine engines. Chemical analysis. Chemical composition. In the United States, petroleum-derived diesel is composed of about 75% saturated hydrocarbons (primarily paraffins including "n", "iso", and cycloparaffins), and 25% aromatic hydrocarbons (including naphthalenes and alkylbenzenes). The average chemical formula for common diesel fuel is C12H23, ranging approximately from C10H20 to C15H28. Chemical properties. Most diesel fuels freeze at common winter temperatures, while the temperatures greatly vary. Petrodiesel typically freezes around temperatures of , whereas biodiesel freezes between temperatures of . The viscosity of diesel noticeably increases as the temperature decreases, changing it into a gel at temperatures of , that cannot flow in fuel systems. Conventional diesel fuels vaporise at temperatures between 149 °C and 371 °C. Conventional diesel flash points vary between 52 and 96 °C, which makes it safer than petrol and unsuitable for spark-ignition engines. Unlike petrol, the flash point of a diesel fuel has no relation to its performance in an engine nor to its auto ignition qualities. Carbon dioxide formation. As a good approximation the chemical formula of diesel is CnH2n. Diesel is a mixture of different molecules. As carbon has a molar mass of 12 g/mol and hydrogen has a molar mass of about 1 g/mol, so the fraction by weight of carbon in EN 590 diesel fuel is roughly 12/14. The reaction of diesel combustion is given by: 2CnH2n + 3nO2 ⇌ 2nCO2 + 2nH2O Carbon dioxide has a molar mass of 44g/mol as it consists of 2 atoms of oxygen (16 g/mol) and 1 atom of carbon (12 g/mol). So 12 g of carbon yield 44 g of Carbon dioxide. Diesel has a density of 0.838 kg per liter. Putting everything together the mass of carbon dioxide that is produced by burning 1 liter of diesel fuel can be calculated as: formula_0 The figure obtained with this estimation is close to the values found in the literature. 
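The arithmetic behind this estimate is easy to check; the following Haskell fragment is a hedged sketch (constant and function names are illustrative) that reproduces the article's figure, and the same function applies to the petrol estimate in the next paragraph.
-- kg of CO2 per litre of fuel, from density (kg/L) and carbon mass fraction.
co2PerLitre :: Double -> Double -> Double
co2PerLitre density carbonMassFraction = density * carbonMassFraction * (44 / 12)
main :: IO ()
main =
  -- Diesel approximated as CnH2n: carbon mass fraction 12/14, density 0.838 kg/L.
  print (co2PerLitre 0.838 (12 / 14))   -- approximately 2.63 kg CO2 per litre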
For gasoline, with a density of 0.75 kg/L and a ratio of carbon to hydrogen atoms of about 6 to 14, the estimated value of carbon emission if 1 liter of gasoline is burnt gives: formula_1 Hazards. Environment hazards of sulfur. In the past, diesel fuel contained higher quantities of sulfur. European emission standards and preferential taxation have forced oil refineries to dramatically reduce the level of sulfur in diesel fuels. In the European Union, the sulfur content has dramatically reduced during the last 20 years. Automotive diesel fuel is covered in the European Union by standard EN 590. In the 1990s specifications allowed a content of 2000 ppm max of sulfur, reduced to a limit of 350 ppm by the beginning of the 21st century with the introduction of Euro 3 specifications. The limit was lowered with the introduction of Euro 4 by 2006 to 50 ppm (ULSD, Ultra Low Sulfur Diesel). The standard for diesel fuel in force in Europe as of 2009 is the Euro 5, with a maximum content of 10 ppm. In the United States, more stringent emission standards have been adopted with the transition to ULSD starting in 2006, and becoming mandatory on June 1, 2010 (see also diesel exhaust). Algae, microbes, and water contamination. There has been much discussion and misunderstanding of algae in diesel fuel. Algae need light to live and grow. As there is no sunlight in a closed fuel tank, no algae can survive, but some microbes can survive and feed on the diesel fuel. These microbes form a colony that lives at the interface of fuel and water. They grow quite fast in warmer temperatures. They can even grow in cold weather when fuel tank heaters are installed. Parts of the colony can break off and clog the fuel lines and fuel filters. Water in fuel can damage a fuel injection pump. Some diesel fuel filters also trap water. Water contamination in diesel fuel can lead to freezing while in the fuel tank. The freezing water that saturates the fuel will sometimes clog the fuel injector pump. Once the water inside the fuel tank has started to freeze, gelling is more likely to occur. When the fuel is gelled it is not effective until the temperature is raised and the fuel returns to a liquid state. Road hazard. Diesel is less flammable than gasoline / petrol. However, because it evaporates slowly, any spills on a roadway can pose a slip hazard to vehicles. After the light fractions have evaporated, a greasy slick is left on the road which reduces tire grip and traction, and can cause vehicles to skid. The loss of traction is similar to that encountered on black ice, resulting in especially dangerous situations for two-wheeled vehicles, such as motorcycles and bicycles, in roundabouts. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "0.838 kg/L \\cdot {\\frac{12}{14}}\\cdot {\\frac{44}{12}}= 2.63 kg/L" }, { "math_id": 1, "text": " 0.75 kg/L \\cdot {\\frac{6 \\cdot 12}{6\\cdot 12 + 14}\\cdot 1} \\cdot {\\frac{44}{12}}= 2.3 kg/L " } ]
https://en.wikipedia.org/wiki?curid=81761
8177257
Interaction nets
Graphical model of computation Interaction nets are a graphical model of computation devised by French mathematician Yves Lafont in 1990 as a generalisation of the proof structures of linear logic. An interaction net system is specified by a set of agent types and a set of interaction rules. Interaction nets are an inherently distributed model of computation in the sense that computations can take place simultaneously in many parts of an interaction net, and no synchronisation is needed. The latter is guaranteed by the strong confluence property of reduction in this model of computation. Thus interaction nets provide a natural language for massive parallelism. Interaction nets are at the heart of many implementations of the lambda calculus, such as efficient closed reduction and optimal, in Lévy's sense, Lambdascope. Definitions. Interactions nets are graph-like structures consisting of "agents" and "edges". An agent of type formula_0 and with "arity" formula_1 has one "principal port" and formula_2 "auxiliary ports". Any port can be connected to at most one edge. Ports that are not connected to any edge are called "free ports". Free ports together form the "interface" of an interaction net. All agent types belong to a set formula_3 called "signature". An interaction net that consists solely of edges is called a "wiring" and usually denoted as formula_4. A "tree" formula_5 with its "root" formula_6 is inductively defined either as an edge formula_6, or as an agent formula_0 with its free principal port formula_6 and its auxiliary ports formula_7 connected to the roots of other trees formula_8. Graphically, the primitive structures of interaction nets can be represented as follows: When two agents are connected to each other with their principal ports, they form an "active pair". For active pairs one can introduce "interaction rules" which describe how the active pair rewrites to another interaction net. An interaction net with no active pairs is said to be in "normal form". A signature formula_3 (with formula_9 defined on it) along with a set of interaction rules defined for agents formula_10 together constitute an "interaction system". Interaction calculus. Textual representation of interaction nets is called the "interaction calculus" and can be seen as a programming language. Inductively defined trees correspond to "terms" formula_11 in the interaction calculus, where formula_6 is called a "name". Any interaction net formula_12 can be redrawn using the previously defined wiring and tree primitives as follows: which in the interaction calculus corresponds to a "configuration" formula_13, where formula_8, formula_14, and formula_15 are arbitrary terms. The ordered sequence formula_16 in the left-hand side is called an "interface", while the right-hand side contains an unordered multiset of "equations" formula_17. Wiring formula_4 translates to names, and each name has to occur exactly twice in a configuration. Just like in the formula_18-calculus, the interaction calculus has the notions of "formula_0-conversion" and "substitution" naturally defined on configurations. Specifically, both occurrences of any name can be replaced with a new name if the latter does not occur in a given configuration. Configurations are considered equivalent up to formula_0-conversion. In turn, substitution formula_19 is the result of replacing the name formula_6 in a term formula_5 with another term formula_20 if formula_6 has exactly one occurrence in the term formula_5. 
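The term language and the substitution operation just defined are small enough to be rendered directly in Haskell. The sketch below is hedged: it models only the term layer (agents and names), with ad-hoc constructor names, and is not an implementation of any particular interaction-net system.
-- Terms of the interaction calculus: an agent applied to subterms, or a name.
data Term
  = Agent String [Term]
  | Name String
  deriving (Eq, Show)
-- Substitution t[x := u], assuming x occurs exactly once in t as the text requires.
substitute :: String -> Term -> Term -> Term
substitute x u (Name y)
  | y == x    = u
  | otherwise = Name y
substitute x u (Agent f ts) = Agent f (map (substitute x u) ts)
main :: IO ()
main =
  -- (alpha(x, eps))[x := gamma(y, z)]  evaluates to  alpha(gamma(y, z), eps)
  print (substitute "x" (Agent "gamma" [Name "y", Name "z"])
                        (Agent "alpha" [Name "x", Agent "eps" []]))
Configurations would then pair an interface of such terms with a multiset of equations, and the interaction and indirection steps described below rewrite that multiset.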
Any interaction rule can be graphically represented as follows: where formula_21, and the interaction net formula_12 on the right-hand side is redrawn using the wiring and tree primitives in order to translate into the interaction calculus as formula_22 using Lafont's notation. The interaction calculus defines reduction on configurations in more details than seen from graph rewriting defined on interaction nets. Namely, if formula_23, the following reduction: formula_24 is called "interaction". When one of equations has the form of formula_25, "indirection" can be applied resulting in substitution of the other occurrence of the name formula_6 in some term formula_5: formula_26 or formula_27. An equation formula_28 is called a "deadlock" if formula_6 has occurrence in term formula_5. Generally only deadlock-free interaction nets are considered. Together, interaction and indirection define the reduction relation on configurations. The fact that configuration formula_29 reduces to its "normal form" formula_30 with no equations left is denoted as formula_31. Properties. Interaction nets benefit from the following properties: These properties together allow massive parallelism. Interaction combinators. One of the simplest interaction systems that can simulate any other interaction system is that of "interaction combinators". Its signature is formula_36 with formula_37 and formula_38. Interaction rules for these agents are: Graphically, the erasing and duplication rules can be represented as follows: with an example of a non-terminating interaction net that reduces to itself. Its infinite reduction sequence starting from the corresponding configuration in the interaction calculus is as follows: formula_43 Non-deterministic extension. Interaction nets are essentially deterministic and cannot model non-deterministic computations directly. In order to express non-deterministic choice, interaction nets need to be extended. In fact, it is sufficient to introduce just one agent formula_44 with two principal ports and the following interaction rules: This distinguished agent represents ambiguous choice and can be used to simulate any other agent with arbitrary number of principal ports. For instance, it allows to define a formula_45 boolean operation that returns true if any of its arguments is true, independently of the computation taking place in the other arguments. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\text{ar}(\\alpha) = n \\ge 0" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\Sigma" }, { "math_id": 4, "text": "\\omega" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "x_i" }, { "math_id": 8, "text": "t_i" }, { "math_id": 9, "text": "\\text{ar}: \\Sigma \\rightarrow \\mathbb{N}" }, { "math_id": 10, "text": "\\alpha \\in \\Sigma" }, { "math_id": 11, "text": "t ::= \\alpha(t_1, \\dots, t_n)\\ |\\ x" }, { "math_id": 12, "text": "N" }, { "math_id": 13, "text": "c \\equiv \\langle t_1, \\dots, t_m \\ |\\ v_1 = w_1, \\dots, v_n = w_n \\rangle" }, { "math_id": 14, "text": "v_i" }, { "math_id": 15, "text": "w_i" }, { "math_id": 16, "text": "t_1,...,t_m" }, { "math_id": 17, "text": "v_i = w_i" }, { "math_id": 18, "text": "\\lambda" }, { "math_id": 19, "text": "t[x := u]" }, { "math_id": 20, "text": "u" }, { "math_id": 21, "text": "\\alpha, \\beta \\in \\Sigma" }, { "math_id": 22, "text": "\\alpha[v_1,\\dots, v_m] \\bowtie \\beta[w_1,\\dots, w_n]" }, { "math_id": 23, "text": "\\alpha[v_1,\\dots, v_m] \\bowtie \\beta[w_1,\\dots,w_n]" }, { "math_id": 24, "text": "\\langle \\vec t\\ |\\ \\alpha(t_1,\\dots,t_m) = \\beta(u_1,\\dots,u_n), \\Delta\\rangle \\rightarrow \\langle \\vec t\\ |\\ t_1 = v_1,\\dots, t_m = v_m, u_1 = w_1,\\dots, u_n = w_n, \\Delta\\rangle" }, { "math_id": 25, "text": "x = u" }, { "math_id": 26, "text": "\\langle \\dots t \\dots \\ |\\ x = u, \\Delta\\rangle \\rightarrow \\langle \\dots t[x := u] \\dots \\ |\\ \\Delta\\rangle" }, { "math_id": 27, "text": "\\langle \\vec t\\ |\\ x = u, t = w, \\Delta\\rangle \\rightarrow \\langle \\vec t\\ |\\ t[x := u] = w, \\Delta \\rangle" }, { "math_id": 28, "text": "x = t" }, { "math_id": 29, "text": "c" }, { "math_id": 30, "text": "c'" }, { "math_id": 31, "text": "c \\downarrow c'" }, { "math_id": 32, "text": "c \\rightarrow c_1" }, { "math_id": 33, "text": "c \\rightarrow c_2" }, { "math_id": 34, "text": "c_1 \\rightarrow c'" }, { "math_id": 35, "text": "c_2 \\rightarrow c'" }, { "math_id": 36, "text": "\\Sigma = \\{\\epsilon, \\delta, \\gamma\\}" }, { "math_id": 37, "text": "\\text{ar}(\\epsilon) = 0" }, { "math_id": 38, "text": "\\text{ar}(\\delta) = \\text{ar}(\\gamma) = 2" }, { "math_id": 39, "text": "\\epsilon \\bowtie \\alpha[\\epsilon,\\dots, \\epsilon]" }, { "math_id": 40, "text": "\\delta[\\alpha(x_1,\\dots, x_n), \\alpha(y_1,\\dots, y_n)] \\bowtie \\alpha[\\delta(x_1, y_1),\\dots, \\delta(x_n, y_n)]" }, { "math_id": 41, "text": "\\delta[x, y] \\bowtie \\delta[x, y]" }, { "math_id": 42, "text": "\\gamma[x, y] \\bowtie \\gamma[y, x]" }, { "math_id": 43, "text": "\n\\begin{align}\n&\\langle \\varnothing\\ |\\ \\delta(\\epsilon, x) = \\gamma(x, \\epsilon)\\rangle \\rightarrow \\\\\n&\\langle \\varnothing\\ |\\ \\epsilon = \\gamma(x_1, x_2),\\ x = \\gamma(y_1, y_2),\\ x = \\delta(x_1, y_1),\\ \\epsilon = \\delta(x_2, y_2)\\rangle \\rightarrow^* \\\\\n&\\langle \\varnothing\\ |\\ x_1 = \\epsilon,\\ x_2 = \\epsilon,\\ x = \\gamma(y_1, y_2),\\ x = \\delta(x_1, y_1),\\ x_2 = \\epsilon,\\ y_2 = \\epsilon\\rangle \\rightarrow^* \\\\\n&\\langle \\varnothing\\ |\\ \\delta(\\epsilon, x) = \\gamma(x, \\epsilon)\\rangle \\rightarrow \\dots\n\\end{align}\n" }, { "math_id": 44, "text": "\\text{amb}" }, { "math_id": 45, "text": "\\text{ParallelOr}" } ]
https://en.wikipedia.org/wiki?curid=8177257
817735
Efficiency wage
Wage per efficiency unit of labor The term efficiency wages (also known as "efficiency earnings") was introduced by Alfred Marshall to denote the wage per efficiency unit of labor. Marshallian efficiency wages are those calculated with efficiency or ability exerted being the unit of measure rather than time. That is, the more efficient worker will be paid more than a less efficient worker for the same amount of hours worked. The modern use of the term is quite different and refers to the idea that higher wages may increase the efficiency of the workers by various channels, making it worthwhile for the employers to offer wages that exceed a market-clearing level. Optimal efficiency wage is achieved when the marginal cost of an increase in wages is equal to the marginal benefit of improved productivity to an employer. In labor economics, the "efficiency wage" hypothesis argues that wages, at least in some labour markets, form in a way that is not market-clearing. Specifically, it points to the incentive for managers to pay their employees more than the market-clearing wage to increase their productivity or efficiency, or to reduce costs associated with employee turnover in industries in which the costs of replacing labor are high. The increased labor productivity and/or decreased costs may pay for the higher wages. Companies tend to hire workers at lower costs, but workers expect to be paid more when they work. The labor market balances the needs of employees and companies, so wages can fluctuate up or down. Because workers are paid more than the equilibrium wage, there may be unemployment, as the above-market wage rates attract more workers. Efficiency wages offer, therefore, a market failure explanation of unemployment in contrast to theories that emphasize government intervention such as minimum wages. However, efficiency wages do not necessarily imply unemployment but only uncleared markets and job rationing in those markets. There may be full employment in the economy or yet efficiency wages may prevail in some occupations. In this case there will be excess supply for those occupations and some applicants whom are not hired may have to work at a lower wage elsewhere. Conversely, if supply is less than demand, some employers will need to hire employees at higher wages, and applicants can get jobs with wages higher than the considered wages. Overview of theory. There are several theories (or "microfoundations") of why managers pay efficiency wages: The model of efficiency wages, largely based on shirking, developed by Carl Shapiro and Joseph E. Stiglitz has been particularly influential. Shirking. A theory in which employers voluntarily pay employees above the market equilibrium level to increase worker productivity. The shirking model begins with the fact that complete contracts rarely (or never) exist in the real world. This implies that both parties to the contract have some discretion, but frequently, due to monitoring problems, the employee's side of the bargain is subject to the most discretion. Methods such as piece rates are often impracticable because monitoring is too costly or inaccurate; or they may be based on measures too imperfectly verifiable by workers, creating a moral hazard problem on the employer's side. Thus, paying a wage in excess of market-clearing may provide employees with cost-effective incentives to work rather than shirk. 
In the Shapiro and Stiglitz model, workers either work or shirk, and if they shirk they have a certain probability of being caught, with the penalty of being fired. Equilibrium then entails unemployment, because to create an opportunity cost to shirking, firms try to raise their wages above the market average (so that sacked workers face a probabilistic loss). But since all firms do this, the market wage itself is pushed up, and the result is that wages are raised above the market-clearing level, creating involuntary unemployment (a numerical sketch of the resulting no-shirking condition is given below). This creates a low- or no-income alternative outside the firm, which makes job loss costly and serves as a worker discipline device. Unemployed workers cannot bid for jobs by offering to work at lower wages since, if hired, it would be in the worker's interest to shirk on the job, and he has no credible way of promising not to do so. Shapiro and Stiglitz point out that their assumption that workers are identical (e.g. there is no stigma to having been fired) is a strong one – in practice, reputation can work as an additional disciplining device. Moreover, higher wages and higher unemployment both increase the cost of losing a job and having to find a new one, so in the shirking model the above-market wage also acts as a monetary incentive to exert effort. The Shapiro–Stiglitz model thus treats unemployment as a threat that disciplines workers: the greater the danger of job loss, the more willing workers are to supply effort. Effort is an endogenous decision – workers facing a credible risk of dismissal work harder in order to keep their jobs – and this behavioural response can somewhat alleviate the unemployment problem in the labor market. The shirking model does not predict that the bulk of the unemployed at any one time are those fired for shirking, because if the threat associated with being fired is effective, little or no shirking and sacking will occur. Instead, the unemployed will consist of a rotating pool of individuals who have quit for personal reasons, are new entrants to the labour market, or have been laid off for other reasons. Pareto optimality, with costly monitoring, will entail some unemployment, since unemployment plays a socially valuable role in creating work incentives. But the equilibrium unemployment rate will not be Pareto optimal, since firms do not consider the social cost of the unemployment they helped to create. One criticism of the efficiency wage hypothesis is that more sophisticated employment contracts can, under certain conditions, reduce or eliminate involuntary unemployment. One such contract uses seniority wages to solve the incentive problem: initially, workers are paid less than their marginal productivity, and as they perform well over time within the firm, earnings rise until they eventually exceed marginal productivity. The upward tilt in the age-earnings profile here provides the incentive to avoid shirking, and the present value of wages can fall to the market-clearing level, eliminating involuntary unemployment. The slope of earnings profiles is thus significantly shaped by incentives.
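Returning to the Shapiro–Stiglitz shirking model described above, its discipline mechanism can be made concrete with a small numerical sketch. One common textbook statement of the aggregate no-shirking condition gives the lowest wage at which working beats shirking as a function of the unemployment rate. The helper function below and all parameter values are illustrative assumptions, not figures from the original paper.

```python
# Illustrative sketch of the Shapiro-Stiglitz no-shirking condition (NSC).
# Critical wage: w_hat = w_bar + e + (e / q) * (b / u + r), where
#   w_bar = income when unemployed, e = cost of effort, q = probability per
#   period that a shirker is caught, b = exogenous separation rate,
#   r = discount rate, u = unemployment rate.
# All parameter values are invented for illustration.

def no_shirking_wage(u, w_bar=1.0, e=0.5, q=0.3, b=0.05, r=0.04):
    """Lowest wage at which exerting effort beats shirking, given unemployment u."""
    return w_bar + e + (e / q) * (b / u + r)

for u in (0.02, 0.05, 0.10, 0.20):
    print(f"unemployment {u:4.0%}  ->  critical wage {no_shirking_wage(u):.2f}")

# The critical wage falls as unemployment rises: with more unemployment,
# losing the job is more costly, so a smaller wage premium deters shirking.
```

The printed schedule is the model's downward-sloping no-shirking condition: firms must pay more to deter shirking when unemployment is low, which is why equilibrium involves both above-market wages and unemployment.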
However, a significant criticism of such seniority-wage contracts is that they shift moral hazard onto the employer, who is responsible for monitoring the worker's effort. An employer would prefer to get more work out of employees while paying them no more than their reservation wage, so obvious incentives would exist for firms to declare shirking when it has not taken place. In the Lazear model, firms have apparent incentives to fire older workers (paid above marginal product) and hire new, cheaper workers, creating a credibility problem. The seriousness of this employer moral hazard depends on the extent to which effort can be monitored by outside auditors, so that firms cannot cheat. However, reputation effects (e.g. Lazear 1981) may be able to do the same job. Labor turnover. "Labor turnover" refers to the rate at which workers leave their jobs and are replaced, typically measured as the number of separations relative to the number of employees over a given period. With regard to the efficiency wage hypothesis, firms also offer wages in excess of market-clearing due to the high cost of replacing workers (search, recruitment, training costs). If all firms are identical, one possible equilibrium involves all firms paying a common wage rate above the market-clearing level, with involuntary unemployment serving to diminish turnover. These models can easily be adapted to explain dual labor markets: if low-skill, labor-intensive firms have lower turnover costs (as seems likely), there may be a split between a low-wage, low-effort, high-turnover sector and a high-wage, high-effort, low-turnover sector. Again, more sophisticated employment contracts may solve the problem. Selection. Like the shirking model, the selection model holds that an information asymmetry is the main reason the market fails to eliminate involuntary unemployment. Unlike the shirking model, however, which focuses on employee effort, the selection model emphasizes the employer's information disadvantage regarding labor quality. Because employers cannot accurately observe the true quality of employees, they know only that higher wages attract higher-quality workers and that wage cuts induce the best employees to leave first. Wages therefore do not keep falling in the face of involuntary unemployment, since cutting them would degrade the quality of the workforce. In selection wage theories it is presupposed that performance on the job depends on "ability", and that workers are heterogeneous concerning ability. The selection effect of higher wages may come about through self-selection or because firms with a larger pool of applicants can increase their hiring standards and obtain a more productive workforce. Workers with higher abilities tend to earn higher wages, and companies are willing to pay higher wages to hire high-quality people as employees. Self-selection (often referred to as adverse selection) comes about if the workers' ability and reservation wages are positively correlated. The basic assumption of efficiency wage theory is that the efficiency of workers increases with the wage, so companies face a trade-off between hiring productive workers at higher salaries and less effective workers at lower wages. From this assumption one can derive the so-called Solow condition: the profit-maximizing firm sets the wage that minimizes its labor cost per efficiency unit of labor, which is the wage at which the elasticity of effort with respect to the wage equals one. A numerical sketch of this condition follows.
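A minimal numerical sketch of the Solow condition, assuming a particular effort function e(w) = (w - w0)^beta; both the functional form and the parameter values are illustrative and are not taken from the sources cited in this article.

```python
# Illustrative sketch of the Solow condition: the firm picks the wage w that
# minimizes the cost per efficiency unit of labour, w / e(w).  At that wage the
# elasticity of effort with respect to the wage equals one.
import numpy as np

w0, beta = 5.0, 0.5               # hypothetical effort-function parameters

def effort(w):
    # effort exerted at wage w (defined for w > w0)
    return (w - w0) ** beta

wages = np.linspace(5.1, 20.0, 2000)
cost_per_efficiency_unit = wages / effort(wages)
w_star = float(wages[np.argmin(cost_per_efficiency_unit)])
analytic = w0 / (1.0 - beta)      # closed form for this particular e(w)

h = 1e-4                          # finite-difference check of the elasticity
elasticity = (effort(w_star + h) - effort(w_star - h)) / (2 * h) * w_star / effort(w_star)

print(f"efficiency wage (grid search): {w_star:.2f}")
print(f"efficiency wage (closed form): {analytic:.2f}")
print(f"effort elasticity at optimum : {elasticity:.3f}  (Solow condition: = 1)")
```

For this effort function the optimum has the closed form w* = w0 / (1 - beta), so the grid search and the analytic value agree, and the measured elasticity is one, as the Solow condition requires.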
The Solow condition thus describes the firm's own wage-setting rather than a market-clearing requirement: it presupposes that the firm treats effort as a function of the wage it pays and is free to choose that wage, while individual workers take the offered wage as given. If there are two kinds of firms (low and high wage), then we effectively have two sets of lotteries (since firms cannot screen), the difference being that high-ability workers do not enter the low-wage lotteries as their reservation wage is too high. Thus low-wage firms attract only low-ability lottery entrants, while high-wage firms attract workers of all abilities (i.e. on average, they will select average workers). Therefore high-wage firms are paying an efficiency wage – they pay more and, on average, get more. However, the assumption that firms cannot measure effort and pay piece rates once workers are hired, or fire workers whose output is too low, is quite strong. Firms may also be able to design self-selection or screening devices that induce workers to reveal their true characteristics. High wages can reduce personnel turnover, induce employees to work harder, discourage resignations, and attract more high-quality applicants. If firms can assess the productivity of applicants, they will try to select the best among them. A higher wage offer will attract more applicants, particularly more highly qualified ones. This permits a firm to raise its hiring standard, thereby enhancing its productivity. Wage compression makes it profitable for firms to screen applicants under such circumstances, and selection wages may be necessary. Sociological models. Fairness, norms, and reciprocity. Standard economic models ("neoclassical economics") assume that people pursue only their self-interest and do not care about "social" goals ("homo economicus"). Neoclassical economics is commonly characterized by three methodological commitments, namely methodological individualism, methodological instrumentalism, and methodological equilibration. Some attention has been paid to the idea that people may be altruistic, but it is only with the addition of reciprocity and norms of fairness that the model becomes accurate. Thus of crucial importance is the idea of exchange: a person who is altruistic towards another expects the other to fulfil some fairness norm, be it reciprocating in kind, in some different but – according to some shared standard – equivalent way, or simply by being grateful. If the expected reciprocation is not forthcoming, the altruism is unlikely to be repeated or continued. In addition, similar norms of fairness will typically lead people into negative forms of reciprocity, too – in retaliation for acts perceived as vindictive. This can bind actors into vicious loops where vindictive acts are met with further vindictive acts. In practice, despite the neat logic of standard neoclassical models, these sociological models do impinge upon many economic relations, though in different ways and to different degrees. For example, suppose an employee has been exceptionally loyal. In that case, a manager may feel some obligation to treat that employee well, even when it is not in his (narrowly defined, economic) self-interest. It would appear that although broader, longer-term economic benefits may result (e.g.
through reputation, or perhaps through simplified decision-making according to fairness norms), a significant factor must be that there are noneconomic benefits the manager receives, such as not having a guilty conscience (avoiding a loss of self-esteem). For real-world, socialised, normal human beings (as opposed to abstracted factors of production), this is likely to be the case quite often. As a rough indication of the quantitative importance of such motives, consider the total value of voluntary labor in the US – $74 billion annually. Examples of the negative aspect of fairness include consumers "boycotting" firms they disapprove of by not buying products they otherwise would (and therefore settling for second-best); and employees sabotaging firms they feel hard done by. Rabin (1993) offers three stylised facts as a starting point on how norms affect behaviour: (a) people are prepared to sacrifice their material well-being to help those who are being kind; (b) they are also prepared to do this to punish those being unkind; (c) both (a) and (b) have a greater effect on behaviour as the material cost of sacrificing (in relative rather than absolute terms) becomes smaller. Rabin supports Fact A with Dawes and Thaler's (1988) survey of the experimental literature, which concludes that for most one-shot public good decisions in which the individually optimal contribution is close to 0%, the contribution rate ranges from 40 to 60% of the socially optimal level. Fact B is demonstrated by the "ultimatum game" (e.g. Thaler 1988), where an amount of money is split between two people, one proposing a division, the other accepting or rejecting (where rejection means both get nothing). Rationally, the proposer should offer no more than a penny, and the decider should accept any offer of at least a penny. Still, in practice, even in one-shot settings, proposers make fair proposals, and deciders are prepared to punish unfair offers by rejecting them. Fact C is tested and partially confirmed by Gerald Leventhal and David Anderson (1970), but is also reasonably intuitive. In the ultimatum game, a 90% split (regarded as unfair) is (intuitively) far more likely to be punished if the amount to be split is $1 than if it is $1 million. A crucial point (as noted in Akerlof 1982) is that notions of fairness depend on the status quo and other reference points. Experiments (Fehr and Schmidt 2000) and surveys (Kahneman, Knetsch, and Thaler 1986) indicate that people have clear notions of fairness based on particular reference points (disagreements can arise in the choice of reference point). Thus, for example, firms that raise prices or lower wages to take advantage of increased demand or increased labour supply are frequently perceived as acting unfairly, whereas the same changes are deemed acceptable when the firm makes them due to increased costs (Kahneman et al.). In other words, in people's intuitive "naïve accounting" (Rabin 1993), a key role is played by the idea of entitlements embodied in reference points (although as Dufwenberg and Kirchsteiger 2000 point out, there may be informational problems, e.g. for workers in determining what the firm's profit is, given tax avoidance and stock-price considerations). In particular, it is perceived as unfair for actors to increase their share at the expense of others. However, over time such a change may become entrenched and form a new reference point which (typically) is no longer in itself deemed unfair. Sociological efficiency wage models.
Solow (1981) argued that wage rigidity may be partly due to social conventions and principles of appropriate behaviour, which are not entirely individualistic. Akerlof (1982) provided the first explicitly sociological model leading to the efficiency wage hypothesis. Using a variety of evidence from sociological studies, Akerlof argues that worker effort depends on the work norms of the relevant reference group. In Akerlof's partial gift exchange model, the firm can raise group work norms and average effort by paying workers a gift of wages over the minimum required, in return for effort above the minimum required. The sociological model can explain phenomena inexplicable in neoclassical terms, such as why firms do not fire workers who turn out to be less productive, why piece rates are so little used even where quite feasible, and why firms set work standards exceeded by most workers. A possible criticism is that workers do not necessarily view high wages as gifts, but as merely fair (particularly since typically 80% or more of workers consider themselves in the top quarter of productivity), in which case they will not reciprocate with high effort. Akerlof and Yellen (1990), responding to these criticisms and building on work from psychology, sociology, and personnel management, introduce "the fair wage-effort hypothesis", which states that workers form a notion of the fair wage, and if the actual wage is lower, withdraw effort in proportion, so that, depending on the wage-effort elasticity and the costs to the firm of shirking, the fair wage may form a key part of the wage bargain. This explains persistent evidence of consistent wage differentials across industries (e.g. Slichter 1950; Dickens and Katz 1986; Krueger and Summers 1988): if firms must pay high wages to some groups of workers – perhaps because they are in short supply or for other efficiency-wage reasons such as shirking – then demands for fairness will lead to a compression of the pay scale, and wages for different groups within the firm will be higher than in other industries or firms. The union threat model is one of several explanations for industry wage differentials. This Keynesian economics model looks at the role of unions in wage determination. The degree to which union wages exceed non-union wages is known as the union wage premium. Some firms seek to prevent unionization in the first instance. Varying costs of union avoidance across sectors will lead some firms to offer supracompetitive wages as pay premiums to workers in exchange for their avoiding unionization. Under the union threat model (Dickens 1986), the ease with which an industry can defeat a union drive has a negative relationship with its wage differential. In other words, inter-industry wage variability should be low where the threat of unionization is low. Empirical literature. Raff and Summers (1987) conduct a case study on Henry Ford's introduction of the five dollar day in 1914. Their conclusion is that the Ford experience supports efficiency wage interpretations. Ford's decision to increase wages so dramatically (doubling for most workers) is most plausibly portrayed as the consequence of efficiency wage considerations: the structure of the increase was consistent with the theory, there is evidence of substantial queues for Ford jobs, and there were significant increases in productivity and profits at Ford. Concerns such as high turnover and poor worker morale appear to have played an important role in the five-dollar decision.
Ford's new wage put him in the position of rationing jobs, and the increased wages did yield substantial productivity benefits and profits. There is also evidence that other firms emulated Ford's policy to some extent, with wages in the automobile industry 40% higher than in the rest of manufacturing (Rae 1965, quoted in Raff and Summers). Given low monitoring costs and skill levels on the Ford production line, such benefits (and the decision itself) appear particularly significant. Fehr, Kirchler, Weichbold and Gächter (1998) conduct labour market experiments to separate the effects of competition and social norms/customs/standards of fairness. They find that firms persistently try to enforce lower wages in complete contract markets. By contrast, wages are higher and more stable in gift exchange markets and bilateral gift exchanges. It appears that in complete contract situations, competitive equilibrium exerts a considerable drawing power, whilst in the gift exchange market it does not. Fehr et al. stress that reciprocal effort choices are truly a one-shot phenomenon without reputation or other repeated-game effects. "It is, therefore, tempting to interpret reciprocal effort behavior as a preference phenomenon." (p. 344). Two types of preferences can account for this behaviour: a) workers may feel obligated to share the additional income from higher wages at least partly with firms; b) workers may have reciprocal motives (reward good behaviour, punish bad). "In the context of this interpretation, wage setting is inherently associated with signaling intentions, and workers condition their effort responses on the inferred intentions." (p. 344). Charness (1996), quoted in Fehr et al., finds that when signaling is removed (wages are set randomly or by the experimenter), workers exhibit a lower, but still positive, wage-effort relation, suggesting some gain-sharing motive and some reciprocity (where intentions can be signaled). Fehr et al. state that "Our preferred interpretation of firms' wage-setting behavior is that firms voluntarily paid job rents to elicit non-minimum effort levels." Although excess supply of labour created enormous competition among workers, firms did not take advantage. In the long run, instead of being governed by competitive forces, firms' wage offers were solely governed by reciprocity considerations, because the payment of non-competitive wages generated higher profits. Thus, firms and workers can be better off relying on stable reciprocal interactions: once the expectations of firms and workers settle on such reciprocal terms, the relationship is stable and beneficial for both parties. That reciprocal behavior generates efficiency gains has been confirmed by several other papers, e.g. Berg, Dickhaut, and McCabe (1995) – even under conditions of double anonymity, where actors know that even the experimenter cannot observe individual behaviour, reciprocal interactions and efficiency gains are frequent. Fehr, Gächter, and Kirchsteiger (1996, 1997) show that reciprocal interactions generate substantial efficiency gains. However, the efficiency-enhancing role of reciprocity is generally associated with serious behavioural deviations from competitive equilibrium predictions. To counter a possible criticism of such theories, Fehr and Tougareva (1995) showed that these efficiency-enhancing reciprocal exchanges are independent of the stakes involved (they compared outcomes with stakes worth a week's income to outcomes with stakes worth three months' income and found no difference).
As one counter to over-enthusiasm for efficiency wage models, Leonard (1987) finds little support for shirking or turnover efficiency wage models, by testing their predictions for large and persistent wage differentials. The shirking version assumes a trade-off between self-supervision and external supervision, while the turnover version assumes turnover is costly to the firm. Variation in the cost of monitoring/shirking or turnover is hypothesized to account for wage variations across firms for homogeneous workers. But Leonard finds that wages for narrowly defined occupations within one sector of one state are widely dispersed, suggesting other factors may be at work. Efficiency wage models therefore do not explain everything about wages: the dispersion Leonard documents points to other factors at work, and efficiency wages alone cannot fully account for involuntary unemployment and persistent wage rigidity. Mathematical explanation. Paul Krugman explains how efficiency wage theory comes into play in a simple model of the firm. The productivity formula_0 of individual workers is a function of their wage formula_1, and the total productivity is the sum of the individual productivities. Accordingly, the sales formula_2 of the firm to which the workers belong becomes a function of both employment formula_3 and the individual productivity. The firm's profit formula_4 is formula_5 We then assume that the higher the workers' wage, the higher their individual productivity: formula_6. If employment is chosen so that profit is maximised, it can be held constant when the wage changes slightly, since formula_4 is already optimal with respect to formula_3 (an envelope argument). Under this optimised condition, we have formula_7 that is, formula_8 The derivative formula_9 is positive, because higher individual productivity means higher sales. As long as the productivity gain from a wage increase at least offsets its direct cost, formula_10 is non-negative, and therefore we have formula_11 This means that if the firm increases its wage, its profit stays the same or becomes larger: after a pay increase, employees work harder and are less likely to quit or move to other companies, which increases both the stability of the firm and the motivation of its workforce. Thus efficiency wage theory gives the owners of the firm a motive to raise the wage in order to increase profit, and the high wage can also be seen as a reward mechanism. A numerical sketch of this argument, with assumed functional forms, is given below. Notes.
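A minimal numerical sketch of the argument in the mathematical explanation above. The functional forms and parameter values are assumptions made for illustration and are not taken from Krugman: sales are taken to be V(N) = A * N^alpha for effective labour N = L * E(w), and effort is E(w) = (w - w0)^beta.

```python
# Numerical illustration of the wage-setting argument above.
# Assumed functional forms (illustrative, not from Krugman):
#   sales  V(N) = A * N**alpha   for effective labour N = L * E(w)
#   effort E(w) = (w - w0)**beta for w > w0
import numpy as np

A, alpha = 10.0, 0.7          # hypothetical production parameters
w0, beta = 5.0, 0.5           # hypothetical effort-function parameters

def effort(w):
    return (w - w0) ** beta

def best_profit(w):
    """Profit when employment L is chosen optimally at wage w (dP/dL = 0)."""
    E = effort(w)
    L = (A * alpha * E ** alpha / w) ** (1.0 / (1.0 - alpha))
    return A * (L * E) ** alpha - w * L

wages = np.linspace(6.0, 16.0, 1001)
profits = np.array([best_profit(w) for w in wages])
w_star = wages[np.argmax(profits)]

for w in (7.0, 9.0, 10.0, 11.0, 13.0):
    print(f"wage {w:5.1f}   profit {best_profit(w):7.3f}")
print(f"profit-maximising wage ~ {w_star:.2f}  "
      f"(wage minimising w/E(w) for this E: {w0 / (1 - beta):.2f})")
```

For each candidate wage, employment is chosen optimally; profit peaks at the same wage that minimizes w/E(w), i.e. the wage satisfying the Solow condition. Below that wage, raising the wage raises profit, consistent with the inequality derived above.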
[ { "math_id": 0, "text": " E(w) " }, { "math_id": 1, "text": " w " }, { "math_id": 2, "text": " V " }, { "math_id": 3, "text": " L " }, { "math_id": 4, "text": " P " }, { "math_id": 5, "text": " P = V(LE) - w L \\; . " }, { "math_id": 6, "text": " \\frac{d E }{d w} > 0 " }, { "math_id": 7, "text": " dP = \\frac{\\partial V}{\\partial (LE)} d E - L d w \\; , " }, { "math_id": 8, "text": " \\frac{dP}{dw} = \\frac{\\partial V}{\\partial E} \\frac{d E}{d w} - L \\; . " }, { "math_id": 9, "text": " \\frac{\\partial V}{\\partial E} " }, { "math_id": 10, "text": " \\frac{d P}{d w} " }, { "math_id": 11, "text": " 0 \\leq \\frac{d P}{d w} \\; . " } ]
https://en.wikipedia.org/wiki?curid=817735