661039
Kerr effect
Change in refractive index of a material in response to an applied electric field The Kerr effect, also called the quadratic electro-optic (QEO) effect, is a change in the refractive index of a material in response to an applied electric field. The Kerr effect is distinct from the Pockels effect in that the induced index change for the Kerr effect is directly proportional to the "square" of the electric field instead of varying linearly with it. All materials show a Kerr effect, but certain liquids display it more strongly than others. The Kerr effect was discovered in 1875 by Scottish physicist John Kerr. Two special cases of the Kerr effect are normally considered, these being the Kerr electro-optic effect, or DC Kerr effect, and the optical Kerr effect, or AC Kerr effect. Kerr electro-optic effect. The Kerr electro-optic effect, or DC Kerr effect, is the special case in which a slowly varying external electric field is applied by, for instance, a voltage on electrodes across the sample material. Under this influence, the sample becomes birefringent, with different indices of refraction for light polarized parallel to or perpendicular to the applied field. The difference in index of refraction, "Δn", is given by formula_0 where "λ" is the wavelength of the light, "K" is the "Kerr constant", and "E" is the strength of the electric field. This difference in index of refraction causes the material to act like a waveplate when light is incident on it in a direction perpendicular to the electric field. If the material is placed between two "crossed" (perpendicular) linear polarizers, no light will be transmitted when the electric field is turned off, while nearly all of the light will be transmitted for some optimum value of the electric field. Higher values of the Kerr constant allow complete transmission to be achieved with a smaller applied electric field. Some polar liquids, such as nitrotoluene (C7H7NO2) and nitrobenzene (C6H5NO2) exhibit very large Kerr constants. A glass cell filled with one of these liquids is called a "Kerr cell". These are frequently used to modulate light, since the Kerr effect responds very quickly to changes in electric field. Light can be modulated with these devices at frequencies as high as 10 GHz. Because the Kerr effect is relatively weak, a typical Kerr cell may require voltages as high as 30 kV to achieve complete transparency. This is in contrast to Pockels cells, which can operate at much lower voltages. Another disadvantage of Kerr cells is that the best available material, nitrobenzene, is poisonous. Some transparent crystals have also been used for Kerr modulation, although they have smaller Kerr constants. In media that lack inversion symmetry, the Kerr effect is generally masked by the much stronger Pockels effect. The Kerr effect is still present, however, and in many cases can be detected independently of Pockels effect contributions. Optical Kerr effect. The optical Kerr effect, or AC Kerr effect is the case in which the electric field is due to the light itself. This causes a variation in index of refraction which is proportional to the local irradiance of the light. This refractive index variation is responsible for the nonlinear optical effects of self-focusing, self-phase modulation and modulational instability, and is the basis for Kerr-lens modelocking. This effect only becomes significant with very intense beams such as those from lasers. 
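Returning to the DC Kerr cell described above: the relation formula_0 (Δn = λKE²) together with a Kerr constant gives a quick estimate of the field needed for full transmission between crossed polarizers. The sketch below is illustrative only — it uses the nitrobenzene Kerr constant quoted later in this article, while the probe wavelength, optical path length and electrode gap are assumed values, not figures from the source.

```python
# Illustrative sketch of DC Kerr cell numbers.  K is the nitrobenzene value
# quoted in this article; the wavelength, path length and electrode gap are
# assumptions chosen only for illustration.
import math

K = 4.4e-12          # Kerr constant of nitrobenzene, m/V^2
wavelength = 589e-9  # assumed probe wavelength, m
L = 0.03             # assumed optical path length through the cell, m
d = 0.01             # assumed electrode gap, m

def delta_n(E):
    """Induced birefringence for field strength E (V/m): delta_n = lambda * K * E^2."""
    return wavelength * K * E**2

def retardation(E):
    """Phase retardation (radians) accumulated over the path length L."""
    return 2 * math.pi * delta_n(E) / wavelength * L   # equals 2*pi*K*E^2*L

# Field (and voltage across the gap) giving half-wave retardation (pi radians),
# i.e. full transmission between crossed polarizers.
E_half = math.sqrt(1.0 / (2 * K * L))
print(f"half-wave field:   {E_half:.3g} V/m")
print(f"half-wave voltage: {E_half * d / 1e3:.1f} kV")
print(f"delta_n at that field: {delta_n(E_half):.3g}")
```

With these assumed dimensions the half-wave voltage comes out in the tens of kilovolts, consistent with the kilovolt-scale drive voltages mentioned above for Kerr cells.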
The optical Kerr effect has also been observed to dynamically alter the mode-coupling properties in multimode fiber, a technique that has potential applications for all-optical switching mechanisms, nanophotonic systems and low-dimensional photo-sensors devices. Magneto-optic Kerr effect. The magneto-optic Kerr effect (MOKE) is the phenomenon that the light reflected from a magnetized material has a slightly rotated plane of polarization. It is similar to the Faraday effect where the plane of polarization of the transmitted light is rotated. Theory. DC Kerr effect. For a nonlinear material, the electric polarization formula_1 will depend on the electric field formula_2: formula_3 where formula_4 is the vacuum permittivity and formula_5 is the formula_6-th order component of the electric susceptibility of the medium. We can write that relationship explicitly; the "i-"th component for the vector "P" can be expressed as: formula_7 where formula_8. It is often assumed that formula_9 ∥ formula_10, i.e., the component parallel to "x" of the polarization field; formula_11 ∥ formula_12 and so on. For a linear medium, only the first term of this equation is significant and the polarization varies linearly with the electric field. For materials exhibiting a non-negligible Kerr effect, the third, χ(3) term is significant, with the even-order terms typically dropping out due to inversion symmetry of the Kerr medium. Consider the net electric field E produced by a light wave of frequency ω together with an external electric field E0: formula_13 where Eω is the vector amplitude of the wave. Combining these two equations produces a complex expression for P. For the DC Kerr effect, we can neglect all except the linear terms and those in formula_14: formula_15 which is similar to the linear relationship between polarization and an electric field of a wave, with an additional non-linear susceptibility term proportional to the square of the amplitude of the external field. For non-symmetric media (e.g. liquids), this induced change of susceptibility produces a change in refractive index in the direction of the electric field: formula_16 where λ0 is the vacuum wavelength and "K" is the "Kerr constant" for the medium. The applied field induces birefringence in the medium in the direction of the field. A Kerr cell with a transverse field can thus act as a switchable wave plate, rotating the plane of polarization of a wave travelling through it. In combination with polarizers, it can be used as a shutter or modulator. The values of "K" depend on the medium and are about 9.4×10−14 m·V−2 for water, and 4.4×10−12 m·V−2 for nitrobenzene. For crystals, the susceptibility of the medium will in general be a tensor, and the Kerr effect produces a modification of this tensor. AC Kerr effect. In the optical or AC Kerr effect, an intense beam of light in a medium can itself provide the modulating electric field, without the need for an external field to be applied. In this case, the electric field is given by: formula_17 where Eω is the amplitude of the wave as before. Combining this with the equation for the polarization, and taking only linear terms and those in χ(3)|Eω|3: formula_18 As before, this looks like a linear susceptibility with an additional non-linear term: formula_19 and since: formula_20 where "n"0=(1+χLIN)1/2 is the linear refractive index. 
Using a Taylor expansion since χNL ≪ "n"02, this gives an "intensity dependent refractive index" (IDRI) of: formula_21 where "n"2 is the second-order nonlinear refractive index, and "I" is the intensity of the wave. The refractive index change is thus proportional to the intensity of the light travelling through the medium. The values of "n"2 are relatively small for most materials, on the order of 10−20 m2 W−1 for typical glasses. Therefore, beam intensities (irradiances) on the order of 1 GW cm−2 (such as those produced by lasers) are necessary to produce significant variations in refractive index via the AC Kerr effect. The optical Kerr effect manifests itself temporally as self-phase modulation, a self-induced phase- and frequency-shift of a pulse of light as it travels through a medium. This process, along with dispersion, can produce optical solitons. Spatially, an intense beam of light in a medium will produce a change in the medium's refractive index that mimics the transverse intensity pattern of the beam. For example, a Gaussian beam results in a Gaussian refractive index profile, similar to that of a gradient-index lens. This causes the beam to focus itself, a phenomenon known as self-focusing. As the beam self-focuses, the peak intensity increases which, in turn, causes more self-focusing to occur. The beam is prevented from self-focusing indefinitely by nonlinear effects such as multiphoton ionization, which become important when the intensity becomes very high. As the intensity of the self-focused spot increases beyond a certain value, the medium is ionized by the high local optical field. This lowers the refractive index, defocusing the propagating light beam. Propagation then proceeds in a series of repeated focusing and defocusing steps. References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Citation/styles.css"/>
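As a rough numerical illustration of the intensity-dependent refractive index derived above, the sketch below plugs in the order-of-magnitude values quoted in this section (n2 of about 10−20 m2 W−1 and an irradiance of 1 GW cm−2); the wavelength and propagation length are assumptions chosen purely for illustration.

```python
# Minimal sketch of the intensity-dependent refractive index n = n0 + n2*I
# and the resulting self-phase-modulation (nonlinear) phase shift.
# n2 and I use the order-of-magnitude values quoted in the article; the
# wavelength and path length are illustrative assumptions.
import math

n2 = 1e-20           # nonlinear refractive index of a typical glass, m^2/W
I = 1e9 / 1e-4       # 1 GW/cm^2 expressed in W/m^2
wavelength = 800e-9  # assumed wavelength, m
L = 0.01             # assumed propagation length in the medium, m

delta_n = n2 * I                                  # intensity-induced index change
phi_nl = 2 * math.pi * n2 * I * L / wavelength    # accumulated nonlinear phase, rad

print(f"index change delta_n = {delta_n:.1e}")
print(f"nonlinear phase over {L * 100:.0f} cm = {phi_nl:.2e} rad")
```

Even at laser-level irradiance the index change is only of order 10−7, which is why appreciable self-phase modulation typically builds up only over comparatively long interaction lengths or under strong self-focusing.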
[ { "math_id": 0, "text": "\\Delta n = \\lambda K E^2,\\ " }, { "math_id": 1, "text": " \\mathbf{P} " }, { "math_id": 2, "text": " \\mathbf{E} " }, { "math_id": 3, "text": " \\mathbf{P} = \\varepsilon_0 \\chi^{(1)}\\mathbf{E} + \\varepsilon_0 \\chi^{(2)}\\mathbf{E E} + \\varepsilon_0 \\chi^{(3)}\\mathbf{E E E} + \\cdots " }, { "math_id": 4, "text": "\\varepsilon_0" }, { "math_id": 5, "text": "\\chi^{(n)}" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "P_i =\n\\varepsilon_0 \\sum_{j=1}^{3} \\chi^{(1)}_{i j} E_j +\n\\varepsilon_0 \\sum_{j=1}^{3} \\sum_{k=1}^{3} \\chi^{(2)}_{i j k} E_j E_k +\n\\varepsilon_0 \\sum_{j=1}^{3} \\sum_{k=1}^{3} \\sum_{l=1}^{3} \\chi^{(3)}_{i j k l} E_j E_k E_l + \\cdots\n" }, { "math_id": 8, "text": "i = 1,2,3" }, { "math_id": 9, "text": "P_1" }, { "math_id": 10, "text": "P_x" }, { "math_id": 11, "text": "E_2" }, { "math_id": 12, "text": "E_y" }, { "math_id": 13, "text": " \\mathbf{E} = \\mathbf{E}_0 + \\mathbf{E}_\\omega \\cos(\\omega t), " }, { "math_id": 14, "text": "\\chi^{(3)}|\\mathbf{E}_0|^2 \\mathbf{E}_\\omega" }, { "math_id": 15, "text": "\\mathbf{P} \\simeq \\varepsilon_0 \\left( \\chi^{(1)} + 3 \\chi^{(3)} |\\mathbf{E}_0|^2 \\right) \\mathbf{E}_\\omega \\cos(\\omega\nt)," }, { "math_id": 16, "text": " \\Delta n = \\lambda_0 K |\\mathbf{E}_0|^2, " }, { "math_id": 17, "text": " \\mathbf{E} = \\mathbf{E}_\\omega \\cos(\\omega t), " }, { "math_id": 18, "text": " \\mathbf{P} \\simeq \\varepsilon_0 \\left( \\chi^{(1)} + \\frac{3}{4} \\chi^{(3)} |\\mathbf{E}_\\omega|^2 \\right) \\mathbf{E}_\\omega \\cos(\\omega t)." }, { "math_id": 19, "text": " \\chi = \\chi_{\\mathrm{LIN}} + \\chi_{\\mathrm{NL}} = \\chi^{(1)} + \\frac{3\\chi^{(3)}}{4} |\\mathbf{E}_\\omega|^2," }, { "math_id": 20, "text": " n = (1 + \\chi)^{1/2} =\n\\left( 1+\\chi_{\\mathrm{LIN}} + \\chi_{\\mathrm{NL}} \\right)^{1/2}\n\\simeq n_0 \\left( 1 + \\frac{1}{2 {n_0}^2} \\chi_{\\mathrm{NL}} \\right)" }, { "math_id": 21, "text": " n = n_0 + \\frac{3\\chi^{(3)}}{8 n_0} |\\mathbf{E}_{\\omega}|^2 = n_0 + n_2 I" } ]
https://en.wikipedia.org/wiki?curid=661039
66104540
ITP method
Root-finding algorithm In numerical analysis, the ITP method, short for "Interpolate Truncate and Project", is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the optimal worst-case performance of the bisection method. It is also the first method with guaranteed average performance strictly better than the bisection method under any continuous distribution. In practice it performs better than traditional interpolation-based and hybrid strategies (Brent's method, Ridders, Illinois), since it not only converges super-linearly over well-behaved functions but also guarantees fast performance under ill-behaved functions where interpolations fail. The ITP method follows the same structure as standard bracketing strategies, keeping track of upper and lower bounds for the location of the root; but it also keeps track of the region where worst-case performance is kept upper-bounded. As a bracketing strategy, in each iteration the ITP method queries the value of the function at one point and discards the part of the interval between two points where the function value shares the same sign. The queried point is calculated with three steps: it interpolates to find the regula falsi estimate, then it perturbs/truncates the estimate (similar to ) and then projects the perturbed estimate onto an interval in the neighbourhood of the bisection midpoint. The neighbourhood around the bisection point is calculated in each iteration in order to guarantee minmax optimality (Theorem 2.1 of ). The method depends on three hyper-parameters formula_0 and formula_1 where formula_2 is the golden ratio formula_3: the first two control the size of the truncation and the third is a slack variable that controls the size of the interval for the projection step. Root finding problem. Given a continuous function formula_4 defined from formula_5 to formula_6 such that formula_7, where, at the cost of one query, one can access the value of formula_8 at any given formula_9, and given a pre-specified target precision formula_10, a root-finding algorithm is designed to solve the following problem with as few queries as possible: Problem Definition: Find formula_11 such that formula_12, where formula_13 satisfies formula_14. This problem is very common in numerical analysis, computer science and engineering, and root-finding algorithms are the standard approach to solving it. Often, the root-finding procedure is called by more complex parent algorithms within a larger context, and, for this reason, solving root-finding problems efficiently is of extreme importance, since an inefficient approach can come at a high computational cost when the larger context is taken into account. This is what the ITP method attempts to do by simultaneously exploiting the guarantees of interpolation as well as the minmax optimal guarantees of the bisection method, which terminates in at most formula_15 iterations when initiated on an interval formula_16. The method. Given formula_0, formula_17 and formula_1 where formula_2 is the golden ratio formula_3, in each iteration formula_18 the ITP method calculates the point formula_19 following three steps:
Interpolation: calculate the bisection point formula_20 and the regula falsi point formula_21;
Truncation: perturb the estimator towards the centre of the interval, formula_22, where formula_23 and formula_24;
Projection: project the estimator onto the minmax interval around the bisection point, formula_25, where formula_26.
The value of the function formula_27 on this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. The algorithm.
The following algorithm (written in pseudocode) assumes the initial values of formula_28 and formula_29 are given and satisfy formula_30 where formula_31 and formula_32; and, it returns an estimate formula_33 that satisfies formula_34 in at most formula_35 function evaluations.
Input: formula_36
Preprocessing: formula_37, formula_38, and formula_39;
While ( formula_40 )
  Calculating Parameters: formula_41, formula_42, formula_43;
  Interpolation: formula_44;
  Truncation: formula_45;
    If formula_46 then formula_47,
    Else formula_48;
  Projection:
    If formula_49 then formula_50,
    Else formula_51;
  Updating Interval: formula_52;
    If formula_53 then formula_54 and formula_55,
    Elseif formula_56 then formula_57 and formula_58,
    Else formula_57 and formula_59;
  formula_60;
Output: formula_61
Example: Finding the root of a polynomial. Suppose that the ITP method is used to find a root of the polynomial formula_62 Using formula_63 and formula_64 we find that: This example can be compared to . The ITP method required less than half as many iterations as the bisection method to obtain a more precise estimate of the root, at no cost to the minmax guarantees. Other methods might also attain a similar speed of convergence (such as Ridders, Brent etc.) but without the minmax guarantees given by the ITP method. Analysis. The main advantage of the ITP method is that it is guaranteed to require no more iterations than the bisection method when formula_65. Its average performance is thus guaranteed to be better than the bisection method even when interpolation fails. Furthermore, if interpolations do not fail (smooth functions), then it is guaranteed to enjoy the same high order of convergence as interpolation-based methods. Worst case performance. Because the ITP method projects the estimator onto the minmax interval with a formula_66 slack, it will require at most formula_67 iterations (Theorem 2.1 of ). This is minmax optimal like the bisection method when formula_66 is chosen to be formula_65. Average performance. Because it does not take more than formula_67 iterations, the average number of iterations will always be less than that of the bisection method for any distribution considered when formula_65 (Corollary 2.2 of ). Asymptotic performance. If the function formula_68 is twice differentiable and the root formula_69 is simple, then the intervals produced by the ITP method converge to zero with an order of convergence of formula_70 if formula_71 or if formula_65 and formula_72 is not a power of 2 with the term formula_73 not too close to zero (Theorem 2.3 of ). Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
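The pseudocode above translates almost line for line into a short program. The following Python sketch is an illustrative transcription only (variable names mirror the article's symbols, and the bracketing assumption formula_30 is enforced by swapping the endpoints if necessary); it is run on the example polynomial formula_62 with the same hyper-parameters as in the example section.

```python
# Illustrative Python transcription of the ITP pseudocode above (a sketch,
# not a reference implementation).  Parameter names mirror the article's symbols.
import math

def itp(f, a, b, eps=1e-10, kappa1=0.1, kappa2=2.0, n0=1):
    ya, yb = f(a), f(b)
    if ya * yb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    if ya > 0:                       # enforce ya < 0 < yb, as the pseudocode assumes
        a, b, ya, yb = b, a, yb, ya
    n_half = math.ceil(math.log2(abs(b - a) / (2 * eps)))
    n_max = n_half + n0
    j = 0
    while abs(b - a) > 2 * eps:
        # Calculating parameters
        x_half = (a + b) / 2
        r = eps * 2 ** (n_max - j) - abs(b - a) / 2
        delta = kappa1 * abs(b - a) ** kappa2
        # Interpolation: regula falsi point
        x_f = (yb * a - ya * b) / (yb - ya)
        # Truncation: perturb the estimate towards the midpoint
        sigma = math.copysign(1, x_half - x_f)
        x_t = x_f + sigma * delta if delta <= abs(x_half - x_f) else x_half
        # Projection onto the minmax interval around the midpoint
        x_itp = x_t if abs(x_t - x_half) <= r else x_half - sigma * r
        # Updating the bracket
        y_itp = f(x_itp)
        if y_itp > 0:
            b, yb = x_itp, y_itp
        elif y_itp < 0:
            a, ya = x_itp, y_itp
        else:                        # exact root found
            a = b = x_itp
        j += 1
    return (a + b) / 2

# The article's example: a root of x^3 - x - 2 on [1, 2] with eps = 0.0005
print(itp(lambda x: x**3 - x - 2, 1, 2, eps=0.0005, kappa1=0.1, kappa2=2, n0=1))
```

The loop is guaranteed to perform at most formula_35 function evaluations (here 11), and on a smooth function such as this example it typically stops well before that bound.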
[ { "math_id": 0, "text": "\\kappa_1\\in (0,\\infty), \\kappa_2 \\in \\left[1,1+\\phi\\right) " }, { "math_id": 1, "text": "n_0\\in[0,\\infty) " }, { "math_id": 2, "text": "\\phi " }, { "math_id": 3, "text": "\\tfrac{1}{2}(1+\\sqrt{5}) " }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "[a,b]" }, { "math_id": 6, "text": "\\mathbb{R}" }, { "math_id": 7, "text": "f(a)f(b)\\leq 0" }, { "math_id": 8, "text": "f(x)" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "\\epsilon>0" }, { "math_id": 11, "text": "\\hat{x}" }, { "math_id": 12, "text": "|\\hat{x}-x^*|\\leq \\epsilon" }, { "math_id": 13, "text": "x^*" }, { "math_id": 14, "text": "f(x^*) = 0" }, { "math_id": 15, "text": "n_{1/2}\\equiv\\lceil\\log_2((b_0-a_0)/2\\epsilon)\\rceil " }, { "math_id": 16, "text": "[a_0,b_0] " }, { "math_id": 17, "text": "n_{1/2} \\equiv \\lceil\\log_2((b_0-a_0)/2\\epsilon)\\rceil " }, { "math_id": 18, "text": "j = 0,1,2\\dots " }, { "math_id": 19, "text": "x_{\\text{ITP}} " }, { "math_id": 20, "text": "x_{1/2} \\equiv \\frac{a+b}{2} " }, { "math_id": 21, "text": "x_f \\equiv \\frac{bf(a)-af(b)}{f(a)-f(b)} " }, { "math_id": 22, "text": "x_t \\equiv x_f+\\sigma \\delta " }, { "math_id": 23, "text": "\\sigma \\equiv \\text{sign}(x_{1/2}-x_f) " }, { "math_id": 24, "text": "\\delta \\equiv \\min\\{\\kappa_1|b-a|^{\\kappa_2},|x_{1/2}-x_f|\\} " }, { "math_id": 25, "text": "x_{\\text{ITP}} \\equiv x_{1/2} -\\sigma \\rho_k " }, { "math_id": 26, "text": "\\rho_k \\equiv \\min\\left\\{\\epsilon 2^{n_{1/2}+n_0-j} - \\frac{b-a}{2},|x_t-x_{1/2}|\\right\\} " }, { "math_id": 27, "text": "f(x_{\\text{ITP}}) " }, { "math_id": 28, "text": "y_a " }, { "math_id": 29, "text": "y_b " }, { "math_id": 30, "text": "y_a<0 <y_b " }, { "math_id": 31, "text": "y_a\\equiv f(a) " }, { "math_id": 32, "text": "y_b\\equiv f(b) " }, { "math_id": 33, "text": "\\hat{x} " }, { "math_id": 34, "text": "|\\hat{x} - x^*|\\leq \\epsilon " }, { "math_id": 35, "text": "n_{1/2}+n_0 " }, { "math_id": 36, "text": "a, b, \\epsilon, \\kappa_1, \\kappa_2, n_0, f " }, { "math_id": 37, "text": "n_{1/2} = \\lceil \\log_2\\tfrac{b-a}{2\\epsilon}\\rceil " }, { "math_id": 38, "text": "n_{\\max} = n_{1/2}+n_0 " }, { "math_id": 39, "text": "j = 0 " }, { "math_id": 40, "text": "b-a>2\\epsilon " }, { "math_id": 41, "text": "x_{1/2} = \\tfrac{a+b}{2} " }, { "math_id": 42, "text": "r = \\epsilon 2^{n_{\\max} - j}-(b-a)/2 " }, { "math_id": 43, "text": "\\delta = \\kappa_1(b-a)^{\\kappa_2} " }, { "math_id": 44, "text": "x_f = \\tfrac{y_ba-y_a b}{y_b-y_a} " }, { "math_id": 45, "text": "\\sigma = \\text{sign}(x_{1/2}-x_f) " }, { "math_id": 46, "text": "\\delta\\leq|x_{1/2}-x_f| " }, { "math_id": 47, "text": "x_t = x_f+\\sigma \\delta " }, { "math_id": 48, "text": "x_t = x_{1/2} " }, { "math_id": 49, "text": "|x_t-x_{1/2}|\\leq r " }, { "math_id": 50, "text": "x_{\\text{ITP}} = x_t " }, { "math_id": 51, "text": "x_{\\text{ITP}} = x_{1/2}-\\sigma r " }, { "math_id": 52, "text": "y_{\\text{ITP}} = f(x_{\\text{ITP}}) " }, { "math_id": 53, "text": "y_{\\text{ITP}}>0 " }, { "math_id": 54, "text": "b = x_{ITP} " }, { "math_id": 55, "text": "y_b = y_{\\text{ITP}} " }, { "math_id": 56, "text": "y_{\\text{ITP}}<0 " }, { "math_id": 57, "text": "a = x_{\\text{ITP}} " }, { "math_id": 58, "text": "y_a = y_{\\text{ITP}} " }, { "math_id": 59, "text": "b = x_{\\text{ITP}} " }, { "math_id": 60, "text": "j = j+1 " }, { "math_id": 61, "text": "\\hat{x} = \\tfrac{a+b}{2} " }, { "math_id": 62, "text": " f(x) = x^3 - x - 2 \\,." 
}, { "math_id": 63, "text": " \\epsilon = 0.0005, \\kappa_1 = 0.1, \\kappa_2 = 2" }, { "math_id": 64, "text": " n_0 = 1" }, { "math_id": 65, "text": " n_0 = 0" }, { "math_id": 66, "text": " n_0" }, { "math_id": 67, "text": " n_{1/2}+n_0" }, { "math_id": 68, "text": " f(x)" }, { "math_id": 69, "text": " x^*" }, { "math_id": 70, "text": " \\sqrt{\\kappa_2}" }, { "math_id": 71, "text": " n_0 \\neq 0" }, { "math_id": 72, "text": " (b-a)/\\epsilon" }, { "math_id": 73, "text": " \\tfrac{\\epsilon 2^{n_{1/2}}}{b-a}" } ]
https://en.wikipedia.org/wiki?curid=66104540
66108522
The Geometry of Numbers
Book on the geometry of numbers The Geometry of Numbers is a book on the geometry of numbers, an area of mathematics in which the geometry of lattices, repeating sets of points in the plane or higher dimensions, is used to derive results in number theory. It was written by Carl D. Olds, Anneli Cahn Lax, and Giuliana Davidoff, and published by the Mathematical Association of America in 2000 as volume 41 of their Anneli Lax New Mathematical Library book series. Authorship and publication history. "The Geometry of Numbers" is based on a book manuscript that Carl D. Olds, a New Zealand-born mathematician working in California at San Jose State University, was still writing when he died in 1979. Anneli Cahn Lax, the editor of the New Mathematical Library of the Mathematical Association of America, took up the task of editing it, but it remained unfinished when she died in 1999. Finally, Giuliana Davidoff took over the project, and saw it through to publication in 2000. Topics. "The Geometry of Numbers" is relatively short, and is divided into two parts. The first part applies number theory to the geometry of lattices, and the second applies results on lattices to number theory. Topics in the first part include the relation between the maximum distance between parallel lines that are not separated by any point of a lattice and the slope of the lines, Pick's theorem relating the area of a lattice polygon to the number of lattice points it contains, and the Gauss circle problem of counting lattice points in a circle centered at the origin of the plane. The second part begins with Minkowski's theorem, that centrally symmetric convex sets of large enough area (or volume in higher dimensions) necessarily contain a nonzero lattice point. It applies this to Diophantine approximation, the problem of accurately approximating one or more irrational numbers by rational numbers. After another chapter on the linear transformations of lattices, the book studies the problem of finding the smallest nonzero values of quadratic forms, and Lagrange's four-square theorem, the theorem that every non-negative integer can be represented as a sum of four squares of integers. The final two chapters concern Blichfeldt's theorem, that bounded planar regions with area formula_0 can be translated to cover at least formula_1 lattice points, and additional results in Diophantine approximation. The chapters on Minkowski's theorem and Blichfeldt's theorem, particularly, have been called the "foundation stones" of the book by reviewer Philip J. Davis. An appendix by Peter Lax concerns the Gaussian integers. A second appendix concerns lattice-based methods for packing problems including circle packing and, in higher dimensions, sphere packing. The book closes with biographies of Hermann Minkowski and Hans Frederick Blichfeldt. Audience and reception. "The Geometry of Numbers" is intended for secondary-school and undergraduate mathematics students, although it may be too advanced for the secondary-school students; it contains exercises making it suitable for classroom use. It has been described as "expository", "self-contained", and "readable". However, reviewer Henry Cohn notes several copyediting oversights, complains about its selection of topics, in which "curiosities are placed on an equal footing with deep results", and misses certain well-known examples which were not included. 
Despite this, he recommends the book to readers who are not yet ready for more advanced treatments of this material and wish to see "some beautiful mathematics". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
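Pick's theorem, one of the results covered in the book's first part, lends itself to a quick computational check. The sketch below verifies A = I + B/2 − 1 (area, interior lattice points, boundary lattice points) for an arbitrarily chosen convex lattice polygon; the polygon is an illustration and is not taken from the book.

```python
# A small numerical check of Pick's theorem, A = I + B/2 - 1, relating the area
# of a lattice polygon to its interior (I) and boundary (B) lattice points.
# The convex polygon below is an arbitrary illustration.
from math import gcd

poly = [(0, 0), (4, 0), (4, 3), (1, 5)]          # counter-clockwise lattice polygon

def shoelace_area(p):
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(p, p[1:] + p[:1]))) / 2

def boundary_count(p):
    # each edge contains gcd(|dx|, |dy|) lattice points, counting one endpoint
    return sum(gcd(abs(x2 - x1), abs(y2 - y1))
               for (x1, y1), (x2, y2) in zip(p, p[1:] + p[:1]))

def strictly_inside(pt, p):
    # for a counter-clockwise convex polygon: strictly left of every edge
    px, py = pt
    return all((x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) > 0
               for (x1, y1), (x2, y2) in zip(p, p[1:] + p[:1]))

xs = [x for x, _ in poly]
ys = [y for _, y in poly]
I = sum(strictly_inside((x, y), poly)
        for x in range(min(xs), max(xs) + 1)
        for y in range(min(ys), max(ys) + 1))
B = boundary_count(poly)
A = shoelace_area(poly)
print(A, I, B, A == I + B / 2 - 1)    # Pick's theorem holds: 14.5 11 9 True
```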
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\lceil A\\rceil" } ]
https://en.wikipedia.org/wiki?curid=66108522
66109
Eclipse cycle
Calculation and prediction of eclipses Eclipses may occur repeatedly, separated by certain intervals of time: these intervals are called eclipse cycles. The series of eclipses separated by a repeat of one of these intervals is called an eclipse series. Eclipse conditions. Eclipses may occur when Earth and the Moon are aligned with the Sun, and the shadow of one body projected by the Sun falls on the other. So at new moon, when the Moon is in conjunction with the Sun, the Moon may pass in front of the Sun as viewed from a narrow region on the surface of Earth and cause a solar eclipse. At full moon, when the Moon is in opposition to the Sun, the Moon may pass through the shadow of Earth, and a lunar eclipse is visible from the night half of Earth. The conjunction and opposition of the Moon together have a special name: syzygy (Greek for "junction"), because of the importance of these lunar phases. An eclipse does not occur at every new or full moon, because the plane of the Moon's orbit around Earth is tilted with respect to the plane of Earth's orbit around the Sun (the ecliptic): so as viewed from Earth, when the Moon appears nearest the Sun (at new moon) or furthest from it (at full moon), the three bodies are usually not exactly on the same line. This inclination is on average about 5° 9′, much larger than the apparent "mean" diameter of the Sun (32′ 2″), the Moon as viewed from Earth's surface directly below the Moon (31′ 37″), and Earth's shadow at the mean lunar distance (1° 23′). Therefore, at most new moons, Earth passes too far north or south of the lunar shadow, and at most full moons, the Moon misses Earth's shadow. Also, at most solar eclipses, the apparent angular diameter of the Moon is insufficient to fully occlude the solar disc, unless the Moon is around its perigee, i.e. nearer Earth and apparently larger than average. In any case, the alignment must be almost perfect to cause an eclipse. An eclipse can occur only when the Moon is on or near the plane of Earth's orbit, i.e. when its ecliptic latitude is low. This happens when the Moon is around either of the two orbital nodes on the ecliptic at the time of the syzygy. Of course, to produce an eclipse, the Sun must also be around a node at that time – the same node for a solar eclipse or the opposite node for a lunar eclipse. Recurrences. Up to three eclipses may occur during an eclipse season, a one- or two-month period that happens twice a year, around the time when the Sun is near the nodes of the Moon's orbit. An eclipse does not occur every month, because one month after an eclipse the relative geometry of the Sun, Moon, and Earth has changed. As seen from the Earth, the time it takes for the Moon to return to a node, the draconic month, is less than the time it takes for the Moon to return to the same ecliptic longitude as the Sun: the synodic month. The main reason is that during the time that the Moon has completed an orbit around the Earth, the Earth (and Moon) have completed about 1⁄13 of their orbit around the Sun: the Moon has to make up for this in order to come again into conjunction or opposition with the Sun. Secondly, the orbital nodes of the Moon precess westward in ecliptic longitude, completing a full circle in about 18.60 years, so a draconic month is shorter than a sidereal month. In all, the difference in period between synodic and draconic month is nearly 2+1⁄3 days.
Likewise, as seen from the Earth, the Sun passes both nodes as it moves along its ecliptic path. The period for the Sun to return to a node is called the eclipse or draconic year: about 346.6201 days, which is about 1⁄20 year shorter than a sidereal year because of the precession of the nodes. If a solar eclipse occurs at one new moon, which must be close to a node, then at the next full moon the Moon is already more than a day past its opposite node, and may or may not miss the Earth's shadow. By the next new moon it is even further ahead of the node, so it is less likely that there will be a solar eclipse somewhere on Earth. By the next month, there will certainly be no event. However, about 5 or 6 lunations later the new moon will fall close to the opposite node. In that time (half an eclipse year) the Sun will have moved to the opposite node too, so the circumstances will again be suitable for one or more eclipses. Periodicity. The periodicity of solar eclipses is the interval between any two solar eclipses in succession, which will be either 1, 5, or 6 synodic months. It is calculated that the Earth will experience a total number of 11,898 solar eclipses between 2000 BCE and 3000 CE. A particular solar eclipse will be repeated approximately every 18 years 11 days and 8 hours (6,585.32 days), but not in the same geographical region. A particular geographical region will experience a particular solar eclipse about once every 54 years and 34 days. Total solar eclipses are rare events, although they occur somewhere on Earth every 18 months on average. Repetition of solar eclipses. For two solar eclipses to be almost identical, the geometric alignment of the Earth, Moon and Sun, as well as some parameters of the lunar orbit, should be the same. The following parameters and criteria must be repeated for the repetition of a solar eclipse: These conditions are related to the three periods of the Moon's orbital motion, viz. the synodic month, anomalistic month and draconic month, and to the anomalistic year. In other words, a particular eclipse will be repeated only if the Moon completes roughly an integer number of synodic, draconic, and anomalistic periods and the Earth-Sun-Moon geometry will be nearly identical. The Moon will be at the same node and the same distance from the Earth. This happens after the period called the saros. Gamma (how far the Moon is north or south of the ecliptic during an eclipse) changes monotonically throughout any single saros series. The change in gamma is larger when Earth is near its aphelion (June to July) than when it is near perihelion (December to January). When the Earth is near its average distance (March to April or September to October), the change in gamma is average. Repetition of lunar eclipses. For the repetition of a lunar eclipse, the geometric alignment of the Moon, Earth and Sun, as well as some parameters of the lunar orbit, should be repeated. The following parameters and criteria must be repeated for the repetition of a lunar eclipse: These conditions are related to the three periods of the Moon's orbital motion, viz. the synodic month, anomalistic month and draconic month. In other words, a particular eclipse will be repeated only if the Moon completes roughly an integer number of synodic, draconic, and anomalistic periods (223, 242, and 239) and the Earth-Sun-Moon geometry will be nearly identical to that eclipse.
The Moon will be at the same node and the same distance from the Earth. Gamma changes monotonically throughout any single Saros series. The change in gamma is larger when Earth is near its aphelion (June to July) than when it is near perihelion (December to January). When the Earth is near its average distance (March to April or September to October), the change in gamma is average. Eclipses would not occur in every month. Another thing to consider is that the motion of the Moon is not a perfect circle. Its orbit is distinctly elliptic, so the lunar distance from Earth varies throughout the lunar cycle. This varying distance changes the apparent diameter of the Moon, and therefore influences the chances, duration, and type (partial, annular, total, mixed) of an eclipse. This orbital period is called the anomalistic month, and together with the synodic month causes the so-called "full moon cycle" of about 14 lunations in the timings and appearances of full (and new) Moons. The Moon moves faster when it is closer to the Earth (near perigee) and slower when it is near apogee (furthest distance), thus periodically changing the timing of syzygies by up to 14 hours either side (relative to their mean timing), and causing the apparent lunar angular diameter to increase or decrease by about 6%. An eclipse cycle must comprise close to an integer number of anomalistic months in order to perform well in predicting eclipses. If the Earth had a perfectly circular orbit centered around the Sun, and the Moon's orbit was also perfectly circular and centered around the Earth, and both orbits were coplanar (on the same plane) with each other, then two eclipses would happen every lunar month (29.53 days). A lunar eclipse would occur at every full moon, a solar eclipse every new moon, and all solar eclipses would be the same type. In fact the distances between the Earth and Moon and that of the Earth and the Sun vary because both the Earth and the Moon have elliptic orbits. Also, both the orbits are not on the same plane. The Moon's orbit is inclined about 5.14° to Earth's orbit around the Sun. So the Moon's orbit crosses the ecliptic at two points or nodes. If a New Moon takes place within about 17° of a node, then a solar eclipse will be visible from some location on Earth. At an average angular velocity of 0.99° per day, the Sun takes 34.5 days to cross the 34° wide eclipse zone centered on each node. Because the Moon's orbit with respect to the Sun has a mean duration of 29.53 days, there will always be one and possibly two solar eclipses during each 34.5-day interval when the Sun passes through the nodal eclipse zones. These time periods are called eclipse seasons. Either two or three eclipses happen each eclipse season. During the eclipse season, the inclination of the Moon's orbit is low, hence the Sun, Moon, and Earth become aligned straight enough (in syzygy) for an eclipse to occur. Numerical values. 
These are the lengths of the various types of months as discussed above (according to the lunar ephemeris ELP2000-85, valid for the epoch J2000.0; taken from, e.g., Meeus (1991)):
SM = 29.530588853 days (Synodic month)
DM = 27.212220817 days (Draconic month)
AM = 27.55454988 days (Anomalistic month)
EY = 346.620076 days (Eclipse year)
Note that there are three main moving points: the Sun, the Moon, and the (ascending) node; and that there are three main periods, when each of the three possible pairs of moving points meet one another: the synodic month when the Moon returns to the Sun, the draconic month when the Moon returns to the node, and the eclipse year when the Sun returns to the node. These three 2-way relations are not independent (i.e. both the synodic month and eclipse year are dependent on the apparent motion of the Sun, both the draconic month and eclipse year are dependent on the motion of the nodes), and indeed the eclipse year can be described as the beat period of the synodic and draconic months (i.e. the period of the difference between the synodic and draconic months); in formula: formula_0 as can be checked by filling in the numerical values listed above. Eclipse cycles have a period in which a certain number of synodic months closely equals an integer or half-integer number of draconic months: one such period after an eclipse, a syzygy (new moon or full moon) takes place again near a node of the Moon's orbit on the ecliptic, and an eclipse can occur again. However, the synodic and draconic months are incommensurate: their ratio is not an integer number. We need to approximate this ratio by common fractions: the numerators and denominators then give the multiples of the two periods – draconic and synodic months – that (approximately) span the same amount of time, representing an eclipse cycle. These fractions can be found by the method of continued fractions: this arithmetical technique provides a series of progressively better approximations of any real numeric value by proper fractions. Since there may be an eclipse every half draconic month, we need to find approximations for the number of half draconic months per synodic month: so the target ratio to approximate is:
SM / (DM/2) = 29.530588853 / (27.212220817/2) = 2.170391682
The continued fractions expansion for this ratio is: 2.170391682 = [2;5,1,6,1,1,1,1,1,11,1...]:
Quotients | Convergents half DM/SM | decimal | named cycle (if any)
2; | 2/1 | 2 | synodic month
5 | 11/5 | 2.2 | pentalunex
1 | 13/6 | 2.166666667 | semester
6 | 89/41 | 2.170731707 | hepton
1 | 102/47 | 2.170212766 | octon
1 | 191/88 | 2.170454545 | tzolkinex
1 | 293/135 | 2.170370370 | tritos
1 | 484/223 | 2.170403587 | saros
1 | 777/358 | 2.170391061 | inex
11 | 9031/4161 | 2.170391732 | selebit
1 | 9808/4519 | 2.170391679 | square year
The ratio of synodic months per half eclipse year yields the same series: 5.868831091 = [5;1,6,1,1,1,1,1,11,1...]
Quotients | Convergents SM/half EY | decimal | SM/full EY | named cycle
5; | 5/1 | 5 | – | pentalunex
1 | 6/1 | 6 | 12/1 | semester
6 | 41/7 | 5.857142857 | – | hepton
1 | 47/8 | 5.875 | 47/4 | octon
1 | 88/15 | 5.866666667 | – | tzolkinex
1 | 135/23 | 5.869565217 | – | tritos
1 | 223/38 | 5.868421053 | 223/19 | saros
1 | 358/61 | 5.868852459 | 716/61 | inex
11 | 4161/709 | 5.868829337 | – | selebit
1 | 4519/770 | 5.868831169 | 4519/385 | square year
Each of these is an eclipse cycle. Less accurate cycles may be constructed by combinations of these. Eclipse cycles. This table summarizes the characteristics of various eclipse cycles, and can be computed from the numerical results of the preceding paragraphs; "cf."
Meeus (1997) Ch.9. More details are given in the comments below, and several notable cycles have their own pages. Many other cycles have been noted, some of which have been named. The number of days given is the average. The actual number of days and fractions of days between two eclipses varies because of the variation in the speed of the Moon and of the Sun in the sky. The variation is less if the number of anomalistic months is near a whole number, and if the number of anomalistic years is near a whole number. (See graphs lower down of semester and Hipparchic cycle.) Any eclipse cycle, and indeed the interval between any two eclipses, can be expressed as a combination of saros ("s") and inex ("i") intervals. These are listed in the column "formula". For more information see eclipse season. Notes. The next nine cycles, Cartouche through Accuratissima, are all similar, being equal to 52 inex periods plus up to two triads and various numbers of saros periods. This means they all have a near-whole number of anomalistic months. They range from 1505 to 1841 years, and each series lasts for many thousands of years. Saros series and inex series. Any eclipse can be assigned to a given saros series and inex series. The year of a solar eclipse (in the Gregorian calendar) is then given approximately by: year = 28.945 × number of the saros series + 18.030 × number of the inex series − 2882.55 When this is greater than 1, the integer part gives the year AD, but when it is negative the year BC is obtained by taking the integer part and adding 2. For instance, the eclipse in saros series 0 and inex series 0 was in the middle of 2884 BC. A "panorama" of solar eclipses arranged by saros and inex has been produced by Luca Quaglia and John Tilley showing 61775 solar eclipses from 11001 BC to AD 15000 (see below). Each column of the graph is a complete Saros series which progresses smoothly from partial eclipses into total or annular eclipses and back into partials. Each graph row represents an inex series. Since a saros, of 223 synodic months, is slightly less than a whole number of draconic months, the early eclipses in a saros series (in the upper part of the diagram) occur after the Moon goes through its node (the beginning and end of a draconic month), while the later eclipses (in the lower part) occur before the Moon goes through its node. Every 18 years, the eclipse occurs on average about half a degree further west with respect to the node, but the progression is not uniform. Saros and inex number can be calculated for an eclipse near a given date. One can also find the approximate date of solar eclipses at distant dates by first determining one in an inex series such as series 50. This can be done by adding or subtracting some multiple of 28.9450 Gregorian years from the solar eclipse of 10 May, 2013, or 28.9444 Julian years from the Julian date of 27 April, 2013. Once such an eclipse has been found, others around the same time can be found using the short cycles. For lunar eclipses, the anchor dates May 4, 2004 or Julian April 21 may be used. Saros and inex numbers are also defined for lunar eclipses. A solar eclipse of given saros and inex series will be preceded a fortnight earlier by a lunar eclipse whose saros number is 26 lower and whose inex number is 18 higher, or it will be followed a fortnight later by a lunar eclipse whose saros number is 12 higher and whose inex number is 43 lower. 
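The saros and inex that organise this panorama are two of the convergents tabulated in the Numerical values section above. A short sketch in Python, using exact rational arithmetic on the month lengths quoted in that section, reproduces the list of convergents (and hence the named cycles) given there:

```python
# Sketch: reproduce the convergents of SM / (DM/2) listed in the Numerical
# values section, using the month lengths quoted there (exact rational
# arithmetic avoids floating-point drift in the later convergents).
from fractions import Fraction

SM = Fraction("29.530588853")   # synodic month, days
DM = Fraction("27.212220817")   # draconic month, days

def convergents(x, n_terms):
    """Yield (quotient, convergent) pairs of the continued-fraction expansion of x."""
    h, h_prev = 1, 0            # numerator recurrence
    k, k_prev = 0, 1            # denominator recurrence
    for _ in range(n_terms):
        a = int(x)              # next continued-fraction quotient
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield a, Fraction(h, k)
        if x == a:              # expansion terminated exactly
            break
        x = 1 / (x - a)

target = SM / (DM / 2)          # half draconic months per synodic month
for a, c in convergents(target, 11):
    # e.g. 484/223 is the saros: 223 synodic months = 484 half draconic months
    print(f"{a:2d}  {c.numerator}/{c.denominator} = {float(c):.9f}")
```

Running it prints the eleven convergents of the first table, from 2/1 (the synodic month) through 484/223 (saros) and 777/358 (inex) up to 9808/4519 (the square year).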
As with solar eclipses, the Gregorian year of a lunar eclipse can be calculated as: year = 28.945 × number of the saros series + 18.030 × number of the inex series − 2454.564. Lunar eclipses can also be plotted in a similar diagram, this diagram covering 1000 AD to 2500 AD. The yellow diagonal band represents all the eclipses from 1900 to 2100. This graph immediately shows that this 1900–2100 period contains an above average number of total lunar eclipses compared to other adjacent centuries. This is related to the fact that tetrads (see above) are more common at present than at other periods. Tetrads occur when four lunar eclipses occur at four lunar inex numbers, decreasing by 8 (that is, a semester apart), which are in the range giving fairly central eclipses (small gamma), and furthermore the eclipses take place around halfway between the Earth's perihelion and aphelion. For example, in the tetrad of 2014–2015 (the so-called Four Blood Moons), the inex numbers were 52, 44, 36, and 28, and the eclipses occurred in April and late September-early October. Normally the absolute value of gamma decreases and then increases, but because in April the Sun is further east than its mean longitude, and in September/October further west than its mean longitude, the absolute values of gamma in the first and fourth eclipse are decreased, while the absolute values in the second and third are increased. The result is that all four gamma values are small enough to lead to total lunar eclipses. The phenomenon of the Moon "catching up" with the Sun (or the point opposite the Sun), which is usually not at its mean longitude, has been called a "stern chase". Inex series move slowly through the year, each eclipse occurring about 20 days earlier in the year, 29 years later. This means that over a period of 18.2 inex cycles (526 years) the date moves around the whole year. But because the perihelion of Earth's orbit is slowly moving as well, the inex series that are now producing tetrads will again be halfway between Earth's perihelion and aphelion in about 586 years. One can skew the graph of inex versus saros for solar or lunar eclipses so that the x axis shows the time of year. (An eclipse which is two saros series and one inex series later than another will be only 1.8 days later in the year in the Gregorian calendar.) This shows the 586-year oscillations as oscillations that go up around perihelion and down around aphelion (see graph). Properties of eclipses. The properties of eclipses, such as the timing, the distance or size of the Moon and Sun, or the distance the Moon passes north or south of the line between the Sun and the Earth, depend on the details of the orbits of the Moon and the Earth. There exist formulae for calculating the longitude, latitude, and distance of the Moon and of the Sun using sine and cosine series. The arguments of the sine and cosine functions depend on only four values, the Delaunay arguments: the mean elongation of the Moon from the Sun (D), the mean argument of latitude of the Moon (F, its angular distance from the ascending node), the mean anomaly of the Moon (l), and the mean anomaly of the Sun (l′). These four arguments are basically linear functions of time but with slowly varying higher-order terms. A diagram of inex and saros indices such as the "Panorama" shown above is like a map, and we can consider the values of the Delaunay arguments on it. The mean elongation, D, goes through 360° 223 times when the inex value goes up by 1, and 358 times when the saros value goes up by 1. It is thus equivalent to 0°, by definition, at each combination of solar saros index and inex index, because solar eclipses occur when the elongation is zero.
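The two approximate-year formulas quoted above (one for solar, one for lunar eclipses) are simple enough to wrap in a small helper. The sketch below follows the stated BC convention and checks the article's example that solar saros 0, inex 0 falls in 2884 BC; the other series numbers passed at the end are arbitrary values used purely for illustration.

```python
# Sketch of the approximate-year formulas quoted above.  Series numbers in,
# approximate Gregorian year out; the BC branch follows the rule stated in
# the text (take the integer part and add 2).
def eclipse_year(saros, inex, solar=True):
    offset = 2882.55 if solar else 2454.564
    value = 28.945 * saros + 18.030 * inex - offset
    if value > 1:
        return f"AD {int(value)}"
    return f"{int(-value) + 2} BC"

print(eclipse_year(0, 0))                  # article's example: 2884 BC
print(eclipse_year(139, 50))               # arbitrary solar series numbers (illustration)
print(eclipse_year(120, 40, solar=False))  # arbitrary lunar series numbers (illustration)
```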
From D one can find the actual elapsed time from some reference time such as J2000, which is like a linear function of inex and saros but with a deviation that grows quadratically with distance from the reference time, amounting to about 19 minutes at a distance of 1000 years. The mean argument of latitude, F, is equivalent to 0° or 180° (depending on whether the saros index is even or odd) along the smooth curve going through the centre of the band of eclipses, where gamma is near zero (around inex series 50 at present). F decreases as we go away from this curve towards higher inex series, and increases on the other side, by about 0.5° per inex series. When the inex value is too far from the centre, the eclipses disappear because the Moon is too far north or south of the Sun. The mean anomaly of the Sun is a smooth function, increasing by about 10° when increasing inex by 1 in a saros series and decreasing by about 20° when increasing saros index by 1 in an inex series. This means it is almost constant when increasing inex by 1 and saros index by 2 (the "Unidos" interval of 65 years). The above graph showing the time of year of eclipses basically shows the solar anomaly, since the perihelion moves by only one day per century in the Julian calendar, or 1.7 days per century in the Gregorian calendar. The mean anomaly of the Moon is more complicated. If we look at the eclipses whose saros index is divisible by 3, then the mean anomaly is a smooth function of inex and saros values. Contours run at an angle, so that mean anomaly is fairly constant when inex and saros values increase together at a ratio of around 21:24. The function varies slowly, changing by only 7.4° when changing the saros index by 3 at a constant inex value. A similar smooth function obtains for eclipses with saros modulo 3 equal to 1, but shifted by about 120°, and for saros modulo 3 equal to 2, shifted by 120° the other way. The upshot is that the properties vary slowly over the diagram in any of the three sets of saros series. The accompanying graph shows just the saros series that have saros index modulo 3 equal to zero. The blue areas are where the mean anomaly of the Moon is near 0°, meaning that the Moon is near perigee at the time of the eclipse, and therefore relatively large, favoring total eclipses. In the red area, the Moon is generally further from the Earth, and the eclipses are annular. We can also see the effect of the Sun's anomaly. Eclipses in July, when the Sun is further from the Earth, are more likely to be total, so the blue area extends over a greater range of inex index than for eclipses in January. The waviness seen in the graph is also due to the Sun's anomaly. In April the Sun is further east than if its longitude progressed evenly, and in October it is further west, and this means that in April the Moon catches up with the Sun relatively late, and in October relatively early. This in turn means that the argument of latitude at the actual time of the eclipse will be raised higher in April and lowered in October. Eclipses (either partial or not) with low inex index (near the upper edge in the "Panorama" graph) fail to occur in April because syzygy occurs too far to the east of the node, but more eclipses occur at high inex values in April because syzygy is not so far west of the node. The opposite applies to October. It also means that in April ascending-node solar eclipses will cast their shadow further north (such as the solar eclipse of April 8, 2024), and descending-node eclipses further south. 
The opposite is the case in October. Eclipses that occur when the Earth is near perihelion (solar anomaly near zero) are in saros series in which the gamma value changes little every 18.03 years. The reason for this is that from one eclipse to the next in the saros series, the day in the year advances by about 11 days, but the Sun's position moves eastward by more than what it does for that change of day in year at other times. This means the Sun's position relative to the node doesn't change as much as for saros series giving eclipses at other times of the year. In the first half of the 21st century, solar saros series showing this slow rate of change of gamma include 122 (giving an eclipse on January 6, 2019), 132 (January 5, 2038), 141 (January 15, 2010), and 151 (January 4, 2011). Sometimes this phenomenon leads to a saros series giving a large number of central eclipses, for example solar saros 128 gave 20 eclipses with |γ| < 0.75 between 1615 and 1958, whereas series 135 gave only nine, between 1872 and 2016. The time interval between two eclipses in an eclipse cycle is variable. The time of an eclipse can be advanced or delayed by up to ten hours due to the eccentricity of the Moon's orbit – the eclipse will be early when the Moon is going from perigee to apogee, and late when it is going from apogee toward perigee. The time is also delayed because of the eccentricity of the Earth's orbit. Eclipses occur about four hours later in April and four hours earlier in October. This means that the delay varies from eclipse to eclipse in a series. The delay is the sum of two sine-like functions, one based on the time in the anomalistic year and one on the time in the anomalistic month. The periods of these two waves depend on how close the nominal interval between two eclipses in the series is to a whole number of anomalistic years and anomalistic months. In series like the "Immobilis" or the "Accuratissima", which are near whole numbers of both, the delay varies very slowly, so the interval is quite constant. In series like the octon, the Moon's anomaly changes considerably at least twice every three intervals, so the intervals vary considerably. The "Panorama" can also be related to where on the Earth the shadow of the Moon falls at the central time of the eclipse. If this "maximum eclipse" for a given eclipse is at a particular location, eclipses three saros later will be at a similar latitude (because the saros is close to a whole number of draconic months) and longitude (because a period of three saros is always within a couple hours of being 19755.96 days long, which would change the longitude by about 13° eastward). If instead we increase the saros index at a constant inex index, the intervals are quite variable because the number of anomalistic months or years is not very close to a whole number. This means that although the latitude will be similar (but changing sign), the longitude change can vary by more than 180°. Moving by six inex (a de la Hire cycle) preserves the latitude fairly well but the longitude change is very variable because of the variation of the solar anomaly. Both the angular size of the Moon in the sky at eclipses at the ascending node and the size of the Sun at those eclipses vary in a sort of sine wave. The sizes at the descending node vary in the same way, but 180° out of phase.
The Moon is large at an ascending-node eclipse when its perigee is near the ascending node, so the period for the size of the Moon is the time it takes for the angle between the node and the perigee to go through 360°, or formula_1 years (Note that a plus sign is used because the perigee moves eastward whereas the node moves westward.) A maximum of this is in 2024 (September), explaining why the ascending-node solar eclipse of April 8, 2024, is near perigee and total and the descending-node solar eclipse of October 2, 2024, is near apogee and annular. Although this cycle is about a day less than six years, super-moon eclipses actually occur every three years on average, because there are also the ones at the descending node that occur in between the ones at the ascending node. At lunar eclipses the size of the Moon is 180° out of phase with its size at solar eclipses. The Sun is large at an ascending-node eclipse when its perigee (the direction toward the Sun when it is closest to the Earth) is near the ascending node, so the period for the size of the Sun is formula_2 years In terms of Delaunay arguments, the Sun is biggest at ascending-node solar eclipses and smallest at descending-node solar eclipses around when l'+D=F (modulo 360°), such as June, 2010. It is smallest at descending-node solar eclipses and biggest at ascending-node solar eclipses 9.3 years later, such as September, 2019. Long-term trends. The lengths of the synodic, draconic, and anomalistic months, the length of the day, and the length of the anomalistic year are all slowly changing. The synodic and draconic months, the day, and the anomalistic year (at least at present) are getting longer, whereas the anomalistic month is getting shorter. The eccentricity of the Earth's orbit is presently decreasing at about one percent per 300 years, thus decreasing the effect of the sun's anomaly. Formulae for the Delaunay arguments show that the lengthening of the synodic month means that eclipses tend to occur later than they would otherwise proportionally to the square of the time separation from now, by about 0.32 hours per millennium squared. The other Delaunay arguments (mean anomaly of the Moon and of the sun and the argument of latitude) will all be increased because of this, but on the other hand the Delaunay arguments are also affected by the fact that the lengths of the draconic month and anomalistic month and year are changing. The net results are: As an example, from the solar eclipse of April, 1688 BC, to that of April, AD 1623, is 110 inex plus 7 saros (equivalent to a "Palaea-Horologia" plus a "tritrix", 3310.09 Julian years). According to the table above, the Delaunay arguments should change by: But because of the changing lengths of these, they actually changed by: Note that in this example, in terms of anomaly (position with respect to perigee) the moon returns to within 1% of an orbit (about 3.4°), rather than 3.2% as predicted using today's values of month lengths. The fact that the day is getting longer means there are more revolutions of the Earth since some point in the past than what one might calculate from the time and date, and fewer from now to some future time. This effect means eclipses occur earlier in the day or calendar, going in the opposite direction relative to the effect of the lengthening synodic month already mentioned. This effect is known as ΔT. It cannot be calculated exactly but amounts to around 50 minutes per millennium squared. 
In our example above, this means that although the eclipse in 1688 BC was centred on March 16 at 00:15:31 in Dynamic time, it actually occurred before midnight and therefore on March 15 (using time based on the location of present-day Greenwich, and using the proleptic Julian calendar). The fact that the argument of latitude is decreased explains why one sees a curvature in the "Panorama" above. Central eclipses in the past and in the future are higher in the graph (lower inex number) than what one would expect from a linear extrapolation. This is because the ratio of the length of a synodic month to the length of a draconic month is getting smaller. Although both are getting longer, the draconic month is doing so more quickly because the rate at which the node moves west is decreasing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mbox{EY} = \\frac{\\mbox{SM} \\times \\mbox{DM}}{\\mbox{SM}-\\mbox{DM}}" }, { "math_id": 1, "text": "\\frac 1{1/\\text{period of node}+1/\\text{period of perigee}}=\\frac 1{1/18.60+1/8.85}=5.997" }, { "math_id": 2, "text": "\\frac 1{1/\\text{period of node}-1/\\text{period of perigee}}=\\frac 1{1/18.60+1/41\\text{ million}}=18.60" } ]
https://en.wikipedia.org/wiki?curid=66109
66110715
Slot-die coating
Technique for coating flat substrates Slot-die coating is a coating technique for the application of solution, slurry, hot-melt, or extruded thin films onto typically flat substrates such as glass, metal, paper, fabric, plastic, or metal foils. The process was first developed for the industrial production of photographic papers in the 1950's. It has since become relevant in numerous commercial processes and nanomaterials related research fields. Slot-die coating produces thin films via solution processing. The desired coating material is typically dissolved or suspended into a precursor solution or slurry (sometimes referred to as "ink") and delivered onto the surface of the substrate through a precise coating head known as a slot-die. The slot-die has a high aspect ratio outlet controlling the final delivery of the coating liquid onto the substrate. This results in the continuous production of a wide layer of coated material on the substrate, with adjustable width depending on the dimensions of the slot-die outlet. By closely controlling the rate of solution deposition and the relative speed of the substrate, slot-die coating affords thin material coatings with easily controllable thicknesses in the range of 10 nanometers to hundreds of micrometers after evaporation of the precursor solvent. Commonly cited benefits of the slot-die coating process include its pre-metered thickness control, non-contact coating mechanism, high material efficiency, scalability of coating areas and throughput speeds, and roll-to-roll compatibility. The process also allows for a wide working range of layer thickness and precursor solution properties such as material choice, viscosity, and solids content. Commonly cited drawbacks of the slot-die coating process include its comparatively high complexity of apparatus and process optimization relative to similar coating techniques such as blade coating and spin coating. Furthermore, slot-die coating falls into the category of coating processes rather than printing processes. It is therefore better suited for coating of uniform, thin material layers rather than printing or consecutive buildup of complex images and patterns. Coating apparatus. Typical components. Slot-die coating equipment is available in a variety of configurations and form factors. However, the vast majority of slot-die processes are driven by a similar set of common core components. These include: Depending on the complexity of the coating apparatus, a slot-die coating system may include additional modules for e.g. precise positioning of the slot-die over the substrate, particulate filtering of the coating solution, pre-treatment of the substrate (e.g. cleaning and surface energy modification), and post-processing steps (e.g. drying, curing, calendering, printing, slitting, etc.). Industrial coating systems. Slot-die coating was originally developed for industrial use and remains primarily applied in production-scale settings. This is due to its potential for large-scale production of high-value thin films and coatings at a low operating cost via roll-to-roll and sheet-to-sheet line integration. Such roll-to-roll and sheet-to-sheet coating systems are similar in their intent for large-scale production, but are distinguished from each other by the physical rigidity of the substrates they handle. Roll-to-roll systems are designed to coat and handle flexible substrate rolls such as paper, fabric, plastic or metal foils. 
Conversely, sheet-to-sheet systems are designed to coat and handle rigid substrate sheets such as glass, metal, or plexiglass. Combinations of these systems such as roll-to-sheet lines are also possible. Both industrial roll-to-roll and sheet-to-sheet systems typically feature slot-dies in the range of 300 to 1000 mm in coating width, though slot-dies up to 4000 mm wide have been reported. Commercial slot-die systems are claimed to operate at speeds up to several hundred square meters per minute, with roll-to-roll systems typically offering higher throughput due to decreased complexity of substrate handling. Such large-scale coating systems can be driven by a variety of industrial pumping solutions including gear pumps, progressive cavity pumps, pressure pots, and diaphragm pumps depending on process requirements. Roll-to-roll lines. To handle flexible substrates, roll-to-roll lines typically use a series of rollers to continually drive the substrate through the various stations of the process line. The bare substrate originates at an "unwind" roll at the start of the line and is collected at a "rewind" roll at the end. Hence, the substrate is often referred to as a "web" as it winds its way through the process line from start to finish. When a substrate roll has been fully processed, it is collected from the rewind roll, allowing for a new, bare substrate roll to be mounted onto the unwind roller to begin the process again. Slot-die coating often comprises just a single step of an overall roll-to-roll process. The slot-die is typically mounted in a fixed position on the roll-to-roll line, dispensing coating fluid onto the web in a continuous or patch-based manner as the substrate passes by. Because the substrate web spans all stations of the roll-to-roll line simultaneously, the individual processes at these stations are highly coupled and must be optimized to work in tandem with each other at the same web speed. Sheet-to-sheet lines. The rigid substrates employed in sheet-to-sheet systems are not compatible with the roll-to-roll processing method. Sheet-to-sheet systems rely instead on a rack-based system to transport individual sheets between the various stations of a process line, where transfer between stations may occur in a manual or automated manner. Sheet-to-sheet lines are therefore more analogous to a series of semi-coupled batch operations rather than a single continuous process. This allows for easier optimization of individual unit operations at the expense of potentially increased handling complexity and reduced throughput. Furthermore, the need to start and stop the slot-die coating process for each substrate sheet places higher tolerance requirements on the leading and trailing edge uniformity of the slot-die step. In sheet-to-sheet lines, the slot-die may be fixed in place while the substrate passes underneath on a moving support bed (sometimes referred to as a "chuck"). Alternatively, the slot-die may move during coating while the substrate remains fixed in place. Lab-scale development tools. Miniaturized slot-die tools have become increasingly available to support the development of new roll-to-roll compatible processes prior to the requirement of full pilot- and production-scale equipment. These tools feature similar core components and functionality as compared to larger slot-die coating lines, but are designed to integrate into pre-production research environments. This is typically achieved by e.g.
accepting standard A4 sized substrate sheets rather than full substrate rolls, using syringe pumps rather than industrial pumping solutions, and relying upon hot-plate heating rather than large industrial drying ovens, which can otherwise reach lengths of several meters to provide suitable residence times for drying. Because the slot-die coating process can be readily scaled between large and small areas by adjusting the size of the slot-die and throughput speed, processes developed on lab-scale tools are considered to be reasonably scalable to industrial roll-to-roll and sheet-to-sheet coating lines. This has led to significant interest in slot-die coating as a method of scaling new thin film materials and devices, particularly in the sphere of thin film solar cell research for e.g. perovskite and organic photovoltaics. Common coating modalities. Slot-die hardware can be applied in several distinct coating modalities, depending on the requirements of a given process. These include: The dynamics of proximity coating have been extensively studied and applied over a wide range of scales and applications. Furthermore, the concepts governing proximity coating are relevant in understanding the behavior of other coating modalities. Proximity coating is therefore considered to be the default configuration for the purposes of this introductory article, though curtain coating and tensioned web over slot die configurations remain highly relevant in industrial manufacturing. Key process parameters. Film thickness control. Slot-die coating is a non-contact coating method, in which the slot-die is typically held over the substrate at a height several times higher than the target wet film thickness. The coating fluid transfers from the slot-die to the substrate via a fluid bridge that spans the air gap between the slot-die lips and substrate surface. This fluid bridge is commonly referred to as the coating meniscus or coating bead. The thickness of the resulting wet coated layer is controlled by tuning the ratio between the applied volumetric pump rate and areal coating rate. Unlike in self-metered coating methods such as blade- and bar coating, the slot-die does not influence the thickness of the wet coated layer via any form of destructive physical contact or scraping. The height of the slot-die therefore does not determine the thickness of the wet coated layer. The height of the slot-die is instead significant in determining the quality of the coated film, as it controls the distance that must be spanned by the meniscus to maintain a stable coating process. formula_0 Slot-die coating operates via a pre-metered liquid coating mechanism. The thickness of the wet coated layer (formula_1) is therefore significantly determined by the width of coating (formula_2), the volumetric pump rate (formula_3), and the coating speed, or relative speed between the slot-die and the substrate during coating (formula_4). Increasing the pump rate increases the thickness of the wet layer, while increasing the coating speed or coating width decreases the wet layer thickness. The coating width is typically a fixed value for a given slot-die process. Hence, pump rate and coating speed can be used to calculate, control, and adjust the wet film thickness in a highly predictable manner. 
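As a concrete illustration of the pre-metered relationship formula_0 and of the dry-layer conversion discussed below, the following Python sketch computes wet and dry film thicknesses. It is an illustrative example written for this article; the function names, units, and sample numbers are arbitrary choices and are not taken from any particular coating system.

def wet_thickness(pump_rate_m3_s, coating_width_m, coating_speed_m_s):
    """Pre-metered wet film thickness: t_wet = Q / (W * U)."""
    return pump_rate_m3_s / (coating_width_m * coating_speed_m_s)

def dry_thickness(t_wet_m, solids_conc_kg_m3, solid_density_kg_m3):
    """Dry film thickness after solvent evaporation: t_dry = t_wet * c / rho."""
    return t_wet_m * solids_conc_kg_m3 / solid_density_kg_m3

# Example: 1 mL/min of ink coated 50 mm wide at 10 mm/s.
Q = 1e-6 / 60        # pump rate in m^3/s
W = 0.050            # coating width in m
U = 0.010            # coating speed in m/s
t_wet = wet_thickness(Q, W, U)                # about 3.3e-5 m, i.e. roughly 33 micrometres wet
t_dry = dry_thickness(t_wet, 20.0, 1000.0)    # 20 kg/m^3 solids in a 1000 kg/m^3 material gives roughly 0.67 micrometres dry
print(t_wet, t_dry)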
However, deviation from this idealized relationship can occur in practice due to non-ideal behavior of materials and process components; for example when using highly viscoelastic fluids, or a sub-optimal process setup where fluid creeps up the slot-die component rather than transferring fully to the substrate. formula_5 The final thickness of the dry layer after solvent evaporation (formula_6) is further determined by the solids concentration of the precursor solution (formula_7) and the volumetric density of the coated material in its final form (formula_8). Increasing the solids content of the precursor solution increases the thickness of the dry layer, while using a more dense material results in a thinner dry layer for a given concentration. Film quality control. As with all solution processed coating methods, the final quality of a thin film produced via slot-die coating depends on a wide array of parameters both intrinsic and external to the slot-die itself. These parameters can be broadly categorized into: Coating window parameters. Under ideal conditions, the potential to achieve a defect-free film via slot-die is entirely governed by the coating window of a given process. The coating window is a multivariable map of key process parameters, describing the range over which they can be applied together to achieve a defect-free film. Understanding the coating window behavior of a typical slot-die process enables operators to observe defects in a slot-die coated layer and intuitively determine a course of action for defect resolution. The key process parameters used to define the coating window typically include: The coating window can be visualized by plotting two such key parameters against each other while assuming the others to remain constant. In an initial simple representation, the coating window can be described by plotting the relationship between viable pump rates and coating speeds for a given process. Excessive pumping or insufficient coating speeds result in spilling defects, in which coating liquid escapes outside of the desired coating area, while coating too quickly or pumping insufficiently results in breakup defects of the meniscus. The pump rate and coating speed can therefore be adjusted to directly compensate for these defects, though changing these parameters also affects wet film thickness via the pre-metered coating mechanism. Implicit in this relationship is the effect of the slot-die height parameter, as this affects the distance over which the meniscus must be stretched while remaining stable during coating. Raising the slot-die higher can thus counteract spilling defects by stretching the meniscus further, while lowering the slot-die can counteract streaking and breakup defects by reducing the gap that the meniscus must breach. Other helpful coating window plots to consider include the relationship between fluid capillary number and slot-die height, as well as the relationship between pressure across the meniscus and slot-die height. The former is particularly relevant when considering changes in fluid viscosity and surface tension (i.e. the effect of coating various materials with significantly different rheology), while the latter is relevant in the context of applying a vacuum box at the upstream face of the meniscus to stabilize the meniscus against breakup. Downstream process effects. In reality, the final quality of a slot-die coated film is heavily influenced by a variety of factors beyond the parameter boundaries of the ideal coating window.
Surface energy effects and drying effects are examples of common downstream effects with a significant influence on final film morphology. Sub-optimal matching of surface energy between the substrate and coating fluid can cause dewetting of the liquid film after it has been applied to the substrate, resulting in pinholes or beading of the coated layer. Sub-optimal drying processes are also often noted to influence film morphology, resulting in increased thickness at the edge of a film caused by the coffee ring effect. Surface energy and downstream processing must therefore be carefully optimized to maintain the integrity of the slot-die coated layer as it moves through the system, until the final thin film product can be collected. External effects. Slot-die coating is a highly mechanical process in which uniformity of motion and high hardware tolerances are critical to achieving uniform coatings. Mechanical imperfections such as jittery motion in the pump and coating motion systems, poor parallelism between the slot-die and substrate, and external vibrations in the environment can all lead to undesired variations in film thickness and quality. Slot-die coating apparatus and its environment must therefore be suitably specified to meet the needs of a given process and avoid hardware- and environment-derived defects in the coated film. Applications. Industrial applications. Slot-die coating was originally developed for the commercial production of photographic films and papers. In the past several decades it has become a critical process in the production of adhesive films, flexible packaging, transdermal and oral pharmaceutical patches, LCD panels, multi-layer ceramic capacitors, lithium-ion batteries and more. Research applications. With growing interest in the potential of nanomaterials and functional thin film devices, slot-die coating has become increasingly applied in the sphere of materials research. This is primarily attributed to the flexibility, predictability and high repeatability of the process, as well as its scalability and origin as a proven industrial technique. Slot-die coating has been most notably employed in research related to flexible, printed, and organic electronics, but remains relevant in any field where scalable thin film production is required. Examples of research enabled by slot-die coating include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "t_{wet} = \\frac{Q}{W \\times U}" }, { "math_id": 1, "text": "t_{wet}" }, { "math_id": 2, "text": "W" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "U" }, { "math_id": 5, "text": "t_{dry} = t_{wet} \\times \\frac{c}{\\rho}" }, { "math_id": 6, "text": "t_{dry}" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "H_0/t_{wet}" }, { "math_id": 10, "text": "Ca" }, { "math_id": 11, "text": "\\Delta P" } ]
https://en.wikipedia.org/wiki?curid=66110715
66113656
Branched flow
Scattering phenomenon in wave dynamics Branched flow refers to a phenomenon in wave dynamics that produces a tree-like pattern involving successive, mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area preserving way. Even more revealing are groups, or manifolds, of neighboring rays extending over significant zones. Starting rays out from a point while varying their direction over a range, one to the next, or launching them from different points along a line all with the same initial direction, are examples of manifolds. Waves have analogous launching conditions, such as a point source spraying in many directions, or an extended plane wave heading in one direction. The ray bending or refraction leads to characteristic structure in phase space and nonuniform distributions in coordinate space that look somehow universal and resemble branches in trees or stream beds. The branches take non-obvious paths through the refracting landscape that are indirect and nonlocal results of the terrain already traversed. For a given refracting landscape, the branches will look completely different depending on the initial manifold. Examples. Two-dimensional electron gas. Branched flow was first identified in experiments with a two-dimensional electron gas. Electrons flowing from a quantum point contact were scanned using a scanning probe microscope. Instead of the usual diffraction patterns, the electrons flowed forming branching strands that persisted for several correlation lengths of the background potential. Ocean dynamics. Focusing of random waves in the ocean can also lead to branched flow. Fluctuations in the depth of the ocean floor can be described as a random potential. A tsunami wave propagating in such a medium will form branches which carry huge energy densities over long distances. This mechanism may also explain some statistical discrepancies in the occurrence of freak waves. Light propagation. Given the wave nature of light, its propagation in random media can produce branched flow too. Experiments with laser beams in soap bubbles have shown this effect, which has also been proposed as a way to control light focusing in a disordered medium. Flexural waves in elastic plates. Flexural waves travelling in elastic plates also produce branched flows. Disorder, in this case, appears in the form of inhomogeneous flexural rigidity. Other examples. Other examples where branched flow has been proposed to occur include microwave radiation from pulsars refracted by interstellar clouds, the Zeldovich model for the large-scale structure of the universe, and electron-phonon interactions in metals. Dynamics: Kick and drift map. The dynamical mechanism that gives rise to branch formation can be understood by means of the kick and drift map, an area preserving map defined by: formula_0 formula_1 where n accounts for the discrete time, x and p are position and momentum respectively, and V is the potential. The equation for the momentum is called the “kick” stage, whereas the equation for the position is the “drift”. Given an initial manifold in phase space, it can be iterated under the action of the kick and drift map. Typically, the manifold stretches and folds (although keeping its total area constant) forming cusps or caustics and stable regions.
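The kick and drift map lends itself to direct numerical experiment. The following Python sketch is an illustration written for this article, not code from the cited references: it iterates the map for a bundle of initially parallel rays, using a smooth Gaussian random potential that is resampled at each step to mimic propagation through a two-dimensional random medium with one transverse coordinate; the grid size, potential strength, and correlation length are arbitrary choices. Histogramming the final ray positions shows the strong density concentrations characteristic of branches.

import numpy as np

rng = np.random.default_rng(0)

def random_potential(n_steps, n_grid, strength=0.01, corr=10):
    """Smooth Gaussian random potential V[step, cell] with correlation length of ~corr cells."""
    noise = rng.standard_normal((n_steps, n_grid))
    kernel = np.exp(-0.5 * (np.arange(-3 * corr, 3 * corr + 1) / corr) ** 2)
    kernel /= kernel.sum()
    smooth = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, noise)
    return strength * smooth / smooth.std()

def kick_and_drift(n_rays=2000, n_steps=300, n_grid=1000):
    """Iterate p_{n+1} = p_n - dV/dx, x_{n+1} = x_n + p_{n+1} for an initially parallel ray bundle."""
    V = random_potential(n_steps, n_grid)
    dV_dx = np.gradient(V, axis=1)
    x = np.linspace(0.0, n_grid - 1.0, n_rays)   # evenly spaced launch positions
    p = np.zeros(n_rays)                          # parallel launch: zero transverse momentum
    history = [x.copy()]
    for n in range(n_steps):
        p = p - np.interp(x, np.arange(n_grid), dV_dx[n])   # kick
        x = (x + p) % n_grid                                 # drift (periodic transverse box)
        history.append(x.copy())
    return np.array(history)

positions = kick_and_drift()
density, _ = np.histogram(positions[-1], bins=200)
print(density.max(), density.mean())   # pronounced peaks signal caustic-like branch formation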
These regions of phase space with a high concentration of trajectories are precisely the branches. Scaling properties of branched flow in random potentials. When plane waves or parallel trajectories propagate through a weak random medium, several caustics can arise at more or less regularly ordered positions. Taking the direction perpendicular to the flow, the distance separating the caustics is determined by the correlation length d of the potential. Another characteristic length is the distance L downstream where the first generation of caustics appears. Taking into account the energy of the trajectories E and the height of the potential ε ≪ E, it can be argued that the following relation holds formula_2 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\vec{p}_{n+1}=\\vec{p}_n- \\nabla V|_{\\vec{x}=\\vec{x}_n }" }, { "math_id": 1, "text": "\\vec{x}_{n+1}=\\vec{x}_n + \\vec{p}_{n+1}" }, { "math_id": 2, "text": "\\frac{E}{\\varepsilon} = \\left(\\frac{L}{d}\\right)^{3/2}." } ]
https://en.wikipedia.org/wiki?curid=66113656
66118361
Corestriction
In mathematics, a corestriction of a function is a notion analogous to the notion of a restriction of a function. The duality prefix co- here denotes that while the restriction changes the domain to a subset, the corestriction changes the codomain to a subset. However, the notions are not categorically dual. Given any subset formula_0 we can consider the corresponding inclusion of sets formula_1 as a function. Then for any function formula_2, the restriction formula_3 of a function formula_4 onto formula_5 can be defined as the composition formula_6. Analogously, for an inclusion formula_7 the corestriction formula_8 of formula_4 onto formula_9 is the unique function formula_10 such that there is a decomposition formula_11. The corestriction exists if and only if formula_9 contains the image of formula_4. In particular, the corestriction onto the image always exists, and it is sometimes simply called the corestriction of formula_4. More generally, one can consider the corestriction of a morphism in general categories with images. The term is well known in category theory, though rarely used in print. Andreotti introduces the above notion under a different name, and reserves the name corestriction for the notion categorically dual to the notion of a restriction. Namely, if formula_12 is a surjection of sets (that is, a quotient map), then Andreotti considers the composition formula_13, which always exists. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S\\subset A," }, { "math_id": 1, "text": "i_S:S\\hookrightarrow A" }, { "math_id": 2, "text": "f:A\\to B" }, { "math_id": 3, "text": "f|_S:S\\to B" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "f|_S = f\\circ i_S" }, { "math_id": 7, "text": "i_T:T\\hookrightarrow B" }, { "math_id": 8, "text": "f|^T:A\\to T" }, { "math_id": 9, "text": "T" }, { "math_id": 10, "text": "f|^T" }, { "math_id": 11, "text": "f = i_T\\circ f|^T" }, { "math_id": 12, "text": "p^U:B\\to U" }, { "math_id": 13, "text": "p^U\\circ f:A\\to U" } ]
https://en.wikipedia.org/wiki?curid=66118361
66118528
Biorthogonal nearly coiflet basis
In applied mathematics, biorthogonal nearly coiflet bases are wavelet bases proposed by Lowell L. Winger. The wavelet is based on biorthogonal coiflet wavelet bases, but sacrifices its regularity to increase the filter's bandwidth, which might lead to better image compression performance. Motivation. Nowadays, a large amount of information is stored, processed, and delivered, so methods of data compression—especially for images—become more significant. Since wavelet transforms can deal with signals in both the space and frequency domains, they compensate for the deficiency of Fourier transforms and have emerged as a potential technique for image processing. Traditional wavelet filter design prefers filters with high regularity and smoothness to perform image compression. Coiflets are one such kind of filter, which emphasizes the vanishing moments of both the wavelet and scaling function, and can be achieved by maximizing the total number of vanishing moments and distributing them between the analysis and synthesis low pass filters. The property of vanishing moments enables the wavelet series of the signal to be a sparse representation, which is the reason why wavelets can be applied for image compression. Besides orthogonal filter banks, biorthogonal wavelets with maximized vanishing moments have also been proposed. However, regularity and smoothness are not sufficient for excellent image compression. Common filter banks prefer filters with high regularity, flat passbands and stopbands, and a narrow transition zone, while Pixstream Incorporated proposed filters with a wider passband by sacrificing their regularity and passband flatness. Theory. The biorthogonal wavelet base contains two wavelet functions, formula_0 and its dual wavelet formula_1, where formula_0 relates to the lowpass analysis filter formula_2 and the high pass analysis filter formula_3. Similarly, formula_1 relates to the lowpass synthesis filter formula_4 and the high pass synthesis filter formula_5. For a biorthogonal wavelet base, formula_2 and formula_5 are orthogonal; likewise, formula_3 and formula_6 are orthogonal, too. In order to construct a biorthogonal nearly coiflet base, Pixstream Incorporated begins with the (max flat) biorthogonal coiflet base. Decomposing and reconstructing low-pass filters expressed by Bernstein polynomials ensures that the coefficients of the filters are symmetric, which benefits image processing: if the phase of a real-valued function is symmetric, then the function has generalized linear phase, and since the human eye is sensitive to symmetric error, a wavelet base with linear phase is better for image processing applications. Recall that the Bernstein polynomials are defined as below: formula_7 which can be used to represent a polynomial f(x) over the interval formula_8. Besides, the Bernstein form of a general polynomial is expressed by formula_9 where "d"("i") are the Bernstein coefficients. Note that the number of zeros among the Bernstein coefficients determines the vanishing moments of the wavelet functions. By sacrificing a zero of the Bernstein-basis filter at formula_10 (which sacrifices its regularity and flatness), the filter is no longer coiflet but nearly coiflet. Then, the magnitude of the highest-order non-zero Bernstein basis coefficient is increased, which leads to a wider passband. On the other hand, to perform image compression and reconstruction, analysis filters are determined by synthesis filters.
Since the designed filter has a lower regularity, worse flatness and a wider passband, the resulting dual low pass filter has a higher regularity, better flatness and a narrower passband. Besides, if the passband of the starting biorthogonal coiflet is narrower than the target synthesis filter G0, then its passband is widened only enough to match G0 in order to minimize the impact on smoothness (i.e. the analysis filter H0 is not invariably the design filter). Similarly, if the original coiflet is wider than the target G0, then the original filter's passband is adjusted to match the analysis filter H0. Therefore, the analysis and synthesis filters have similar bandwidth. The ringing effect (overshoot and undershoot) and shift-variance of image compression might be alleviated by balancing the passbands of the analysis and synthesis filters. In other words, the smoothest or highest-regularity filters are not always the best choices for synthesis low pass filters. Drawback. The idea of this method is to obtain more free parameters by sacrificing some vanishing moments. However, this technique cannot unify biorthogonal wavelet filter banks with different taps into a closed-form expression based on one degree of freedom. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\psi(t)" }, { "math_id": 1, "text": "\\tilde{\\psi}(t)" }, { "math_id": 2, "text": "H0" }, { "math_id": 3, "text": "G0" }, { "math_id": 4, "text": "\\tilde{H}0" }, { "math_id": 5, "text": "\\tilde{G0}" }, { "math_id": 6, "text": "{\\tilde {H0}}" }, { "math_id": 7, "text": "B^n_k(x)=(^n_k)x^k(1-x)^{n-k} \\text{ for } k=1,2,\\ldots,n," }, { "math_id": 8, "text": "x\\in[0,1]" }, { "math_id": 9, "text": "H1(x)=\\sum^n_{k=0}d(k)(^n_k)x^k(1-x)^{n-k}," }, { "math_id": 10, "text": "\\omega=\\pi" } ]
https://en.wikipedia.org/wiki?curid=66118528
6612077
Lindelöf's lemma
In mathematics, Lindelöf's lemma is a simple but useful lemma in topology on the real line, named for the Finnish mathematician Ernst Leonard Lindelöf. Statement of the lemma. Let the real line have its standard topology. Then every open subset of the real line is a countable union of open intervals. Generalized statement. Lindelöf's lemma is also known as the statement that every open cover in a second-countable space has a countable subcover (Kelley 1955:49). This means that every second-countable space is also a Lindelöf space. Proof of the generalized statement. Let formula_0 be a countable basis of formula_1. Consider an open cover, formula_2. To prepare for the following deduction, we define two sets for convenience, formula_3, formula_4. A straightforward but essential observation is that formula_5 which follows from the definition of a base. Therefore, we get that formula_6 where formula_7, and hence is at most countable. Next, by construction, for each formula_8 there is some formula_9 such that formula_10. We can therefore write formula_11 completing the proof. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\mathcal{F} = \\bigcup_{\\alpha} U_{\\alpha}" }, { "math_id": 3, "text": "B_{\\alpha} := \\left \\{ \\beta \\in B: \\beta \\subset U_{\\alpha} \\right \\}" }, { "math_id": 4, "text": "B':= \\bigcup_{\\alpha} B_{\\alpha}" }, { "math_id": 5, "text": "U_{\\alpha} = \\bigcup_{\\beta \\in B_{\\alpha}} \\beta" }, { "math_id": 6, "text": "\\mathcal{F} = \\bigcup_{\\alpha} U_{\\alpha} = \\bigcup_{\\alpha} \\bigcup_{\\beta \\in B_{\\alpha}} \\beta = \\bigcup_{\\beta \\in B'} \\beta" }, { "math_id": 7, "text": "B' \\subset B" }, { "math_id": 8, "text": "\\beta\\in B'" }, { "math_id": 9, "text": "\\delta_{\\beta}" }, { "math_id": 10, "text": "\\beta\\subset U_{\\delta_{\\beta}}" }, { "math_id": 11, "text": "\\mathcal{F} = \\bigcup_{\\beta\\in B'} U_{\\delta_{\\beta}}" } ]
https://en.wikipedia.org/wiki?curid=6612077
6612381
Glicko rating system
Rating system for players of skill-based games The Glicko rating system and Glicko-2 rating system are methods of assessing a player's strength in zero-sum two-player games. The Glicko rating system was invented by Mark Glickman in 1995 as an improvement on the Elo rating system and was initially intended primarily for use as a chess rating system. Glickman's principal contribution to measurement is "ratings reliability", called RD, for "ratings deviation". Overview. Mark Glickman created the Glicko rating system in 1995 as an improvement on the Elo rating system. Both the Glicko and Glicko-2 rating systems are in the public domain and have been implemented on online game servers such as "Team Fortress 2", "Dota 2", "Guild Wars 2", "Splatoon 2", Lichess and chess.com. The ratings deviation (RD) measures the accuracy of a player's rating, where the RD is equal to one standard deviation. For example, a player with a rating of 1500 and an RD of 50 has a real strength between 1400 and 1600 (two standard deviations from 1500) with 95% confidence. Twice (more precisely, 1.96 times) the RD is added to and subtracted from their rating to calculate this range. After a game, the amount the rating changes depends on the RD: the change is smaller when the player's RD is low (since their rating is already considered accurate), and also when their opponent's RD is high (since the opponent's true rating is not well known, so little information is being gained). The RD itself decreases after playing a game, but it will increase slowly over periods of inactivity. The Glicko-2 rating system improves upon the Glicko rating system and further introduces the "rating volatility" σ. A very slightly modified version of the Glicko-2 rating system is implemented by the Australian Chess Federation. The algorithm of Glicko. Step 1: Determine ratings deviation. The new Ratings Deviation (formula_0) is found using the old Ratings Deviation (formula_1): formula_2 where formula_3 is the amount of time (rating periods) since the last competition and '350' is assumed to be the RD of an unrated player. If several games have occurred within one rating period, the method treats them as having happened simultaneously. The rating period may be as long as several months or as short as a few minutes, according to how frequently games are arranged. The constant formula_4 is based on the uncertainty of a player's skill over a certain amount of time. It can be derived from thorough data analysis, or estimated by considering the length of time that would have to pass before a player's rating deviation would grow to that of an unrated player. If it is assumed that it would take 100 rating periods for a player's rating deviation to return to an initial uncertainty of 350, and that a typical player has a rating deviation of 50, then the constant can be found by solving formula_5 for formula_4. Or formula_6 Step 2: Determine new rating. The new ratings, after a series of m games, are determined by the following equation: formula_7 where: formula_8 formula_9 formula_10 formula_11 formula_12 represents the ratings of the individual opponents. formula_13 represents the rating deviations of the individual opponents. formula_14 represents the outcome of the individual games. A win is 1, a draw is formula_15, and a loss is 0. Step 3: Determine new ratings deviation. The function of the prior RD calculation was to increase the RD appropriately to account for the increasing uncertainty in a player's skill level during a period of non-observation by the model.
Now, the RD is updated (decreased) after the series of games: formula_16 Glicko-2 algorithm. Glicko-2 works in a similar way to the original Glicko algorithm, with the addition of a rating volatility formula_17 which measures the degree of expected fluctuation in a player’s rating, based on how erratic the player's performances are. For instance, a player's rating volatility would be low when they performed at a consistent level, and would increase if they had exceptionally strong results after that period of consistency. A simplified explanation of the Glicko-2 algorithm is presented below: Step 1: Compute ancillary quantities. Across one rating period, a player with a current rating formula_18 and ratings deviation formula_19 plays against formula_20 opponents, with ratings formula_21 and RDs formula_22, resulting in scores formula_23. We first need to compute the ancillary quantities formula_24 and formula_25: formula_26 formula_27 where formula_28 formula_29 Step 2: Determine new rating volatility. We then need to choose a small constant formula_30 which constrains the volatility over time, for instance formula_31 (smaller values of formula_30 prevent dramatic rating changes after upset results). Then, for formula_32 we need to find the value formula_33 which satisfies formula_34. An efficient way of solving this would be to use the Illinois algorithm, a modified version of the "regula falsi" procedure (see for details on how this would be done). Once this iterative procedure is complete, we set the new rating volatility formula_35 as formula_36 Step 3: Determine new ratings deviation and rating. We then get the new RD formula_37 and new rating formula_38 These ratings and RDs are on a different scale than in the original Glicko algorithm, and would need to be converted to properly compare the two.
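For concreteness, the three steps of the original Glicko algorithm can be collected into a short routine. The Python sketch below is an illustration written for this article rather than a reference implementation; the function names, the default value of the constant c, and the choice to pass results as (opponent rating, opponent RD, score) tuples are all assumptions made here. With the inputs shown it reproduces the commonly quoted worked example of a 1500-rated player with an RD of 200 who beats a 1400 (RD 30) and loses to a 1550 (RD 100) and a 1700 (RD 300), giving a new rating near 1464 and a new RD near 151.

import math

Q = math.log(10) / 400          # the constant q, approximately 0.00575646273

def g(rd):
    return 1.0 / math.sqrt(1.0 + 3.0 * Q * Q * rd * rd / math.pi ** 2)

def expected_score(r, r_i, rd_i):
    return 1.0 / (1.0 + 10 ** (-g(rd_i) * (r - r_i) / 400.0))

def glicko_update(r, rd, games, c=34.6, t=1.0):
    """One rating-period update; games is a list of (r_i, RD_i, s_i) with s_i in {1, 0.5, 0}."""
    # Step 1: inflate the RD for t rating periods of inactivity, capped at 350.
    rd = min(math.sqrt(rd * rd + c * c * t), 350.0)
    if not games:
        return r, rd
    # Step 2: compute d^2 and the new rating.
    d2 = 1.0 / (Q * Q * sum(
        g(rd_i) ** 2 * expected_score(r, r_i, rd_i) * (1.0 - expected_score(r, r_i, rd_i))
        for r_i, rd_i, _ in games))
    total = sum(g(rd_i) * (s_i - expected_score(r, r_i, rd_i)) for r_i, rd_i, s_i in games)
    new_r = r + Q / (1.0 / rd ** 2 + 1.0 / d2) * total
    # Step 3: the RD decreases to reflect the information gained from the games.
    new_rd = math.sqrt(1.0 / (1.0 / rd ** 2 + 1.0 / d2))
    return new_r, new_rd

# Worked example (the RD of 200 is taken as the already-inflated Step 1 value, so t = 0).
print(glicko_update(1500, 200, [(1400, 30, 1), (1550, 100, 0), (1700, 300, 0)], t=0.0))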
[ { "math_id": 0, "text": "RD" }, { "math_id": 1, "text": "RD_0" }, { "math_id": 2, "text": "RD = \\min\\left(\\sqrt{{RD_0}^2 + c^2 t},350\\right)" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "c" }, { "math_id": 5, "text": "350 = \\sqrt{50^2 +100c^2}" }, { "math_id": 6, "text": "c = \\sqrt{(350^2 - 50^2)/100} \\approx 34.6" }, { "math_id": 7, "text": "r = r_0 + \\frac{q}{\\frac{1}{RD^2} + \\frac{1}{d^2}}\\sum_{i=1}^{m}{g(RD_i)(s_i-E(s|r_0,r_i,RD_i))}" }, { "math_id": 8, "text": "g(RD_i) = \\frac{1}{\\sqrt{1 + \\frac{3 q^2 (RD_i^2)}{\\pi^2} }}" }, { "math_id": 9, "text": "E(s|r_0,r_i,RD_i) = \\frac{1}{1+10^{\\left(\\frac{g(RD_i)(r_0-r_i)}{-400}\\right)}}" }, { "math_id": 10, "text": "q = \\frac{\\ln(10)}{400} = 0.00575646273" }, { "math_id": 11, "text": "d^2 = \\frac{1}{q^2 \\sum_{i=1}^{m}{(g(RD_i))^2 E(s|r_0,r_i,RD_i) (1-E(s|r_0,r_i,RD_i))}}" }, { "math_id": 12, "text": "r_i" }, { "math_id": 13, "text": "RD_i" }, { "math_id": 14, "text": "s_i" }, { "math_id": 15, "text": "\\frac{1}{2}" }, { "math_id": 16, "text": "RD'=\\sqrt{\\left(\\frac{1}{RD^2}+\\frac{1}{d^2}\\right)^{-1}}" }, { "math_id": 17, "text": "\\sigma" }, { "math_id": 18, "text": "\\mu" }, { "math_id": 19, "text": "\\phi" }, { "math_id": 20, "text": "m" }, { "math_id": 21, "text": "\\mu_1,...,\\mu_m" }, { "math_id": 22, "text": "\\phi_1,...,\\phi_m" }, { "math_id": 23, "text": "s_1,...,s_m" }, { "math_id": 24, "text": "v" }, { "math_id": 25, "text": "\\Delta" }, { "math_id": 26, "text": "v = \\left[ \\sum_{j=1}^m g(\\phi_j)^2 E(\\mu,\\mu_j,\\phi_j) \\{1 - E(\\mu,\\mu_j,\\phi_j)\\} \\right]^{-1}" }, { "math_id": 27, "text": "\\Delta = v \\sum_{j=1}^m g(\\phi_j) \\{s_j - E(\\mu,\\mu_j,\\phi_j)\\}" }, { "math_id": 28, "text": "g(\\phi_j) = \\frac{1}{\\sqrt{1 + 3\\phi_j^2/\\pi^2}}," }, { "math_id": 29, "text": "E(\\mu,\\mu_j,\\phi_j) = \\frac{1}{1 + \\exp\\{-g(\\phi_j)(\\mu - \\mu_j)\\}}." }, { "math_id": 30, "text": "\\tau" }, { "math_id": 31, "text": "\\tau=0.2" }, { "math_id": 32, "text": " f(x) = \\frac{1}{2}\\frac{e^x(\\Delta^2 - \\phi^2 - v - e^x)}{(\\phi^2 + v + e^x)^2} - \\frac{x-\\ln({\\sigma^2})}{\\tau^2}," }, { "math_id": 33, "text": "A" }, { "math_id": 34, "text": "f(A)=0" }, { "math_id": 35, "text": "\\sigma'" }, { "math_id": 36, "text": "\\sigma' = \\exp\\{A/2\\}." }, { "math_id": 37, "text": "\\phi' = 1\\Big/\\sqrt{\\frac{1}{\\phi^2 + \\sigma'^2} + \\frac{1}{v}}," }, { "math_id": 38, "text": "\\mu' = \\mu + \\phi'^2 \\sum_{j=1}^m g(\\phi_j) \\{s_j - E(\\mu,\\mu_j,\\phi_j)\\}." } ]
https://en.wikipedia.org/wiki?curid=6612381
66125330
Quantum logic spectroscopy
Ion control scheme Quantum logic spectroscopy (QLS) is an ion control scheme that maps quantum information between two co-trapped ion species. Quantum logic operations allow desirable properties of each ion species to be utilized simultaneously. This enables work with ions and molecular ions that have complex internal energy level structures which preclude laser cooling and direct manipulation of state. QLS was first demonstrated by NIST in 2005. QLS was first applied to state detection in diatomic molecules in 2016 by Wolf et al, and later applied to state manipulation and detection of diatomic molecules by the Liebfried group at NIST in 2017 Overview. Lasers are used to couple each ion's internal and external motional degrees of freedom. The Coulomb interaction between the two ions couples their motion. This allows the internal state of one ion to be transferred to the other. An auxiliary "logic ion" provides cooling, state preparation, and state detection for the co-trapped "spectroscopy ion," which has an electronic transition of interest. The logic ion is used to sense and control the internal and external state of the spectroscopy ion. The logic ion is selected to have a simple energy level structure that can be directly laser cooled, often an alkaline earth ion. The laser cooled logic ion provides sympathetic cooling to the spectroscopy ion, which lacks an efficient laser cooling scheme. Cooling the spectroscopy ion reduces the number of rotational and vibrational states that it can occupy. The remaining states are then accessed by driving stimulated Raman spectroscopy transitions with a laser. The light used for driving these transitions is far off-resonant from any electronic transitions. This enables control over the spectroscopy ion's rotational and vibrational state. Thus far, QLS is limited to diatomic molecules with a mass within 1 AMU of the laser cooled "logic" ion. This is largely due to poorer coupling of the motional states of the occupants of the ion trap as the mass mismatch becomes larger. Other techniques more tolerant of large mass mismatches are better suited to cases where the ultimate resolution of QLS is not needed, but single-molecule sensitivity is still desired. State transfer protocol. The internal states of each ion can be treated as a two level system, with eigenstates denoted formula_0 and formula_1. One of the ion's normal modes is chosen to be the transfer mode used for state mapping. This motional mode must be shared by both ions, which requires both ions be similar in mass. The normal mode has harmonic oscillator states denoted as formula_2, where n is the nth level of mode m. The wave function formula_3 denotes both ions and the transfer mode in the ground state. S and L represent the spectroscopy and logic ion. The spectroscopy ion's spectroscopy transition is then excited with a laser, producing the state: formula_4 A red sideband pi-pulse is then driven on the spectroscopy ion, resulting in the state: formula_5 At this stage, the spectroscopy ion's internal state has been mapped on to the transfer mode. The internal state of the ion has been coupled to its motional mode. The formula_6 state is unaffected by the pulse of light carrying out this operation because the state formula_7 does not exist. QLS takes advantage of this in order to map the spectroscopy ion's state onto the transfer mode. 
A final red sideband pi-pulse is applied to the logic ion, resulting in the state: formula_8 The spectroscopy ion's initial state has been mapped onto the logic ion, which can then be detected. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "| \\uparrow\\ \\rangle," }, { "math_id": 1, "text": "| \\downarrow\\ \\rangle" }, { "math_id": 2, "text": "| n \\rangle_m " }, { "math_id": 3, "text": "| \\phi\\ \\rangle_{0} = | \\downarrow\\ \\rangle_S | \\downarrow\\ \\rangle_L | 0 \\rangle_m ," }, { "math_id": 4, "text": "| \\phi\\ \\rangle_{1} = (\\alpha\\ | \\downarrow\\ \\rangle_S + \\beta\\ | \\uparrow\\ \\rangle_S) | \\downarrow\\ \\rangle_L | 0 \\rangle_m = (\\alpha\\ | \\downarrow\\ \\rangle_S | 0 \\rangle_m + \\beta\\ | \\uparrow\\ \\rangle_S | 0 \\rangle_m) | \\downarrow\\ \\rangle_L " }, { "math_id": 5, "text": "| \\phi\\ \\rangle_{2} = (\\alpha\\ | \\downarrow\\ \\rangle_S | 0 \\rangle_m + \\beta\\ | \\downarrow\\ \\rangle_S) | 1 \\rangle_m) | \\downarrow\\ \\rangle_L = | \\downarrow\\ \\rangle_S | \\downarrow\\ \\rangle_L (\\alpha\\ | 0 \\rangle_m + \\beta\\ | 1 \\rangle_m) " }, { "math_id": 6, "text": " | \\downarrow\\ \\rangle_S | 0 \\rangle_m " }, { "math_id": 7, "text": " | \\uparrow\\ \\rangle_S | -1 \\rangle_m " }, { "math_id": 8, "text": "| \\phi\\ \\rangle_{final} = | \\downarrow\\ \\rangle_S (\\alpha\\ | \\downarrow\\ \\rangle_L + \\beta\\ | \\uparrow\\ \\rangle_L) | 0 \\rangle_m " } ]
https://en.wikipedia.org/wiki?curid=66125330
66125656
Enumeration reducibility
In computability theory, enumeration reducibility (or e-reducibility for short) is a specific type of reducibility. Roughly speaking, "A" is enumeration-reducible to "B" if an enumeration of "B" can be algorithmically converted to an enumeration of "A". In particular, if "B" is computably enumerable, then "A" also is. Introduction. In one of the possible formalizations of the concept, a Turing reduction from "A" to "B" is a Turing machine augmented with a special instruction "query the oracle". This instruction takes an integer "x" and instantly returns whether "x" belongs to "B". The oracle machine should compute "A", possibly using this special capability to decide "B". Informally, the existence of a Turing reduction from "A" to "B" means that if it is possible to decide "B", then this can be used to decide "A". Enumeration reducibility is a variant whose informal explanation is, instead, that if it is possible to enumerate "B", then this can be used to enumerate "A". The reduction can be defined by a Turing machine with a special oracle query instruction which takes no parameter, and either returns a new element of "B", or returns no output. The oracle supplies the elements of "B" in any order. It can possibly return no output for some queries before resuming the enumeration. The machine should similarly enumerate the members of "A", in any order and at any pace. Repetitions in the enumerations of "A" and "B" may be permitted or not; the concept is equivalent in both cases. Although this could be made precise, the definition given below is more common since it is formally simpler. Enumeration reducibility is a form of positive reducibility, in the sense that the oracle machine receives information about which elements are in "B" (positive information), but no information about which elements are "not" in "B" (negative information). Indeed, if an element has not been listed yet, the oracle machine cannot know whether it will be listed later, or never. The concept of enumeration reducibility was first introduced by the results of John Myhill, which concluded that "a set is many-one complete if and only if it is recursively enumerable and its complement is productive". This result extends to enumeration reducibility as well. Enumeration reducibility was later formally codified by Rogers and his collaborator Richard M. Friedberg in "Zeitschrift für mathematische Logik und Grundlagen der Mathematik" (the predecessor of "Mathematical Logic Quarterly") in 1959. Definition. Let formula_0 be a standard numbering of finite subsets of formula_1, and let formula_2 be a standard pairing function. A set formula_3 is enumeration reducible to a set formula_4 if there exists a computably enumerable set formula_5 such that for all formula_6, formula_7 When "A" is enumeration reducible to "B", we write formula_8. The relation formula_9 is a preorder. Its associated equivalence relation is denoted by formula_10. formula_13 Variants. Strong enumeration reducibilities. In addition to enumeration reducibility, there exist strong versions, the most important one being "s"-reducibility (named after Robert M. Solovay). "S"-reducibility states that a computably enumerable real set formula_11 is "s"-reducible to another computably enumerable real set formula_12 if formula_12 is at least as difficult to approximate as formula_11. This method shows similarity to "e"-reducibility in that it compares the elements of multiple sets. In addition, the structure of "s"-degrees has natural analogs in the enumeration degrees.
The reasoning for using "s"-reducibility is summarized by Omandaze and Sorbi as a result of positive reducibility models being unable to answer certain oracle questions (e.g. an answer to "Is formula_17?" is only given if formula_17, and is not true for the inverse.) because they inherently model computational situations where incomplete oracle information is available. This is in contrast from the well-studied Turing reducibility, in which information is captured in both negative and positive values. In addition, T-reducibility uses information that is provided immediately and without delay. A strong reducibility is utilized in order to prevent problems occurring when incomplete information is supplied. Partial functions. "E"-reducibility can be defined for partial functions as well. Writing graphformula_18 formula_19 formula_20 , etc., we can define for partial functions formula_21: formula_22 graphformula_23 graphformula_24 Kleene's recursion theorem introduces the notion of "relative partial recursiveness", which, by means of systems of equations, can demonstrate equivalence through formula_9 between graphs of partial functions. "E"-reducibility relates to "relative partial recursiveness" in the same way that T-reducibility relates to μ-recursiveness. Further reading. "Introduction to Metamathematics" "Theory of Recursive Functions and Effective Computability" "Enumeration Reducibility and Polynomial Time"
[ { "math_id": 0, "text": "(D_u)" }, { "math_id": 1, "text": "\\mathbb{N}" }, { "math_id": 2, "text": "\\langle \\bullet, \\bullet \\rangle" }, { "math_id": 3, "text": "A \\subseteq \\mathbb{N}" }, { "math_id": 4, "text": "B \\subseteq \\mathbb{N}" }, { "math_id": 5, "text": "W" }, { "math_id": 6, "text": "x \\in \\mathbb{N}" }, { "math_id": 7, "text": "x \\in A \\iff \\exists u, (D_u \\subseteq B \\land \\langle x, D \\rangle \\in W)" }, { "math_id": 8, "text": "A \\leq_e B" }, { "math_id": 9, "text": "\\leq_e" }, { "math_id": 10, "text": "\\equiv_e" }, { "math_id": 11, "text": "A" }, { "math_id": 12, "text": "B" }, { "math_id": 13, "text": "A\\oplus B=\\{2n:n\\in A\\}\\cup\\{2n+1:n\\in B\\}." }, { "math_id": 14, "text": "A^+" }, { "math_id": 15, "text": "B^+" }, { "math_id": 16, "text": "A^+ := A \\oplus \\overline{A}" }, { "math_id": 17, "text": "x\\in A" }, { "math_id": 18, "text": "(f)=\\{\\langle x,y\\rangle" }, { "math_id": 19, "text": "|" }, { "math_id": 20, "text": "f(x)=y\\}" }, { "math_id": 21, "text": "f,g" }, { "math_id": 22, "text": "f\\leq_eg\\Leftrightarrow" }, { "math_id": 23, "text": "(f)\\leq_e" }, { "math_id": 24, "text": "(g)." } ]
https://en.wikipedia.org/wiki?curid=66125656
6612581
Hilbert scheme
Moduli scheme of subschemes of a scheme, represents the flat-family-of-subschemes functor In algebraic geometry, a branch of mathematics, a Hilbert scheme is a scheme that is the parameter space for the closed subschemes of some projective space (or a more general projective scheme), refining the Chow variety. The Hilbert scheme is a disjoint union of projective subschemes corresponding to Hilbert polynomials. The basic theory of Hilbert schemes was developed by Alexander Grothendieck (1961). Hironaka's example shows that non-projective varieties need not have Hilbert schemes. Hilbert scheme of projective space. The Hilbert scheme formula_0 of formula_1 classifies closed subschemes of projective space in the following sense: For any locally Noetherian scheme S, the set of S-valued points formula_2 of the Hilbert scheme is naturally isomorphic to the set of closed subschemes of formula_3 that are flat over S. The closed subschemes of formula_3 that are flat over S can informally be thought of as the families of subschemes of projective space parameterized by S. The Hilbert scheme formula_0 breaks up as a disjoint union of pieces formula_4 corresponding to the Hilbert scheme of the subschemes of projective space with Hilbert polynomial P. Each of these pieces is projective over formula_5. Construction as a determinantal variety. Grothendieck constructed the Hilbert scheme formula_0 of formula_6-dimensional projective formula_1 space as a subscheme of a Grassmannian defined by the vanishing of various determinants. Its fundamental property is that for a scheme formula_7, it represents the functor whose formula_7-valued points are the closed subschemes of formula_8 that are flat over formula_7. If formula_9 is a subscheme of formula_6-dimensional projective space, then formula_9 corresponds to a graded ideal formula_10 of the polynomial ring formula_11 in formula_12 variables, with graded pieces formula_13. For sufficiently large formula_14 all higher cohomology groups of formula_9 with coefficients in formula_15 vanish. Using the exact sequenceformula_16we have formula_17 has dimension formula_18, where formula_19 is the Hilbert polynomial of projective space. This can be shown by tensoring the exact sequence above by the locally flat sheaves formula_20, giving an exact sequence where the latter two terms have trivial cohomology, implying the triviality of the higher cohomology of formula_21. Note that we are using the equality of the Hilbert polynomial of a coherent sheaf with the Euler-characteristic of its sheaf cohomology groups. Pick a sufficiently large value of formula_14. The formula_22-dimensional space formula_13 is a subspace of the formula_23-dimensional space formula_24, so represents a point of the Grassmannian formula_25. This will give an embedding of the piece of the Hilbert scheme corresponding to the Hilbert polynomial formula_26 into this Grassmannian. It remains to describe the scheme structure on this image, in other words to describe enough elements for the ideal corresponding to it. Enough such elements are given by the conditions that the map "IX"("m") ⊗ "S"("k") → "S"("k" + "m") has rank at most dim("IX"("k" + "m")) for all positive k, which is equivalent to the vanishing of various determinants. (A more careful analysis shows that it is enough just to take "k" 1.) Universality. 
Given a closed subscheme formula_27 over a field with Hilbert polynomial formula_28, the Hilbert scheme H=Hilb("n", "P") has a universal subscheme formula_29 flat over formula_30 such that Tangent space. The tangent space of the point formula_35 is given by the global sections of the normal bundle formula_39; that is, formula_40 Unobstructedness of complete intersections. For local complete intersections formula_41 such that formula_42, the point formula_43 is smooth. This implies every deformation of formula_41 in formula_9 is unobstructed. Dimension of tangent space. In the case formula_44, the dimension of formula_30 at formula_45 is greater than or equal to formula_46. In addition to these properties, Francis Sowerby Macaulay (1927) determined for which polynomials the Hilbert scheme formula_4 is non-empty, and Robin Hartshorne (1966) showed that if formula_4 is non-empty then it is linearly connected. So two subschemes of projective space are in the same connected component of the Hilbert scheme if and only if they have the same Hilbert polynomial. Hilbert schemes can have bad singularities, such as irreducible components that are non-reduced at all points. They can also have irreducible components of unexpectedly high dimension. For example, one might expect the Hilbert scheme of d points (more precisely dimension 0, length d subschemes) of a scheme of dimension n to have dimension dn, but if "n" ≥ 3 its irreducible components can have much larger dimension. Functorial interpretation. There is an alternative interpretation of the Hilbert scheme which leads to a generalization of relative Hilbert schemes parameterizing subschemes of a relative scheme. For a fixed base scheme formula_11, let formula_47 and letformula_48be the functor sending a relative scheme formula_49 to the set of isomorphism classes of the setformula_50where the equivalence relation is given by the isomorphism classes of formula_51. This construction is functorial by taking pullbacks of families. Given formula_52, there is a family formula_53 over formula_54. Representability for projective maps. If the structure map formula_55 is projective, then this functor is represented by the Hilbert scheme constructed above. Generalizing this to the case of maps of finite type requires the technology of algebraic spaces developed by Artin. Relative Hilbert scheme for maps of algebraic spaces. In its greatest generality, the Hilbert functor is defined for a finite type map of algebraic spaces formula_56 defined over a scheme formula_11. Then, the Hilbert functor is defined as formula_57 sending "T" to formula_58. This functor is not representable by a scheme, but by an algebraic space. Also, if formula_59, and formula_60 is a finite type map of schemes, their Hilbert functor is represented by an algebraic space. Examples of Hilbert schemes. Fano schemes of hypersurfaces. One of the motivating examples for the investigation of the Hilbert scheme in general was the Fano scheme of a projective scheme. Given a subscheme formula_61 of degree formula_62, there is a scheme formula_63 in formula_64 parameterizing formula_65 where formula_30 is a formula_66-plane in formula_1, meaning it is a degree one embedding of formula_67. For smooth surfaces in formula_68 of degree formula_69, the non-empty Fano schemes formula_63 are smooth and zero-dimensional. This is because lines on smooth surfaces have negative self-intersection. Hilbert scheme of points. 
Another common set of examples are the Hilbert schemes of formula_6-points of a scheme formula_9, typically denoted formula_70. For formula_71 there is a nice geometric interpretation where the boundary loci formula_72 describing the intersection of points can be thought of parametrizing points along with their tangent vectors. For example, formula_73 is the blowup formula_74 of the diagonal modulo the symmetric action. Degree d hypersurfaces. The Hilbert scheme of degree k hypersurfaces in formula_1 is given by the projectivization formula_75. For example, the Hilbert scheme of degree 2 hypersurfaces in formula_76 is formula_71 with the universal hypersurface given by formula_77 where the underlying ring is bigraded. Hilbert scheme of curves and moduli of curves. For a fixed genus formula_78 algebraic curve formula_79, the degree of the tri-tensored dualizing sheaf formula_80 is globally generated, meaning its Euler characteristic is determined by the dimension of the global sections, so formula_81. The dimension of this vector space is formula_82, hence the global sections of formula_80 determine an embedding into formula_83 for every genus formula_78 curve. Using the Riemann-Roch formula, the associated Hilbert polynomial can be computed as formula_84. Then, the Hilbert scheme formula_85 parameterizes all genus "g" curves. Constructing this scheme is the first step in the construction of the moduli stack of algebraic curves. The other main technical tool are GIT quotients, since this moduli space is constructed as the quotient formula_86, where formula_87 is the sublocus of smooth curves in the Hilbert scheme. Hilbert scheme of points on a manifold. "Hilbert scheme" sometimes refers to the punctual Hilbert scheme of 0-dimensional subschemes on a scheme. Informally this can be thought of as something like finite collections of points on a scheme, though this picture can be very misleading when several points coincide. There is a Hilbert–Chow morphism from the reduced Hilbert scheme of points to the Chow variety of cycles taking any 0-dimensional scheme to its associated 0-cycle. (Fogarty 1968, 1969, 1973). The Hilbert scheme formula_88 of n points on M is equipped with a natural morphism to an n-th symmetric product of M. This morphism is birational for M of dimension at most 2. For M of dimension at least 3 the morphism is not birational for large n: the Hilbert scheme is in general reducible and has components of dimension much larger than that of the symmetric product. The Hilbert scheme of points on a curve C (a dimension-1 complex manifold) is isomorphic to a symmetric power of C. It is smooth. The Hilbert scheme of n points on a surface is also smooth (Grothendieck). If formula_89, it is obtained from formula_90 by blowing up the diagonal and then dividing by the formula_91 action induced by formula_92. This was used by Mark Haiman in his proof of the positivity of the coefficients of some Macdonald polynomials. The Hilbert scheme of a smooth manifold of dimension 3 or more is usually not smooth. Hilbert schemes and hyperkähler geometry. Let M be a complex Kähler surface with formula_93 (K3 surface or a torus). The canonical bundle of M is trivial, as follows from the Kodaira classification of surfaces. Hence M admits a holomorphic symplectic form. It was observed by Akira Fujiki (for formula_89) and Arnaud Beauville that formula_88 is also holomorphically symplectic. This is not very difficult to see, e.g., for formula_89. 
Indeed, formula_94 is a blow-up of a symmetric square of M. Singularities of formula_95 are locally isomorphic to formula_96. The blow-up of formula_97 is formula_98, and this space is symplectic. This is used to show that the symplectic form is naturally extended to the smooth part of the exceptional divisors of formula_88. It is extended to the rest of formula_88 by Hartogs' principle. A holomorphically symplectic, Kähler manifold is hyperkähler, as follows from the Calabi–Yau theorem. Hilbert schemes of points on the K3 surface and on a 4-dimensional torus give two series of examples of hyperkähler manifolds: a Hilbert scheme of points on K3 and a generalized Kummer surface. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
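The Riemann–Roch computation quoted in the section on curves above can be spelled out in a few lines. The following is a sketch, assuming "g" ≥ 2 so that the tri-canonical sheaf has no higher cohomology and is very ample.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Worked check of the Hilbert polynomial of a tri-canonically embedded curve.
For a smooth curve $C$ of genus $g \ge 2$ one has
$\deg \omega_C^{\otimes 3} = 3(2g-2) = 6g-6 > 2g-2$,
so $H^1(C,\omega_C^{\otimes 3m}) = 0$ for $m \ge 1$ and Riemann--Roch gives
\begin{align*}
  h^0\bigl(C,\omega_C^{\otimes 3}\bigr) &= (6g-6) + 1 - g = 5g-5,
     \qquad\text{hence an embedding } C \hookrightarrow \mathbb{P}^{5g-6},\\
  H_C(m) = \chi\bigl(\omega_C^{\otimes 3m}\bigr) &= m(6g-6) + 1 - g = 6(g-1)m + (1-g).
\end{align*}
\end{document}
```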
[ { "math_id": 0, "text": "\\mathbf{Hilb}(n)" }, { "math_id": 1, "text": "\\mathbb{P}^n" }, { "math_id": 2, "text": "\\operatorname{Hom}(S, \\mathbf{Hilb}(n))" }, { "math_id": 3, "text": "\\mathbb{P}^n \\times S" }, { "math_id": 4, "text": "\\mathbf{Hilb}(n, P)" }, { "math_id": 5, "text": "\\operatorname{Spec}(\\Z)" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "T" }, { "math_id": 8, "text": "\\mathbb{P}^n \\times T" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "I_X^\\bullet" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "n+1" }, { "math_id": 13, "text": "I_X^m" }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "\\mathcal{O}(m)" }, { "math_id": 16, "text": "0 \\to I_X \\to \\mathcal{O}_{\\mathbb{P}^n} \\to \\mathcal{O}_X \\to 0" }, { "math_id": 17, "text": "I_X^m = \\Gamma(I_X\\otimes \\mathcal{O}_{\\mathbb{P}^n}(m))" }, { "math_id": 18, "text": "Q(m) - P_X(m)" }, { "math_id": 19, "text": "Q" }, { "math_id": 20, "text": "\\mathcal{O}_{\\mathbb{P}^n}(m)" }, { "math_id": 21, "text": "I_X(m)" }, { "math_id": 22, "text": "(Q(m) - P_X(m))" }, { "math_id": 23, "text": "Q(m)" }, { "math_id": 24, "text": "S^m" }, { "math_id": 25, "text": "\\textbf{Gr}(Q(m)-P_X(m), Q(m))" }, { "math_id": 26, "text": "P_X" }, { "math_id": 27, "text": "Y \\subset \\mathbb{P}^n_k=X" }, { "math_id": 28, "text": "P" }, { "math_id": 29, "text": "W \\subset X \\times H" }, { "math_id": 30, "text": "H" }, { "math_id": 31, "text": "W_x" }, { "math_id": 32, "text": "x \\in H" }, { "math_id": 33, "text": "Y \\subset X" }, { "math_id": 34, "text": "x" }, { "math_id": 35, "text": "[Y] \\in H" }, { "math_id": 36, "text": "W' \\subset X\\times T" }, { "math_id": 37, "text": "\\phi: T \\to H" }, { "math_id": 38, "text": "\\phi^*W \\cong W'" }, { "math_id": 39, "text": "N_{Y/X}" }, { "math_id": 40, "text": "T_{[Y]}H = H^0(Y, N_{Y/X})" }, { "math_id": 41, "text": "Y" }, { "math_id": 42, "text": "H^1(Y,N_{X/Y}) = 0" }, { "math_id": 43, "text": "[Y]\\in H" }, { "math_id": 44, "text": "H^1(Y,N_{X/Y}) \\neq 0" }, { "math_id": 45, "text": "[Y]" }, { "math_id": 46, "text": "h^0(Y,N_{X/Y}) - h^1(Y,N_{X/Y})" }, { "math_id": 47, "text": "X \\in (Sch/S)" }, { "math_id": 48, "text": "\\underline{\n \\text{Hilb}\n}_{X/S}:(Sch/S)^{op} \\to Sets" }, { "math_id": 49, "text": "T \\to S" }, { "math_id": 50, "text": "\\underline{\n \\text{Hilb}\n}_{X/S}(T) = \\left\\{\n\\begin{matrix}\nZ & \\hookrightarrow & X \\times_S T & \\to & X \\\\\n\\downarrow & & \\downarrow & & \\downarrow \\\\\nT & = & T & \\to & S\n\\end{matrix}\n: Z \\to T \\text{ is flat}\n\\right\\} / \\sim " }, { "math_id": 51, "text": "Z" }, { "math_id": 52, "text": "f: T' \\to T" }, { "math_id": 53, "text": "f^*Z = Z\\times_TT'" }, { "math_id": 54, "text": "T'" }, { "math_id": 55, "text": "X \\to S" }, { "math_id": 56, "text": "f\\colon X \\to B" }, { "math_id": 57, "text": "\\underline{\\text{Hilb}}_{X/B}:(Sch/B)^{op} \\to Sets" }, { "math_id": 58, "text": "\\underline{\\text{Hilb}}_{X/B}(T) = \\left\\{ Z \\subset X\\times_BT : \n\\begin{align}\n&Z \\to T \\text{ is flat, proper,} \\\\\n&\\text{and of finite presentation}\n\\end{align}\n\\right\\}" }, { "math_id": 59, "text": "S = \\text{Spec}(\\Z)" }, { "math_id": 60, "text": "X\\to B" }, { "math_id": 61, "text": "X \\subset \\mathbb{P}^n" }, { "math_id": 62, "text": "d" }, { "math_id": 63, "text": "F_k(X)" }, { "math_id": 64, "text": "\\mathbb{G}(k, n)" }, { "math_id": 65, "text": "H \\subset X \\subset \\mathbb{P}^n" }, { "math_id": 66, "text": "k" }, { "math_id": 67, "text": 
"\\mathbb{P}^k" }, { "math_id": 68, "text": "\\mathbb{P}^3" }, { "math_id": 69, "text": "d \\geq 3" }, { "math_id": 70, "text": "X^{[n]}" }, { "math_id": 71, "text": "\\mathbb{P}^2" }, { "math_id": 72, "text": "B \\subset H" }, { "math_id": 73, "text": "(\\mathbb{P}^2)^{[2]}" }, { "math_id": 74, "text": "Bl_{\\Delta}(\\mathbb{P}^2\\times\\mathbb{P}^2/S_2)" }, { "math_id": 75, "text": "\\mathbb{P}(\\Gamma(\\mathcal{O}(k)))" }, { "math_id": 76, "text": "\\mathbb{P}^1" }, { "math_id": 77, "text": "\\text{Proj}(k[x_0,x_1][\\alpha,\\beta,\\gamma]/(\\alpha x_0^2 + \\beta x_0x_1 + \\gamma x_1^2)) \\subseteq \\mathbb{P}_{x_0,x_1}^1\\times\\mathbb{P}^2_{\\alpha,\\beta,\\gamma}" }, { "math_id": 78, "text": "g" }, { "math_id": 79, "text": "C" }, { "math_id": 80, "text": "\\omega_C^{\\otimes 3}" }, { "math_id": 81, "text": "\\chi(\\omega_C^{\\otimes 3}) = \\dim H^0(C,\\omega_X^{\\otimes 3})" }, { "math_id": 82, "text": "5g-5" }, { "math_id": 83, "text": "\\mathbb{P}^{5g-6}" }, { "math_id": 84, "text": "H_C(t) = 6(g-1)t + (1-g)" }, { "math_id": 85, "text": "\\text{Hilb}_{\\mathbb{P}^{5g-6}}^{H_C(t)}" }, { "math_id": 86, "text": "\\mathcal{M}_g = [U_g/GL_{5g-6}]" }, { "math_id": 87, "text": "U_g" }, { "math_id": 88, "text": "M^{[n]}" }, { "math_id": 89, "text": "n=2" }, { "math_id": 90, "text": "M\\times M" }, { "math_id": 91, "text": "\\Z/2\\Z" }, { "math_id": 92, "text": "(x,y) \\mapsto (y,x)" }, { "math_id": 93, "text": "c_1= 0" }, { "math_id": 94, "text": "M^{[2]}" }, { "math_id": 95, "text": "\\operatorname{Sym}^2 M" }, { "math_id": 96, "text": "\\Complex^2 \\times \\Complex^2/\\{\\pm 1\\}" }, { "math_id": 97, "text": "\\Complex^2/\\{\\pm 1\\}" }, { "math_id": 98, "text": "T^{*}\\mathbb{P}^{1}(\\Complex)" } ]
https://en.wikipedia.org/wiki?curid=6612581
6612596
Hilbert series and Hilbert polynomial
Tool in mathematical dimension theory In commutative algebra, the Hilbert function, the Hilbert polynomial, and the Hilbert series of a graded commutative algebra finitely generated over a field are three strongly related notions which measure the growth of the dimension of the homogeneous components of the algebra. These notions have been extended to filtered algebras, and graded or filtered modules over these algebras, as well as to coherent sheaves over projective schemes. The typical situations where these notions are used are the following: The Hilbert series of an algebra or a module is a special case of the Hilbert–Poincaré series of a graded vector space. The Hilbert polynomial and Hilbert series are important in computational algebraic geometry, as they are the easiest known way for computing the dimension and the degree of an algebraic variety defined by explicit polynomial equations. In addition, they provide useful invariants for families of algebraic varieties because a flat family formula_0 has the same Hilbert polynomial over any closed point formula_1. This is used in the construction of the Hilbert scheme and Quot scheme. Definitions and main properties. Consider a finitely generated graded commutative algebra "S" over a field "K", which is finitely generated by elements of positive degree. This means that formula_2 and that formula_3. The Hilbert function formula_4 maps the integer "n" to the dimension of the "K"-vector space "S""n". The Hilbert series, which is called Hilbert–Poincaré series in the more general setting of graded vector spaces, is the formal series formula_5 If "S" is generated by "h" homogeneous elements of positive degrees formula_6, then the sum of the Hilbert series is a rational fraction formula_7 where "Q" is a polynomial with integer coefficients. If "S" is generated by elements of degree 1 then the sum of the Hilbert series may be rewritten as formula_8 where "P" is a polynomial with integer coefficients, and formula_9 is the Krull dimension of S. In this case the series expansion of this rational fraction is formula_10 where formula_11 is the binomial coefficient for formula_12 and is 0 otherwise. If formula_13 the coefficient of formula_14 in formula_15 is thus formula_16 For formula_17 the term of index i in this sum is a polynomial in n of degree formula_18 with leading coefficient formula_19 This shows that there exists a unique polynomial formula_20 with rational coefficients which is equal to formula_21 for n large enough. This polynomial is the Hilbert polynomial, and has the form formula_22 The least "n"0 such that formula_23 for "n" ≥ "n"0 is called the Hilbert regularity. It may be lower than formula_24. The Hilbert polynomial is a numerical polynomial, since the dimensions are integers, but the polynomial almost never has integer coefficients . All these definitions may be extended to finitely generated graded modules over "S", with the only difference that a factor "tm" appears in the Hilbert series, where "m" is the minimal degree of the generators of the module, which may be negative. The Hilbert function, the Hilbert series and the Hilbert polynomial of a filtered algebra are those of the associated graded algebra. The Hilbert polynomial of a projective variety "V" in P"n" is defined as the Hilbert polynomial of the homogeneous coordinate ring of "V". Graded algebra and polynomial rings. Polynomial rings and their quotients by homogeneous ideals are typical graded algebras. 
Conversely, if "S" is a graded algebra generated over the field "K" by "n" homogeneous elements "g"1, ..., "g""n" of degree 1, then the map which sends "X""i" onto "g""i" defines an homomorphism of graded rings from formula_25 onto "S". Its kernel is a homogeneous ideal "I" and this defines an isomorphism of graded algebra between formula_26 and "S". Thus, the graded algebras generated by elements of degree 1 are exactly, up to an isomorphism, the quotients of polynomial rings by homogeneous ideals. Therefore, the remainder of this article will be restricted to the quotients of polynomial rings by ideals. Properties of Hilbert series. Additivity. Hilbert series and Hilbert polynomial are additive relatively to exact sequences. More precisely, if formula_27 is an exact sequence of graded or filtered modules, then we have formula_28 and formula_29 This follows immediately from the same property for the dimension of vector spaces. Quotient by a non-zero divisor. Let "A" be a graded algebra and "f" a homogeneous element of degree "d" in "A" which is not a zero divisor. Then we have formula_30 It follows from the additivity on the exact sequence formula_31 where the arrow labeled "f" is the multiplication by "f", and formula_32 is the graded module which is obtained from "A" by shifting the degrees by "d", in order that the multiplication by "f" has degree 0. This implies that formula_33 Hilbert series and Hilbert polynomial of a polynomial ring. The Hilbert series of the polynomial ring formula_34 in formula_35 indeterminates is formula_36 It follows that the Hilbert polynomial is formula_37 The proof that the Hilbert series has this simple form is obtained by applying recursively the previous formula for the quotient by a non zero divisor (here formula_38) and remarking that formula_39 Shape of the Hilbert series and dimension. A graded algebra "A" generated by homogeneous elements of degree 1 has Krull dimension zero if the maximal homogeneous ideal, that is the ideal generated by the homogeneous elements of degree 1, is nilpotent. This implies that the dimension of "A" as a "K"-vector space is finite and the Hilbert series of "A" is a polynomial "P"("t") such that "P"(1) is equal to the dimension of "A" as a "K"-vector space. If the Krull dimension of "A" is positive, there is a homogeneous element "f" of degree one which is not a zero divisor (in fact almost all elements of degree one have this property). The Krull dimension of "A"/"(f)" is the Krull dimension of "A" minus one. The additivity of Hilbert series shows that formula_40. Iterating this a number of times equal to the Krull dimension of "A", we get eventually an algebra of dimension 0 whose Hilbert series is a polynomial "P"("t"). This show that the Hilbert series of "A" is formula_41 where the polynomial "P"("t") is such that "P"(1) ≠ 0 and "d" is the Krull dimension of "A". This formula for the Hilbert series implies that the degree of the Hilbert polynomial is "d", and that its leading coefficient is formula_42. Degree of a projective variety and Bézout's theorem. The Hilbert series allows us to compute the degree of an algebraic variety as the value at 1 of the numerator of the Hilbert series. This provides also a rather simple proof of Bézout's theorem. 
For showing the relationship between the degree of a projective algebraic set and the Hilbert series, consider a projective algebraic set V, defined as the set of the zeros of a homogeneous ideal formula_43, where k is a field, and let formula_44 be the ring of the regular functions on the algebraic set. In this section, one does not need irreducibility of algebraic sets nor primality of ideals. Also, as Hilbert series are not changed by extending the field of coefficients, the field k is supposed, without loss of generality, to be algebraically closed. The dimension d of V is equal to the Krull dimension minus one of R, and the degree of V is the number of points of intersection, counted with multiplicities, of V with the intersection of formula_45 hyperplanes in general position. This implies the existence, in R, of a regular sequence formula_46 of "d" + 1 homogeneous polynomials of degree one. The definition of a regular sequence implies the existence of exact sequences formula_47 for formula_48 This implies that formula_49 where formula_50 is the numerator of the Hilbert series of R. The ring formula_51 has Krull dimension one, and is the ring of regular functions of a projective algebraic set formula_52 of dimension 0 consisting of a finite number of points, which may be multiple points. As formula_53 belongs to a regular sequence, none of these points belong to the hyperplane of equation formula_54 The complement of this hyperplane is an affine space that contains formula_55 This makes formula_52 an affine algebraic set, which has formula_56 as its ring of regular functions. The linear polynomial formula_57 is not a zero divisor in formula_58 and one has thus an exact sequence formula_59 which implies that formula_60 Here we are using Hilbert series of filtered algebras, and the fact that the Hilbert series of a graded algebra is also its Hilbert series as filtered algebra. Thus formula_61 is an Artinian ring, which is a k-vector space of dimension "P"(1), and Jordan–Hölder theorem may be used for proving that "P"(1) is the degree of the algebraic set V. In fact, the multiplicity of a point is the number of occurrences of the corresponding maximal ideal in a composition series. For proving Bézout's theorem, one may proceed similarly. If formula_62 is a homogeneous polynomial of degree formula_9, which is not a zero divisor in R, the exact sequence formula_63 shows that formula_64 Looking on the numerators this proves the following generalization of Bézout's theorem: Theorem - If f is a homogeneous polynomial of degree formula_9, which is not a zero divisor in R, then the degree of the intersection of V with the hypersurface defined by formula_62 is the product of the degree of V by formula_65 In a more geometrical form, this may restated as: Theorem - If a projective hypersurface of degree d does not contain any irreducible component of an algebraic set of degree δ, then the degree of their intersection is dδ. The usual Bézout's theorem is easily deduced by starting from a hypersurface, and intersecting it with "n" − 1 other hypersurfaces, one after the other. Complete intersection. A projective algebraic set is a complete intersection if its defining ideal is generated by a regular sequence. In this case, there is a simple explicit formula for the Hilbert series. 
Let formula_66 be "k" homogeneous polynomials in formula_67, of respective degrees formula_68 Setting formula_69 one has the following exact sequences formula_70 The additivity of Hilbert series implies thus formula_71 A simple recursion gives formula_72 This shows that the complete intersection defined by a regular sequence of "k" polynomials has a codimension of "k", and that its degree is the product of the degrees of the polynomials in the sequence. Relation with free resolutions. Every graded module "M" over a graded regular ring "R" has a graded free resolution because of the Hilbert syzygy theorem, meaning there exists an exact sequence formula_73 where the formula_74 are graded free modules, and the arrows are graded linear maps of degree zero. The additivity of Hilbert series implies that formula_75 If formula_76 is a polynomial ring, and if one knows the degrees of the basis elements of the formula_77 then the formulas of the preceding sections allow deducing formula_78 from formula_79 In fact, these formulas imply that, if a graded free module "L" has a basis of "h" homogeneous elements of degrees formula_80 then its Hilbert series is formula_81 These formulas may be viewed as a way for computing Hilbert series. This is rarely the case, as, with the known algorithms, the computation of the Hilbert series and the computation of a free resolution start from the same Gröbner basis, from which the Hilbert series may be directly computed with a computational complexity which is not higher than that the complexity of the computation of the free resolution. Computation of Hilbert series and Hilbert polynomial. The Hilbert polynomial is easily deducible from the Hilbert series (see above). This section describes how the Hilbert series may be computed in the case of a quotient of a polynomial ring, filtered or graded by the total degree. Thus let "K" a field, formula_82 be a polynomial ring and "I" be an ideal in "R". Let "H" be the homogeneous ideal generated by the homogeneous parts of highest degree of the elements of "I". If "I" is homogeneous, then "H"="I". Finally let "B" be a Gröbner basis of "I" for a monomial ordering refining the total degree partial ordering and "G" the (homogeneous) ideal generated by the leading monomials of the elements of "B". The computation of the Hilbert series is based on the fact that "the filtered algebra R/I and the graded algebras R/H and R/G have the same Hilbert series". Thus the computation of the Hilbert series is reduced, through the computation of a Gröbner basis, to the same problem for an ideal generated by monomials, which is usually much easier than the computation of the Gröbner basis. The computational complexity of the whole computation depends mainly on the regularity, which is the degree of the numerator of the Hilbert series. In fact the Gröbner basis may be computed by linear algebra over the polynomials of degree bounded by the regularity. The computation of Hilbert series and Hilbert polynomials are available in most computer algebra systems. For example in both Maple and Magma these functions are named "HilbertSeries" and "HilbertPolynomial". Generalization to coherent sheaves. In algebraic geometry, graded rings generated by elements of degree 1 produce projective schemes by Proj construction while finitely generated graded modules correspond to coherent sheaves. 
If formula_83 is a coherent sheaf over a projective scheme "X", we define the Hilbert polynomial of formula_83 as a function formula_84, where "χ" is the Euler characteristic of coherent sheaf, and formula_85 a Serre twist. The Euler characteristic in this case is a well-defined number by Grothendieck's finiteness theorem. This function is indeed a polynomial. For large "m" it agrees with dim formula_86 by Serre's vanishing theorem. If "M" is a finitely generated graded module and formula_87 the associated coherent sheaf the two definitions of Hilbert polynomial agree. Graded free resolutions. Since the category of coherent sheaves on a projective variety formula_88 is equivalent to the category of graded-modules modulo a finite number of graded-pieces, we can use the results in the previous section to construct Hilbert polynomials of coherent sheaves. For example, a complete intersection formula_88 of multi-degree formula_89 has the resolution formula_90 Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
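As a toy illustration of the reduction to monomial ideals described in the computation section above, the following sketch counts standard monomials by brute force. The function name and the example are illustrative only; real computer algebra systems use Gröbner-basis techniques rather than enumeration.

```python
from itertools import combinations_with_replacement

def hilbert_function(num_vars, monomial_gens, m):
    """Value HF(m) = dim_K (K[x_1, ..., x_num_vars] / G)_m, where G is the monomial
    ideal generated by `monomial_gens` (each generator is a tuple of exponents).
    Counts the monomials of total degree m that are divisible by no generator."""
    def divisible_by(exps, gen):
        return all(e >= g for e, g in zip(exps, gen))

    count = 0
    for combo in combinations_with_replacement(range(num_vars), m):
        exps = [0] * num_vars
        for i in combo:
            exps[i] += 1            # build the exponent vector of one monomial
        if not any(divisible_by(exps, g) for g in monomial_gens):
            count += 1              # a "standard" monomial survives in the quotient
    return count

# A plane conic: K[x, y, z] / (x^2).  Its Hilbert series is (1 - t^2)/(1 - t)^3,
# so the Hilbert function is 2m + 1 (degree 2, dimension 1, as expected for a conic).
print([hilbert_function(3, [(2, 0, 0)], m) for m in range(6)])   # [1, 3, 5, 7, 9, 11]
```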
[ { "math_id": 0, "text": "\\pi:X \\to S" }, { "math_id": 1, "text": "s \\in S" }, { "math_id": 2, "text": "S = \\bigoplus_{i \\ge 0} S_i " }, { "math_id": 3, "text": "S_0=K" }, { "math_id": 4, "text": "HF_S : n\\longmapsto \\dim_K S_n" }, { "math_id": 5, "text": "HS_S(t)=\\sum_{n=0}^{\\infty} HF_S(n)t^n." }, { "math_id": 6, "text": "d_1, \\ldots, d_h" }, { "math_id": 7, "text": "HS_S(t)=\\frac{Q(t)}{\\prod_{i=1}^h \\left (1-t^{d_i} \\right )}," }, { "math_id": 8, "text": "HS_S(t)=\\frac{P(t)}{(1-t)^\\delta}," }, { "math_id": 9, "text": "\\delta" }, { "math_id": 10, "text": "HS_S(t)=P(t) \\left(1+\\delta t+\\cdots +\\binom{n+\\delta-1}{\\delta-1} t^n+\\cdots\\right)" }, { "math_id": 11, "text": "\\binom{n+\\delta-1}{\\delta-1} = \\frac{(n+\\delta-1)(n+\\delta-2)\\cdots (n+1)}{(\\delta-1)!}" }, { "math_id": 12, "text": "n>-\\delta," }, { "math_id": 13, "text": "P(t)=\\sum_{i=0}^d a_it^i," }, { "math_id": 14, "text": "t^n" }, { "math_id": 15, "text": "HS_S(t)" }, { "math_id": 16, "text": "HF_S(n)= \\sum_{i=0}^d a_i \\binom{n -i+\\delta-1}{\\delta-1}." }, { "math_id": 17, "text": "n\\ge i-\\delta+1," }, { "math_id": 18, "text": "\\delta-1" }, { "math_id": 19, "text": "a_i/(\\delta-1)!." }, { "math_id": 20, "text": "HP_S(n)" }, { "math_id": 21, "text": "HF_S(n)" }, { "math_id": 22, "text": "HP_S(n)= \\frac{P(1)}{(\\delta-1)!}n^{\\delta-1} + \\text{ terms of lower degree in } n. " }, { "math_id": 23, "text": "HP_S(n)=HF_S(n)" }, { "math_id": 24, "text": "\\deg P-\\delta+1" }, { "math_id": 25, "text": "R_n=K[X_1,\\ldots, X_n]" }, { "math_id": 26, "text": "R_n/I" }, { "math_id": 27, "text": "0 \\;\\rightarrow\\; A\\;\\rightarrow\\; B\\;\\rightarrow\\; C \\;\\rightarrow\\; 0" }, { "math_id": 28, "text": "HS_B=HS_A+HS_C" }, { "math_id": 29, "text": "HP_B=HP_A+HP_C." }, { "math_id": 30, "text": "HS_{A/(f)}(t)=(1-t^d)\\,HS_A(t)\\,." }, { "math_id": 31, "text": "0 \\;\\rightarrow\\; A^{[d]}\\; \\xrightarrow{f}\\; A \\;\\rightarrow\\; A/f\\rightarrow\\; 0\\,," }, { "math_id": 32, "text": "A^{[d]}" }, { "math_id": 33, "text": "HS_{A^{[d]}}(t)=t^d\\,HS_A(t)\\,." }, { "math_id": 34, "text": "R_n=K[x_1, \\ldots, x_n]" }, { "math_id": 35, "text": "n" }, { "math_id": 36, "text": "HS_{R_n}(t) = \\frac{1}{(1-t)^{n}}\\,." }, { "math_id": 37, "text": " HP_{R_n}(k) = {{k+n-1}\\choose{n-1}} = \\frac{(k+1)\\cdots(k+n-1)}{(n-1)!}\\,." }, { "math_id": 38, "text": "x_n" }, { "math_id": 39, "text": "HS_K(t)=1\\,." }, { "math_id": 40, "text": "HS_{A/(f)}(t)=(1-t)\\,HS_A(t)" }, { "math_id": 41, "text": "HS_A(t)=\\frac{P(t)}{(1-t)^d}" }, { "math_id": 42, "text": "\\frac{P(1)}{d!}" }, { "math_id": 43, "text": "I\\subset k[x_0, x_1, \\ldots, x_n]" }, { "math_id": 44, "text": " R=k[x_0, \\ldots, x_n]/I" }, { "math_id": 45, "text": "d" }, { "math_id": 46, "text": "h_0, \\ldots, h_{d}" }, { "math_id": 47, "text": "0 \\longrightarrow \\left(R/\\langle h_0,\\ldots, h_{k-1}\\rangle \\right)^{[1]} \\stackrel{h_k}{\\longrightarrow} R/\\langle h_1,\\ldots, h_{k-1}\\rangle \\longrightarrow R/\\langle h_1,\\ldots, h_k \\rangle \\longrightarrow 0," }, { "math_id": 48, "text": "k=0, \\ldots, d." }, { "math_id": 49, "text": "HS_{R/\\langle h_0, \\ldots, h_{d-1}\\rangle}(t) = (1-t)^d\\,HS_R(t)=\\frac{P(t)}{1-t}," }, { "math_id": 50, "text": " P(t)" }, { "math_id": 51, "text": "R_1=R/\\langle h_0, \\ldots, h_{d-1}\\rangle" }, { "math_id": 52, "text": "V_0" }, { "math_id": 53, "text": "h_d" }, { "math_id": 54, "text": "h_d=0." }, { "math_id": 55, "text": "V_0." 
}, { "math_id": 56, "text": "R_0 = R_1/\\langle h_d-1\\rangle" }, { "math_id": 57, "text": "h_d-1" }, { "math_id": 58, "text": "R_1," }, { "math_id": 59, "text": "0 \\longrightarrow R_1 \\stackrel{h_d-1}{\\longrightarrow} R_1 \\longrightarrow R_0 \\longrightarrow 0," }, { "math_id": 60, "text": "HS_{R_0}(t) = (1-t)HS_{R_1}(t) = P(t)." }, { "math_id": 61, "text": "R_0" }, { "math_id": 62, "text": "f" }, { "math_id": 63, "text": "0 \\longrightarrow R^{[\\delta]} \\stackrel{f}{\\longrightarrow} R \\longrightarrow R/\\langle f\\rangle \\longrightarrow 0," }, { "math_id": 64, "text": "HS_{R/\\langle f \\rangle}(t)= \\left (1-t^\\delta \\right )HS_R(t)." }, { "math_id": 65, "text": "\\delta." }, { "math_id": 66, "text": "f_1, \\ldots, f_k" }, { "math_id": 67, "text": "R=K[x_1, \\ldots, x_n]" }, { "math_id": 68, "text": "\\delta_1, \\ldots, \\delta_k." }, { "math_id": 69, "text": "R_i=R/\\langle f_1, \\ldots, f_i\\rangle," }, { "math_id": 70, "text": "0 \\;\\rightarrow\\; R_{i-1}^{[\\delta_i]}\\; \\xrightarrow{f_i}\\; R_{i-1} \\;\\rightarrow\\; R_i\\; \\rightarrow\\; 0\\,." }, { "math_id": 71, "text": "HS_{R_i}(t)=(1-t^{\\delta_i})HS_{R_{i-1}}(t)\\,." }, { "math_id": 72, "text": "HS_{R_k}(t)=\\frac{(1-t^{\\delta_1})\\cdots (1-t^{\\delta_k})}{(1-t)^n}= \\frac{(1+t+\\cdots+t^{\\delta_1})\\cdots (1+t+\\cdots+t^{\\delta_k})}{(1-t)^{n-k}}\\,." }, { "math_id": 73, "text": " 0 \\to L_k \\to \\cdots \\to L_1 \\to M \\to 0," }, { "math_id": 74, "text": "L_i" }, { "math_id": 75, "text": "HS_M(t) =\\sum_{i=1}^k (-1)^{i-1}HS_{L_i}(t)." }, { "math_id": 76, "text": "R=k[x_1, \\ldots, x_n]" }, { "math_id": 77, "text": "L_i," }, { "math_id": 78, "text": "HS_M(t)" }, { "math_id": 79, "text": "HS_R(t) = 1/(1-t)^n." }, { "math_id": 80, "text": "\\delta_1, \\ldots, \\delta_h," }, { "math_id": 81, "text": "HS_L(t) = \\frac{t^{\\delta_1}+\\cdots +t^{\\delta_h}}{(1-t)^n}." }, { "math_id": 82, "text": "R=K[x_1,\\ldots,x_n]" }, { "math_id": 83, "text": "\\mathcal{F}" }, { "math_id": 84, "text": "p_{\\mathcal{F}}(m) = \\chi(X, \\mathcal{F}(m))" }, { "math_id": 85, "text": "\\mathcal{F}(m)" }, { "math_id": 86, "text": "H^0(X, \\mathcal{F}(m))" }, { "math_id": 87, "text": "\\tilde{M}" }, { "math_id": 88, "text": "X" }, { "math_id": 89, "text": "(d_1,d_2)" }, { "math_id": 90, "text": "\n0 \\to \\mathcal{O}_{\\mathbb{P}^n}(-d_1-d_2) \\xrightarrow{\\begin{bmatrix} f_2 \\\\ -f_1 \\end{bmatrix}} \\mathcal{O}_{\\mathbb{P}^n}(-d_1)\\oplus\\mathcal{O}_{\\mathbb{P}^n}(-d_2) \\xrightarrow{\\begin{bmatrix}f_1 & f_2 \\end{bmatrix}} \\mathcal{O}_{\\mathbb{P}^n} \\to \\mathcal{O}_X \\to 0\n" } ]
https://en.wikipedia.org/wiki?curid=6612596
6612667
Equivalence (measure theory)
In mathematics, and specifically in measure theory, equivalence is a notion of two measures being qualitatively similar. Specifically, the two measures agree on which events have measure zero. Definition. Let formula_0 and formula_1 be two measures on the measurable space formula_2 and let formula_3 and formula_4 be the sets of formula_0-null sets and formula_1-null sets, respectively. Then the measure formula_1 is said to be absolutely continuous with respect to formula_0 if and only if formula_5 This is denoted as formula_6 The two measures are called equivalent if and only if formula_7 and formula_8 which is denoted as formula_9 That is, two measures are equivalent if they satisfy formula_10 Examples. On the real line. Define the two measures on the real line as formula_11 formula_12 for all Borel sets formula_13 Then formula_0 and formula_1 are equivalent, since all sets outside of formula_14 have formula_0 and formula_1 measure zero, and a set inside formula_14 is a formula_0-null set or a formula_1-null set exactly when it is a null set with respect to Lebesgue measure. Abstract measure space. Consider some measurable space formula_15 and let formula_0 be the counting measure, so formula_16 where formula_17 is the cardinality of the set "A". So the counting measure has only one null set, which is the empty set. That is, formula_18 So by the second definition, any other measure formula_1 is equivalent to the counting measure if and only if it also has just the empty set as the only formula_1-null set. Supporting measures. A measure formula_0 is called a supporting measure of a measure formula_1 if formula_0 is formula_19-finite and formula_1 is equivalent to formula_20 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
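A short justification of the real-line example above, using only the definition (a sketch; λ denotes Lebesgue measure):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The densities 1_{[0,1]} and x^2 1_{[0,1]} vanish outside [0,1] and are
% positive almost everywhere on [0,1], so for every Borel set A:
\[
  \mu(A) = \int_A \mathbf{1}_{[0,1]}\,dx = 0
  \;\Longleftrightarrow\;
  \lambda\bigl(A \cap [0,1]\bigr) = 0
  \;\Longleftrightarrow\;
  \nu(A) = \int_A x^2\,\mathbf{1}_{[0,1]}\,dx = 0 ,
\]
hence $\mathcal{N}_\mu = \mathcal{N}_\nu$ and $\mu \sim \nu$.
\end{document}
```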
[ { "math_id": 0, "text": "\\mu" }, { "math_id": 1, "text": "\\nu" }, { "math_id": 2, "text": "(X, \\mathcal A)," }, { "math_id": 3, "text": "\\mathcal{N}_\\mu := \\{A \\in \\mathcal{A} \\mid \\mu(A) = 0\\}" }, { "math_id": 4, "text": "\\mathcal{N}_\\nu := \\{A \\in \\mathcal{A} \\mid \\nu(A) = 0\\}" }, { "math_id": 5, "text": "\\mathcal N_\\nu \\supseteq \\mathcal N_\\mu." }, { "math_id": 6, "text": "\\nu \\ll \\mu." }, { "math_id": 7, "text": "\\mu \\ll \\nu" }, { "math_id": 8, "text": "\\nu \\ll \\mu," }, { "math_id": 9, "text": "\\mu \\sim \\nu." }, { "math_id": 10, "text": "\\mathcal N_\\mu = \\mathcal N_\\nu." }, { "math_id": 11, "text": "\\mu(A)= \\int_A \\mathbf 1_{[0,1]}(x) \\mathrm dx" }, { "math_id": 12, "text": "\\nu(A)= \\int_A x^2 \\mathbf 1_{[0,1]}(x) \\mathrm dx" }, { "math_id": 13, "text": "A." }, { "math_id": 14, "text": "[0,1]" }, { "math_id": 15, "text": "(X, \\mathcal A)" }, { "math_id": 16, "text": "\\mu(A) = |A|," }, { "math_id": 17, "text": "|A|" }, { "math_id": 18, "text": "\\mathcal N_\\mu = \\{\\varnothing\\}." }, { "math_id": 19, "text": "\\sigma" }, { "math_id": 20, "text": "\\mu." } ]
https://en.wikipedia.org/wiki?curid=6612667
66127387
Blichfeldt's theorem
High-area shapes can shift to hold many grid points Blichfeldt's theorem is a mathematical theorem in the geometry of numbers, stating that whenever a bounded set in the Euclidean plane has area formula_0, it can be translated so that it includes at least formula_1 points of the integer lattice. Equivalently, every bounded set of area formula_0 contains a set of formula_1 points whose coordinates all differ by integers. This theorem can be generalized to other lattices and to higher dimensions, and can be interpreted as a continuous version of the pigeonhole principle. It is named after Danish-American mathematician Hans Frederick Blichfeldt, who published it in 1914. Some sources call it Blichfeldt's principle or Blichfeldt's lemma. Statement and proof. The theorem can be stated most simply for points in the Euclidean plane, and for the integer lattice in the plane. For this version of the theorem, let formula_2 be any measurable set, let formula_0 denote its area, and round this number up to the next integer value, formula_3. Then Blichfeldt's theorem states that formula_2 can be translated so that its translated copy contains at least formula_4 points with integer coordinates. The basic idea of the proof is to cut formula_2 into pieces according to the squares of the integer lattice, and to translate each of those pieces by an integer amount so that it lies within the unit square having the origin as its lower right corner. This translation may cause some pieces of the unit square to be covered more than once, but if the combined area of the translated pieces is counted with multiplicity it remains unchanged, equal to formula_0. On the other hand, if the whole unit square were covered with multiplicity formula_5 its area would be formula_5, less than formula_0. Therefore, some point formula_6 of the unit square must be covered with multiplicity at least formula_4. A translation that takes formula_6 to the origin will also take all of the formula_4 points of formula_2 that covered formula_6 to integer points, which is what was required. More generally, the theorem applies to formula_7-dimensional sets formula_2, with formula_7-dimensional volume formula_0, and to an arbitrary formula_7-dimensional lattice formula_8 (a set of points in formula_7-dimensional space that do not all lie in any lower dimensional subspace, are separated from each other by some minimum distance, and can be combined by adding or subtracting their coordinates to produce other points in the same set). Just as the integer lattice divides the plane into squares, an arbitrary lattice divides its space into fundamental regions (called parallelotopes) with the property that any one of these regions can be translated onto any other of them by adding the coordinates of a unique lattice point. If formula_9 is the formula_7-dimensional volume of one of parallelotopes, then Blichfeldt's theorem states that formula_2 can be translated to include at least formula_10 points of formula_8. The proof is as before: cut up formula_2 by parallelotopes, translate the pieces by translation vectors in formula_11 onto a single parallelotope without changing the total volume (counted with multiplicity), observe that there must be a point formula_6 of multiplicity at least formula_10, and use a translation that takes formula_6 to the origin. 
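The translation argument above can be illustrated numerically. The following sketch (function names and the chosen radius are illustrative, and the grid search is brute force, not part of the proof) looks for a translate of a disk that captures at least ⌈area⌉ integer points; since only translations modulo the integer lattice matter, it is enough to search offsets in the unit square.

```python
import math

def lattice_points_in_disk(radius, dx, dy):
    """Count integer points (i, j) lying in the disk of the given radius
    centered at (dx, dy), i.e. in a translate of the disk centered at the origin."""
    r = int(math.ceil(radius)) + 2
    return sum(1 for i in range(-r, r + 1) for j in range(-r, r + 1)
               if (i - dx) ** 2 + (j - dy) ** 2 <= radius ** 2)

def find_rich_translation(radius, steps=200):
    """Search offsets (dx, dy) in the unit square for a translate of the disk
    holding at least ceil(area) lattice points, as guaranteed by the theorem."""
    target = math.ceil(math.pi * radius ** 2)
    best_count, best_offset = -1, None
    for a in range(steps):
        for b in range(steps):
            dx, dy = a / steps, b / steps
            n = lattice_points_in_disk(radius, dx, dy)
            if n > best_count:
                best_count, best_offset = n, (dx, dy)
    return target, best_count, best_offset

# A disk of radius 1.3 has area ~5.31, so some translate holds at least 6 integer
# points (for example the translate centered at (0.5, 0.0)).
print(find_rich_translation(1.3))
```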
Instead of asking for a translation for which there are formula_4 lattice points, an equivalent form of the theorem states that formula_2 itself contains a set of formula_4 points, all of whose pairwise differences belong to the lattice. A strengthened version of the theorem applies to compact sets, and states that they can be translated to contain at least formula_12 points of the lattice. This number of points differs from formula_13 only when formula_0 is an integer, in which case it is larger by one. Applications. Minkowski's theorem. Minkowski's theorem, proved earlier than Blichfeldt's work by Hermann Minkowski, states that any convex set in the plane that is centrally symmetric around the origin, with area greater than four (or a compact symmetric set with area equal to four) contains a nonzero integer point. More generally, for a formula_7-dimensional lattice formula_8 whose fundamental parallelotopes have volume formula_9, any set centrally symmetric around the origin with volume greater than formula_14 contains a nonzero lattice point. Although Minkowski's original proof was different, Blichfeldt's theorem can be used in a simple proof of Minkowski's theorem. Let formula_15 be any centrally symmetric set with volume greater than formula_14 (meeting the conditions of Minkowski's theorem), and scale it down by a factor of two to obtain a set formula_16 of volume greater than formula_9. By Blichfeldt's theorem, formula_16 has two points formula_6 and formula_17 whose coordinatewise difference belongs to formula_8. Reversing the shrinking operation, formula_18 and formula_19 belong to formula_15. By symmetry formula_20 also belongs to formula_15, and by convexity the midpoint of formula_18 and formula_20 belongs to formula_15. But this midpoint is formula_21, a nonzero point of formula_8. Other applications. Many applications of Blichfeldt's theorem, like the application to Minkowski's theorem, involve finding a nonzero lattice point in a large-enough set, but one that is not convex. For the proof of Minkowski's theorem, the key relation between the sets formula_15 and formula_16 that makes the proof work is that all differences of pairs of points in formula_16 belong to formula_15. However, for a set formula_15 that is not convex, formula_16 might have pairs of points whose difference does not belong to formula_15, making it unusable in this technique. One could instead find the largest centrally symmetric convex subset formula_25, and then apply Minkowski's theorem to formula_23, or equivalently apply Blichfeldt's theorem to formula_24. However, in many cases a given non-convex set formula_15 has a subset formula_26 that is larger than formula_24, whose pairwise differences belong to formula_15. When this is the case, the larger size of formula_22 relative to formula_24 leads to tighter bounds on how big formula_15 needs to be to be sure of containing a lattice point. For a centrally symmetric star domain, it is possible to use the calculus of variations to find the largest set formula_27 whose pairwise differences belong to formula_15. Applications of this method include simultaneous Diophantine approximation, the problem of approximating a given set of irrational numbers by rational numbers that all have the same denominators. Generalizations. Analogues of Blichfeldt's theorem have been proven for sets of points other than lattices, showing that large enough regions contain many points from these sets. 
These include a theorem for Fuchsian groups, lattice-like subsets of formula_28 matrices, and for the sets of vertices of Archimedean tilings. Other generalizations allow the set formula_2 to be a measurable function, proving that its sum over some set of translated lattice points is at least as large as its integral, or replace the single set formula_2 with a family of sets. Computational complexity. A computational problem related to Blichfeldt's theorem has been shown to be complete for the PPP complexity class, and therefore unlikely to be solvable in polynomial time. The problem takes as input a set of integer vectors forming the basis of a formula_7-dimensional lattice formula_8, and a set formula_2 of integer vectors, represented implicitly by a Boolean circuit for testing whether a given vector belongs to formula_2. It is required that the cardinality of formula_2, divided by the volume of the fundamental parallelotope of formula_8, is at least one, from which a discrete version of Blichfeldt's theorem implies that formula_2 includes a pair of points whose difference belongs to formula_8. The task is to find either such a pair, or a point of formula_2 that itself belongs to formula_8. The computational hardness of this task motivates the construction of a candidate for a collision-resistant cryptographic hash function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\lceil A\\rceil" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "n=\\lceil A\\rceil " }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "n-1" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "d" }, { "math_id": 8, "text": "\\Lambda" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": "\\lceil A/L\\rceil" }, { "math_id": 11, "text": "\\lambda" }, { "math_id": 12, "text": "\\lfloor A+1\\rfloor" }, { "math_id": 13, "text": "\\lceil A\\rceil " }, { "math_id": 14, "text": "2^d L" }, { "math_id": 15, "text": "X" }, { "math_id": 16, "text": "\\tfrac{1}{2}X" }, { "math_id": 17, "text": "q" }, { "math_id": 18, "text": "2p" }, { "math_id": 19, "text": "2q" }, { "math_id": 20, "text": "-2q" }, { "math_id": 21, "text": "p-q" }, { "math_id": 22, "text": "Y" }, { "math_id": 23, "text": "K" }, { "math_id": 24, "text": "\\tfrac{1}{2}K" }, { "math_id": 25, "text": "K\\subset X" }, { "math_id": 26, "text": "Y\\subset X" }, { "math_id": 27, "text": "X'" }, { "math_id": 28, "text": "2\\times 2" } ]
https://en.wikipedia.org/wiki?curid=66127387
661281
Self-stabilization
Concept of fault-tolerance Self-stabilization is a concept of fault-tolerance in distributed systems. Given any initial state, a self-stabilizing distributed system will end up in a correct state in a finite number of execution steps. At first glance, the guarantee of self stabilization may seem less promising than that of the more traditional fault-tolerance of algorithms, that aim to guarantee that the system always remains in a correct state under certain kinds of state transitions. However, that traditional fault tolerance cannot always be achieved. For example, it cannot be achieved when the system is started in an incorrect state or is corrupted by an intruder. Moreover, because of their complexity, it is very hard to debug and to analyze distributed systems. Hence, it is very hard to prevent a distributed system from reaching an incorrect state. Indeed, some forms of self-stabilization are incorporated into many modern computer and telecommunications networks, since it gives them the ability to cope with faults that were not foreseen in the design of the algorithm. Many years after the seminal paper of Edsger Dijkstra in 1974, this concept remains important as it presents an important foundation for self-managing computer systems and fault-tolerant systems. As a result, Dijkstra's paper received the 2002 ACM PODC Influential-Paper Award, one of the highest recognitions in the distributed computing community. Moreover, after Dijkstra's death, the award was renamed and is now called the Dijkstra Award. History. E.W. Dijkstra in 1974 presented the concept of self-stabilization, prompting further research in this area. His demonstration involved the presentation of self-stabilizing mutual exclusion algorithms. It also showed the first self-stabilizing algorithms that did not rely on strong assumptions on the system. Some previous protocols used in practice did actually stabilize, but only assuming the existence of a clock that was global to the system, and assuming a known upper bound on the duration of each system transition. It was only ten years later when Leslie Lamport pointed out the importance of Dijkstra's work at a 1983 conference called Symposium on Principles of Distributed Computing that researchers directed their attention to this elegant fault-tolerance concept. In his talk, Lamport stated:I regard this as Dijkstra's most brilliant work - at least, his most brilliant published paper. It's almost completely unknown. I regard it to be a milestone in work on fault tolerance... I regard self-stabilization to be a very important concept in fault tolerance and to be a very fertile field for research.Afterwards, Dijkstra's work was awarded ACM-PODC influential paper award, which then became ACM's (the Association for computing Machinery) Dijkstra Prize in Distributed Computing given at the annual ACM-PODC symposium. Overview. A distributed algorithm is self-stabilizing if, starting from an arbitrary state, it is guaranteed to converge to a legitimate state and remain in a legitimate set of states thereafter. A state is legitimate if, starting from this state, the algorithm satisfies its specification. The property of self-stabilization enables a distributed algorithm to recover from a transient fault regardless of its nature. Moreover, a self-stabilizing algorithm does not have to be initialized as it eventually starts to behave correctly regardless of its initial state. 
Dijkstra's paper, which introduces the concept of self-stabilization, presents an example in the context of a "token ring"—a network of computers ordered in a circle. Here, each computer or processor can "see" the whole state of the processor that immediately precedes it, and this state may imply that the processor "has a token" or "does not have a token." One of the requirements is that exactly one of them must "hold a token" at any given time. The second requirement prescribes that each node "passes the token" to the computer/processor succeeding it so that the token eventually circulates the ring. The first self-stabilizing algorithms did not detect errors explicitly in order to subsequently repair them. Instead, they constantly pushed the system towards a legitimate state. Since traditional methods for detecting an error were often very difficult and time-consuming, such a behavior was considered desirable. (The method described in the paper cited above collects a huge amount of information from the whole network to one place; after that, it attempts to determine whether the collected global state is correct; even that determination alone can be a hard task). Efficiency improvements. More recently, researchers have presented newer methods for light-weight error detection for self-stabilizing systems using local checking, and for general tasks. The term "local" refers to a part of a computer network. When local detection is used, a computer in a network is not required to communicate with the entire network in order to detect an error—the error can be detected by having each computer communicate only with its nearest neighbors. These local detection methods simplified the task of designing self-stabilizing algorithms considerably. This is because the error detection mechanism and the recovery mechanism can be designed separately. Newer algorithms based on these detection methods also turned out to be much more efficient. Moreover, these papers suggested rather efficient general transformers for turning non-self-stabilizing algorithms into self-stabilizing ones. The idea is to run the given (non-self-stabilizing) protocol; at the same time, to detect faults during its execution using the above-mentioned local detection methods; then to apply a self-stabilizing "reset" protocol that returns the system to some predetermined initial state; and finally to restart the given protocol. The combination of these 4 parts is self stabilizing (as long as no fault is triggered during the fault-correction phases). Initial self stabilizing protocols were also presented in the above papers. More efficient reset protocols were presented later. Additional efficiency was introduced with the notion of time-adaptive protocols. The idea behind these is that when only a small number of errors occurs, the recovery time can (and should) be made short. Dijkstra's original self-stabilization algorithms do not have this property. A useful property of self-stabilizing algorithms is that they can be composed of layers if the layers do not exhibit any circular dependencies. The stabilization time of the composition is then bounded by the sum of the individual stabilization times of each layer. New approaches to Dijkstra's work emerged later, such as Krzysztof Apt and Ehsan Shoja's proposition, which demonstrated how self-stabilization can be naturally formulated using the standard concepts of strategic games, particularly the concept of an improvement path. This particular work sought to demonstrate the link between self-stabilization and game theory. Time complexity. The time complexity of a self-stabilizing algorithm is measured in (asynchronous) rounds or cycles. 
To measure the output stabilization time, a subset of the state variables is defined to be externally visible (the "output"). Certain states of outputs are defined to be correct (legitimate). The set of the outputs of all the components of the system is said to have stabilized at the time that it starts to be correct, provided it stays correct indefinitely, unless additional faults occur. The output stabilization time is the time (the number of (asynchronous) "rounds") until the output stabilizes. Definition. A system is self-stabilizing if and only if it satisfies the following two properties: starting from any state, the system is guaranteed to eventually reach a correct state (convergence), and once the system is in a correct state, it is guaranteed to stay in a correct state, provided that no fault happens (closure). A system is said to be "randomized self-stabilizing" if and only if it is self-stabilizing and the expected number of rounds needed to reach a correct state is bounded by some constant formula_0. Design of self-stabilization in the above-mentioned sense is well known to be a difficult job. In fact, a class of distributed algorithms does not have the property of local checking: the legitimacy of the network state cannot be evaluated by a single process. The most obvious case is Dijkstra's token-ring defined above: no process can detect whether the network state is legitimate or not in the case where more than one token is present in non-neighboring processes. This suggests that self-stabilization of a distributed system is a sort of collective intelligence, where each component takes local actions based on its local knowledge, yet these actions eventually guarantee global convergence. To help overcome the difficulty of designing self-stabilization as defined above, other types of stabilization were devised. For instance, "weak stabilization" is the property that a distributed system has a possibility of reaching its legitimate behavior from every possible state. Weak stabilization is easier to design as it just guarantees a "possibility" of convergence for some runs of the distributed system rather than convergence for every run. A self-stabilizing algorithm is "silent" if and only if it converges to a global state where the values of communication registers used by the algorithm remain fixed. Related work. An extension of the concept of self-stabilization is that of superstabilization. The intent here is to cope with dynamic distributed systems that undergo topological changes. In classical self-stabilization theory, arbitrary changes are viewed as errors where no guarantees are given until the system has stabilized again. With superstabilizing systems, there is a "passage" predicate that is always satisfied while the system's topology is reconfigured. A theory that started within the area of self-stabilization is that of verifying (in a distributed manner) that the collection of the states of the nodes in a network obeys some predicate. That theory has grown beyond self-stabilization and led to notions such as "distributed NP" (a distributed version of NP (complexity)), distributed Zero Knowledge (a distributed version of Zero Knowledge), etc. The International Colloquium on Structural Information and Communication Complexity (SIROCCO) Prize for Innovation in Distributed Computing of 2024 was awarded for initiating that theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
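A minimal simulation sketch of the Dijkstra token-ring example described above (his K-state algorithm, here with illustrative parameters n = 6 machines and K = 7 > n, run under a central scheduler that picks an arbitrary privileged machine at each step):

```python
import random

def privileged(state):
    """Indices of machines holding a privilege (the "token"): machine 0 is
    privileged when its value equals that of its predecessor (machine n-1);
    machine i > 0 is privileged when its value differs from machine i-1's."""
    n = len(state)
    result = [0] if state[0] == state[n - 1] else []
    return result + [i for i in range(1, n) if state[i] != state[i - 1]]

def move(state, K, i):
    """Let privileged machine i make its move in Dijkstra's K-state algorithm."""
    if i == 0:
        state[0] = (state[0] + 1) % K
    else:
        state[i] = state[i - 1]

def stabilize(n=6, K=7, seed=1):
    """Start from an arbitrary (possibly corrupted) configuration and count the
    moves until the legitimate condition holds: exactly one machine is privileged."""
    rng = random.Random(seed)
    state = [rng.randrange(K) for _ in range(n)]
    moves = 0
    while len(privileged(state)) != 1:
        move(state, K, rng.choice(privileged(state)))
        moves += 1
    return moves, state

print(stabilize())
```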
[ { "math_id": 0, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=661281
66128381
Reinforcement (composite)
Constituent of a composite material which increases tensile strength In materials science, reinforcement is a constituent of a composite material which increases the composite's stiffness and tensile strength. Function. The reinforcement performs several functions in a composite; most importantly, it carries the load applied to the composite and increases its stiffness and strength. Fiber reinforcement. The reinforcement considerably hinders crack propagation and normally adds rigidity. Thin fibers can have very high strength, and they can substantially improve the overall properties of the composite provided they are linked mechanically to the matrix. Fiber-reinforced composites come in two types: short fiber-reinforced and continuous fiber-reinforced. Sheet moulding and compression moulding operations usually use the long and short fibers. These are available in the form of chips, flakes and random mat (which also can be produced from a continuous fibre laid randomly until the desired thickness of the laminate/ply is attained). Continuously reinforced materials usually form a laminated or layered structure. The continuous and woven fiber styles are available in various forms: pre-impregnated with the given matrix (resin), dry, uni-directional tapes of different widths, plain weave, harness satins, braided, and stitched. Common reinforcing fibers include carbon fiber, cellulose (wood/paper fibre and straw), glass fiber and high-strength polymers such as aramid. For high-temperature applications, silicon carbide fibers are used. Particle reinforcement. Particle reinforcement has an effect similar to precipitation hardening in metals and ceramics. Large particles prevent dislocation movement and crack propagation as well as contribute to the composite's Young's modulus. In general, the effect of particle reinforcement on Young's modulus lies between the values predicted by formula_0 as a lower bound and formula_1 as an upper bound. Therefore, it can be expressed as a linear combination of a contribution from the matrix and some weighted contribution from the particles. formula_2 Here Kc is an experimentally derived constant between 0 and 1. This range of values for Kc reflects that particle-reinforced composites are not characterized by the isostrain condition. Similarly, the tensile strength can be modeled by an equation of similar construction, where Ks is a similarly bounded constant not necessarily of the same value as Kc: formula_3 The true values of Kc and Ks vary based on factors including particle shape, particle distribution, and the particle/matrix interface. Knowing these parameters, the mechanical properties can be modeled based on effects from grain boundary strengthening, dislocation strengthening, and Orowan strengthening. The most common particle-reinforced composite is concrete, which is a mixture of cement and sand usually strengthened by the addition of gravel or small rocks. Metals are often reinforced with ceramics to increase strength at the cost of ductility. Finally, polymers and rubber are often reinforced with carbon black, commonly used in auto tires. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
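A small numerical sketch of the bounds and the weighted estimate given above; the function name and the material values are hypothetical, chosen only to show the arithmetic.

```python
def particle_composite_modulus(E_m, E_p, V_p, K_c):
    """Return (lower bound, weighted estimate, upper bound) for the Young's modulus
    of a particle-reinforced composite: matrix modulus E_m, particle modulus E_p,
    particle volume fraction V_p, and the experimentally derived constant K_c."""
    V_m = 1.0 - V_p
    upper = V_m * E_m + V_p * E_p                     # isostrain rule of mixtures
    lower = (E_m * E_p) / (V_m * E_p + V_p * E_m)     # isostress (inverse) rule of mixtures
    estimate = V_m * E_m + K_c * V_p * E_p            # E_c = V_m E_m + K_c V_p E_p
    return lower, estimate, upper

# Hypothetical numbers: a 3 GPa polymer matrix with 30 vol% of 300 GPa ceramic particles.
low, est, up = particle_composite_modulus(E_m=3.0, E_p=300.0, V_p=0.3, K_c=0.2)
print(f"lower {low:.1f} GPa, estimate {est:.1f} GPa, upper {up:.1f} GPa")
```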
[ { "math_id": 0, "text": "E_c = \\frac{E_\\alpha E_\\beta}{(V_\\alpha E_\\beta + V_\\beta E_\\alpha)}" }, { "math_id": 1, "text": "E_c = V_\\alpha E_\\alpha + V_\\beta E_\\beta " }, { "math_id": 2, "text": "E_c = V_m E_m + K_c V_p E_p" }, { "math_id": 3, "text": "(T.S.)_c = V_m (T.S.)_m + K_s V_p (T.S.)_p" } ]
https://en.wikipedia.org/wiki?curid=66128381
66129362
Merhale
Ottoman unit of distance The merhale was an Ottoman unit of length. Eight fersahs were equal to one merhale. The fersah was based on the distance covered by a horse at a normal gait in one hour; its exact definition was 5685 meters. One merhale can therefore be converted to meters as formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1 \\quad \\mbox{merhale}= 8 \\cdot 5685 =45480 \\quad \\mbox {meters}" } ]
https://en.wikipedia.org/wiki?curid=66129362
6613227
Friedrichs's inequality
In mathematics, Friedrichs's inequality is a theorem of functional analysis, due to Kurt Friedrichs. It places a bound on the "Lp" norm of a function using "Lp" bounds on the weak derivatives of the function and the geometry of the domain, and can be used to show that certain norms on Sobolev spaces are equivalent. Friedrichs's inequality generalizes the Poincaré–Wirtinger inequality, which deals with the case "k" = 1. Statement of the inequality. Let formula_0 be a bounded subset of Euclidean space formula_1 with diameter formula_2. Suppose that formula_3 lies in the Sobolev space formula_4, i.e., formula_5 and the trace of formula_6 on the boundary formula_7 is zero. Then formula_8 In the above, formula_9 denotes the "Lp" norm, the sum runs over all multi-indices "α" of order |"α"| = "k", and the mixed weak partial derivative is formula_10
[ { "math_id": 0, "text": "\\Omega" }, { "math_id": 1, "text": "\\mathbb R^n" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "u:\\Omega\\to\\mathbb R" }, { "math_id": 4, "text": "W_0^{k, p} (\\Omega)" }, { "math_id": 5, "text": "u\\in W^{k,p}(\\Omega)" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "\\partial\\Omega" }, { "math_id": 8, "text": "\\| u \\|_{L^p (\\Omega)} \\leq d^k \\left( \\sum_{| \\alpha | = k} \\| \\mathrm{D}^{\\alpha} u \\|_{L^p (\\Omega)}^p \\right)^{1/p}." }, { "math_id": 9, "text": "\\| \\cdot \\|_{L^p (\\Omega)}" }, { "math_id": 10, "text": " \\mathrm{D}^\\alpha u = \\frac{\\partial^{| \\alpha |} u}{\\partial_{x_1}^{\\alpha_1} \\cdots \\partial_{x_n}^{\\alpha_n} }." } ]
https://en.wikipedia.org/wiki?curid=6613227
66141540
Rotary friction welding
Spinning one metal workpiece against another to join them Rotary friction welding (RFW) is one of the methods of friction welding; in its classic form the work of friction is used to create a permanent (non-separable) weld. Typically one component is rotated relative to the other while being forged (pressed against it by an axial force). The heating of the material is caused by the friction work and creates a permanent joint. The materials to be welded can be the same, dissimilar, or composite and non-metallic materials. Friction welding methods are often classified as solid-state welding. History. Some applications and patents connected with friction welding date back to the turn of the 20th century, and rotary friction welding is the oldest of these methods. W. Richter patented the linear friction welding (LFW) process in 1924 in England and in 1929 in the Weimar Republic; however, the description of the process was vague, and H. Klopstock patented the same process in the Soviet Union in 1924. The first description and experiments related to rotary friction welding took place in the Soviet Union in 1956, by a machinist named A. J. Chdikov (А. И. Чудиков), who, after studying a myriad of scientific works, suggested the use of this welding method as a commercial process. He discovered the method by accident at the Elbrussky mine where he worked: Chdikov did not pay enough attention to lubricating the inside of the lathe chuck and found that he had welded the workpiece to the lathe. He wondered whether this accident could be used for joining and came to the conclusion that it was necessary to work at high rotation speeds (as of 2023, about 1000 revolutions per second), brake immediately and press the welded components together. He wrote a letter to the Ministry of Metallurgy and received the answer that this kind of welding was inappropriate, but short notes about the method were published in newspapers of the Union, arousing the interest of Yu. Ya. Terentyeva, manager of the national Scientific Research Institute of Electrical Welding Equipment, and with time Chdikov's method was disseminated. The process was introduced to the United States in 1960. The American companies Caterpillar Tractor Company (Caterpillar - CAT), Rockwell International, and American Manufacturing Foundry all developed machines for this process. Patents were also issued throughout Europe and the former Soviet Union. The first studies of friction welding in England were carried out by the Welding Institute in 1961. In the US, Caterpillar Tractor Company and MTI developed an inertia process in 1962. In Europe, KUKA AG and Thompson launched rotary friction welding for industrial applications in 1966, developed a direct-drive process and in 1974 built the rRS6 double spindle machine for heavy truck axles. In 1997, an international patent application entitled "Method of Friction Welding Tubular Members" was filed. The inventor, A. Graham, demonstrated on pipes with a diameter of 152.4 mm a method that uses radial friction welding with an intermediate ring for connecting long pipes, at last succeeding after attempts made in 1975 and after scientists in Leningrad had theorized about the idea in newspapers. Another method, the friction stir welding (FSW) process, was invented and experimentally proven at The Welding Institute (TWI) in the UK and patented in 1991. 
In 2008 KUKA AG developed the SRS 1000 rotary friction welding machine with a forging force of 1000 tons. An improved modification is Low Force Friction Welding, a hybrid technology developed by EWI and Manufacturing Technology Inc. (MTI). The process can be applied to both linear and rotary friction welding. As of 2020, KUKA has been operating in 44 countries and has built more than 1200 systems, including for subcontract facilities. However, there are more companies in the world with experience; for example, The Welding Institute (TWI) has more than 50 years of expertise and insight in process development. As of 2023, with the help of more and more companies, friction welding has become popular worldwide, for various materials, both in scientific studies and in industrial applications. Applications. Rotary friction welding is widely implemented across the manufacturing sector and has been used for numerous applications, including: Connections geometry. Rotary friction welding can join a wide range of part geometries, typically: tube to tube, tube to plate, tube to bar, tube to disk, bar to bar and bar to plate; in addition, a rotating ring can be used to connect long components. The geometry of the contact surface is not always flat; for example, it can be conical. Types of materials to be welded. Rotary friction welding can join various materials. Similar or dissimilar metallic materials, composites, superalloys and non-metallic materials, e.g. thermoplastic polymers, can be welded, and even the welding of wood has been investigated, although it has little practical use. Weldability tables for metallic alloys can be found on the Internet and in books. Sometimes an interlayer is used to connect incompatible materials. Rotary friction welding for plastics. Friction welding is also used to join thermoplastic components. Division due to drive motor. In "direct-drive friction welding" (also called continuous drive friction welding) the drive motor and chuck are connected. The drive motor continually drives the chuck during the heating stages. Usually, a clutch is used to disconnect the drive motor from the chuck, and a brake is then used to stop the chuck. In "inertia friction welding" the drive motor is disengaged, and the workpieces are forced together by a friction welding force. The kinetic energy stored in the rotating flywheel is dissipated as heat at the weld interface as the flywheel speed decreases. Before welding, one of the workpieces is attached to the rotary chuck along with a flywheel of a given weight. The piece is then spun up to a high rate of rotation to store the required energy in the flywheel. Once it is spinning at the proper speed, the motor is disengaged and the pieces are forced together under pressure. The force is kept on the pieces after the spinning stops to allow the weld to "set". Stages of process. Referring to the stages chart: RFW friction work on cylindrical rod workpieces. 
The friction work that creates the weld can be estimated for cylindrical workpieces as follows. Work: (1) formula_0 General formula for the moment of force "M": (2) formula_1 The force F is the frictional force "T" (F=T), so substituting into formula (2): (3) formula_2 The friction force "T" is the normal force "F" multiplied by the friction coefficient μ: (4) formula_3 So the moment of force "M" is: (5) formula_4 The angle alpha swept about the axis by each point of the rotating cylindrical workpiece is: (6) formula_5 So the friction work is: (7) formula_6 For a value of μ that varies over the friction time: (8) formula_7 This simple estimate requires verification, but it shows that the rotational speed and the force (or the pressure, since formula_8) enter the friction work (W) linearly: if the pressure is doubled, the friction work doubles, and if the rotational speed is doubled, the friction work also doubles; referring to conservation of energy, this can heat twice as much material to the same temperature, or heat the same material further. However, the pressure acts uniformly over the entire surface, whereas the rotation has a greater effect away from the axis of rotation, because the motion is rotary. With regard to thermal conductivity, the friction time affects the flash size: when a shorter time is used, the friction work is concentrated in a smaller area. For values of μ, n and F that all vary over the friction time: (9) formula_9 The calculation done in this way is therefore not reliable; in reality it is more complicated. An example considering a temperature-dependent coefficient of friction for a mild steel - Al6061 - alumina joint is described by authors from Malaysia in the paper "Evaluation of Properties and FEM Model of the Friction Welded Mild Steel-Al6061-Alumina"; based on this work an instructional simulation video in the Abaqus software has also been created, although it is not a step-by-step guide. In the paper it is possible to find the mesh type selected for the simulation and instructions such as the choice of the Johnson-Cook material model, the value of the dissipation coefficient and the friction welding conditions; the article also includes the physical formulas related to rotary friction welding described by the authors, such as the heat transfer and convection equations in rods and equations related to the deformation processes. The article includes information on the parameters of the authors' research, but it is neither a step-by-step nor a simple instruction, and it is not the only such position in the literature. The conclusion includes the statement: "Even though the FE model proposed in this study cannot replace a more accurate analysis, it does provide guidance in weld parameter development and enhances understanding of the friction welding process, thus reducing costly and time consuming experimental approaches." The coefficient of friction changes with temperature, and there are a number of other factors: internal friction (viscosity, e.g. dynamic viscosity according to Carreau's fluid law), the forging, the properties of the material, which vary during welding, and plastic deformation. 
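A minimal Python sketch of these energy estimates: it evaluates the simplified point-contact friction work of equations (1) to (7) and compares it with the drive-energy bound "E" = "P" x "t" and the flywheel energy formula that appear in the following paragraphs (equations (12) and (13)). Every numerical input is an assumed illustration value, not data from the article, and as noted above the real calculation is more complicated:

import math

def friction_work(r, F, mu, n, t):
    # Simplified point-contact estimate following equations (1)-(7): the
    # friction force mu*F is assumed to act at one effective radius r,
    # so the torque is M = mu*F*r and the swept angle is alpha = 2*pi*n*t.
    M = mu * F * r                      # eq. (5)
    alpha = 2.0 * math.pi * n * t       # eq. (6), n in revolutions per second
    return M * alpha                    # eq. (1)

def drive_energy(P, t):
    # Upper-bound estimate E = P * t from the drive power, cf. eq. (12) below.
    return P * t

def flywheel_energy(I, omega):
    # Stored kinetic energy E_k = 1/2 * I * omega^2, cf. eq. (13) below.
    return 0.5 * I * omega**2

# Assumed illustration values only (not taken from the article):
r, F, mu = 0.01, 20e3, 0.3          # effective radius (m), axial force (N), friction coefficient
n, t = 25.0, 3.0                    # rotational speed (rev/s, i.e. 1500 rpm), friction time (s)
P = 15e3                            # drive power, W
I = 3.75                            # flywheel moment of inertia, kg*m^2
omega = 2.0 * math.pi * n           # angular velocity, rad/s

print("friction work  :", round(friction_work(r, F, mu, n, t)), "J")
print("drive energy   :", round(drive_energy(P, t)), "J")
print("flywheel energy:", round(flywheel_energy(I, omega)), "J")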
Carreau's fluid law: a generalized Newtonian fluid in which the viscosity, formula_10, depends upon the shear rate, formula_11, by the following equation: (10) formula_12 where formula_13 is the viscosity at zero shear rate, formula_14 is the viscosity at infinite shear rate, formula_15 is the relaxation time and formula_16 is the power index. Modelling of the frictional heat generated within the RFW process can be realized as a function of the conducted frictional work and its dissipation coefficient; the incremental frictional work of a node 𝑖 on the contacting surface can be described as a function of its radial distance from the rotation centre, the current frictional shear stress, the rotational speed and the incremental time. The dissipation coefficient 𝛽FR is often set to 0.9, meaning that 90% of the frictional work is dissipated into heat. (11) 𝑑𝑞FR(𝑖) = 𝛽FR ∙ 𝑑𝑊FR(𝑖) = 𝛽FR ∙ 𝜏𝑅(𝑖) ∙ 𝜔 ∙ 𝑟𝑖 ∙ 𝑑𝑡 on the contacting surface of node 𝑖 The friction work can also be calculated, referring to conservation of energy, from the power used for welding and the friction time (it will not be greater than the friction time multiplied by the power of the welder's motor). This calculation looks the simplest. (12) "E" = "P" x "t" or, for non-constant power, formula_17 However, in this case energy can also be stored in the flywheel, if one is used, depending on the welder construction. The general flywheel energy formula is: (13) formula_18 where formula_19 is the kinetic energy, formula_20 is the moment of inertia of the flywheel and ω is its angular velocity. Sample calculations made without computer simulation also exist in the literature; for example, calculations related to power input and temperature distribution can be found in the 1974 report "Flywheel friction welding research" by K. K. Wang and Wen Lin of Cornell University, in which the welding process is calculated manually and even the weld structure is analysed. In general, however, the calculations can be complicated. Weld Zone Description. Heat and mechanical affected zones. The friction work is converted into a rise of temperature in the welding zone, and as a result the weld structure is changed. In a typical rotary friction welding process the temperature rise at the beginning of the process should be larger away from the axis of rotation, because points away from the axis have a greater linear velocity; during the weld the temperature spreads according to the thermal conductivity of the welded parts. "Technically the WCZ and the TMAZ are both "thermo-mechanically affected zones" but due to the vastly different microstructures they possess they are often considered separately. The WCZ experiences significant dynamic recrystallisation (DRX), the TMAZ does not. The material in HAZ is not deformed mechanically but is affected by the heat. The region from one TMAZ/HAZ boundary to the other is often referred to as the "TMAZ thickness" or the plastically affected zone (PAZ). For the remainder of this article this region will be referred to as the PAZ." Zones: Furthermore, in the literature the zones are sometimes subdivided according to the type of grain. Similar terms exist in other welding processes. During typical welding the outer region initially heats up more, due to its higher linear velocity. The heat then spreads, and material is pushed outwards, creating an external flash which can be cut off on the welding machine. Heat flow, heat flux in rods. One can hypothesize that during welding heat flows as in a cylindrical rod; this makes it possible to estimate the temperature at individual places and times from the theory of heat flow and heat flux in rods. The temperature can, for example, be read using thermocouples and compared with a computer simulation, as sketched below. 
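A rough illustration of this rod heat-flow hypothesis: a one-dimensional explicit finite-difference sketch in Python. The rod length, thermal diffusivity, initial hot-zone profile, boundary conditions and time span are all assumed illustration values rather than anything taken from the article:

import numpy as np

# 1D explicit finite-difference model of heat conduction in a rod after the
# friction phase.  All values are assumed illustration values, not welding data.
L = 0.10                 # rod length, m
n = 101
dx = L / (n - 1)
alpha = 1.2e-5           # thermal diffusivity of steel, m^2/s (approximate)
dt = 0.4 * dx**2 / (2 * alpha)   # time step within the explicit stability limit

T = np.full(n, 20.0)     # initial temperature, deg C
T[:10] = 1200.0          # assumed hot zone near the weld interface

for _ in range(20000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = T[1]          # insulated end at the weld side
    T[-1] = 20.0         # far end held at ambient temperature

print("Temperature 1 cm from the interface: %.1f C" % T[10])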
Weld measuring system. To provide knowledge about the process, monitoring systems are often used; they are implemented in several ways, which affects the accuracy and the list of measured parameters. The list of measured and calculated parameters can look like this: Temperature measuring systems. Examples of weld measurements. In the literature, measurements of the thermal field of the weld area made with thermocouples can be found, and non-contact thermographic methods are also used. However, it also depends on the specific case: for a very small weld and HAZ area there can be difficulties in measuring the temperature in real time, and it can instead be calculated afterwards, since heat continues to flow after the friction time. A sample source code for temperature measurement on an Arduino is given below; this is somewhat removed from the topic, but complete open friction welding codes are missing. Free open-source simulation software exists (see the list of finite element software), but there are no open welding codes or detailed instructions for this software. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;Example in Arduino IDE for temperature measurement with MAX31855. The DAQ thermocouple measurement system may have a chip from any manufacturer; the specification here is for the MAX31855K.
#include <SPI.h>
#include "Adafruit_MAX31855.h"

// Pins for software SPI to the MAX31855 thermocouple amplifier
int DO = 11;   // data out from the chip (MISO)
int CS = 12;   // chip select
int CLK = 13;  // clock

Adafruit_MAX31855 thermocouple(CLK, CS, DO);

void setup() {
  Serial.begin(9600);
  delay(200);
  thermocouple.begin();  // initialise the sensor (needed with recent versions of the Adafruit library)
}

void loop() {
  Serial.println(thermocouple.readCelsius());  // print the temperature in degrees Celsius
  delay(15);
}
Research, temperature, parameters in the rotary friction welding process. Quality requirements for welded joints depend on the application; e.g. in the space or aviation industry weld errors are not allowed. Science strives for good quality welds, and many researchers have been interested in welding for years, so there are many scientific articles describing the joining methods. For example, the Bannari Amman Institute of Technology published a literature review in 2019; in their paper it is possible to find a list of people interested in friction welding, although not everyone is mentioned in this list (for example, MTI's materials and low force friction welding are not covered), and the list may change over time. Weld tests are performed which give knowledge about the mechanical properties of the material in the welded zone; e.g. hardness tests and tensile tests are performed. Based on the tensile tests, stress-strain curves are created, which directly give the ultimate tensile strength, breaking strength, maximum elongation and reduction in area, and from these measurements the Young's modulus, Poisson's ratio, yield strength, and strain-hardening characteristics are determined. The articles often contain only data related to tensile tests, such as: The SI units are K, kg, N, m, s and then Pa, and knowledge of them is needed to introduce data and material properties into simulation programs without errors. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;Info about material units for simulation. The units depend on the unit system selected. 
For example, the SI unit system: &lt;gallery widths="320"&gt; File:Ansys Unit system.jpg &lt;/gallery&gt; Research articles also often contain information about: and the inclusion of process parameters is obvious, such as: It is also possible to find descriptions in the research literature of the mechanical properties, microstructure, corrosion and wear resistance, and even the cytotoxicity of the welded material, although cytotoxicity (the quality of being toxic to cells) is a subject only loosely related to welding. It can be added that some metals and metal vapours are toxic; in some cases, when welding at high temperatures, harmful metal vapours are released, and then protection is recommended, such as access to fresh air and extraction of these vapours to the outside. There are several methods to determine the quality of a weld; for example, the weld microstructure is examined by optical microscopy and scanning electron microscopy. The computer finite element method (FEM) is used to predict the shape of the flash and the interface, not only for rotary friction welding (RFW), but also for friction stir welding (FSW), linear friction welding (LFW) and FRIEX. In addition to the weld testing, the weld heat-affected zones are described. Knowledge of the maximum temperatures in the welding process makes it possible to define the area of structural changes. The process is analysed, e.g. temperature measurements are carried out for scientific purposes (research materials, journals) using contact thermocouples or sometimes non-contact thermography methods. For example, an ultra-fine grain structure of an alloy or metal, obtained by techniques such as severe plastic deformation or powder metallurgy, is desirable and should not be changed by the high temperature, so a large heat-affected zone is undesirable. Temperature may degrade material properties because dynamic recrystallization will occur; there may be changes in grain size and phase transformations in the structures of the welded materials, in steel between austenite, ferrite, pearlite, bainite, cementite and martensite. Various welding parameters are tested. Setting completely different parameters can produce a different weld; for example, the structural changes will not have the same width. It is possible to obtain a smaller heat-affected zone (HAZ) and plastically affected zone (PAZ), so that the width of the weld is smaller. The results are, for example, not the same in welds made for the European Space Agency with a high rotational speed of ω = 14000 rpm, or in another example from the Warsaw University of Technology with 12000 rpm and an atypically short friction time of only 60 milliseconds instead of standard parameters; in addition, in this case an ultra-fine grain alloy was welded, but the welded rod workpiece was only 6 mm in diameter, so it was small-rod friction welding. Other examples close to this, with short friction times of e.g. only 40 ms, also exist in the literature, but again for small diameters. Unfortunately, welding in a very short time carries the risk of welding imperfections such as weld discontinuities. Some welds are made only individually or only in research, such as welds created with specific parameters (welding time below 100 ms), with a special front surface (for example a conical contact surface), or with materials that are difficult to weld (tungsten to steel); these are not always serial production. 
The rotational speeds used in the research literature for small diameters can be higher than standard, e.g. even 25000 rpm. Unfortunately, the diameter of the workpiece can be a limitation on the use of high rotation speeds. The key points to understand are that: According to the Hall-Petch relation, a fine-grained welded metal material should have better strength; for achievements related to one technique for obtaining such material, high pressure torsion (HPT), Percy Williams Bridgman won the Nobel Prize in Physics in 1946. However, high pressure torsion yields only material of thin-film thickness. There is also research into the introduction of interlayers. Even though dissimilar material joining is often more difficult, the introduction of, for example, a nickel interlayer by an experimental electrodeposition technique to increase the connection quality has been investigated by the Indian Institute of Metals; however, in this case the nickel interlayer thickness was 70 formula_21m (micrometres) and only small rods of 12 mm diameter were welded. The nickel layer is deposited only on the faces of the welded parts. In addition, although this is not closely related to welding, the nickel layer may affect corrosion resistance. Some scientists describe materials research. The group of known materials is large and includes nickel-based superalloys such as Inconel, ultra-fine grain materials such as ultra-fine grain aluminium, and low-carbon steels, e.g. Ultra Low Carbon Bainitic Steel (ULCBS). Friction welding is used for connecting many materials, including superalloys, for example nickel-based Inconel; scientists describe connecting various materials, articles about this can be found on the Internet, and part of the research relates to joining superalloys or materials with improved properties. Nickel-based superalloys exhibit excellent high-temperature strength, high-temperature corrosion and oxidation resistance, and creep resistance. However, with reference to this research, it is worth adding that nickel is not the most common or cheapest material (see the prices of chemical elements). Parameters. The parameters will differ, as elements of different sizes can be welded; for example, components ranging from the smallest, with a diameter of 3 mm, to turbine components with a diameter in excess of 400 mm can be produced. By combining methods of connecting long elements, future research may perhaps study the friction welding of rails, for example for the high-speed railway industry, using preheated low force linear friction welding or a modified linear friction welding (LFW) method with a vibrating insert (analogous to the rotating insert in the FRIEX method), if such machines are developed; it is also worth adding that most attention is directed to the safety of travellers, and user safety should come first. Preliminary research involving similar welds and geometry has shown improved tensile strength and increased performance in fatigue tests. Controversies in research. Google Scholar finds 246 articles in response to the phrase "friction welding" in 8.2022; this is only part of all available research, and for some of it financial grants are given. 
However, for example, in 2016 the Institute of Electronic Materials Technology in Poland published an article in Polish about welding Al/Al2O3 composites; two years later, in 2018, the Warsaw University of Technology published an article in Polish about friction welding ultrafine-grained 316L steel. Although the materials were different, the process parameters suggest that the welding machine was the same, so in this case the Institute of Electronic Materials Technology published the data first; summarizing, in 2018 only the new material, which was steel, was welded and tested, the machine was not new, but a grant was obtained for this (843 920 PLN = ~US$177476). It was not written in the articles that the two institutes shared a machine and that studies with short friction times had already been carried out. Students were informed about the university's intention to create a new, innovative friction welder, but as of 08.2022 there is no information about this; there are new research articles, but the device is still the old one (information valid in 2019). Low Force Friction Welding. An improved modification of standard friction welding is Low Force Friction Welding, a hybrid technology developed by EWI and Manufacturing Technology Inc. (MTI), which "uses an external energy source to raise the interface temperature of the two parts being joined, thereby reducing the process forces required to make a solid-state weld compared to traditional friction welding". The process applies to both linear and rotary friction welding. Following the information from the Manufacturing Technology blog and website, the technology is promising. Low force friction advantages: For example, for materials with a high melting point, such as refractory metals like molybdenum, tantalum and tungsten, or when there is a difference in material properties. The manufacturer also lists some advantages which are not fully explained and are not true in every case: Moreover, in 2021 the number of scientific articles (for example on Google Scholar) about low force friction welding is smaller than for the standard method, in which an external energy source to raise the interface temperature is not used. Construction of the welding machine. Depending on the construction, a standard welding machine may include the following systems: Producers present their solutions, and welding machines can include: However, there is not just one manufacturer on the market and not just one welder machine model; in addition, the same materials and diameters are not always welded, and a good presentation, technology description or design may or may not determine the best solution. There also exist advertising presentations related to welding. Workpiece handles. The type of chuck depends on the technology used; its construction may sometimes be similar to that of a lathe or milling machine. Safety during friction welding. The description of the safety rules depends on the joining method and the situation: access to fresh air, electrical grounding, wearing protective clothing and protecting the eyes are required. Personal protective equipment is recommended, but in some cases it may be uncomfortable and sometimes unnecessary, so protection depends on the situation. 
Staff negligence can include: theft, for example of copper grounding, because it can be sold for scrap; neglect of medical examinations, or examinations performed carelessly even when paid for, because the concern is earning money rather than staff health; no cleaning, for example because the shift time is over; accidents on the way to work; alcohol, or an employee's bad day; spinal strains, e.g. from several hours of quality control of manufactured components in a forced body position, because for management workforce productivity, quality and earnings are more important than staff health; outsourcing, i.e. transferring responsibility to another company; and neglect by management, who sometimes only want to make money and look at production rather than at employees. Terms and definitions, name shortcuts. Welding vs joining - definitions depend on the author. Welding in the Cambridge English dictionary means "the activity of joining metal parts together" and in the Collins dictionary "the activity of uniting metal or plastic by softening with heat and hammering, or by fusion", which means that welding is related to connecting. Join or joining has a meaning similar to welding and can mean the same; in the English dictionary it means "to connect or fasten things together", but joining otherwise has many meanings, for example "If roads or rivers join, they meet at a particular point". Joining, as opposed to welding, is a general term, and there are several methods available for joining metals, including riveting, soldering, adhesives, brazing, coupling, fastening and press fitting. Welding is only one type of joining process. Solid-state weld - a joint made below the melting point. Welder - a welding machine, but also a person who welds metal. Weld - the place of connection where the materials are mixed. Weldability - a measure of the ease of making a weld without errors. Interlayer - an intermediate component or material. To quote ISO (the International Organization for Standardization; unfortunately the full ISO text is not freely and openly shared) - ISO 15620:2019(en) Welding: "axial force - force in axial direction between components to be welded, burn-off length - loss of length during the friction phase, burn-off rate - rate of shortening of the components during the friction welding process, component - single item before welding, component induced braking - reduction in rotational speed resulting from friction between the interfaces, external braking - braking located externally reducing the rotational speed, faying surface - surface of one component that is to be in contact with a surface of another component to form a joint, forge force - force applied normal to the faying surfaces at the time when relative movement between the components is ceasing or has ceased, forge burn-off length - amount by which the overall length of the components is reduced during the application of the forge force, forge phase - interval time in the friction welding cycle between the start and finish of application of the forge force, forge pressure - pressure (force per unit area) on the faying surfaces resulting from the axial forge force, forge time - time for which the forge force is applied to the components, friction force - force applied perpendicularly to the faying surfaces during the time that there is relative movement between the components, friction phase - interval time in the friction welding cycle in which the heat necessary for making a weld is generated by relative motion and the friction forces between the components i.e. 
from contact of components to the start of deceleration, friction pressure - pressure (force per unit area) on the faying surfaces resulting from the axial friction force, friction time - time during which relative movement between the components takes place at rotational speed and under application of the friction forces, interface - contact area developed between the faying surfaces after completion of the welding operation, rotational speed - number of revolutions per minute of rotating component, stick-out - distance a component sticks out from the fixture, or chuck in the direction of the mating component, deceleration phase - interval in the friction welding cycle in which the relative motion of the components is decelerated to zero, deceleration time - time required by the moving component to decelerate from friction speed to zero speed, total length loss (upset) - loss of length that occurs as a result of friction welding, i.e. the sum of the burn-off length and the forge burn-off length, total weld time - time elapsed between component contact and end of forging phase, welding cycle - succession of operations carried out by the machine to make a weldment and return to the initial position, excluding component - handling operations, weldment - two or more components joined by welding." And more than that:
[ { "math_id": 0, "text": "W = M \\times \\alpha" }, { "math_id": 1, "text": "M = r \\times F" }, { "math_id": 2, "text": "M = r \\times T" }, { "math_id": 3, "text": "T = \\mu \\times F " }, { "math_id": 4, "text": "M = r \\times \\mu \\times F" }, { "math_id": 5, "text": "\\alpha = 2 \\pi \\times n \\times t" }, { "math_id": 6, "text": "W = n\\times \\pi \\times r\\times F\\times \\mu\\times t" }, { "math_id": 7, "text": "W = n\\times \\pi \\times r\\times F\\times \\int_{0}^{t} f(\\mu) \\,dt" }, { "math_id": 8, "text": "F = p (\\text{pressure}) * A (\\text{area})" }, { "math_id": 9, "text": "W = \\pi \\times r\\times \\int_{0}^{t} f(\\mu) f(n) f(F) \\,dt" }, { "math_id": 10, "text": "\n\\mu_{\\operatorname{eff}}" }, { "math_id": 11, "text": "\\dot \\gamma" }, { "math_id": 12, "text": "\\mu_{\\operatorname{eff}}(\\dot \\gamma) = \\mu_{\\operatorname{\\inf}} + (\\mu_0 - \\mu_{\\operatorname{\\inf}}) \\left(1+\\left(\\lambda \\dot \\gamma\\right) ^2 \\right) ^ {\\frac {n-1} {2}}" }, { "math_id": 13, "text": "\\mu_0" }, { "math_id": 14, "text": "\\mu_{\\operatorname{\\inf}}" }, { "math_id": 15, "text": "\\lambda" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "E = \\int_{0}^{t} f(P) \\,dt" }, { "math_id": 18, "text": "E_k = \\frac{1}{2} I \\omega^2" }, { "math_id": 19, "text": "E_k" }, { "math_id": 20, "text": " I " }, { "math_id": 21, "text": "\\mu" } ]
https://en.wikipedia.org/wiki?curid=66141540
66143322
Shoreline development index
Ratio indicating irregularity of a lake shoreline The shoreline development index of a lake is the ratio of the length of the lake's shoreline to the circumference of a circle with the same area as the lake. It is given in equation form as formula_0, where formula_1 is shoreline development, formula_2 is the length of the lake's shoreline, and formula_3 is the lake's area. The length and area should be measured in consistent units (e.g., m and m2, or km and km2). The shoreline development index is formula_4 for perfectly circular lakes and formula_5 for lakes with complex shapes. Patterns. Shoreline development correlates strongly with lake area, although this partly reflects the scale dependence of the index (see Limitations). To some extent, the shoreline development index reflects the mode of origin of lakes. For example, volcanic crater lakes often have shoreline development index values near 1, whereas fluvial oxbow lakes often have very high shoreline development index values. Application to lakes with islands. The index can also include the length of island shoreline, modifying the formula to formula_6, where formula_7 is the combined length of the shorelines of the lake's islands. Limitations. Lake shorelines are fractal. This means that measurements of shore length are longer when measured on high-resolution maps compared to low-resolution maps. Therefore, a lake's shoreline development index will be greater when calculated from shorelines measured on high-resolution maps than from low-resolution maps. Consequently, shoreline development index values cannot be compared for lakes whose shorelines were measured from maps with different scales. Additionally, the shoreline development index cannot be compared for lakes with different surface areas, because large lakes automatically have higher values than smaller lakes, even if they have the same planform shape. Hence the shoreline development index can only be used to compare lakes with the same surface area that are also mapped at the same scale. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "D_L = \\frac{L}{2 \\sqrt{\\pi A}}" }, { "math_id": 1, "text": "D_L" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "D_L = 1" }, { "math_id": 5, "text": "D_L > 1" }, { "math_id": 6, "text": "D_{L+L_i} = \\frac{L+L_i}{2 \\sqrt{\\pi A}}" }, { "math_id": 7, "text": "L_i" } ]
https://en.wikipedia.org/wiki?curid=66143322
66147490
Bound state in the continuum
Special state of wave and quantum systems in physics A bound state in the continuum (BIC) is an eigenstate of some particular quantum system with the following properties: BICs are observed in electronic, photonic and acoustic systems, and are a general phenomenon exhibited by systems in which wave physics applies. Bound states in the forbidden zone, for which there are no finite solutions at infinity, are widely known (atoms, quantum dots, defects in semiconductors). For solutions lying in a continuum and coupled to it, resonant states are known; these decay (lose energy) over time. They can be excited, for example, by an incident wave with the same energy. Bound states in the continuum, by contrast, have real energy eigenvalues and therefore do not interact with the states of the continuous spectrum and cannot decay. Classification of BICs by mechanism of occurrence. Source: BICs arising when solving the inverse problem. Wigner-von Neumann's BIC (Potential engineering). The wave function of one of the continuum states is modified so as to be normalizable, and the corresponding potential is then selected for it. Hopping rate engineering. In the tight binding approximation, the hopping rates are modified so that the state becomes localized. Boundary shape engineering. Sources for BICs of different types, e.g. the Fabry-Perot type, are replaced by scatterers so as to create a BIC of the same type. BICs arising due to parameter tuning. Fabry-Perot BICs. For resonant structures, the reflection coefficient near resonance can reach unity. Two such structures can be arranged in such a way that they radiate in antiphase and compensate each other. Friedrich-Wintgen BICs. Two modes of the same symmetry of one and the same structure approach each other as the parameters of the structure are changed, and at some point an anti-crossing occurs. In this case a BIC is formed on one of the branches, since the modes compensate each other, being in antiphase and radiating into the same radiation channel. Single-resonance parametric BICs. These occur when a single mode can be represented as a sum of contributions, each of which varies with the structure parameters. At some point, destructive interference of all contributions occurs. Symmetry-protected BICs. These arise when the symmetry of the eigenstate differs from any of the possible symmetries of propagating modes in the continuum. Separable BICs. These arise when the eigenvalue problem is solved by the method of separation of variables, and the wave function is represented, for example, as formula_1, where both factors correspond to localized states, with the total energy lying in the continuum. Wigner-Von Neumann BICs. Bound states in the continuum were first predicted in 1929 by Eugene Wigner and John von Neumann. Two potentials were described in which BICs appear for two different reasons. In this work, a spherically symmetric wave function is first chosen so as to be quadratically integrable over the whole space. Then a potential is chosen such that this wave function corresponds to zero energy. The potential is spherically symmetric, so the wave equation is written as follows: formula_2 the angular derivatives disappear, since we restrict ourselves to spherically symmetric wave functions: formula_3 For formula_4 to be the eigenvalue for the spherically symmetric wave function formula_5, the potential must be formula_6. We obtain the specific values formula_7 and formula_8 for which the BIC will be observed. First case. Let us consider the function formula_9. 
Since the integral formula_10 must be finite, considering the behavior as formula_11 we get formula_12, and considering the behavior as formula_13 we get formula_14. The regularity of formula_15 for formula_16 requires formula_17. Finally, we get formula_18. Taking formula_19, the potential is equal to (discarding the irrelevant multiplier formula_20): formula_21 The eigenfunction and the potential curve are shown in the figure. It would seem that the electron should simply roll off the potential and the energy should belong to the continuous spectrum, but there is a stationary orbit with formula_4. The paper gives the following interpretation: this behavior can be understood from an analogy with classical mechanics (the argument is due to Leo Szilard). The motion of a material point in the potential formula_22 is described by the following equation: formula_23 formula_24 It is easy to see that as formula_25, formula_26, so the asymptotic behavior is formula_27 formula_28 that is, in a finite time formula_29 the point escapes to infinity. The stationary solution formula_30 means that the point returns again from infinity, as if it were reflected there, and begins to oscillate. The fact that formula_30 tends to zero as formula_13 follows from the fact that the point rolls down a huge potential slope, acquiring an enormous speed and therefore spending little time there. Since the whole oscillatory process (from formula_31 to infinity and back) is periodic, it is natural that this quantum mechanical problem has a stationary solution. Second case. Let us turn to the second example, which can no longer be interpreted by such considerations. First we take the function formula_32, for which formula_33. These are diverging spherical waves; since the energy formula_4 is greater than the potential formula_33, the classical kinetic energy remains positive. The wave function belongs to the continuous spectrum, and the integral formula_34 diverges. Let us try to modify the wave function so that the quadratic integral converges while the potential stays near -1. Consider the following ansatz: formula_35 If the function formula_36 is continuous and its asymptotic behavior as formula_37 is formula_38, then the integral is finite. The potential is then equal to (with the arithmetical error of the original article corrected): formula_39 In order for the potential to remain near -1 and to tend to -1 as formula_25, the functions formula_40 must be small and must tend to zero as formula_25. In the first case, formula_41 should also vanish where formula_42, namely at formula_43, that is, where formula_44. This is the case when formula_45, or for any other function of this expression. Let us take formula_46, where formula_47 is arbitrary (here formula_36 tends to formula_38 as formula_25). Then formula_48 The expression for the potential is cumbersome, but the graphs show that as formula_25 the potential tends to -1. Furthermore, it turns out that for any formula_49 one can choose an "A" such that the potential lies between formula_50 and formula_51. We can see that the potential oscillates with period formula_0 and the wave function oscillates with period formula_52. It turns out that all the waves reflected from the "humps" of such a potential are in phase, and the function is localized at the center, being reflected by the potential through a mechanism similar to reflection from a Bragg mirror. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\pi" }, { "math_id": 1, "text": "\\psi=\\psi_1(x)\\psi_2(y)" }, { "math_id": 2, "text": "\n-\\frac{h^{2}}{8 \\pi^{2} m} \\Delta \\psi+(V(r)-E) \\psi=0,\n" }, { "math_id": 3, "text": "\n\\Delta=\\frac{\\partial^{2}}{\\partial x^{2}}+\\frac{\\partial^{2}}{\\partial y^{2}}+\\frac{\\partial^{2}}{\\partial z^{2}}=\\frac{\\partial^{2}}{\\partial r^{2}}+\\frac{2}{r} \\frac{\\partial}{\\partial r},\n" }, { "math_id": 4, "text": "E=0" }, { "math_id": 5, "text": "\\psi=\\psi(r)" }, { "math_id": 6, "text": "V=\\frac{h^{2}}{8 \\pi^{2} m}\\left(\\frac{\\psi^{\\prime \\prime}}{\\psi}+\\frac{2 \\psi^{\\prime}}{r \\psi}\\right)" }, { "math_id": 7, "text": "\\psi" }, { "math_id": 8, "text": "V" }, { "math_id": 9, "text": "\\psi=r^\\alpha{\\sin r^\\beta}" }, { "math_id": 10, "text": "\\int_0^\\infty 4\\pi r^2|\\psi(r)|^2\\text{d}r=\\int_0^\\infty 4\\pi r^{2\\alpha+2}{\\sin^2 r^\\beta}\\text{d}r" }, { "math_id": 11, "text": "r=0" }, { "math_id": 12, "text": "2\\alpha+\\beta+2>-1" }, { "math_id": 13, "text": "r=\\infty" }, { "math_id": 14, "text": "2\\alpha+2<-1" }, { "math_id": 15, "text": "V(r)" }, { "math_id": 16, "text": "r\\neq0" }, { "math_id": 17, "text": "2\\alpha+\\beta+1=0" }, { "math_id": 18, "text": "\\alpha=-2, \\ \\beta=3" }, { "math_id": 19, "text": "\\psi(r)=\\frac{\\sin r^3}{r^2}" }, { "math_id": 20, "text": "{h^{2}}/{8 \\pi^{2} m}" }, { "math_id": 21, "text": "\nV(r)=\\frac{2}{r^2}-9r^4\n" }, { "math_id": 22, "text": "V=\\frac{2}{r^2}-9r^4" }, { "math_id": 23, "text": "\n{\\frac{m}{2}\\left(\\frac{\\text{d} r}{\\text{d} t}\\right)^{2}+V(r)=\\mathrm{Const}}\n" }, { "math_id": 24, "text": "\n{\\frac{\\text{d} r}{\\text{d} t}=\\sqrt{\\frac{2}{m} \\mathrm{Const}-\\frac{4}{m} \\frac{{1}}{r^{2}}+\\frac{18}{m} r^{4}}}\n" }, { "math_id": 25, "text": "r\\rightarrow \\infty" }, { "math_id": 26, "text": "\\frac{\\text{d}r}{\\text{d}t}\\rightarrow \\infty" }, { "math_id": 27, "text": "\n\\frac{d r}{d t}=\\frac{3 \\sqrt{2}}{\\sqrt{m}} r^{2}; \\ \\ \\ \\ \\ \\ \\ \\frac{1}{r}=\\frac{3 \\sqrt{2}}{\\sqrt{m}}\\left(t_{0}-t\\right) , \\ \\ \n" }, { "math_id": 28, "text": "\nr=\\frac{\\sqrt{m}}{3 \\sqrt{2}\\left(t_{0}-t\\right)}\n" }, { "math_id": 29, "text": "t=t_0" }, { "math_id": 30, "text": "\\psi(r)" }, { "math_id": 31, "text": "r_{min}" }, { "math_id": 32, "text": "\\psi=\\frac{\\sin r}{r}" }, { "math_id": 33, "text": "V=-1" }, { "math_id": 34, "text": "\\int_0^\\infty 4\\pi r^2|\\psi(r)|^2\\text{d}r=4\\pi \\int_0^\\infty \\sin^2 r\\text{d}r" }, { "math_id": 35, "text": "\\psi=\\frac{\\sin r}{r}f(r)" }, { "math_id": 36, "text": "f(r)" }, { "math_id": 37, "text": "r \\rightarrow \\infty" }, { "math_id": 38, "text": "r^\\alpha, \\ \\ \\alpha<-1/2" }, { "math_id": 39, "text": "\nV=-1+2 \\operatorname{ctg} r \\frac{f^{\\prime}(r)}{f(r)}+\\frac{f^{\\prime \\prime}(r)}{f(r)}.\n" }, { "math_id": 40, "text": "\\operatorname{ctg} r \\frac{f^{\\prime}(r)}{f(r)}, \\ \\frac{f^{\\prime \\prime}(r)}{f(r)}" }, { "math_id": 41, "text": "\\frac{f^{\\prime}(r)}{f(r)}" }, { "math_id": 42, "text": "\\operatorname{ctg} r= \\infty" }, { "math_id": 43, "text": "r=0,\\pi, \\ 2\\pi, \\ 2\\pi, \\dots" }, { "math_id": 44, "text": "\\sin r=0" }, { "math_id": 45, "text": "f(r)=\\int_0^r \\sin^2r \\text{d}r=\\frac{r}2-\\frac14\\sin2r" }, { "math_id": 46, "text": "f(r)=[A^2+(\\frac{r}2-\\frac14\\sin2r)^2]^{-1}" }, { "math_id": 47, "text": "A" }, { "math_id": 48, "text": "\n\\psi=\\frac{\\sin r}{r(A^2+(2r-\\sin2r)^2)}.\n" }, { "math_id": 49, "text": "\\varepsilon>0" }, { "math_id": 50, "text": 
"-1-\\varepsilon" }, { "math_id": 51, "text": "-1+\\varepsilon" }, { "math_id": 52, "text": "2\\pi" } ]
https://en.wikipedia.org/wiki?curid=66147490
66150352
Assignment valuation
In economics, assignment valuation is a kind of utility function on sets of items. It was introduced by Shapley and further studied by Lehmann, Lehmann and Nisan, who use the term OXS valuation (not to be confused with XOS valuation). Fair item allocation in this setting was studied by Benabbou, Chakraborty, Elkind, Zick and Igarashi. Assignment valuations correspond to preferences of groups. In each group, there are several individuals; each individual attributes a certain numeric value to each item. The assignment valuation of the group for a set of items "S" is the value of the maximum-weight matching of the items in "S" to the individuals in the group. The assignment valuations are a subset of the submodular valuations. Example. Suppose there are three items and two agents who value the items as follows: Then the assignment-valuation "v" corresponding to the group {Alice,George} assigns the following values:
[ { "math_id": 0, "text": "v(\\{x\\}) = 6" }, { "math_id": 1, "text": "v(\\{y\\}) = 3" }, { "math_id": 2, "text": "v(\\{z\\}) = 4.5" }, { "math_id": 3, "text": "v(\\{x,y\\}) = 9" }, { "math_id": 4, "text": "v(\\{x,z\\}) = 9.5" }, { "math_id": 5, "text": "v(\\{y,z\\}) = 7.5" }, { "math_id": 6, "text": "v(\\{x, y,z\\}) = 9.5" } ]
https://en.wikipedia.org/wiki?curid=66150352
66153716
Abelian 2-group
In mathematics, an Abelian 2-group is a higher dimensional analogue of an Abelian group, in the sense of higher algebra. They were originally introduced by Alexander Grothendieck while studying abstract structures surrounding Abelian varieties and Picard groups. More concretely, they are given by groupoids formula_0 equipped with a bifunctor formula_1 which acts formally like the addition in an Abelian group. Namely, the bifunctor formula_2 has a notion of commutativity, associativity, and an identity structure. Although this seems like a rather lofty and abstract structure, there are several very concrete examples of Abelian 2-groups. In fact, some of them provide prototypes for more complex examples of higher algebraic structures, such as Abelian "n"-groups. Definition. An Abelian 2-group is a groupoid formula_0 (that is, a category in which every morphism is an isomorphism) with a bifunctor formula_3 and natural transformations formula_4 which satisfy a host of axioms ensuring these transformations behave similarly to commutativity (formula_5) and associativity formula_6 for an Abelian group. One of the motivating examples of such a category comes from the Picard category of line bundles on a scheme (see below). Examples. Picard category. For a scheme or variety formula_7, there is an Abelian 2-group formula_8 whose objects are line bundles formula_9 and whose morphisms are given by isomorphisms of line bundles. Notice that for a given line bundle formula_9 we have formula_10, since the only automorphisms of a line bundle are given by non-vanishing functions on formula_7. The additive structure formula_2 is given by the tensor product formula_11 on the line bundles. This makes it clearer why there should be natural transformations instead of equalities of functors. For example, we only have an isomorphism of line bundles formula_12 but not a direct equality. This isomorphism is independent of the line bundles chosen and is functorial, hence it gives the natural transformation formula_13 switching the components. The associativity similarly follows from the associativity of tensor products of line bundles. Two term chain complexes. Another source of Picard categories is two-term chain complexes of Abelian groups formula_14, which have a canonical groupoid structure associated to them. We can take the set of objects to be the abelian group formula_15 and the set of arrows to be the set formula_16. Then, the source morphism formula_17 of an arrow formula_18 is the projection map formula_19 and the target morphism formula_20 is formula_21 Notice this definition implies the automorphism group of any object formula_22 is formula_23. Notice that if we repeat this construction for sheaves of abelian groups over a site formula_7 (or topological space), we get a sheaf of Abelian 2-groups. One might conjecture that this construction yields all such categories, but this is not the case. In fact, this construction must be generalized to spectra to give a precise generalization. pg 88 Example of Abelian 2-group in algebraic geometry. One example is the cotangent complex for a local complete intersection scheme formula_7, which is given by the two-term complex formula_24 for an embedding formula_25. There is a direct categorical interpretation of this Abelian 2-group from deformation theory using the Exalcomm category. Note that in addition to using a 2-term chain complex, we could instead consider a chain complex formula_26 and construct an Abelian "n"-group (or infinity-group). Abelian 2-group of morphisms. 
For a pair of Abelian 2-groups formula_27 there is an associated Abelian 2-group of morphisms formula_28 whose objects are given by functors between these two categories, and whose arrows are given by natural transformations. Moreover, the bifunctor formula_29 on formula_30 induces a bifunctor structure on this groupoid, giving it an Abelian 2-group structure. Classifying abelian 2-groups. In order to classify abelian 2-groups, strict Picard categories coming from two-term chain complexes are not enough. One approach is in stable homotopy theory, using spectra which have only two non-trivial homotopy groups. While studying an arbitrary Picard category, it becomes clear that there is additional data used to classify the structure of the category; it is given by the Postnikov invariant. Postnikov invariant. For an Abelian 2-group formula_0 and a fixed object formula_31, the isomorphism of the functors formula_32 and formula_33 given by the commutativity arrow formula_34 gives an element of the automorphism group formula_35 which squares to formula_36, hence is contained in some formula_37. Sometimes this is suggestively written as formula_38. We can call this element formula_39, and this invariant induces a morphism from the isomorphism classes of objects in formula_0, denoted formula_40, to formula_35; i.e., it gives a morphism formula_41 which corresponds to the Postnikov invariant. In particular, every Picard category given as a two-term chain complex has formula_42, because such categories correspond under the Dold-Kan correspondence to simplicial abelian groups with topological realizations given by the product of Eilenberg–MacLane spaces formula_43 For example, if we have a Picard category with formula_44 and formula_45, there is no chain complex of Abelian groups giving these homology groups, since formula_37 can only be obtained via the projection formula_46 Instead, this Picard category can be understood as a categorical realization of the truncated spectrum formula_47 of the sphere spectrum, whose only two non-trivial homotopy groups are in degrees formula_48 and formula_36.
[ { "math_id": 0, "text": "\\mathbb{A}" }, { "math_id": 1, "text": "+:\\mathbb{A}\\times\\mathbb{A} \\to \\mathbb{A}" }, { "math_id": 2, "text": "+" }, { "math_id": 3, "text": "+: \\mathbb{A}\\times\\mathbb{A} \\to \\mathbb{A}" }, { "math_id": 4, "text": "\\begin{align}\n\\tau: & X+Y \\Rightarrow Y + X \\\\\n\\sigma: & (X+Y)+Z \\Rightarrow X+(Y+Z)\n\\end{align}" }, { "math_id": 5, "text": "\\tau" }, { "math_id": 6, "text": "(\\sigma)" }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "\\operatorname{\\textbf{Pic}} X" }, { "math_id": 9, "text": "\\mathcal{L}" }, { "math_id": 10, "text": "\\text{End}(\\mathcal{L}) = \\text{Aut}(\\mathcal{L}) \\cong \\mathcal{O}_X^*" }, { "math_id": 11, "text": "\\otimes" }, { "math_id": 12, "text": "\\mathcal{L}\\otimes\\mathcal{L}' \\cong \\mathcal{L}'\\otimes\\mathcal{L}" }, { "math_id": 13, "text": "\\tau: (-\\otimes -) \\to (-\\otimes -)" }, { "math_id": 14, "text": "A^{-1} \\xrightarrow{d} A^0" }, { "math_id": 15, "text": "A^0" }, { "math_id": 16, "text": "A^{-1}\\oplus A^0" }, { "math_id": 17, "text": "s" }, { "math_id": 18, "text": "(a_{-1},a_0)" }, { "math_id": 19, "text": "s(a_{-1} + a_0) = a_0" }, { "math_id": 20, "text": "t" }, { "math_id": 21, "text": "t(a_{-1}+a_0) = d(a_{-1}) + a_0" }, { "math_id": 22, "text": "a_0" }, { "math_id": 23, "text": "\\operatorname{Ker} d" }, { "math_id": 24, "text": "\\mathbf{L}_{X}^\\bullet = i^*I/I^2 \\to i^*\\Omega_Y " }, { "math_id": 25, "text": "i:X \\to Y" }, { "math_id": 26, "text": "A^\\bullet \\in Ch^{\\leq 0}(\\text{Ab})" }, { "math_id": 27, "text": "\\mathbb{A},\\mathbb{A}'" }, { "math_id": 28, "text": "\\text{Hom}(\\mathbb{A},\\mathbb{A}')" }, { "math_id": 29, "text": "+'" }, { "math_id": 30, "text": "\\mathbb{A}'" }, { "math_id": 31, "text": "x \\in \\text{Ob}(\\mathbb{A})" }, { "math_id": 32, "text": "x+(-)" }, { "math_id": 33, "text": "(-)+x" }, { "math_id": 34, "text": "\\tau : x + x \\Rightarrow x+x" }, { "math_id": 35, "text": "\\text{Aut}_\\mathbb{A}(x)" }, { "math_id": 36, "text": "1" }, { "math_id": 37, "text": "\\mathbb{Z}/2" }, { "math_id": 38, "text": "\\pi_1(\\mathbb{A})" }, { "math_id": 39, "text": "\\varepsilon" }, { "math_id": 40, "text": "\\pi_0(\\mathbb{A})" }, { "math_id": 41, "text": "\\varepsilon: \\pi_0(\\mathbb{A})\\otimes\\mathbb{Z}/2 \\to \\pi_1(\\mathbb{A}) = \\text{Aut}_{\\mathbb {A} }(x)" }, { "math_id": 42, "text": "\\varepsilon = 0" }, { "math_id": 43, "text": "K(H^{-1}(A^\\bullet), 1)\\times K(H^0(A^\\bullet),0)" }, { "math_id": 44, "text": "\\pi_1(\\mathbb{A}) = \\mathbb{Z}/2" }, { "math_id": 45, "text": "\\pi_0(\\mathbb{A}) = \\mathbb{Z}" }, { "math_id": 46, "text": "\\mathbb{Z}\\xrightarrow{\\cdot 2} \\mathbb{Z} \\to \\mathbb{Z}/2" }, { "math_id": 47, "text": "\\tau_{\\leq 1} \\mathbb{S}" }, { "math_id": 48, "text": "0" } ]
https://en.wikipedia.org/wiki?curid=66153716
661661
Evolution of sexual reproduction
Unsolved problem in biology: What selection pressures led to the evolution and maintenance of sexual reproduction? Evolution of sexual reproduction describes how sexually reproducing animals, plants, fungi and protists could have evolved from a common ancestor that was a single-celled eukaryotic species. Sexual reproduction is widespread in eukaryotes, though a few eukaryotic species have secondarily lost the ability to reproduce sexually, such as Bdelloidea, and some plants and animals routinely reproduce asexually (by apomixis and parthenogenesis) without entirely having lost sex. The evolution of sexual reproduction contains two related yet distinct themes: its "origin" and its "maintenance." Bacteria and Archaea (prokaryotes) have processes that can transfer DNA from one cell to another (conjugation, transformation, and transduction), but it is unclear if these processes are evolutionarily related to sexual reproduction in eukaryotes. In eukaryotes, true sexual reproduction by meiosis and cell fusion is thought to have arisen in the last eukaryotic common ancestor, possibly via several processes of varying success, and then to have persisted. Since hypotheses for the origin of sex are difficult to verify experimentally (outside of evolutionary computation), most current work has focused on the persistence of sexual reproduction over evolutionary time. The maintenance of sexual reproduction (specifically, of its dioecious form) by natural selection in a highly competitive world has long been one of the major mysteries of biology, since both other known mechanisms of reproduction – asexual reproduction and hermaphroditism – possess apparent advantages over it. Asexual reproduction can proceed by budding, fission, or spore formation and does not involve the union of gametes, which accordingly results in a much faster rate of reproduction compared to sexual reproduction, where 50% of offspring are males and unable to bear offspring themselves. In hermaphroditic reproduction, each of the two parent organisms required for the formation of a zygote can provide either the male or the female gamete, which leads to advantages in both size and genetic variance of a population. Sexual reproduction therefore must offer significant fitness advantages because, despite the two-fold cost of sex (see below), it dominates among multicellular forms of life, implying that the fitness of offspring produced by sexual processes outweighs the costs. Sexual reproduction derives from recombination, where parent genotypes are reorganized and shared with the offspring. This stands in contrast to single-parent asexual replication, where the offspring is always identical to the parents (barring mutation). Recombination supplies two fault-tolerance mechanisms at the molecular level: "recombinational DNA repair" (promoted during meiosis because homologous chromosomes pair at that time) and "complementation" (also known as heterosis, hybrid vigor or masking of mutations). Historical perspective. Reproduction, including modes of sexual reproduction, features in the writings of Aristotle; modern philosophical-scientific thinking on the problem dates from at least Erasmus Darwin (1731–1802) in the 18th century. August Weismann picked up the thread in 1885, arguing that sex serves to generate genetic variation, as detailed in the majority of the explanations below. 
On the other hand, Charles Darwin (1809–1882) concluded that the effect of hybrid vigor (complementation) "is amply sufficient to account for the … genesis of the two sexes". This is consistent with the repair and complementation hypothesis, described below. Since the emergence of the modern evolutionary synthesis in the 20th century, numerous biologists – including W. D. Hamilton, Alexey Kondrashov, George C. Williams, Harris Bernstein, Carol Bernstein, Michael M. Cox, Frederic A. Hopf and Richard E. Michod – have suggested competing explanations for how a vast array of different living species maintain sexual reproduction. Advantages of sex and sexual reproduction. The concept of sex includes two fundamental phenomena: the sexual process (fusion of genetic information of two individuals) and sexual differentiation (separation of this information into two parts). Depending on the presence or absence of these phenomena, all of the existing forms of reproduction can be classified as asexual, hermaphrodite or dioecious. The sexual process and sexual differentiation are different phenomena, and, in essence, are diametrically opposed. The first creates (increases) diversity of genotypes, and the second decreases it by half. The reproductive advantage of asexual forms lies in the quantity of progeny, while the advantage of hermaphrodite forms lies in maximal diversity. Transition from the hermaphrodite to the dioecious state leads to a loss of at least half of the diversity. So the primary challenge is to explain the advantages given by sexual differentiation, i.e. the benefits of two separate sexes compared to hermaphrodites, rather than to explain the benefits of sexual forms (hermaphrodite + dioecious) over asexual ones. Since sexual reproduction is not associated with any clear reproductive advantage over asexual reproduction, it is understood that there must be some other important advantage in evolution. Advantages due to genetic variation, DNA repair and genetic complementation. For the advantage due to genetic variation, there are three possible ways this might arise. First, sexual reproduction can combine the effects of two beneficial mutations in the same individual (i.e. sex aids in the spread of advantageous traits) without the mutations having to have occurred one after another in a single line of descendants. Second, sex acts to bring together currently deleterious mutations to create severely unfit individuals that are then eliminated from the population (i.e. sex aids in the removal of deleterious genes). However, in organisms containing only one set of chromosomes, deleterious mutations would be eliminated immediately, and therefore removal of harmful mutations is an unlikely benefit for sexual reproduction. Lastly, sex creates new gene combinations that may be more fit than previously existing ones, or may simply lead to reduced competition among relatives. For the advantage due to DNA repair, there is an immediate large benefit of removing DNA damage by recombinational DNA repair during meiosis (assuming the initial mutation rate is higher than optimal), since this removal allows greater survival of progeny with undamaged DNA. The advantage of complementation to each sexual partner is avoidance of the bad effects of their deleterious recessive genes in progeny by the masking effect of normal dominant genes contributed by the other partner. The classes of hypotheses based on the creation of variation are further broken down below. 
Any number of these hypotheses may be true in any given species (they are not mutually exclusive), and different hypotheses may apply in different species. However, a research framework based on creation of variation has yet to be found that allows one to determine whether the reason for sex is universal for all sexual species, and, if not, which mechanisms are acting in each species. On the other hand, the maintenance of sex based on DNA repair and complementation applies widely to all sexual species. Protection from major genetic mutation. In contrast to the view that sex promotes genetic variation, Heng, and Gorelick and Heng, reviewed evidence that sex actually acts as a constraint on genetic variation. They consider that sex acts as a coarse filter, weeding out major genetic changes, such as chromosomal rearrangements, but permitting minor variation, such as changes at the nucleotide or gene level (which are often neutral), to pass through the sexual sieve. Novel genotypes. Sex could be a method by which novel genotypes are created. Because sex combines genes from two individuals, sexually reproducing populations can more easily combine advantageous genes than can asexual populations. If, in a sexual population, two different advantageous alleles arise at different loci on a chromosome in different members of the population, a chromosome containing the two advantageous alleles can be produced within a few generations by recombination. However, should the same two alleles arise in different members of an asexual population, the only way for a single chromosome to acquire the other allele is through an independent occurrence of the same mutation, which would take much longer. Several studies have addressed counterarguments, and the question of whether this model is sufficiently robust to explain the predominance of sexual versus asexual reproduction remains. Ronald Fisher suggested that sex might facilitate the spread of advantageous genes by allowing them to better escape their genetic surroundings, if they should arise on a chromosome with deleterious genes. Supporters of these theories respond to the balance argument that the individuals produced by sexual and asexual reproduction may differ in other respects too – which may influence the persistence of sexuality. For example, in the heterogamous water fleas of the order "Cladocera", sexual offspring form eggs which are better able to survive the winter versus those the fleas produce asexually. Increased resistance to parasites. One of the most widely discussed theories to explain the persistence of sex is that it is maintained to assist sexual individuals in resisting parasites, also known as the Red Queen Hypothesis. When an environment changes, previously neutral or deleterious alleles can become favourable. If the environment changes sufficiently rapidly (i.e. between generations), these environmental changes can make sex advantageous for the individual. Such rapid changes in environment are caused by the co-evolution between hosts and parasites. Imagine, for example, that there is one gene in parasites with two alleles "p" and "P" conferring two types of parasitic ability, and one gene in hosts with two alleles "h" and "H", conferring two types of parasite resistance, such that parasites with allele "p" can attach themselves to hosts with the allele "h", and "P" to "H". Such a situation will lead to cyclic changes in allele frequency – as "p" increases in frequency, "h" will be disfavoured. 
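The cycling in this two-allele picture can be sketched numerically. The following is an illustration only, not a model taken from the source; the selection coefficients and starting frequencies are assumed values chosen to make the oscillation visible.

```python
# Minimal sketch of the matching-allele host-parasite dynamics described above.
# Parasites carrying p infect hosts carrying h; P infects H. The coefficients
# below are illustrative assumptions, not empirical values.

s_host = 0.4      # fitness cost to an infected host (assumed)
s_para = 0.8      # fitness benefit to a parasite that matches its host (assumed)

h, p = 0.6, 0.3   # starting frequencies of host allele h and parasite allele p

for generation in range(40):
    w_h = 1 - s_host * p          # h-hosts suffer when p-parasites are common
    w_H = 1 - s_host * (1 - p)    # H-hosts suffer when P-parasites are common
    w_p = 1 + s_para * h          # p-parasites do well when h-hosts are common
    w_P = 1 + s_para * (1 - h)

    h = h * w_h / (h * w_h + (1 - h) * w_H)
    p = p * w_p / (p * w_p + (1 - p) * w_P)

    if generation % 5 == 0:
        print(f"gen {generation:3d}: freq(h) = {h:.3f}, freq(p) = {p:.3f}")
```

With these assumed values the two frequencies oscillate out of phase: as "p" becomes common, "h" hosts suffer and decline, which in turn removes the advantage of "p".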
In reality, there will be several genes involved in the relationship between hosts and parasites. In an asexual population of hosts, offspring will only have a different parasite resistance if a mutation arises. In a sexual population of hosts, however, offspring will have a new combination of parasite resistance alleles. In other words, like Lewis Carroll's Red Queen, sexual hosts are continually "running" (adapting) to "stay in one place" (resist parasites). Evidence for this explanation for the evolution of sex is provided by comparison of the rate of molecular evolution of genes for kinases and immunoglobulins in the immune system with genes coding for other proteins. The genes coding for immune system proteins evolve considerably faster. Further evidence for the Red Queen hypothesis was provided by observing long-term dynamics and parasite coevolution in a "mixed" (sexual and asexual) population of snails ("Potamopyrgus antipodarum"). The number of sexuals, the number of asexuals, and the rates of parasite infection for both were monitored. It was found that clones that were plentiful at the beginning of the study became more susceptible to parasites over time. As parasite infections increased, the once plentiful clones dwindled dramatically in number. Some clonal types disappeared entirely. Meanwhile, sexual snail populations remained much more stable over time. However, Hanley et al. studied mite infestations of a parthenogenetic gecko species and its two related sexual ancestral species. Contrary to expectation based on the Red Queen hypothesis, they found that the prevalence, abundance and mean intensity of mites in sexual geckos were significantly higher than in asexuals sharing the same habitat. In 2011, researchers used the microscopic roundworm "Caenorhabditis elegans" as a host and the pathogenic bacterium "Serratia marcescens" to generate a host-parasite coevolutionary system in a controlled environment, allowing them to conduct more than 70 evolution experiments testing the Red Queen Hypothesis. They genetically manipulated the mating system of "C. elegans", causing populations to mate either sexually, by self-fertilization, or by a mixture of both within the same population. Then they exposed those populations to the "S. marcescens" parasite. It was found that the self-fertilizing populations of "C. elegans" were rapidly driven extinct by the coevolving parasites, while sex allowed populations to keep pace with their parasites, a result consistent with the Red Queen Hypothesis. In natural populations of "C. elegans", self-fertilization is the predominant mode of reproduction, but infrequent out-crossing events occur at a rate of about 1%. Critics of the Red Queen hypothesis question whether the constantly changing environment of hosts and parasites is sufficiently common to explain the evolution of sex. In particular, Otto and Nuismer presented results showing that species interactions (e.g. host vs parasite interactions) typically select against sex. They concluded that, although the Red Queen hypothesis favors sex under certain circumstances, it alone does not account for the ubiquity of sex. Otto and Gerstein further stated that "it seems doubtful to us that strong selection per gene is sufficiently commonplace for the Red Queen hypothesis to explain the ubiquity of sex". Parker reviewed numerous genetic studies on plant disease resistance and failed to uncover a single example consistent with the assumptions of the Red Queen hypothesis. 
Disadvantages of sex and sexual reproduction. The paradox of the existence of sexual reproduction is that though it is ubiquitous in multicellular organisms, there are ostensibly many inherent disadvantages to reproducing sexually when weighed against the relative advantages of alternative forms of reproduction, such as asexual reproduction. Thus, because sexual reproduction abounds in complex multicellular life, there must be some significant benefit(s) to sex and sexual reproduction that compensate for these fundamental disadvantages. Population expansion cost of sex. Among the most limiting disadvantages to the evolution of sexual reproduction by natural selection is that an asexual population can grow much more rapidly than a sexual one with each generation. For example, assume that the entire population of some theoretical species consists of 100 organisms of two sexes (i.e. males and females), with 50:50 male-to-female representation, and that only the females of this species can bear offspring. If all capable members of this population procreated once, a total of 50 offspring would be produced (the "F1" generation). Contrast this outcome with an asexual species, in which each and every member of an equally sized 100-organism population is capable of bearing young. If all capable members of this asexual population procreated once, a total of 100 offspring would be produced – twice as many as produced by the sexual population in a single generation. This idea is sometimes referred to as the two-fold cost of sexual reproduction. It was first described mathematically by John Maynard Smith. In his manuscript, Maynard Smith further speculated on the impact of an asexual mutant arising in a sexual population, which suppresses meiosis and allows eggs to develop into offspring genetically identical to the mother by mitotic division. The mutant-asexual lineage would double its representation in the population each generation, all else being equal. Technically the problem above is not one of sexual reproduction but of having a subset of organisms incapable of bearing offspring. Indeed, some multicellular organisms (isogamous) engage in sexual reproduction but all members of the species are capable of bearing offspring. The two-fold reproductive disadvantage assumes that males contribute only genes to their offspring and sexual females spend half their reproductive potential on sons. Thus, in this formulation, the principal cost of sex is that males and females must successfully copulate, which almost always involves expending energy to come together through time and space. Asexual organisms need not expend the energy necessary to find a mate. Selfish cytoplasmic genes. Sexual reproduction implies that chromosomes and alleles segregate and recombine in every generation, but not all genes are transmitted together to the offspring. There is a chance of spreading mutants that cause unfair transmission at the expense of their non-mutant colleagues. These mutations are referred to as "selfish" because they promote their own spread at the cost of alternative alleles or of the host organism; they include nuclear meiotic drivers and selfish cytoplasmic genes. Meiotic drivers are genes that distort meiosis to produce gametes containing themselves more than the 50% of the time expected by chance. A selfish cytoplasmic gene is a gene located in an organelle, plasmid or intracellular parasite that modifies reproduction to cause its own increase at the expense of the cell or organism that carries it. 
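To put numbers on the population-expansion argument above, the following is a minimal sketch; the fecundity of two surviving offspring per female (or per asexual individual) is an assumed value chosen only to keep the arithmetic transparent.

```python
# Illustrative sketch of the two-fold cost of sex described above.
# Both populations start at 100 individuals; each female (or each asexual
# individual) leaves two surviving offspring per generation (assumed).

fecundity = 2
sexual, asexual = 100, 100   # starting population sizes

for generation in range(1, 6):
    females = sexual // 2              # 50:50 sex ratio; only females bear offspring
    sexual = females * fecundity       # sexual population merely replaces itself
    asexual = asexual * fecundity      # every asexual individual reproduces
    print(f"gen {generation}: sexual = {sexual}, asexual = {asexual}")
```

Under these assumptions the sexual population stays at 100 while the asexual one doubles each generation, which is the two-fold gap that Maynard Smith formalized.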
Genetic heritability cost of sex. A sexually reproducing organism only passes on ~50% of its own genetic material to each offspring. This is a consequence of the fact that gametes from sexually reproducing species are haploid. Again, however, this is not applicable to all sexual organisms. There are numerous species which are sexual but do not have a genetic-loss problem because they do not produce males or females. Yeast, for example, are isogamous sexual organisms which have two mating types which fuse and recombine their haploid genomes. Both mating types reproduce during the haploid and diploid stages of their life cycle and have a 100% chance of passing their genes on to their offspring. Some species avoid the 50% cost of sexual reproduction, although they have "sex" (in the sense of genetic recombination). In these species (e.g., bacteria, ciliates, dinoflagellates and diatoms), "sex" and reproduction occur separately. DNA repair and complementation. As discussed in the earlier part of this article, sexual reproduction is conventionally explained as an adaptation for producing genetic variation through allelic recombination. As acknowledged above, however, serious problems with this explanation have led many biologists to conclude that the benefit of sex is a major unsolved problem in evolutionary biology. An alternative "informational" approach to this problem has led to the view that the two fundamental aspects of sex, genetic recombination and outcrossing, are adaptive responses to the two major sources of "noise" in transmitting genetic information. Genetic noise can occur as either physical damage to the genome (e.g. chemically altered bases of DNA or breaks in the chromosome) or replication errors (mutations). This alternative view is referred to as the repair and complementation hypothesis, to distinguish it from the traditional variation hypothesis. The repair and complementation hypothesis assumes that genetic recombination is fundamentally a DNA repair process, and that when it occurs during meiosis it is an adaptation for repairing the genomic DNA which is passed on to progeny. Recombinational repair is the only repair process known which can accurately remove double-strand damages in DNA, and such damages are both common in nature and ordinarily lethal if not repaired. For instance, double-strand breaks in DNA occur about 50 times per cell cycle in human cells (see naturally occurring DNA damage). Recombinational repair is prevalent from the simplest viruses to the most complex multicellular eukaryotes. It is effective against many different types of genomic damage, and in particular is highly efficient at overcoming double-strand damages. Studies of the mechanism of meiotic recombination indicate that meiosis is an adaptation for repairing DNA. These considerations form the basis for the first part of the repair and complementation hypothesis. In some lines of descent from the earliest organisms, the diploid stage of the sexual cycle, which was at first transient, became the predominant stage, because it allowed complementation — the masking of deleterious recessive mutations (i.e. hybrid vigor or heterosis). Outcrossing, the second fundamental aspect of sex, is maintained by the advantage of masking mutations and the disadvantage of inbreeding (mating with a close relative) which allows expression of recessive mutations (commonly observed as inbreeding depression). 
This is in accord with Charles Darwin, who concluded that the adaptive advantage of sex is hybrid vigor; or as he put it, "the offspring of two individuals, especially if their progenitors have been subjected to very different conditions, have a great advantage in height, weight, constitutional vigor and fertility over the self fertilised offspring from either one of the same parents." However, outcrossing may be abandoned in favor of parthenogenesis or selfing (which retain the advantage of meiotic recombinational repair) under conditions in which the costs of mating are very high. For instance, costs of mating are high when individuals are rare in a geographic area, such as when a forest fire has occurred and the first individuals to arrive are recolonizing the burned area. At such times mates are hard to find, and this favors parthenogenetic species. In the view of the repair and complementation hypothesis, the removal of DNA damage by recombinational repair produces a new, less deleterious form of informational noise, allelic recombination, as a by-product. This lesser informational noise generates genetic variation, viewed by some as the major effect of sex, as discussed in the earlier parts of this article. Deleterious mutation clearance. Mutations can have many different effects upon an organism. It is generally believed that the majority of non-neutral mutations are deleterious, which means that they will cause a decrease in the organism's overall fitness. If a mutation has a deleterious effect, it will then usually be removed from the population by the process of natural selection. Sexual reproduction is believed to be more efficient than asexual reproduction in removing those mutations from the genome. There are two main hypotheses which explain how sex may act to remove deleterious genes from the genome. Evading harmful mutation build-up. While DNA is able to recombine to modify alleles, DNA is also susceptible to mutations within the sequence that can affect an organism in a negative manner. Asexual organisms do not have the ability to recombine their genetic information to form new and differing alleles. Once a mutation occurs in the DNA or other genetic material, there is no way for the mutation to be removed from the population until another mutation occurs that ultimately deletes the primary mutation. This is rare among organisms. Hermann Joseph Muller introduced the idea that mutations build up in asexually reproducing organisms. Muller described this accumulation by comparing it to a ratchet. Each mutation that arises in asexually reproducing organisms turns the ratchet once. The ratchet can only be turned forwards, never backwards. The next mutation that occurs turns the ratchet once more. Additional mutations in a population continually turn the ratchet and the mutations, mostly deleterious, continually accumulate without recombination. These mutations are passed on to the next generation because the offspring are exact genetic clones of their parents. The genetic load of organisms and their populations will increase as deleterious mutations accumulate, decreasing overall reproductive success and fitness. For sexually reproducing populations, studies have shown that single-celled bottlenecks are beneficial for resisting mutation build-up. Passaging a population through a single-celled bottleneck involves the fertilization event occurring with haploid sets of DNA, forming one fertilized cell. 
For example, humans undergo a single-celled bottleneck in that the haploid sperm fertilizes the haploid egg, forming the diploid zygote, which is unicellular. This passage through a single cell is beneficial in that it lowers the chance of mutations being passed on through multiple individuals. Instead, a mutation is passed on to only one individual. Further studies using "Dictyostelium discoideum" suggest that this unicellular initial stage is important for resisting mutations due to the importance of high relatedness. Highly related individuals are more nearly clonal, whereas individuals in populations of low relatedness are more genetically diverse, increasing the likelihood that some of them carry a detrimental mutation. Highly related populations also tend to thrive better than less related ones because the cost of sacrificing an individual is greatly offset by the benefit gained by its relatives and, in turn, its genes, according to kin selection. The studies with "D. discoideum" showed that conditions of high relatedness resisted mutant individuals more effectively than those of low relatedness, suggesting that high relatedness is important for preventing mutations from proliferating. Removal of deleterious genes. This hypothesis was proposed by Alexey Kondrashov, and is sometimes known as the "deterministic mutation hypothesis". It assumes that the majority of deleterious mutations are only slightly deleterious, and affect the individual such that the introduction of each additional mutation has an increasingly large effect on the fitness of the organism. This relationship between number of mutations and fitness is known as "synergistic epistasis". By way of analogy, think of a car with several minor faults. No single fault is sufficient to stop the car from running, but in combination the faults prevent it from functioning. Similarly, an organism may be able to cope with a few defects, but the presence of many mutations could overwhelm its backup mechanisms. Kondrashov argues that the slightly deleterious nature of mutations means that the population will tend to be composed of individuals with a small number of mutations. Sex will act to recombine these genotypes, creating some individuals with fewer deleterious mutations, and some with more. Because there is a major selective disadvantage to individuals with more mutations, these individuals die out. In essence, sex compartmentalises the deleterious mutations. There has been much criticism of Kondrashov's theory, since it relies on two key restrictive conditions. The first requires that the rate of deleterious mutation should exceed one per genome per generation in order to provide a substantial advantage for sex. While there is some empirical evidence for it (for example in Drosophila and E. coli), there is also strong evidence against it. Thus, for instance, for the sexual species "Saccharomyces cerevisiae" (yeast) and "Neurospora crassa" (fungus), the mutation rates per genome per replication are 0.0027 and 0.0030, respectively. For the nematode worm "Caenorhabditis elegans", the mutation rate per effective genome per sexual generation is 0.036. Secondly, there should be strong interactions among loci (synergistic epistasis), a mutation-fitness relation for which there is only limited evidence. 
Conversely, there is also a comparable amount of evidence that mutations show no epistasis (a purely additive model) or antagonistic interactions (each additional mutation has a disproportionately "small" effect). Other explanations. Geodakyan's evolutionary theory of sex. Geodakyan suggested that sexual dimorphism provides a partitioning of a species' phenotypes into at least two functional partitions: a female partition that secures beneficial features of the species and a male partition that emerged in species with more variable and unpredictable environments. The male partition is suggested to be an "experimental" part of the species that allows the species to expand its ecological niche, and to have alternative configurations. This theory underlines the higher variability and higher mortality in males, in comparison to females. This functional partitioning also explains the higher susceptibility to disease in males, in comparison to females, and therefore includes the idea of "protection against parasites" as another functionality of male sex. Geodakyan's evolutionary theory of sex was developed in Russia in 1960–1980 and was not known in the West until the era of the Internet. Trofimova, who analysed psychological sex differences, hypothesised that the male sex might also provide a "redundancy pruning" function. Speed of evolution. Ilan Eshel suggested that sex prevents rapid evolution. He suggests that recombination breaks up favourable gene combinations more often than it creates them, and sex is maintained because it ensures selection is longer-term than in asexual populations – so the population is less affected by short-term changes. This explanation is not widely accepted, as its assumptions are very restrictive. It has recently been shown in experiments with "Chlamydomonas" algae that sex can remove the speed limit on evolution. An information-theoretic analysis using a simplified but useful model shows that in asexual reproduction, the information gain of a species is limited to 1 bit per generation, while in sexual reproduction the information gain per generation is bounded by formula_0, where formula_1 is the size of the genome in bits. Libertine bubble theory. The evolution of sex can alternatively be described as a kind of gene exchange that is independent of reproduction. According to Thierry Lodé's "libertine bubble theory", sex originated from an archaic gene transfer process among prebiotic bubbles. The contact among the pre-biotic bubbles could, through simple food or parasitic reactions, promote the transfer of genetic material from one bubble to another. That the interactions between two organisms be in balance appears to be a sufficient condition to make these interactions evolutionarily efficient, i.e. to select bubbles that tolerate these interactions ("libertine" bubbles) through a blind evolutionary process of self-reinforcing gene correlations and compatibility. The "libertine bubble theory" proposes that meiotic sex evolved in proto-eukaryotes to solve a problem that bacteria did not have, namely a large amount of DNA material, occurring in an archaic step of proto-cell formation and genetic exchanges. Thus, rather than providing selective advantages through reproduction, sex could be thought of as a series of separate events which combine, step by step, some very weak benefits of recombination, meiosis, gametogenesis and syngamy. 
Therefore, current sexual species could be descendants of primitive organisms that practiced more stable exchanges in the long term, while asexual species have emerged, much more recently in evolutionary history, from the conflict of interest resulting from anisogamy. Parasites and Muller's ratchet. R. Stephen Howard and Curtis Lively were the first to suggest that the combined effects of parasitism and mutation accumulation can lead to an increased advantage to sex under conditions not otherwise predicted (Nature, 1994). Using computer simulations, they showed that when the two mechanisms act simultaneously the advantage to sex over asexual reproduction is larger than for either factor operating alone. Origin of sexual reproduction. Many protists reproduce sexually, as do many multicellular plants, animals, and fungi. In the eukaryotic fossil record, sexual reproduction first appeared about 2.0 billion years ago in the Proterozoic Eon, although a later date, 1.2 billion years ago, has also been presented. Nonetheless, all sexually reproducing eukaryotic organisms likely derive from a single-celled common ancestor. It is probable that the evolution of sex was an integral part of the evolution of the first eukaryotic cell. There are a few species which have secondarily lost this feature, such as Bdelloidea and some parthenocarpic plants. Diploidy. Organisms need to replicate their genetic material in an efficient and reliable manner. The necessity to repair genetic damage is one of the leading theories explaining the origin of sexual reproduction. Diploid individuals can repair a damaged section of their DNA via homologous recombination, since there are two copies of the gene in the cell and if one copy is damaged, the other copy is unlikely to be damaged at the same site. Harmful DNA damage in a haploid individual, on the other hand, is more likely to become fixed (i.e. permanent), since any DNA repair mechanism would have no source from which to recover the original undamaged sequence. The most primitive form of sex may have been one organism with damaged DNA replicating an undamaged strand from a similar organism in order to repair itself. Meiosis. Sexual reproduction appears to have arisen very early in eukaryotic evolution, implying that the essential features of meiosis were already present in the last eukaryotic common ancestor. In extant organisms, proteins with central functions in meiosis are similar to key proteins in natural transformation in bacteria and DNA transfer in archaea. For example, the RecA recombinase, which catalyses the key functions of DNA homology search and strand exchange in the bacterial sexual process of transformation, has orthologs in eukaryotes that perform similar functions in meiotic recombination. Natural transformation in bacteria, DNA transfer in archaea, and meiosis in eukaryotic microorganisms are induced by stressful circumstances such as overcrowding, resource depletion, and DNA-damaging conditions. This suggests that these sexual processes are adaptations for dealing with stress, particularly stress that causes DNA damage. In bacteria, these stresses induce an altered physiologic state, termed competence, that allows active uptake of DNA from a donor bacterium and the integration of this DNA into the recipient genome (see Natural competence), allowing recombinational repair of the recipient's damaged DNA. 
If environmental stresses leading to DNA damage were a persistent challenge to the survival of early microorganisms, then selection would likely have been continuous through the prokaryote-to-eukaryote transition, and adaptive adjustments would have followed a course in which bacterial transformation or archaeal DNA transfer naturally gave rise to sexual reproduction in eukaryotes. Virus-like RNA-based origin. Sex might also have been present even earlier, in the hypothesized RNA world that preceded DNA cellular life forms. One proposed origin of sex in the RNA world was based on the type of sexual interaction that is known to occur in extant single-stranded segmented RNA viruses, such as influenza virus, and in extant double-stranded segmented RNA viruses such as reovirus. Exposure to conditions that cause RNA damage could have led to blockage of replication and death of these early RNA life forms. Sex would have allowed re-assortment of segments between two individuals with damaged RNA, permitting undamaged combinations of RNA segments to come together, thus allowing survival. Such a regeneration phenomenon, known as multiplicity reactivation, occurs in the influenza virus and reovirus. Parasitic DNA elements. Another theory is that sexual reproduction originated from selfish parasitic genetic elements that exchange genetic material (that is, copies of their own genome) for their transmission and propagation. In some organisms, sexual reproduction has been shown to enhance the spread of parasitic genetic elements (e.g. yeast, filamentous fungi). Bacterial conjugation is a form of genetic exchange that some sources describe as "sex", but technically is not a form of reproduction, even though it is a form of horizontal gene transfer. However, it does support the "selfish gene" part of the theory, since the gene itself is propagated through the F-plasmid. A similar origin of sexual reproduction is proposed to have evolved in ancient haloarchaea as a combination of two independent processes: jumping genes and plasmid swapping. Partial predation. A third theory is that sex evolved as a form of cannibalism: one primitive organism ate another one, but instead of completely digesting it, some of the eaten organism's DNA was incorporated into the DNA of the eater. Vaccination-like process. Sex may also be derived from another prokaryotic process. A comprehensive theory called "origin of sex as vaccination" proposes that eukaryan sex-as-syngamy (fusion sex) arose from prokaryan unilateral sex-as-infection, when infected hosts began swapping nuclearised genomes containing coevolved, vertically transmitted symbionts that provided protection against horizontal superinfection by other, more virulent symbionts. Consequently, sex-as-meiosis (fission sex) would evolve as a host strategy for uncoupling from (and thereby rendering impotent) the acquired symbiotic/parasitic genes. Mechanistic origin of sexual reproduction. While theories positing fitness benefits that led to the origin of sex are often problematic, several theories addressing the emergence of the mechanisms of sexual reproduction have been proposed. Viral eukaryogenesis. The viral eukaryogenesis (VE) theory proposes that eukaryotic cells arose from a combination of a lysogenic virus, an archaean, and a bacterium. This model suggests that the nucleus originated when the lysogenic virus incorporated genetic material from the archaean and the bacterium and took over the role of information storage for the amalgam. 
The archaeal host transferred much of its functional genome to the virus during the evolution of cytoplasm, but retained the function of gene translation and general metabolism. The bacterium transferred most of its functional genome to the virus as it transitioned into a mitochondrion. For these transformations to lead to the eukaryotic cell cycle, the VE hypothesis specifies a pox-like virus as the lysogenic virus. A pox-like virus is a likely ancestor because of its fundamental similarities with eukaryotic nuclei. These include a double-stranded DNA genome, a linear chromosome with short telomeric repeats, a complex membrane-bound capsid, the ability to produce capped mRNA, and the ability to export the capped mRNA across the viral membrane into the cytoplasm. The presence of a lysogenic pox-like virus ancestor explains the development of meiotic division, an essential component of sexual reproduction. Meiotic division in the VE hypothesis arose because of the evolutionary pressures placed on the lysogenic virus as a result of its inability to enter into the lytic cycle. This selective pressure resulted in the development of processes allowing the viruses to spread horizontally throughout the population. The outcome of this selection was cell-to-cell fusion. (This is distinct from the conjugation methods used by bacterial plasmids under evolutionary pressure, with important consequences.) The possibility of this kind of fusion is supported by the presence of fusion proteins in the envelopes of the pox viruses that allow them to fuse with host membranes. These proteins could have been transferred to the cell membrane during viral reproduction, enabling cell-to-cell fusion between the virus host and an uninfected cell. The theory proposes that meiosis originated from the fusion of two cells infected with related but different viruses which recognised each other as uninfected. After the fusion of the two cells, incompatibilities between the two viruses would result in a meiotic-like cell division. The two viruses established in the cell would initiate replication in response to signals from the host cell. A mitosis-like cell cycle would proceed until the viral membranes dissolved, at which point linear chromosomes would be bound together with centromeres. The homologous nature of the two viral centromeres would incite the grouping of both sets into tetrads. It is speculated that this grouping may be the origin of crossing over, characteristic of the first division in modern meiosis. The partitioning apparatus of the mitotic-like cell cycle that the cells had used to replicate independently would then pull each set of chromosomes to one side of the cell, still bound by centromeres. These centromeres would prevent their replication in subsequent division, resulting in four daughter cells with one copy of one of the two original pox-like viruses. The process resulting from the combination of two similar pox viruses within the same host closely mimics meiosis. Neomuran revolution. An alternative theory, proposed by Thomas Cavalier-Smith, was labeled the Neomuran revolution. The designation "Neomuran revolution" refers to the appearance of the common ancestors of eukaryotes and archaea. Cavalier-Smith proposes that the first neomurans emerged 850 million years ago. Other molecular biologists assume that this group appeared much earlier, but Cavalier-Smith dismisses these claims because they are based on the "theoretically and empirically" unsound model of molecular clocks. 
Cavalier-Smith's theory of the Neomuran revolution has implications for the evolutionary history of the cellular machinery for recombination and sex. It suggests that this machinery evolved in two distinct bouts separated by a long period of stasis: first, the appearance of recombination machinery in a bacterial ancestor, which was maintained for 3 Gy (billion years) until the neomuran revolution, when the mechanics were adapted to the presence of nucleosomes. The archaeal products of the revolution maintained recombination machinery that was essentially bacterial, whereas the eukaryotic products broke with this bacterial continuity. They introduced cell fusion and ploidy cycles into cell life histories. Cavalier-Smith argues that both bouts of mechanical evolution were motivated by similar selective forces: the need for accurate DNA replication without loss of viability.
[ { "math_id": 0, "text": "\\sqrt G" }, { "math_id": 1, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=661661
661675
S/KEY
One-time password system S/KEY is a one-time password system developed for authentication to Unix-like operating systems, especially from dumb terminals or untrusted public computers on which one does not want to type a long-term password. A user's real password is combined in an offline device with a short set of characters and a decrementing counter to form a single-use password. Because each password is used only once, intercepted passwords are useless to password sniffers. Because the short set of characters does not change until the counter reaches zero, it is possible to prepare a list of single-use passwords, in order, that can be carried by the user. Alternatively, the user can present the password, characters, and desired counter value to a local calculator to generate the appropriate one-time password that can then be transmitted over the network in the clear. The latter form is more common and practically amounts to challenge–response authentication. S/KEY is supported in Linux (via pluggable authentication modules), OpenBSD, NetBSD, and FreeBSD, and a generic open-source implementation can be used to enable its use on other systems. OpenSSH has also implemented S/KEY since version 1.2.2, released on December 1, 1999. One common implementation is called OPIE. S/KEY is a trademark of Telcordia Technologies, formerly known as Bell Communications Research (Bellcore). S/KEY is also sometimes referred to as Lamport's scheme, after its author, Leslie Lamport. It was developed by Neil Haller, Phil Karn and John Walden at Bellcore in the late 1980s. With the expiration of the basic patents on public-key cryptography and the widespread use of laptop computers running SSH and other cryptographic protocols that can secure an entire session, not just the password, S/KEY is falling into disuse. Schemes that implement two-factor authentication, by comparison, are growing in use. Password generation. The "server" is the computer that will perform the authentication. To set up, the user's secret "W" (combined with a salt) is run through the hash function "H" repeatedly, producing "H"("W"), "H"("H"("W")), and so on up to "n" applications; these values, listed in reverse order, form the user's list of one-time passwords, and the server initially stores only the last value computed. Authentication. After password generation, the user has a sheet of paper with "n" passwords on it. If "n" is very large, either storing all "n" passwords or calculating a given password from "H"("W") becomes inefficient. There are methods to efficiently calculate the passwords in the required order, using only formula_0 hash calculations per step and storing formula_1 passwords. More ideally, though perhaps less commonly in practice, the user may carry a small, portable, secure, non-networked computing device capable of regenerating any needed password given the secret passphrase, the salt, and the number of iterations of the hash required, the latter two of which are conveniently provided by the server requesting authentication for login. In any case, the first password will be the same password that the server has stored. This first password will not be used for authentication (the user should scratch this password off the sheet of paper); the second one will be used instead: For subsequent authentications, the user will provide password"i". (The last password on the printed list, password"n", is the first password generated by the server, "H"("W"), where "W" is the initial secret). The server will compute "H"(password"i") and will compare the result to password"i"−1, which is stored as the reference on the server. Security. The security of S/KEY relies on the difficulty of reversing cryptographic hash functions. 
Assume an attacker manages to get hold of a password that was used for a successful authentication. Supposing this is password"i", this password is already useless for subsequent authentications, because each password can only be used once. It would be interesting for the attacker to find out password"i"+1, because this password is the one that will be used for the next authentication. However, this would require inverting the hash function ("H"(password"i"+1) = password"i"), which is extremely difficult to do with current cryptographic hash functions. Nevertheless, S/KEY is vulnerable to a man-in-the-middle attack if used by itself. It is also vulnerable to certain race conditions, such as where an attacker's software sniffs the network to learn the first "N" − 1 characters in the password (where "N" equals the password length), establishes its own TCP session to the server, and in rapid succession tries all valid characters in the "N"-th position until one succeeds. These types of vulnerabilities can be avoided by using ssh, SSL, SPKM, or another encrypted transport layer. Since each iteration of S/KEY does not include the salt or count, it is feasible to find collisions directly without breaking the initial password. This has a complexity of 2^64, which can be pre-calculated with the same amount of space. The space complexity can be optimized by storing chains of values, although collisions might reduce the coverage of this method, especially for long chains. Someone with access to an S/KEY database can break all of them in parallel with a complexity of 2^64. While they would not get the original password, they would be able to find valid credentials for each user. In this regard, it is similar to storing unsalted 64-bit hashes of strong, unique passwords. The S/KEY protocol can loop. If such a loop were created in the S/KEY chain, an attacker could use the user's key without finding the original value, and possibly without tipping off the valid user. The pathological case of this would be an OTP that hashes to itself. Usability. Internally, S/KEY uses 64-bit numbers. For human usability purposes, each number is mapped to six short words, of one to four characters each, from a publicly accessible 2048-word dictionary. For example, one 64-bit number maps to "ROY HURT SKI FAIL GRIM KNEE".
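The hash-chain idea described above can be illustrated with a minimal sketch. This is not the historical implementation: it uses SHA-256 in place of the 64-bit folded MD4/MD5 output, omits the salt and the six-word encoding, and the secret and chain length are toy values.

```python
import hashlib

# Minimal sketch of an S/KEY-style one-time password chain (illustrative only).

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(secret: bytes, n: int) -> list[bytes]:
    """Return [H(W), H(H(W)), ..., H^n(W)]; the user keeps the whole list,
    while the server initially stores only the last element, H^n(W)."""
    chain, value = [], secret
    for _ in range(n):
        value = H(value)
        chain.append(value)
    return chain

def verify(server_stored: bytes, candidate: bytes) -> bool:
    """Server-side check: the candidate must hash to the stored reference."""
    return H(candidate) == server_stored

W = b"correct horse battery staple"   # toy secret, for illustration
chain = make_chain(W, 5)

server_state = chain[-1]               # server stores H^n(W)
for otp in reversed(chain[:-1]):       # user presents H^(n-1)(W), H^(n-2)(W), ...
    assert verify(server_state, otp)
    server_state = otp                 # each accepted password becomes the new reference
print("all one-time passwords accepted")
```

Because each accepted password becomes the new reference value, intercepting one password never reveals the next one in the chain without inverting the hash.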
[ { "math_id": 0, "text": "\\left\\lceil\\frac{\\log n}{2}\\right\\rceil" }, { "math_id": 1, "text": "\\lceil\\log n\\rceil" } ]
https://en.wikipedia.org/wiki?curid=661675
6617
Compactification (mathematics)
Embedding a topological space into a compact space as a dense subset In mathematics, in general topology, compactification is the process or result of making a topological space into a compact space. A compact space is a space in which every open cover of the space contains a finite subcover. The methods of compactification are various, but each is a way of controlling points from "going off to infinity" by in some way adding "points at infinity" or preventing such an "escape". An example. Consider the real line with its ordinary topology. This space is not compact; in a sense, points can go off to infinity to the left or to the right. It is possible to turn the real line into a compact space by adding a single "point at infinity" which we will denote by ∞. The resulting compactification is homeomorphic to a circle in the plane (which, as a closed and bounded subset of the Euclidean plane, is compact). Every sequence that ran off to infinity in the real line will then converge to ∞ in this compactification. The direction in which a number goes off to infinity on the number line (the negative or the positive direction) is still visible on the circle: a sequence going to infinity in the negative direction corresponds to points on the circle approaching ∞ from one side (say, the left), while a sequence going to infinity in the positive direction corresponds to points approaching ∞ from the other side (the right). Intuitively, the process can be pictured as follows: first shrink the real line to the open interval (−π, π) on the "x"-axis; then bend the ends of this interval upwards (in the positive "y"-direction) and move them towards each other, until you get a circle with one point (the topmost one) missing. This point is our new point ∞ "at infinity"; adding it in completes the compact circle. A bit more formally: we represent a point on the unit circle by its angle, in radians, going from −π to π for simplicity. Identify each such point "θ" on the circle with the corresponding point on the real line tan("θ"/2). This function is undefined at the point π, since tan(π/2) is undefined; we will identify this point with our point ∞. Since tangents and inverse tangents are both continuous, our identification function is a homeomorphism between the real line and the unit circle without ∞. What we have constructed is called the "Alexandroff one-point compactification" of the real line, discussed in more generality below. It is also possible to compactify the real line by adding "two" points, +∞ and −∞; this results in the extended real line. Definition. An embedding of a topological space "X" as a dense subset of a compact space is called a compactification of "X". It is often useful to embed topological spaces in compact spaces, because of the special properties compact spaces have. Embeddings into compact Hausdorff spaces may be of particular interest. Since every compact Hausdorff space is a Tychonoff space, and every subspace of a Tychonoff space is Tychonoff, we conclude that any space possessing a Hausdorff compactification must be a Tychonoff space. In fact, the converse is also true; being a Tychonoff space is both necessary and sufficient for possessing a Hausdorff compactification. The fact that large and interesting classes of non-compact spaces do in fact have compactifications of particular sorts makes compactification a common technique in topology. Alexandroff one-point compactification. 
For any noncompact topological space "X" the (Alexandroff) one-point compactification α"X" of "X" is obtained by adding one extra point ∞ (often called a "point at infinity") and defining the open sets of the new space to be the open sets of "X" together with the sets of the form "G" ∪ {∞}, where "G" is an open subset of "X" such that formula_0 is closed and compact. The one-point compactification of "X" is Hausdorff if and only if "X" is Hausdorff and locally compact. Stone–Čech compactification. Of particular interest are Hausdorff compactifications, i.e., compactifications in which the compact space is Hausdorff. A topological space has a Hausdorff compactification if and only if it is Tychonoff. In this case, there is a unique (up to homeomorphism) "most general" Hausdorff compactification, the Stone–Čech compactification of "X", denoted by "βX"; formally, this exhibits the category of Compact Hausdorff spaces and continuous maps as a reflective subcategory of the category of Tychonoff spaces and continuous maps. "Most general" or formally "reflective" means that the space "βX" is characterized by the universal property that any continuous function from "X" to a compact Hausdorff space "K" can be extended to a continuous function from "βX" to "K" in a unique way. More explicitly, "βX" is a compact Hausdorff space containing "X" such that the induced topology on "X" by "βX" is the same as the given topology on "X", and for any continuous map "f" : "X" → "K", where "K" is a compact Hausdorff space, there is a unique continuous map "g" : "βX" → "K" for which "g" restricted to "X" is identically "f". The Stone–Čech compactification can be constructed explicitly as follows: let "C" be the set of continuous functions from "X" to the closed interval [0, 1]. Then each point in "X" can be identified with an evaluation function on "C". Thus "X" can be identified with a subset of [0, 1]"C", the space of "all" functions from "C" to [0, 1]. Since the latter is compact by Tychonoff's theorem, the closure of "X" as a subset of that space will also be compact. This is the Stone–Čech compactification. Spacetime compactification. Walter Benz and Isaak Yaglom have shown how stereographic projection onto a single-sheet hyperboloid can be used to provide a compactification for split complex numbers. In fact, the hyperboloid is part of a quadric in real projective four-space. The method is similar to that used to provide a base manifold for group action of the conformal group of spacetime. Projective space. Real projective space RP"n" is a compactification of Euclidean space R"n". For each possible "direction" in which points in R"n" can "escape", one new point at infinity is added (but each direction is identified with its opposite). The Alexandroff one-point compactification of R we constructed in the example above is in fact homeomorphic to RP1. Note however that the projective plane RP2 is "not" the one-point compactification of the plane R2 since more than one point is added. Complex projective space CP"n" is also a compactification of C"n"; the Alexandroff one-point compactification of the plane C is (homeomorphic to) the complex projective line CP1, which in turn can be identified with a sphere, the Riemann sphere. Passing to projective space is a common tool in algebraic geometry because the added points at infinity lead to simpler formulations of many theorems. For example, any two different lines in RP2 intersect in precisely one point, a statement that is not true in R2. 
More generally, Bézout's theorem, which is fundamental in intersection theory, holds in projective space but not affine space. This distinct behavior of intersections in affine space and projective space is reflected in algebraic topology in the cohomology rings – the cohomology of affine space is trivial, while the cohomology of projective space is non-trivial and reflects the key features of intersection theory (dimension and degree of a subvariety, with intersection being Poincaré dual to the cup product). Compactification of moduli spaces generally requires allowing certain degeneracies – for example, allowing certain singularities or reducible varieties. This is notably used in the Deligne–Mumford compactification of the moduli space of algebraic curves. Compactification and discrete subgroups of Lie groups. In the study of discrete subgroups of Lie groups, the quotient space of cosets is often a candidate for more subtle compactification to preserve structure at a richer level than just topological. For example, modular curves are compactified by the addition of a single point for each cusp, making them Riemann surfaces (and so, since they are compact, algebraic curves). Here the cusps are there for a good reason: the curves parametrize a space of lattices, and those lattices can degenerate ('go off to infinity'), often in a number of ways (taking into account some auxiliary structure of "level"). The cusps stand in for those different 'directions to infinity'. That is all for lattices in the plane. In "n"-dimensional Euclidean space the same questions can be posed, for example about formula_1 This is harder to compactify. There are a variety of compactifications, such as the Borel–Serre compactification, the reductive Borel–Serre compactification, and the Satake compactifications, that can be formed.
[ { "math_id": 0, "text": "X \\setminus G" }, { "math_id": 1, "text": "\\text{SO}(n) \\setminus \\text{SL}_n(\\textbf{R}) / \\text{SL}_n(\\textbf{Z})." } ]
https://en.wikipedia.org/wiki?curid=6617
661808
Bravais lattice
Geometry and crystallography point array In geometry and crystallography, a Bravais lattice, named after Auguste Bravais (1850), is an infinite array of discrete points generated by a set of discrete translation operations described in three dimensional space by formula_0 where the "ni" are any integers, and a"i" are "primitive translation vectors", or "primitive vectors", which lie in different directions (not necessarily mutually perpendicular) and span the lattice. The choice of primitive vectors for a given Bravais lattice is not unique. A fundamental aspect of any Bravais lattice is that, for any choice of direction, the lattice appears exactly the same from each of the discrete lattice points when looking in that chosen direction. The Bravais lattice concept is used to formally define a "crystalline arrangement" and its (finite) frontiers. A crystal is made up of one or more atoms, called the "basis" or "motif", at each lattice point. The "basis" may consist of atoms, molecules, or polymer strings of solid matter, and the lattice provides the locations of the basis. Two Bravais lattices are often considered equivalent if they have isomorphic symmetry groups. In this sense, there are 5 possible Bravais lattices in 2-dimensional space and 14 possible Bravais lattices in 3-dimensional space. The 14 possible symmetry groups of Bravais lattices are 14 of the 230 space groups. In the context of the space group classification, the Bravais lattices are also called Bravais classes, Bravais arithmetic classes, or Bravais flocks. Unit cell. In crystallography, there is the concept of a unit cell which comprises the space between adjacent lattice points as well as any atoms in that space. A unit cell is defined as a space that, when translated through a subset of all vectors described by formula_1, fills the lattice space without overlapping or voids. (I.e., a lattice space is a multiple of a unit cell.) There are mainly two types of unit cells: primitive unit cells and conventional unit cells. A primitive cell is the very smallest component of a lattice (or crystal) which, when stacked together with lattice translation operations, reproduces the whole lattice (or crystal). Note that the translations must be lattice translation operations that cause the lattice to appear unchanged after the translation. If arbitrary translations were allowed, one could make a primitive cell half the size of the true one, and translate twice as often, as an example. Another way of defining the size of a primitive cell that avoids invoking lattice translation operations, is to say that the primitive cell is the smallest possible component of a lattice (or crystal) that can be repeated to reproduce the whole lattice (or crystal), "and" that contains exactly one lattice point. In either definition, the primitive cell is characterized by its small size. There are clearly many choices of cell that can reproduce the whole lattice when stacked (two lattice halves, for instance), and the minimum size requirement distinguishes the primitive cell from all these other valid repeating units. If the lattice or crystal is 2-dimensional, the primitive cell has a minimum area; likewise in 3 dimensions the primitive cell has a minimum volume. Despite this rigid minimum-size requirement, there is not one unique choice of primitive unit cell. In fact, all cells whose borders are primitive translation vectors will be primitive unit cells. 
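As a small illustration of the lattice-generation formula formula_0 and of the primitive-cell discussion above, here is a sketch (not part of the article) that generates lattice points from three primitive vectors and computes the primitive cell volume as the triple product a1 · (a2 × a3); the particular primitive vectors chosen here are an arbitrary body-centred-cubic-like example.

```python
import numpy as np
from itertools import product

# Arbitrary primitive translation vectors (an illustrative BCC-like choice).
a1 = np.array([0.5, 0.5, -0.5])
a2 = np.array([-0.5, 0.5, 0.5])
a3 = np.array([0.5, -0.5, 0.5])

def lattice_points(a1, a2, a3, nmax=2):
    """All points R = n1*a1 + n2*a2 + n3*a3 with |ni| <= nmax."""
    return np.array([n1 * a1 + n2 * a2 + n3 * a3
                     for n1, n2, n3 in product(range(-nmax, nmax + 1), repeat=3)])

points = lattice_points(a1, a2, a3)
cell_volume = abs(np.dot(a1, np.cross(a2, a3)))   # volume of the primitive cell

print(len(points), cell_volume)   # 125 points, volume 0.5 for this choice
```

Any valid choice of primitive vectors for the same lattice gives the same triple-product volume, which is the point made in the text about all primitive cells having equal volume.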
The fact that there is not a unique choice of primitive translation vectors for a given lattice leads to the multiplicity of possible primitive unit cells. Conventional unit cells, on the other hand, are not necessarily minimum-size cells. They are chosen purely for convenience and are often used for illustration purposes. They are loosely defined. Primitive unit cell. Primitive unit cells are defined as unit cells with the smallest volume for a given crystal. (A crystal is a lattice and a basis at every lattice point.) To have the smallest cell volume, a primitive unit cell must contain (1) only one lattice point and (2) the minimum amount of basis constituents (e.g., the minimum number of atoms in a basis). For the former requirement, counting the number of lattice points in a unit cell is such that, if a lattice point is shared by "m" adjacent unit cells around that lattice point, then the point is counted as 1/"m". The latter requirement is necessary since there are crystals that can be described by more than one combination of a lattice and a basis. For example, a crystal, viewed as a lattice with a single kind of atom located at every lattice point (the simplest basis form), may also be viewed as a lattice with a basis of two atoms. In this case, a primitive unit cell is a unit cell having only one lattice point in the first way of describing the crystal in order to ensure the smallest unit cell volume. There can be more than one way to choose a primitive cell for a given crystal and each choice will have a different primitive cell shape, but the primitive cell volume is the same for every choice and each choice will have the property that a one-to-one correspondence can be established between primitive unit cells and discrete lattice points over the associated lattice. All primitive unit cells with different shapes for a given crystal have the same volume by definition; For a given crystal, if "n" is the density of lattice points in a lattice ensuring the minimum amount of basis constituents and "v" is the volume of a chosen primitive cell, then "nv" = 1 resulting in "v" = 1/"n", so every primitive cell has the same volume of 1/"n". Among all possible primitive cells for a given crystal, an obvious primitive cell may be the parallelepiped formed by a chosen set of primitive translation vectors. (Again, these vectors must make a lattice with the minimum amount of basis constituents.) That is, the set of all points formula_2 where formula_3 and formula_4 is the chosen primitive vector. This primitive cell does not always show the clear symmetry of a given crystal. In this case, a conventional unit cell easily displaying the crystal symmetry is often used. The conventional unit cell volume will be an integer-multiple of the primitive unit cell volume. Origin of concept. In two dimensions, any lattice can be specified by the length of its two primitive translation vectors and the angle between them. There are an infinite number of possible lattices one can describe in this way. Some way to categorize different types of lattices is desired. One way to do so is to recognize that some lattices have inherent symmetry. One can impose conditions on the length of the primitive translation vectors and on the angle between them to produce various symmetric lattices. These symmetries themselves are categorized into different types, such as point groups (which includes mirror symmetries, inversion symmetries and rotation symmetries) and translational symmetries. 
Thus, lattices can be categorized based on what point group or translational symmetry applies to them. In two dimensions, the most basic point group corresponds to rotational invariance under 2π and π, or 1- and 2-fold rotational symmetry. This actually applies automatically to all 2D lattices, and is the most general point group. Lattices contained in this group (technically all lattices, but conventionally all lattices that don't fall into any of the other point groups) are called oblique lattices. From there, there are 4 further combinations of point groups with translational elements (or equivalently, 4 types of restriction on the lengths/angles of the primitive translation vectors) that correspond to the 4 remaining lattice categories: square, hexagonal, rectangular, and centered rectangular. Thus altogether there are 5 Bravais lattices in 2 dimensions. Likewise, in 3 dimensions, there are 14 Bravais lattices: 1 general "wastebasket" category (triclinic) and 13 more categories. These 14 lattice types are classified by their point groups into 7 lattice systems (triclinic, monoclinic, orthorhombic, tetragonal, cubic, rhombohedral, and hexagonal). In 2 dimensions. In two-dimensional space there are 5 Bravais lattices, grouped into four lattice systems, shown in the table below. Below each diagram is the Pearson symbol for that Bravais lattice. Note: In the unit cell diagrams in the following table the lattice points are depicted using black circles and the unit cells are depicted using parallelograms (which may be squares or rectangles) outlined in black. Although each of the four corners of each parallelogram connects to a lattice point, only one of the four lattice points technically belongs to a given unit cell and each of the other three lattice points belongs to one of the adjacent unit cells. This can be seen by imagining moving the unit cell parallelogram slightly left and slightly down while leaving all the black circles of the lattice points fixed. The unit cells are specified according to the relative lengths of the cell edges ("a" and "b") and the angle between them ("θ"). The area of the unit cell can be calculated by evaluating the norm |a × b|, where a and b are the lattice vectors. The properties of the lattice systems are given below: In 3 dimensions. In three-dimensional space there are 14 Bravais lattices. These are obtained by combining one of the seven lattice systems with one of the centering types. The centering types identify the locations of the lattice points in the unit cell as follows: Not all combinations of lattice systems and centering types are needed to describe all of the possible lattices, as it can be shown that several of these are in fact equivalent to each other. For example, the monoclinic I lattice can be described by a monoclinic C lattice by different choice of crystal axes. Similarly, all A- or B-centred lattices can be described either by a C- or P-centering. This reduces the number of combinations to 14 conventional Bravais lattices, shown in the table below. Below each diagram is the Pearson symbol for that Bravais lattice. Note: In the unit cell diagrams in the following table all the lattice points on the cell boundary (corners and faces) are shown; however, not all of these lattice points technically belong to the given unit cell. This can be seen by imagining moving the unit cell slightly in the negative direction of each axis while keeping the lattice points fixed. 
Roughly speaking, this can be thought of as moving the unit cell slightly left, slightly down, and slightly out of the screen. This shows that only one of the eight corner lattice points (specifically the front, left, bottom one) belongs to the given unit cell (the other seven lattice points belong to adjacent unit cells). In addition, only one of the two lattice points shown on the top and bottom face in the "Base-centered" column belongs to the given unit cell. Finally, only three of the six lattice points on the faces in the "Face-centered" column belong to the given unit cell. The unit cells are specified according to six lattice parameters which are the relative lengths of the cell edges ("a", "b", "c") and the angles between them ("α", "β", "γ"), where "α" is the angle between "b" and "c", "β" is the angle between "a" and "c", and "γ" is the angle between "a" and "b". The volume of the unit cell can be calculated by evaluating the triple product a · (b × c), where a, b, and c are the lattice vectors. The properties of the lattice systems are given below: Some basic information for the lattice systems and Bravais lattices in three dimensions is summarized in the diagram at the beginning of this page. The seven sided polygon (heptagon) and the number 7 at the centre indicate the seven lattice systems. The inner heptagons indicate the lattice angles, lattice parameters, Bravais lattices and Schöenflies notations for the respective lattice systems. In 4 dimensions. In four dimensions, there are 64 Bravais lattices. Of these, 23 are primitive and 41 are centered. Ten Bravais lattices split into enantiomorphic pairs. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{R} = n_1 \\mathbf{a}_1 + n_2 \\mathbf{a}_2 + n_3 \\mathbf{a}_3," }, { "math_id": 1, "text": "\\mathbf{R} = n_{1}\\mathbf{a}_{1} + n_{2}\\mathbf{a}_{2} + n_{3}\\mathbf{a}_{3}" }, { "math_id": 2, "text": "\\mathbf{r} = x_{1}\\mathbf{a}_{1} + x_{2}\\mathbf{a}_{2} + x_{3}\\mathbf{a}_{3}" }, { "math_id": 3, "text": "0 \\le x_{i} < 1" }, { "math_id": 4, "text": "\\mathbf{a}_{i}" } ]
https://en.wikipedia.org/wiki?curid=661808
66181
Role-based access control
Approach to restricting system access to authorized users In computer systems security, role-based access control (RBAC) or role-based security is an approach to restricting system access to authorized users, and to implementing mandatory access control (MAC) or discretionary access control (DAC). Role-based access control is a policy-neutral access control mechanism defined around roles and privileges. The components of RBAC such as role-permissions, user-role and role-role relationships make it simple to perform user assignments. A study by NIST has demonstrated that RBAC addresses many needs of commercial and government organizations. RBAC can be used to facilitate administration of security in large organizations with hundreds of users and thousands of permissions. Although RBAC is different from MAC and DAC access control frameworks, it can enforce these policies without any complication. Design. Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user, or changing a user's department. Three primary rules are defined for RBAC: role assignment (a subject can exercise a permission only if the subject has selected or been assigned a role), role authorization (a subject's active role must be authorized for the subject), and permission authorization (a subject can exercise a permission only if the permission is authorized for the subject's active role); together these rules ensure that users can exercise only permissions for which they are authorized. Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles. With the concepts of role hierarchy and constraints, one can control RBAC to create or simulate lattice-based access control (LBAC). Thus RBAC can be considered to be a superset of LBAC. When defining an RBAC model, the following conventions are useful: S (subject, a person or automated agent), R (role, a job function or title that defines an authority level), P (permissions, approvals of a mode of access to a resource), SE (session, a mapping involving S, R and/or P), SA (subject assignment), PA (permission assignment), and RH (a partially ordered role hierarchy). A constraint places a restrictive rule on the potential inheritance of permissions from opposing roles. Thus it can be used to achieve appropriate separation of duties. For example, the same person should not be allowed to both create a login account and to authorize the account creation. Thus, using set theory notation: formula_0 is the many-to-many permission-to-role assignment relation, formula_1 is the many-to-many subject-to-role assignment relation, and formula_2 is the partial order on roles. A subject may have "multiple" simultaneous sessions with/in different roles. Standardized levels. The NIST/ANSI/INCITS RBAC standard (2004) recognizes three levels of RBAC: core RBAC, hierarchical RBAC (which adds support for inheritance between roles), and constrained RBAC (which adds separation of duties). Relation to other models. RBAC is a flexible access control technology whose flexibility allows it to implement DAC or MAC. DAC with groups (e.g., as implemented in POSIX file systems) can emulate RBAC. MAC can simulate RBAC if the role graph is restricted to a tree rather than a partially ordered set. Prior to the development of RBAC, the Bell-LaPadula (BLP) model was synonymous with MAC and file system permissions were synonymous with DAC. These were considered to be the only known models for access control: if a model was not BLP, it was considered to be a DAC model, and vice versa. Research in the late 1990s demonstrated that RBAC falls in neither category. Unlike context-based access control (CBAC), RBAC does not look at the message context (such as a connection's source). RBAC has also been criticized for leading to role explosion, a problem in large enterprise systems which require access control of finer granularity than what RBAC can provide, as roles are inherently assigned to operations and data types. 
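Before comparing RBAC with other models, it may help to see the core mechanism in code. The following is a minimal sketch (not an implementation of the NIST standard): permissions are attached to roles, subjects are attached to roles, and an access check consults only the subject's roles, mirroring the PA and SA relations above.

```python
from collections import defaultdict

class RBAC:
    """Minimal role-based access control: subjects -> roles -> permissions."""
    def __init__(self):
        self.role_permissions = defaultdict(set)   # PA: permission-to-role assignment
        self.user_roles = defaultdict(set)         # SA: subject-to-role assignment

    def grant(self, role, permission):
        self.role_permissions[role].add(permission)

    def assign(self, user, role):
        self.user_roles[user].add(role)

    def check(self, user, permission):
        # A subject can exercise a permission only if one of its roles holds it.
        return any(permission in self.role_permissions[r]
                   for r in self.user_roles[user])

rbac = RBAC()
rbac.grant("accountant", "create_credit_account")
rbac.grant("admin", "create_login_account")
rbac.assign("alice", "accountant")

print(rbac.check("alice", "create_credit_account"))  # True
print(rbac.check("alice", "create_login_account"))   # False
```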
In resemblance to CBAC, an Entity-Relationship Based Access Control (ERBAC, although the same acronym is also used for modified RBAC systems, such as Extended Role-Based Access Control) system is able to secure instances of data by considering their association to the executing subject. Comparing to ACL. Access control lists (ACLs) are used in traditional discretionary access-control (DAC) systems to affect low-level data-objects. RBAC differs from ACL in assigning permissions to operations which change the direct-relations between several entities (see: "ACLg" below). For example, an ACL could be used for granting or denying write access to a particular system file, but it wouldn't dictate how that file could be changed. In an RBAC-based system, an operation might be to 'create a credit account' transaction in a financial application or to 'populate a blood sugar level test' record in a medical application. A Role is thus a sequence of operations within a larger activity. RBAC has been shown to be particularly well suited to separation of duties (SoD) requirements, which ensure that two or more people must be involved in authorizing critical operations. Necessary and sufficient conditions for safety of SoD in RBAC have been analyzed. An underlying principle of SoD is that no individual should be able to effect a breach of security through dual privilege. By extension, no person may hold a role that exercises audit, control or review authority over another, concurrently held role. Then again, a "minimal RBAC Model", "RBACm", can be compared with an ACL mechanism, "ACLg", where only groups are permitted as entries in the ACL. Barkley (1997) showed that "RBACm" and "ACLg" are equivalent. In modern SQL implementations, like ACL of the CakePHP framework, ACLs also manage groups and inheritance in a hierarchy of groups. Under this aspect, specific "modern ACL" implementations can be compared with specific "modern RBAC" implementations, better than "old (file system) implementations". For data interchange, and for "high level comparisons", ACL data can be translated to XACML. Attribute-based access control. Attribute-based access control or ABAC is a model which evolves from RBAC to consider additional attributes in addition to roles and groups. In ABAC, it is possible to use attributes of the user, the resource being accessed, the action, and the wider context. ABAC is policy-based in the sense that it uses policies rather than static permissions to define what is allowed or what is not allowed. Relationship-based access control. Relationship-based access control or ReBAC is a model which evolves from RBAC. In ReBAC, a subject's permission to access a resource is defined by the presence of relationships between those subjects and resources. The advantage of this model is that it allows for fine-grained permissions; for example, in a social network where users can share posts with other specific users. Use and availability. The use of RBAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice. A 2010 report prepared for NIST by the Research Triangle Institute analyzed the economic value of RBAC for enterprises, and estimated benefits per employee from reduced employee downtime, more efficient provisioning, and more efficient access control policy administration. 
In an organization with a heterogeneous IT infrastructure and requirements that span dozens or hundreds of systems and applications, using RBAC to manage sufficient roles and assign adequate role memberships becomes extremely complex without hierarchical creation of roles and privilege assignments. Newer systems extend the older NIST RBAC model to address the limitations of RBAC for enterprise-wide deployments. The NIST model was adopted as a standard by INCITS as ANSI/INCITS 359-2004. A discussion of some of the design choices for the NIST model has also been published. Potential vulnerabilities. Role-based access control interference is a relatively new issue in security applications, where multiple user accounts with dynamic access levels may lead to encryption key instability, allowing an outside user to exploit the weakness for unauthorized access. Key sharing applications within dynamic virtualized environments have shown some success in addressing this problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "PA \\subseteq P \\times R" }, { "math_id": 1, "text": "SA \\subseteq S \\times R" }, { "math_id": 2, "text": "RH \\subseteq R \\times R" } ]
https://en.wikipedia.org/wiki?curid=66181
66186298
Feynman's algorithm
Feynman's algorithm is an algorithm used to simulate the operations of a quantum computer on a classical computer. It is based on the path integral formulation of quantum mechanics, which was formulated by Richard Feynman. Overview. An formula_0-qubit quantum computer takes in a quantum circuit formula_1 that contains formula_2 gates and an input state formula_3. It then outputs a string of bits formula_4 with probability formula_5. In Schrödinger's algorithm, formula_6 is calculated straightforwardly via matrix multiplication. That is, formula_7. The quantum state of the system can be tracked throughout its evolution. In Feynman's path algorithm, formula_6 is instead calculated by summing up the contributions of formula_8 histories. That is, formula_9. Schrödinger's algorithm is faster but uses more memory, while Feynman's is slower but uses less. More precisely, Schrödinger's takes formula_10 time and formula_11 space, while Feynman's takes formula_12 time and formula_13 space. Example. Consider the problem of creating a Bell state. What is the probability that the resulting measurement will be formula_14? Since the quantum circuit that generates a Bell state is the H (Hadamard) gate followed by the CNOT gate, the unitary for this circuit is formula_15. In that case, formula_16 using Schrödinger's algorithm, so the probability that the resulting measurement will be formula_14 is formula_17. Using Feynman's algorithm, the Bell state circuit contains formula_18 histories: formula_19. So formula_20 = |formula_21 + formula_22 + formula_23 + formula_24. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
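To make the two simulation strategies concrete, here is a minimal NumPy sketch (an added illustration, not from the references) that computes P(00) for the Bell-state example worked above, once by Schrödinger-style matrix multiplication and once by a Feynman-style sum over the four intermediate basis-state histories; the gates are applied in the order H⊗I first, then CNOT.

```python
import numpy as np

# Gates for the 2-qubit Bell circuit: apply H on qubit 0, then CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
U1 = np.kron(H, I2)                       # first layer: H tensor I
U2 = np.array([[1, 0, 0, 0],              # second layer: CNOT
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

psi0 = np.zeros(4)
psi0[0] = 1.0                             # |00>

# Schrödinger-style: track the full state vector through the circuit.
psi = U2 @ (U1 @ psi0)
p00_schrodinger = abs(psi[0]) ** 2

# Feynman-style: sum amplitudes over all intermediate basis states x1,
# P(00) = | sum_x1 <00|U2|x1><x1|U1|00> |^2
amp = sum(U2[0, x1] * U1[x1, 0] for x1 in range(4))
p00_feynman = abs(amp) ** 2

print(p00_schrodinger, p00_feynman)       # both print 0.5
```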
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "U" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "|0\\rangle^n" }, { "math_id": 4, "text": "x \\in \\{0, 1\\} ^n" }, { "math_id": 5, "text": "P(x_{m}) = |\\langle x_{m}|U|0\\rangle^n|^2 " }, { "math_id": 6, "text": "P(x_{m})" }, { "math_id": 7, "text": "P(x_{m}) = |\\langle x_{m}|U_{m} U_{m-1} U_{m-2} U_{m-3}, ..., U_{1}|0\\rangle^n|^2" }, { "math_id": 8, "text": "(2^{n})^{m-1}" }, { "math_id": 9, "text": "P(x_{m}) = |\\langle x_{m}|U|0\\rangle^n|^2 = |\\sum_{x_{1}, x_{2}, x_{3}, ... ,x_{m-1} \\in \\{0, 1\\}^n} \\prod_{j=1}^{m} \\langle x_j|U_{j}|x_{j-1}\\rangle|^2" }, { "math_id": 10, "text": "\\sim m 2^{n}" }, { "math_id": 11, "text": "\\sim 2^{n}" }, { "math_id": 12, "text": "\\sim 4^{m}" }, { "math_id": 13, "text": "\\sim m+n" }, { "math_id": 14, "text": "00" }, { "math_id": 15, "text": " (H \\otimes I) \\times CNOT" }, { "math_id": 16, "text": "P(00) = |\\langle 00|(H \\otimes I) \\times CNOT|00\\rangle|^2 = \\frac{1}{2}" }, { "math_id": 17, "text": "\\frac{1}{2}" }, { "math_id": 18, "text": "(2^{2})^{2-1} = 4" }, { "math_id": 19, "text": "00, 01, 10, 11" }, { "math_id": 20, "text": "|\\sum_{00, 01, 10, 11} \\prod_{j=1}^{2} \\langle x_j|U_{j}|x_{j-1}\\rangle|^2" }, { "math_id": 21, "text": " \\langle 00| H \\otimes I|00\\rangle \\times \\langle 00| CNOT|00\\rangle" }, { "math_id": 22, "text": " \\langle 01| H \\otimes I|00\\rangle \\times \\langle 00| CNOT|01\\rangle" }, { "math_id": 23, "text": " \\langle 10| H \\otimes I|00\\rangle \\times \\langle 00| CNOT|10\\rangle" }, { "math_id": 24, "text": " \\langle 11| H \\otimes I|00\\rangle \\times \\langle 00| CNOT|11\\rangle|^2 = |\\frac{1}{\\sqrt{2}} + 0 + 0 + 0|^2 = \\frac{1}{2}" } ]
https://en.wikipedia.org/wiki?curid=66186298
66192010
Topological Hochschild homology
In mathematics, topological Hochschild homology is a topological refinement of Hochschild homology which rectifies some technical issues with computations in characteristic formula_0. For instance, if we consider the formula_1-algebra formula_2, then formula_3, but if we consider the ring structure on formula_4 (as a divided power algebra structure), then there is a significant technical issue: if we set formula_5, so formula_6, and so on, we have formula_7 from the resolution of formula_2 as an algebra over formula_8, i.e. formula_9 This calculation is further elaborated on the Hochschild homology page, but the key point is the pathological behavior of the ring structure on the Hochschild homology of formula_2. In contrast, the topological Hochschild homology ring has the isomorphism formula_10, giving a less pathological theory. Moreover, this calculation forms the basis of many other THH calculations, such as for smooth algebras formula_11. Construction. Recall that the Eilenberg–MacLane spectrum construction embeds ring objects in the derived category of the integers formula_12 into ring spectra over the sphere spectrum (the ring spectrum of the stable homotopy groups of spheres). This makes it possible to take a commutative ring formula_13 and construct a complex analogous to the Hochschild complex using the monoidal product in ring spectra; namely, formula_14 acts formally like the derived tensor product formula_15 over the integers. We define the topological Hochschild complex of formula_13 (which could be a commutative differential graded algebra, or just a commutative algebra) as the simplicial spectrum called the bar complex formula_16 (only one arrow per simplicial level is displayed; the full face and degeneracy maps are suppressed). Because simplicial objects in spectra have a realization as a spectrum, we form the spectrum formula_17 whose homotopy groups formula_18 define the topological Hochschild homology of the ring object formula_13.
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "\\mathbb{Z}" }, { "math_id": 2, "text": "\\mathbb{F}_p" }, { "math_id": 3, "text": "HH_k(\\mathbb{F}_p/\\mathbb{Z}) \\cong \\begin{cases}\n\\mathbb{F}_p & k \\text{ even} \\\\\n0 & k \\text{ odd}\n\\end{cases}" }, { "math_id": 4, "text": "\\begin{align}\nHH_*(\\mathbb{F}_p/\\mathbb{Z}) &= \\mathbb{F}_p\\langle u \\rangle \\\\\n&= \\mathbb{F}_p[u,u^2/2!, u^3/3!,\\ldots]\n\n\\end{align}" }, { "math_id": 5, "text": "u \\in HH_2(\\mathbb{F}_p/\\mathbb{Z})" }, { "math_id": 6, "text": "u^2 \\in HH_4(\\mathbb{F}_p/\\mathbb{Z})" }, { "math_id": 7, "text": "u^p = 0" }, { "math_id": 8, "text": "\\mathbb{F}_p\\otimes^\\mathbf{L}\\mathbb{F}_p" }, { "math_id": 9, "text": "HH_k(\\mathbb{F}_p/\\mathbb{Z}) = H_k(\\mathbb{F}_p\\otimes_{\n \\mathbb{F}_p\\otimes^\\mathbf{L}\\mathbb{F}_p\n}\\mathbb{F}_p)" }, { "math_id": 10, "text": "THH_*(\\mathbb{F}_p) = \\mathbb{F}_p[u]" }, { "math_id": 11, "text": "A/\\mathbb{F}_p" }, { "math_id": 12, "text": "D(\\mathbb{Z})" }, { "math_id": 13, "text": "A" }, { "math_id": 14, "text": "\\wedge_\\mathbb{S}" }, { "math_id": 15, "text": "\\otimes^\\mathbf{L}" }, { "math_id": 16, "text": "\\cdots \\to HA\\wedge_\\mathbb{S}HA\\wedge_\\mathbb{S}HA \\to HA\\wedge_\\mathbb{S}HA \\to HA\n" }, { "math_id": 17, "text": "THH(A) \\in \\text{Spectra}" }, { "math_id": 18, "text": "\\pi_i(THH(A)) " } ]
https://en.wikipedia.org/wiki?curid=66192010
66193
Prefix code
Type of code system A prefix code is a type of code system distinguished by its possession of the "prefix property", which requires that there is no whole code word in the system that is a prefix (initial segment) of any other code word in the system. It is trivially true for fixed-length codes, so only a point of consideration for variable-length codes. For example, a code with code words {9, 55} has the prefix property; a code consisting of {9, 5, 59, 55} does not, because "5" is a prefix of "59" and also of "55". A prefix code is a uniquely decodable code: given a complete and accurate sequence, a receiver can identify each word without requiring a special marker between words. However, there are uniquely decodable codes that are not prefix codes; for instance, the reverse of a prefix code is still uniquely decodable (it is a suffix code), but it is not necessarily a prefix code. Prefix codes are also known as prefix-free codes, prefix condition codes and instantaneous codes. Although Huffman coding is just one of many algorithms for deriving prefix codes, prefix codes are also widely referred to as "Huffman codes", even when the code was not produced by a Huffman algorithm. The term comma-free code is sometimes also applied as a synonym for prefix-free codes but in most mathematical books and articles (e.g.) a comma-free code is used to mean a self-synchronizing code, a subclass of prefix codes. Using prefix codes, a message can be transmitted as a sequence of concatenated code words, without any out-of-band markers or (alternatively) special markers between words to frame the words in the message. The recipient can decode the message unambiguously, by repeatedly finding and removing sequences that form valid code words. This is not generally possible with codes that lack the prefix property, for example {0, 1, 10, 11}: a receiver reading a "1" at the start of a code word would not know whether that was the complete code word "1", or merely the prefix of the code word "10" or "11"; so the string "10" could be interpreted either as a single codeword or as the concatenation of the words "1" then "0". The variable-length Huffman codes, country calling codes, the country and publisher parts of ISBNs, the Secondary Synchronization Codes used in the UMTS W-CDMA 3G Wireless Standard, and the instruction sets (machine language) of most computer microarchitectures are prefix codes. Prefix codes are not error-correcting codes. In practice, a message might first be compressed with a prefix code, and then encoded again with channel coding (including error correction) before transmission. For any uniquely decodable code there is a prefix code that has the same code word lengths. Kraft's inequality characterizes the sets of code word lengths that are possible in a uniquely decodable code. Techniques. If every word in the code has the same length, the code is called a fixed-length code, or a block code (though the term block code is also used for fixed-size error-correcting codes in channel coding). For example, ISO 8859-15 letters are always 8 bits long. UTF-32/UCS-4 letters are always 32 bits long. ATM cells are always 424 bits (53 bytes) long. A fixed-length code of fixed length "k" bits can encode up to formula_0 source symbols. A fixed-length code is necessarily a prefix code. It is possible to turn any code into a fixed-length code by padding fixed symbols to the shorter prefixes in order to meet the length of the longest prefixes. 
Alternately, such padding codes may be employed to introduce redundancy that allows autocorrection and/or synchronisation. However, fixed-length encodings are inefficient in situations where some words are much more likely to be transmitted than others. Truncated binary encoding is a straightforward generalization of fixed-length codes to deal with cases where the number of symbols "n" is not a power of two. Source symbols are assigned codewords of length "k" and "k"+1, where "k" is chosen so that 2^"k" &lt; "n" ≤ 2^("k"+1). Huffman coding is a more sophisticated technique for constructing variable-length prefix codes. The Huffman coding algorithm takes as input the frequencies that the code words should have, and constructs a prefix code that minimizes the weighted average of the code word lengths. (This is closely related to minimizing the entropy.) This is a form of lossless data compression based on entropy encoding. Some codes mark the end of a code word with a special "comma" symbol (also called a sentinel value), different from normal data. This is somewhat analogous to the spaces between words in a sentence; they mark where one word ends and another begins. If every code word ends in a comma, and the comma does not appear elsewhere in a code word, the code is automatically prefix-free. However, reserving an entire symbol only for use as a comma can be inefficient, especially for languages with a small number of symbols. Morse code is an everyday example of a variable-length code with a comma. The long pauses between letters, and the even longer pauses between words, help people recognize where one letter (or word) ends, and the next begins. Similarly, Fibonacci coding uses a "11" to mark the end of every code word. Self-synchronizing codes are prefix codes that allow frame synchronization. Related concepts. A suffix code is a set of words none of which is a suffix of any other; equivalently, a set of words which are the reverse of a prefix code. As with a prefix code, the representation of a string as a concatenation of such words is unique. A bifix code is a set of words which is both a prefix and a suffix code. An optimal prefix code is a prefix code with minimal average length. That is, assume an alphabet of "n" symbols with probabilities formula_1, and let "C" be an optimal prefix code whose codewords have lengths "λ""i". If "C"' is another prefix code and formula_2 are the lengths of the codewords of "C"', then formula_3. Prefix codes in use today. Examples of prefix codes include: Techniques. Commonly used techniques for constructing prefix codes include Huffman codes and the earlier Shannon–Fano codes, and universal codes such as: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; External links.
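As an illustration of the prefix property and the left-to-right decoding procedure described earlier in this article, here is a small Python sketch (not part of the original article); the toy code {a: 0, b: 10, c: 11} is chosen arbitrarily.

```python
def is_prefix_free(codewords):
    """Return True if no codeword is a prefix of another."""
    words = sorted(codewords)          # a prefix always sorts immediately before a word it prefixes
    return all(not words[i + 1].startswith(words[i])
               for i in range(len(words) - 1))

def decode(bits, code):
    """Greedily split a concatenated string into codewords (valid for prefix codes)."""
    inverse = {w: s for s, w in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:             # a complete codeword has been read
            out.append(inverse[buf])
            buf = ""
    if buf:
        raise ValueError("trailing bits do not form a codeword")
    return out

code = {"a": "0", "b": "10", "c": "11"}            # a toy prefix code
assert is_prefix_free(code.values())
assert not is_prefix_free(["9", "5", "59", "55"])  # the non-example from the article
print(decode("011010", code))                      # ['a', 'c', 'a', 'b']
```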
[ { "math_id": 0, "text": "2^{k}" }, { "math_id": 1, "text": "p(A_i)" }, { "math_id": 2, "text": "\\lambda'_i" }, { "math_id": 3, "text": "\\sum_{i=1}^n { \\lambda_i p(A_i) } \\leq \\sum_{i=1}^n { \\lambda'_i p(A_i) } \\!" } ]
https://en.wikipedia.org/wiki?curid=66193
66193763
Sergey Macheret
American physicist and aerospace engineer Sergey O. Macheret (born December 25, 1957) is an American physicist and aerospace engineer known for his contributions to plasma science and engineering. The Macheret formulas for endothermic exchange reactions and the Macheret–Fridman model of vibration-dissociation coupling are widely used for the analysis of hypersonic and other chemically reacting flows. He is a former professor at Purdue University. Career. Macheret graduated from the Moscow Institute of Physics and Technology in 1980 and received his PhD from the Kurchatov Institute in 1985. He worked at Ohio State University, Princeton University, and Lockheed Martin Aeronautics Company. He was a professor at the Purdue University School of Aeronautics and Astronautics from 2014 to 2024. In 2022, he was selected as a Fellow of the American Institute of Aeronautics and Astronautics (AIAA), a distinction given in recognition of notable contributions to the field. Macheret also received the AIAA Plasmadynamics and Lasers Award in 2022 "for pioneering work on novel plasma generation and control methods and on aerospace applications of plasmas." Macheret formulas. The formulas are used for the estimation of non-equilibrium rate coefficients of the simple-exchange endothermic reaction: formula_0. Such reactions appear in low-temperature nonequilibrium plasmas where a substantial fraction of the energy input goes into vibrational excitation of molecules. The formulas are obtained by applying classical mechanics methods for high temperatures and a semi-classical approximation for moderate temperatures. Assumptions: 1. Translational-vibrational nonequilibrium: formula_1 or formula_2. 2. Collisions are collinear and rotational energy effects are negligible. 3. The duration of a collision is much shorter than the period of molecular vibration. 4. The vibrational energy obeys a Boltzmann distribution. Formulas and relations: formula_3 Nonequilibrium factor in the high temperature case (formula_4): formula_5 formula_6 Nonequilibrium factor in the low temperature case (formula_7): formula_8 Legal issues. On February 1, 2023, Macheret was taken into police custody under alleged preliminary charges of distributing and possessing methamphetamine and making an unlawful proposition for sexual favors to an undercover police officer. The methamphetamine-related charge was later dropped, and three unlawful proposition misdemeanor charges were filed. On February 28, 2024, Macheret pleaded guilty to one charge of unlawful proposition that involved an undercover police officer. Two more counts of unlawful proposition were dropped as part of a plea deal. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
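The rate-coefficient expressions in the "Macheret formulas" section above can be evaluated numerically. The sketch below (not from the article's sources) codes the high-temperature nonequilibrium factor; the masses, activation energy, the parameter f_v, and the two temperatures are illustrative inputs only, and all energies and temperatures are assumed to be given in the same units.

```python
import math

def alpha_m(m_x, m_y, m_z):
    """Mass factor alpha_M for the exchange reaction XY(v) + Z -> X + YZ."""
    return m_y * (m_x + m_y + m_z) / ((m_x + m_y) * (m_y + m_z))

def z_high_t(T, Tv, Ea, f_v, m_x, m_y, m_z):
    """High-temperature nonequilibrium factor Z(T, Tv); T, Tv, Ea in the same energy units."""
    a = alpha_m(m_x, m_y, m_z)
    W = Ea * (1.0 - f_v / a)
    return math.exp(-(Ea - W) / (a * Tv + (1.0 - a) * T) - W / T + Ea / T)

# Illustrative numbers only: an N2(v) + O -> N + NO-like case with Ea ~ 3.0 eV,
# T = 0.5 eV, Tv = 1.0 eV, and an assumed value of f_v.
Z = z_high_t(T=0.5, Tv=1.0, Ea=3.0, f_v=0.3, m_x=14, m_y=14, m_z=16)
print(Z)   # the nonequilibrium rate is then k(T, Tv) = Z(T, Tv) * k0(T)
```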
[ { "math_id": 0, "text": "XY(v)+Z \\rightarrow X+YZ" }, { "math_id": 1, "text": " T>T_v " }, { "math_id": 2, "text": " T<T_v " }, { "math_id": 3, "text": "k(T,T_v)=Z(T,T_v) k^0(T)" }, { "math_id": 4, "text": "T \\geq \\theta" }, { "math_id": 5, "text": "Z(T,T_v)=\\exp\\left[ -\\frac{E_a-W}{\\alpha_M T_v + (1-\\alpha_M)T} - \\frac{W}{T}+\\frac{E_a}{T}\\right] " }, { "math_id": 6, "text": "\\alpha_M=\\frac{m_Y(m_X+m_Y+m_Z)}{(m_X+m_Y)(m_Y+m_Z)}, \\quad W=E_a(1-\\frac{f_v}{\\alpha_M})" }, { "math_id": 7, "text": "T<\\theta" }, { "math_id": 8, "text": "Z(T,T_v)=\\left[f_v\\exp(-\\frac{T_v}{\\theta})+(1-f_v)\\exp(-\\frac{T}{\\theta})\\right]^{E_a/\\theta}\\exp\\left(\\frac{E_a}{T}\\right)" } ]
https://en.wikipedia.org/wiki?curid=66193763
6620
Cotangent space
Dual space to the tangent space in differential geometry In differential geometry, the cotangent space is a vector space associated with a point formula_0 on a smooth (or differentiable) manifold formula_1; one can define a cotangent space for every point on a smooth manifold. Typically, the cotangent space, formula_2 is defined as the dual space of the tangent space at "formula_0", formula_3, although there are more direct definitions (see below). The elements of the cotangent space are called cotangent vectors or tangent covectors. Properties. All cotangent spaces at points on a connected manifold have the same dimension, equal to the dimension of the manifold. All the cotangent spaces of a manifold can be "glued together" (i.e. unioned and endowed with a topology) to form a new differentiable manifold of twice the dimension, the cotangent bundle of the manifold. The tangent space and the cotangent space at a point are both real vector spaces of the same dimension and therefore isomorphic to each other via many possible isomorphisms. The introduction of a Riemannian metric or a symplectic form gives rise to a natural isomorphism between the tangent space and the cotangent space at a point, associating to any tangent covector a canonical tangent vector. Formal definitions. Definition as linear functionals. Let formula_1 be a smooth manifold and let formula_0 be a point in formula_1. Let formula_3 be the tangent space at formula_0. Then the cotangent space at "x" is defined as the dual space of formula_3: formula_4 Concretely, elements of the cotangent space are linear functionals on formula_3. That is, every element formula_5 is a linear map formula_6 where formula_7 is the underlying field of the vector space being considered, for example, the field of real numbers. The elements of formula_2 are called cotangent vectors. Alternative definition. In some cases, one might like to have a direct definition of the cotangent space without reference to the tangent space. Such a definition can be formulated in terms of equivalence classes of smooth functions on formula_1. Informally, we will say that two smooth functions "f" and "g" are equivalent at a point formula_0 if they have the same first-order behavior near formula_0, analogous to their linear Taylor polynomials; two functions "f" and "g" have the same first order behavior near formula_0 if and only if the derivative of the function "f" − "g" vanishes at formula_0. The cotangent space will then consist of all the possible first-order behaviors of a function near formula_0. Let formula_1 be a smooth manifold and let "x" be a point in formula_1. Let formula_8be the ideal of all functions in formula_9 vanishing at formula_0, and let formula_10 be the set of functions of the form formula_11, where formula_12. Then formula_8 and formula_10 are both real vector spaces and the cotangent space can be defined as the quotient space formula_13 by showing that the two spaces are isomorphic to each other. This formulation is analogous to the construction of the cotangent space to define the Zariski tangent space in algebraic geometry. The construction also generalizes to locally ringed spaces. The differential of a function. Let formula_14 be a smooth manifold and let formula_15 be a smooth function. The differential of formula_16 at a point formula_0 is the map formula_17 where formula_18 is a tangent vector at formula_0, thought of as a derivation. 
That is, formula_19 is the Lie derivative of formula_16 in the direction formula_20, and one has formula_21. Equivalently, we can think of tangent vectors as tangents to curves, and write formula_22 In either case, formula_23 is a linear map on formula_24 and hence it is a tangent covector at formula_0. We can then define the differential map formula_25 at a point formula_0 as the map which sends formula_16 to formula_23. Properties of the differential map include: the map formula_26 is linear, i.e. formula_27 for constants formula_28 and formula_29, and it satisfies the Leibniz rule formula_30 The differential map provides the link between the two alternate definitions of the cotangent space given above. Since for every formula_31 there exist formula_32 such that formula_33, we have formula_34 formula_35 formula_36 formula_37 That is, every function in formula_38 has differential zero. It follows that for any two functions formula_39 and formula_40, we have formula_41. We can now construct an isomorphism between formula_2 and formula_42 by sending linear maps formula_43 to the corresponding cosets formula_44. Since there is a unique linear map for a given kernel and slope, this is an isomorphism, establishing the equivalence of the two definitions. The pullback of a smooth map. Just as every differentiable map formula_45 between manifolds induces a linear map (called the "pushforward" or "derivative") between the tangent spaces formula_46 every such map induces a linear map (called the "pullback") between the cotangent spaces, only this time in the reverse direction: formula_47 The pullback is naturally defined as the dual (or transpose) of the pushforward. Unraveling the definition, this means the following: formula_48 where formula_49 and formula_50. Note carefully where everything lives. If we define tangent covectors in terms of equivalence classes of smooth maps vanishing at a point, then the definition of the pullback is even more straightforward. Let formula_51 be a smooth function on formula_52 vanishing at formula_53. Then the pullback of the covector determined by formula_51 (denoted formula_54) is given by formula_55 That is, it is the equivalence class of functions on formula_14 vanishing at formula_0 determined by formula_56. Exterior powers. The formula_57-th exterior power of the cotangent space, denoted formula_58, is another important object in differential and algebraic geometry. Vectors in the formula_57-th exterior power, or more precisely sections of the formula_57-th exterior power of the cotangent bundle, are called differential formula_57-forms. They can be thought of as alternating, multilinear maps on formula_57 tangent vectors. For this reason, tangent covectors are frequently called "one-forms".
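For readers who prefer coordinates, the following added illustration (not part of the article) records the standard local expressions for the differential and the pullback defined above; here x^1, …, x^n are coordinates on M, y^1, …, y^k are coordinates on N, and φ denotes the smooth map M → N (called f in the article).

```latex
% Added illustration: local-coordinate formulas for the differential and the pullback.
% In a chart (x^1,\dots,x^n) around x, the covectors dx^i|_x form a basis of T^*_x M, and
\mathrm{d}f_x \;=\; \sum_{i=1}^{n} \frac{\partial f}{\partial x^i}(x)\,\mathrm{d}x^i\big|_x .
% For a smooth map \varphi\colon M \to N with coordinates (y^j) on N, the pullback satisfies
\varphi^{*}\bigl(\mathrm{d}y^j\bigr) \;=\; \sum_{i=1}^{n} \frac{\partial (y^j \circ \varphi)}{\partial x^i}\,\mathrm{d}x^i .
```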
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "\\mathcal M" }, { "math_id": 2, "text": "T^*_x\\!\\mathcal M" }, { "math_id": 3, "text": "T_x\\mathcal M" }, { "math_id": 4, "text": "T^*_x\\!\\mathcal M = (T_x \\mathcal M)^*" }, { "math_id": 5, "text": "\\alpha\\in T^*_x\\mathcal M" }, { "math_id": 6, "text": "\\alpha:T_x\\mathcal M \\to F" }, { "math_id": 7, "text": "F" }, { "math_id": 8, "text": "I_x" }, { "math_id": 9, "text": "C^\\infty\\! (\\mathcal M)" }, { "math_id": 10, "text": "I_x^2" }, { "math_id": 11, "text": "\\sum_i f_i g_i" }, { "math_id": 12, "text": "f_i, g_i \\in I_x" }, { "math_id": 13, "text": "T^*_x\\!\\mathcal M = I_x/I^2_x" }, { "math_id": 14, "text": "M" }, { "math_id": 15, "text": "f\\in C^\\infty(M)" }, { "math_id": 16, "text": "f" }, { "math_id": 17, "text": "\\mathrm d f_x(X_x) = X_x(f)" }, { "math_id": 18, "text": "X_x" }, { "math_id": 19, "text": "X(f)=\\mathcal{L}_Xf" }, { "math_id": 20, "text": "X" }, { "math_id": 21, "text": "\\mathrm df(X)=X(f)" }, { "math_id": 22, "text": "\\mathrm d f_x(\\gamma'(0))=(f\\circ\\gamma)'(0)" }, { "math_id": 23, "text": "\\mathrm df_x" }, { "math_id": 24, "text": "T_xM" }, { "math_id": 25, "text": "\\mathrm d:C^\\infty(M)\\to T_x^*(M)" }, { "math_id": 26, "text": "\\mathrm d" }, { "math_id": 27, "text": "\\mathrm d(af+bg)=a\\mathrm df + b\\mathrm dg" }, { "math_id": 28, "text": "a" }, { "math_id": 29, "text": "b" }, { "math_id": 30, "text": "\\mathrm d(fg)_x=f(x)\\mathrm dg_x+g(x)\\mathrm df_x" }, { "math_id": 31, "text": " f \\in I^2_x " }, { "math_id": 32, "text": "g_i, h_i \\in I_x" }, { "math_id": 33, "text": "f=\\sum_i g_i h_i" }, { "math_id": 34, "text": "\\mathrm d f_x" }, { "math_id": 35, "text": "=\\sum_i \\mathrm d (g_i h_i)_x =" }, { "math_id": 36, "text": " \\sum_i (g_i(x)\\mathrm d(h_i)_x+\\mathrm d(g_i)_x h_{i}(x))=" }, { "math_id": 37, "text": " \\sum_i (0\\mathrm d(h_i)_x+\\mathrm d(g_i)_x 0)=0" }, { "math_id": 38, "text": "I^2_x " }, { "math_id": 39, "text": "f \\in I^2_x" }, { "math_id": 40, "text": "g \\in I_x" }, { "math_id": 41, "text": "\\mathrm d (f+g)=\\mathrm d (g)" }, { "math_id": 42, "text": "I_x/I^2_x" }, { "math_id": 43, "text": "\\alpha" }, { "math_id": 44, "text": "\\alpha + I^2_x" }, { "math_id": 45, "text": "f:M\\to N" }, { "math_id": 46, "text": "f_{*}^{}\\colon T_x M \\to T_{f(x)} N" }, { "math_id": 47, "text": "f^{*}\\colon T_{f(x)}^{*} N \\to T_{x}^{*} M ." }, { "math_id": 48, "text": "(f^{*}\\theta)(X_x) = \\theta(f_{*}^{}X_x) ," }, { "math_id": 49, "text": "\\theta\\in T_{f(x)}^*N" }, { "math_id": 50, "text": "X_x\\in T_xM" }, { "math_id": 51, "text": "g" }, { "math_id": 52, "text": "N" }, { "math_id": 53, "text": "f(x)" }, { "math_id": 54, "text": "\\mathrm d g" }, { "math_id": 55, "text": "f^{*}\\mathrm dg = \\mathrm d(g \\circ f)." }, { "math_id": 56, "text": "g\\circ f" }, { "math_id": 57, "text": "k" }, { "math_id": 58, "text": "\\Lambda^k(T_x^*\\mathcal{M})" } ]
https://en.wikipedia.org/wiki?curid=6620
66205491
Equalized odds
Measure of fairness in machine learning models Equalized odds, also referred to as conditional procedure accuracy equality and disparate mistreatment, is a measure of fairness in machine learning. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal true positive rates and equal false positive rates, satisfying the formula: formula_0 For example, formula_1 could be gender, race, or any other characteristic that we want to be free of bias, while formula_2 would be whether the person is qualified for the degree, and the output formula_3 would be the school's decision whether to offer the person a place to study for the degree. In this context, higher university enrollment rates of African Americans compared to whites with similar test scores might be necessary to fulfill the condition of equalized odds, if the "base rate" of formula_2 differs between the groups. The concept was originally defined for binary-valued formula_2. In 2017, Woodworth et al. generalized the concept further to multiple classes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
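To make the definition concrete, the following sketch (not from the article) estimates the true positive rate and false positive rate of a classifier's decisions R within each group A and reports the gaps that equalized odds requires to be zero; the arrays used here are invented toy data.

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Return the largest TPR gap and FPR gap across groups.

    Equalized odds holds (exactly) when both gaps are zero, i.e.
    P(R=1 | Y=y, A=a) is the same for every group a, for y in {0, 1}.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for a in np.unique(group):
        mask = group == a
        tprs.append(y_pred[mask & (y_true == 1)].mean())   # P(R=1 | Y=1, A=a)
        fprs.append(y_pred[mask & (y_true == 0)].mean())   # P(R=1 | Y=0, A=a)
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Toy example: two groups, binary qualification Y and decision R.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(equalized_odds_gaps(y_true, y_pred, group))  # (0.0, 0.5): equal TPRs, unequal FPRs
```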
[ { "math_id": 0, "text": " P(R = + | Y = y, A = a) = P(R = + | Y = y, A = b) \\quad y \\in \\{+,-\\} \\quad \\forall a,b \\in A " }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "Y" }, { "math_id": 3, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=66205491
66206864
Infinity cube
Foldable cube made of dice An Infinity cube is a kind of mechanical puzzle toy with mathematical principles. Its shape is similar to a 2×2 Rubik's cube. It can be opened and put back together from different directions, thus creating a visually interesting effect. Construction. The principle of the infinity cube is simple, and one can be made by hand with simple paper cutting and pasting. First make 8 small cubes, then arrange the small cubes in a 2 by 2 by 2 arrangement, and tape 8 of the edges together. When combined, there are 28 small squares exposed and 20 small squares hidden inside. Mathematics. Like the Rubik's Cube, the various states of the Infinity cube can be represented as a group, but the Infinity cube has far fewer permutations than the Rubik's Cube. The Rubik's Cube group has formula_1 permutations and is isomorphic to the group below, where formula_2 are alternating groups and formula_3 are cyclic groups: formula_4 The largest group representation for the Infinity cube contains only 6 elements, and can be represented as: formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb Z_6" }, { "math_id": 1, "text": "43{,}252{,}003{,}274{,}489{,}856{,}000\\,\\! = \\frac{12!}{2} \\cdot 2^{12-1} \\cdot 8! \\cdot 3^{8 - 1}" }, { "math_id": 2, "text": "A_n" }, { "math_id": 3, "text": "\\mathbb Z_n" }, { "math_id": 4, "text": "(\\mathbb Z_3^7 \\times \\mathbb Z_2^{11}) \\rtimes \\,((A_8 \\times A_{12}) \\rtimes \\mathbb Z_2)." } ]
https://en.wikipedia.org/wiki?curid=66206864
66207446
Strong and weak sampling
Strong and weak sampling are two sampling approaches in statistics, and are popular in computational cognitive science and language learning. In strong sampling, it is assumed that the data are intentionally generated as positive examples of a concept, while in weak sampling, it is assumed that the data are generated without any restrictions. Formal definition. In strong sampling, we assume that the observation is randomly sampled from the true hypothesis: formula_0 In weak sampling, we assume that observations are randomly sampled and then classified: formula_1 Consequence: posterior computation under weak sampling. formula_2 Therefore, the likelihood formula_3 for all hypotheses formula_4 will be "ignored". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
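The following sketch (not from the article) contrasts the two likelihoods in a Bayesian posterior computation over a small set of interval hypotheses on the integers; the hypothesis space and prior are invented purely for illustration.

```python
from fractions import Fraction

# Hypotheses: integer intervals [lo, hi]; uniform prior (toy example).
hypotheses = [(0, 3), (0, 7), (0, 15)]
prior = {h: Fraction(1, len(hypotheses)) for h in hypotheses}

def likelihood(x, h, strong=True):
    lo, hi = h
    if not (lo <= x <= hi):
        return Fraction(0)
    # Strong sampling: x drawn uniformly from h, so P(x|h) = 1/|h|.
    # Weak sampling: x merely observed to be consistent with h, so P(x|h) = 1.
    return Fraction(1, hi - lo + 1) if strong else Fraction(1)

def posterior(x, strong=True):
    unnorm = {h: likelihood(x, h, strong) * prior[h] for h in hypotheses}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Under strong sampling the smaller consistent hypotheses are favored;
# under weak sampling the posterior just renormalizes the prior over consistent hypotheses.
print(posterior(2, strong=True))    # {(0, 3): 4/7, (0, 7): 2/7, (0, 15): 1/7}
print(posterior(2, strong=False))   # {(0, 3): 1/3, (0, 7): 1/3, (0, 15): 1/3}
```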
[ { "math_id": 0, "text": "P(x|h) = \\begin{cases} \\frac{1}{|h|} & \\text{, if } x \\in h \\\\ 0 & \\text{, otherwise} \\end{cases}" }, { "math_id": 1, "text": "P(x|h) = \\begin{cases} 1 & \\text{, if } x \\in h \\\\ 0 & \\text{, otherwise} \\end{cases}" }, { "math_id": 2, "text": "P(h|x) = \\frac{P(x|h) P(h)}{\\sum\\limits_{h'} P(x|h') P(h')} = \\begin{cases} \\frac{P(h)}{\\sum\\limits_{h': x \\in h'} P(h')} & \\text{, if } x \\in h \\\\ 0 & \\text{, otherwise} \\end{cases} " }, { "math_id": 3, "text": "P(x|h')" }, { "math_id": 4, "text": "h'" } ]
https://en.wikipedia.org/wiki?curid=66207446
66207727
Reinhardt polygon
Polygon with many longest diagonals In geometry, a Reinhardt polygon is an equilateral polygon inscribed in a Reuleaux polygon. As in the regular polygons, each vertex of a Reinhardt polygon participates in at least one defining pair of the diameter of the polygon. Reinhardt polygons with formula_0 sides exist, often with multiple forms, whenever formula_0 is not a power of two. Among all polygons with formula_0 sides, the Reinhardt polygons have the largest possible perimeter for their diameter, the largest possible width for their diameter, and the largest possible width for their perimeter. They are named after Karl Reinhardt, who studied them in 1922. Definition and construction. A Reuleaux polygon is a convex shape with circular-arc sides, each centered on a vertex of the shape and all having the same radius; an example is the Reuleaux triangle. These shapes are curves of constant width. Some Reuleaux polygons have side lengths that are irrational multiples of each other, but if a Reuleaux polygon has sides that can be partitioned into a system of arcs of equal length, then the polygon formed as the convex hull of the endpoints of these arcs is defined as a Reinhardt polygon. Necessarily, the vertices of the underlying Reuleaux polygon are also endpoints of arcs and vertices of the Reinhardt polygon, but the Reinhardt polygon may also have additional vertices, interior to the sides of the Reuleaux polygon. If formula_0 is a power of two, then it is not possible to form a Reinhardt polygon with formula_0 sides. If formula_0 is an odd number, then the regular polygon with formula_0 sides is a Reinhardt polygon. Any other natural number must have an odd divisor formula_1, and a Reinhardt polygon with formula_0 sides may be formed by subdividing each arc of a regular formula_1-sided Reuleaux polygon into formula_2 smaller arcs. Therefore, the possible numbers of sides of Reinhardt polygons are the polite numbers, numbers that are not powers of two. When formula_0 is an odd prime number, or two times a prime number, there is only one shape of formula_0-sided Reinhardt polygon, but all other values of formula_0 have Reinhardt polygons with multiple shapes. Dimensions and optimality. The diameter pairs of a Reinhardt polygon, together with the sides of the polygon, form many isosceles triangles with apex angle formula_3, from which the dimensions of the polygon may be calculated. If the side length of a Reinhardt polygon is 1, then its perimeter is just formula_0. The diameter of the polygon (the longest distance between any two of its points) equals the side length of these isosceles triangles, formula_4. The width of the polygon (the shortest distance between any two parallel supporting lines) equals the height of these triangles, formula_5. These polygons are optimal in three ways: they have the largest possible perimeter among formula_0-sided polygons of a given diameter, the largest possible width among formula_0-sided polygons of a given diameter, and the largest possible width among formula_0-sided polygons of a given perimeter. The relation between perimeter and diameter for these polygons was proven by Reinhardt, and rediscovered independently multiple times. The relation between diameter and width was proven by Bezdek and Fodor in 2000; their work also investigates the optimal polygons for this problem when the number of sides is a power of two (for which Reinhardt polygons do not exist). Symmetry and enumeration. The formula_0-sided Reinhardt polygons formed from formula_1-sided regular Reuleaux polygons are symmetric: they can be rotated by an angle of formula_6 to obtain the same polygon. 
The Reinhardt polygons that have this sort of rotational symmetry are called "periodic", and Reinhardt polygons without rotational symmetry are called "sporadic". If formula_0 is a semiprime, or the product of a power of two with an odd prime power, then all formula_0-sided Reinhardt polygons are periodic. In the remaining cases, when formula_0 has two distinct odd prime factors and is not the product of these two factors, sporadic Reinhardt polygons also exist. For each formula_0, there are only finitely many distinct formula_0-sided Reinhardt polygons. If formula_7 is the smallest prime factor of formula_0, then the number of distinct formula_0-sided periodic Reinhardt polygons is formula_8 where the formula_9 term uses little O notation. However, the number of sporadic Reinhardt polygons is less well-understood, and for most values of formula_0 the total number of Reinhardt polygons is dominated by the sporadic ones. The numbers of these polygons for small values of formula_0 (counting two polygons as the same when they can be rotated or flipped to form each other) are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
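As a numeric check of the dimension formulas in the "Dimensions and optimality" section (a sketch, not from the cited sources), the following computes the perimeter, diameter, and width of a unit-side Reinhardt polygon with n sides and verifies the n = 3 case against the equilateral triangle.

```python
import math

def reinhardt_dimensions(n, side=1.0):
    """Perimeter, diameter, and width of an n-sided Reinhardt polygon with unit sides.

    Uses the isosceles-triangle relations from the article:
    diameter = side / (2 sin(pi / 2n)), width = side / (2 tan(pi / 2n)).
    """
    perimeter = n * side
    diameter = side / (2 * math.sin(math.pi / (2 * n)))
    width = side / (2 * math.tan(math.pi / (2 * n)))
    return perimeter, diameter, width

# n = 3 gives the equilateral triangle: diameter 1, width sqrt(3)/2.
print(reinhardt_dimensions(3))   # (3.0, 1.0, 0.866...)
print(reinhardt_dimensions(7))   # the regular heptagon, the unique 7-sided Reinhardt polygon
```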
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "n/d" }, { "math_id": 3, "text": "\\pi/n" }, { "math_id": 4, "text": "1/2\\sin(\\pi/2n)" }, { "math_id": 5, "text": "1/2\\tan(\\pi/2n)" }, { "math_id": 6, "text": "2\\pi/d" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "\\frac{p2^{n/p}}{4n}\\bigl(1+o(1)\\bigr)," }, { "math_id": 9, "text": "o(1)" } ]
https://en.wikipedia.org/wiki?curid=66207727
6620973
BKL singularity
General relativity model near the beginning of the universe A Belinski–Khalatnikov–Lifshitz (BKL) singularity is a model of the dynamic evolution of the universe near the initial gravitational singularity, described by an anisotropic, chaotic solution of the Einstein field equation of gravitation. According to this model, the universe is chaotically oscillating around a gravitational singularity in which time and space become equal to zero or, equivalently, the spacetime curvature becomes infinitely big. This singularity is physically real in the sense that it is a necessary property of the solution, and will appear also in the exact solution of those equations. The singularity is not artificially created by the assumptions and simplifications made by the other special solutions such as the Friedmann–Lemaître–Robertson–Walker, quasi-isotropic, and Kasner solutions. The model is named after its authors Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, then working at the Landau Institute for Theoretical Physics. The picture developed by BKL has several important elements. These are: The study of the dynamics of the universe in the vicinity of the cosmological singularity has become a rapidly developing field of modern theoretical and mathematical physics. The generalization of the BKL model to the cosmological singularity in multidimensional (Kaluza–Klein type) cosmological models has a chaotic character in the spacetimes whose dimensionality is not higher than ten, while in the spacetimes of higher dimensionalities a universe after undergoing a finite number of oscillations enters into monotonic Kasner-type contracting regime. The development of cosmological studies based on superstring models has revealed some new aspects of the dynamics in the vicinity of the singularity. In these models, mechanisms of changing of Kasner epochs are provoked not by the gravitational interactions but by the influence of other fields present. It was proved that the cosmological models based on six main superstring models plus eleven-dimensional supergravity model exhibit the chaotic BKL dynamics towards the singularity. A connection was discovered between oscillatory BKL-like cosmological models and a special subclass of infinite-dimensional Lie algebras – the so-called hyperbolic Kac–Moody algebras. Introduction. The basis of modern cosmology are the special solutions of the Einstein field equations found by Alexander Friedmann in 1922–1924. The Universe is assumed homogeneous (space has the same metric properties (measures) in all points) and isotropic (space has the same measures in all directions). Friedmann's solutions allow two possible geometries for space: closed model with a ball-like, outwards-bowed space (positive curvature) and open model with a saddle-like, inwards-bowed space (negative curvature). In both models, the Universe is not standing still, it is constantly either expanding (becoming larger) or contracting (shrinking, becoming smaller). This was confirmed by Edwin Hubble who established the Hubble redshift of receding galaxies. The present consensus is that the isotropic model, in general, gives an adequate description of the present state of the Universe; however, isotropy of the present Universe by itself is not a reason to expect that it is adequate for describing the early stages of Universe evolution. At the same time, it is obvious that in the real world homogeneity is, at best, only an approximation. 
Even if one can speak about a homogeneous distribution of matter density at distances that are large compared to the intergalactic space, this homogeneity vanishes at smaller scales. On the other hand, the homogeneity assumption goes very far in a mathematical aspect: it makes the solution highly symmetric which can impart specific properties that disappear when considering a more general case. Another important property of the isotropic model is the inevitable existence of a time singularity: time flow is not continuous, but stops or reverses after time reaches some very large or very small value. Between singularities, time flows in one direction: away from the singularity (arrow of time). In the open model, there is one time singularity so time is limited at one end but unlimited at the other, while in the closed model there are two singularities that limit time at both ends (the Big Bang and Big Crunch). The only physically interesting properties of spacetimes (such as singularities) are those which are stable, i.e., those properties which still occur when the initial data is perturbed slightly. It is possible for a singularity to be stable and yet be of no physical interest: stability is a necessary but not a sufficient condition for physical relevance. For example, a singularity could be stable only in a neighbourhood of initial data sets corresponding to highly anisotropic universes. Since the actual universe is now apparently almost isotropic such a singularity could not occur in our universe. A sufficient condition for a stable singularity to be of physical interest is the requirement that the singularity be generic (or general). Roughly speaking, a stable singularity is generic if it occurs near every set of initial conditions and the non-gravitational fields are restricted in some specified way to "physically realistic" fields so that the Einstein equations, various equations of state, etc., are assumed to hold on the evolved spacetimes. It might happen that a singularity is stable under small variations of the true gravitational degrees of freedom, and yet it is not generic because the singularity depends in some way on the coordinate system, or rather on the choice of the initial hypersurface from which the spacetime is evolved. For a system of non-linear differential equations, such as the Einstein equations, a general solution is not unambiguously defined. In principle, there may be multiple general integrals, and each of those may contain only a finite subset of all possible initial conditions. Each of those integrals may contain all required independent functions which, however, may be subject to some conditions (e.g., some inequalities). Existence of a general solution with a singularity, therefore, does not preclude the existence of other additional general solutions that do not contain a singularity. For example, there is no reason to doubt the existence of a general solution without a singularity that describes an isolated body with a relatively small mass. It is impossible to find a general integral for all space and for all time. However, this is not necessary for resolving the problem: it is sufficient to study the solution near the singularity. This would also resolve another aspect of the problem: the characteristics of spacetime metric evolution in the general solution when it reaches the physical singularity, understood as a point where matter density and invariants of the Riemann curvature tensor become infinite. Existence of physical time singularity. 
One of the principal problems studied by the Landau group (to which BKL belong) was whether relativistic cosmological models necessarily contain a time singularity or whether the time singularity is an artifact of the assumptions used to simplify these models. The independence of the singularity from symmetry assumptions would mean that time singularities exist not only in the special, but also in the general solutions of the Einstein equations. It is reasonable to suggest that if a singularity is present in the general solution, there must be some indications that are based only on the most general properties of the Einstein equations, although those indications by themselves might be insufficient for characterizing the singularity. A criterion for the generality of solutions is the number of independent space coordinate functions that they contain. These include only the "physically independent" functions whose number cannot be reduced by any choice of reference frame. In the general solution, the number of such functions must be enough to fully define the initial conditions (distribution and movement of matter, distribution of gravitational field) at some moment of time chosen as initial. This number is four for an empty (vacuum) space, and eight for a matter- and/or radiation-filled space. Previous work by the Landau group (reviewed in) led to the conclusion that the general solution does not contain a physical singularity. This search for a broader class of solutions with a singularity was done, essentially, by a trial-and-error method, since a systematic approach to the study of the Einstein equations was lacking. A negative result, obtained in this way, is not convincing by itself; a solution with the necessary degree of generality would invalidate it, and at the same time would confirm any positive results related to the specific solution. At that time, the only known indication for the existence of a physical singularity in the general solution was related to the form of the Einstein equations written in a synchronous frame, that is, in a frame in which the proper time "x"0 = "t" is synchronized throughout the whole space; in this frame the space distance element "dl" is separate from the time interval "dt". The Einstein equation written in the synchronous frame gives a result in which the metric determinant "g" inevitably becomes zero in a finite time irrespective of any assumptions about matter distribution. However, the efforts to find a general physical singularity were abandoned after it became clear that the singularity mentioned above is linked with a specific geometric property of the synchronous frame: the crossing of time line coordinates. This crossing takes place on some encircling hypersurfaces which are four-dimensional analogs of the caustic surfaces in geometrical optics; "g" becomes zero exactly at this crossing. Therefore, although this singularity is general, it is fictitious, and not a physical one; it disappears when the reference frame is changed. This, apparently, dissuaded the researchers from further investigations along these lines. Several years passed before interest in this problem waxed again when Penrose (1965) published his theorems that linked the existence of a singularity of unknown character with some very general assumptions that did not have anything in common with a choice of reference frame. Other similar theorems were found later on by Hawking and Geroch (see Penrose–Hawking singularity theorems). 
This revived interest in the search for singular solutions. Generalized homogeneous solution. In a space that is both homogeneous and isotropic the metric is determined completely, leaving free only the sign of the curvature. Assuming only space homogeneity with no additional symmetry such as isotropy leaves considerably more freedom in choosing the metric. The following pertains to the space part of the metric at a given instant of time "t" assuming a synchronous frame so that "t" is the same synchronised time for the whole space. The BKL conjecture. In their 1970 work, BKL stated that "as one approaches a singularity, terms containing time derivatives in Einstein's equations dominate over those containing spatial derivatives". This has since been known as the BKL conjecture and implies that Einstein's partial differential equations (PDE) are well approximated by ordinary differential equations (ODEs), whence the dynamics of general relativity effectively become local and oscillatory. The time evolution of fields at each spatial point is well approximated by the homogeneous cosmologies in the Bianchi classification. By separating the time and space derivatives in the Einstein equations, for example, in the way used for the classification of homogeneous spaces, and then setting the terms containing space derivatives equal to zero, one can define the so-called truncated theory of the system (truncated equations). Then, the BKL conjecture can be made more specific: "Weak conjecture": As the singularity is approached the terms containing space derivatives in the Einstein equations are negligible in comparison to the terms containing time derivatives. Thus, as the singularity is approached the Einstein equations approach those found by setting derivative terms to zero. Thus, the weak conjecture says that the Einstein equations can be well approximated by the truncated equations in the vicinity of the singularity. Note that this does not imply that the solutions of the full equations of motion will approach the solutions to the truncated equations as the singularity is approached. This additional condition is captured in the strong version as follows. "Strong conjecture": As the singularity is approached the Einstein equations approach those of the truncated theory and in addition the solutions to the full equations are well approximated by solutions to the truncated equations. In the beginning, the BKL conjecture seemed to be coordinate-dependent and rather implausible. Barrow and Tipler, for example, among the ten criticisms of BKL studies, include the inappropriate (according to them) choice of synchronous frame as a means to separate time and space derivatives. The BKL conjecture was sometimes rephrased in the literature as a statement that near the singularity only the time derivatives are important. Such a statement, taken at face value, is wrong or at best misleading since, as shown in the BKL analysis itself, space-like gradients of the metric tensor cannot be neglected for generic solutions of pure Einstein gravity in four spacetime dimensions, and in fact play a crucial role in the appearance of the oscillatory regime. However, there exist reformulations of Einstein theory in terms of new variables involving the relevant gradients, for example in Ashtekar-like variables, for which the statement about the dominant role of the time derivatives is correct. 
It is true that one gets at each spatial point an effective description of the singularity in terms of a finite dimensional dynamical system described by ordinary differential equations with respect to time, but the spatial gradients do enter these equations non-trivially. Subsequent analysis by a large number of authors has shown that the BKL conjecture can be made precise and by now there is an impressive body of numerical and analytical evidence in its support. It is fair to say that we are still quite far from a proof of the strong conjecture. But there has been outstanding progress in simpler models. In particular, Berger, Garfinkle, Moncrief, Isenberg, Weaver, and others showed that, in a class of models, as the singularity is approached the solutions to the full Einstein field equations approach the "velocity term dominated" (truncated) ones obtained by neglecting spatial derivatives. Andersson and Rendall showed that for gravity coupled to a massless scalar field or a stiff fluid, for every solution to the truncated equations there exists a solution to the full field equations that converges to the truncated solution as the singularity is approached, even in the absence of symmetries. These results were generalized to also include p-form gauge fields. In these truncated models the dynamics are simpler, allowing a precise statement of the conjecture that could be proven. In the general case, the strongest evidence to date comes from numerical evolutions. Berger and Moncrief began a program to analyze generic cosmological singularities. While the initial work focused on symmetry reduced cases, more recently Garfinkle performed numerical evolution of space-times with no symmetries in which, again, the mixmaster behavior is apparent. Finally, additional support for the conjecture has come from a numerical study of the behavior of test fields near the singularity of a Schwarzschild black hole. Kasner solution. The BKL approach to anisotropic (as opposed to isotropic) homogeneous spaces starts with a generalization of an exact particular solution derived by Kasner for a field in vacuum, in which the space is homogeneous and has a Euclidean metric that depends on time according to the Kasner metric ("dl" is the line element; "dx", "dy", "dz" are infinitesimal displacements in the three spatial dimensions, and "t" is time period passed since some initial moment "t"0 = 0). Here, "p"1, "p"2, "p"3 are any three numbers that satisfy the following "Kasner conditions" Because of these relations, only one of the three numbers is independent (two equations with three unknowns). All three numbers are never the same; two numbers are the same only in the sets of values formula_0 and (0, 0, 1). In all other cases the numbers are different, one number is negative and the other two are positive. This is partially proved by squaring both sides of the first condition eq. 3 and developing the square: formula_1 The term formula_2 is equal to 1 by dint of the second condition eq. 3 and therefore the term with the mixed products should be zero. This is possible if at least one of the "p"1, "p"2, "p"3 is negative. If the numbers are arranged in increasing order, "p"1 &lt; "p"2 &lt; "p"3, they change in the intervals (Fig. 4) The Kasner metric eq. 2 corresponds to a flat homogenous but anisotropic space in which all volumes increase with time in such a way that the linear distances along two axes "y" and "z" increase while the distance along the axis "x" decreases. 
The moment "t" = 0 causes a singularity in the solution; the singularity in the metric at "t" = 0 cannot be avoided by any reference frame transformation. At the singularity, the invariants of the four-dimensional curvature tensor go to infinity. An exception is the case "p"1 = "р"2 = 0, "р"3 = 1; these values correspond to a flat spacetime: the transformation "t" sh "z" = ζ, "t" ch "z" = τ turns the Kasner metric (eq. 2) into Galilean. BKL parametrize the numbers "p"1, "p"2, "p"3 in terms of a single independent (real) parameter "u" (Lifshitz-Khalatnikov parameter) as follows The Kasner index parametrization appears mysterious until one thinks about the two constraints on the indices eq. 3. Both constraints fix the overall scale of the indices so that only their ratios can vary. It is natural to pick one of those ratios as a new parameter, which can be done in six different ways. Picking "u" = "u"32 = "p"3 / "p"2, for example, it is trivial to express all six possible ratios in terms of it. Eliminating "p"3 = "up"2 first, and then using the linear constraint to eliminate "p"1 = 1 − "p"2 − "up"2 = 1 − (1 + "u")"p"2, the quadratic constraint reduces to a quadratic equation in "p"2 with roots "p"2 = 0 (obvious) and "p"2 = (1 + "u") / (1 + "u" + "u"2), from which "p"1 and "p"3 are then obtained by back substitution. One can define six such parameters "uab" = "pa" / "pb", for which "pc" ≤ "pb" ≤ "pa" when ("c", "b", "a") is a cyclic permutation of (1, 2, 3). All different values of "p"1, "p"2, "p"3 ordered as above are obtained with "u" running in the range "u" ≥ 1. The values "u" &lt; 1 are brought into this range according to In the generalized solution, the form corresponding to eq. 2 applies only to the asymptotic metric (the metric close to the singularity "t" = 0), respectively, to the major terms of its series expansion by powers of "t". In the synchronous reference frame it is written in the form of eq. 1 with a space distance element where The three-dimensional vectors l, m, n define the directions at which space distance changes with time by the power laws eq. 8. These vectors, as well as the numbers "pl", "pm", "pn" which, as before, are related by eq. 3, are functions of the space coordinates. The powers "pl", "pm", "pn" are not arranged in increasing order, reserving the symbols "p"1, "p"2, "p"3 for the numbers in eq. 5 that remain arranged in increasing order. The determinant of the metric of eq. 7 is where "v" = l[mn]. It is convenient to introduce the following quantities The space metric in eq. 7 is anisotropic because the powers of "t" in eq. 8 cannot have the same values. On approaching the singularity at "t" = 0, the linear distances in each space element decrease in two directions and increase in the third direction. The volume of the element decreases in proportion to "t". The Kasner metric is introduced in the Einstein equations by substituting the respective metric tensor γαβ from eq. 7 without defining "a priori" the dependence of "a", "b", "c" from "t": formula_3 where the dot above a symbol designates differentiation with respect to time. The Einstein equation eq. 11 takes the form All its terms are to a second order for the large (at "t" → 0) quantity 1/"t". In the Einstein equations eq. 12, terms of such order appear only from terms that are time-differentiated. If the components of "P"αβ do not include terms of order higher than two, then where indices "l", "m", "n" designate tensor components in the directions l, m, n. These equations together with eq. 
14 give the expressions eq. 8 with powers that satisfy eq. 3. However, the presence of one negative power among the 3 powers "pl", "pm", "pn" results in the appearance of terms from "P"αβ with an order greater than "t"−2. If the negative power is "pl" ("pl" = "p"1 < 0), then "P"αβ contains the coordinate function λ and eq. 12 become Here, the second terms are of order "t"−2("pm" + "pn" − "pl") whereby "pm" + "pn" − "pl" = 1 + 2 |"pl"| > 1. To remove these terms and restore the metric eq. 7, it is necessary to impose on the coordinate functions the condition λ = 0. The remaining three Einstein equations eq. 13 contain only first order time derivatives of the metric tensor. They give three time-independent relations that must be imposed as necessary conditions on the coordinate functions in eq. 7. This, together with the condition λ = 0, makes four conditions. These conditions bind ten different coordinate functions: three components of each of the vectors l, m, n, and one function in the powers of "t" (any one of the functions "pl", "pm", "pn", which are bound by the conditions eq. 3). When calculating the number of physically arbitrary functions, it must be taken into account that the synchronous system used here allows time-independent arbitrary transformations of the three space coordinates. Therefore, the final solution contains overall 10 − 4 − 3 = 3 physically arbitrary functions, which is one less than what is needed for the general solution in vacuum. The degree of generality reached at this point is not lessened by introducing matter; matter is written into the metric eq. 7 and contributes four new coordinate functions necessary to describe the initial distribution of its density and the three components of its velocity. This makes it possible to determine matter evolution merely from the laws of its movement in an "a priori" given gravitational field, which are the hydrodynamic equations where "ui" is the 4-dimensional velocity, ε and σ are the densities of energy and entropy of matter. For the ultrarelativistic equation of state "p" = ε/3 the entropy σ ~ ε3/4. The major terms in eq. 17 and eq. 18 are those that contain time derivatives. From eq. 17 and the space components of eq. 18 one has formula_4 resulting in where 'const' are time-independent quantities. Additionally, from the identity "uiui" = 1 one has (because all covariant components of "u"α are to the same order) formula_5 where "un" is the velocity component along the direction of n that is connected with the highest (positive) power of "t" (supposing that "pn" = "p"3). From the above relations, it follows that or The above equations can be used to confirm that the components of the matter stress-energy-momentum tensor standing in the right hand side of the equations formula_6 are, indeed, of lower order in 1/"t" than the major terms in their left hand sides. In the equations formula_7 the presence of matter results only in a change of the relations imposed on their constituent coordinate functions. The fact that ε becomes infinite by the law eq. 21 confirms that in the solution to eq. 7 one deals with a physical singularity at any values of the powers "p"1, "p"2, "p"3 excepting only (0, 0, 1). For these last values, the singularity is non-physical and can be removed by a change of reference frame. The fictitious singularity corresponding to the powers (0, 0, 1) arises as a result of time line coordinates crossing over some 2-dimensional "focal surface". 
As has been pointed out, a synchronous reference frame can always be chosen in such a way that this inevitable time line crossing occurs exactly on such a surface (instead of a 3-dimensional caustic surface). Therefore, a solution with such a fictitious singularity, simultaneous for the whole space, must exist with the full set of arbitrary functions needed for the general solution. Close to the point "t" = 0 it allows a regular expansion in whole powers of "t". For an analysis of this case, see the literature. Oscillating mode towards the singularity. The general solution by definition is completely stable; otherwise the Universe would not exist. Any perturbation is equivalent to a change in the initial conditions at some moment of time; since the general solution allows arbitrary initial conditions, the perturbation is not able to change its character. Looked at from this angle, the four conditions imposed on the coordinate functions in the solution eq. 7 are of different types: the three conditions that arise from the equations formula_8 = 0 are "natural"; they are a consequence of the structure of the Einstein equations. However, the additional condition λ = 0, which causes the loss of one derivative function, is of an entirely different type: instability caused by perturbations can break this condition. The action of such a perturbation must bring the model to another, more general, mode. The perturbation cannot be considered as small: a transition to a new mode exceeds the range of very small perturbations. The analysis of the behavior of the model under perturbative action, performed by BKL, delineates a complex oscillatory mode on approaching the singularity. They could not give all details of this mode in the broad frame of the general case. However, BKL explained the most important properties and character of the solution on specific models that allow far-reaching analytical study. These models are based on a homogeneous space metric of a particular type. Supposing a homogeneity of space without any additional symmetry leaves a great freedom in choosing the metric. All possible homogeneous (but anisotropic) spaces are classified, according to Bianchi, into several Bianchi types (Type I to IX) (see also Generalized homogeneous solution). BKL investigate only spaces of Bianchi Types VIII and IX. If the metric has the form of eq. 7, for each type of homogeneous space there exists some functional relation between the reference vectors l, m, n and the space coordinates. The specific form of this relation is not important. The important fact is that for Type VIII and IX spaces, the quantities λ, μ, ν eq. 10 are constants while all "mixed" products l rot m, l rot n, m rot l, etc. are zeros. For Type IX spaces, the quantities λ, μ, ν have the same sign and one can write λ = μ = ν = 1 (the simultaneous sign change of the 3 constants does not change anything). For Type VIII spaces, 2 constants have a sign that is opposite to the sign of the third constant; one can write, for example, λ = − 1, μ = ν = 1. The study of the effect of the perturbation on the "Kasner mode" is thus confined to a study of the effect of the λ-containing terms in the Einstein equations. Type VIII and IX spaces are the most suitable models for such a study. Since all 3 quantities λ, μ, ν in those Bianchi types differ from zero, the condition λ = 0 does not hold irrespective of which of the directions l, m, n has a negative power-law time dependence. 
The Einstein equations for the Type VIII and Type IX space models are (the remaining components formula_9, formula_10, formula_11, formula_12, formula_13, formula_14 are identically zeros). These equations contain only functions of time; this is a condition that has to be fulfilled in all homogeneous spaces. Here, the eq. 22 and eq. 23 are exact and their validity does not depend on how near one is to the singularity at "t" = 0. The time derivatives in eq. 22 and eq. 23 take a simpler form if "а", "b", "с" are substituted by their logarithms α, β, γ: substituting the variable "t" for τ according to: Then (subscripts denote differentiation by τ): Adding together equations eq. 26 and substituting in the left hand side the sum (α + β + γ)τ τ according to eq. 27, one obtains an equation containing only first derivatives which is the first integral of the system eq. 26: This equation plays the role of a binding condition imposed on the initial state of eq. 26. The Kasner mode eq. 8 is a solution of eq. 26 when ignoring all terms in the right hand sides. But such situation cannot go on (at "t" → 0) indefinitely because among those terms there are always some that grow. Thus, if the negative power is in the function "a"("t") ("pl" = "p"1) then the perturbation of the Kasner mode will arise by the terms λ2"a"4; the rest of the terms will decrease with decreasing "t". If only the growing terms are left in the right hand sides of eq. 26, one obtains the system: (compare eq. 16; below it is substituted λ2 = 1). The solution of these equations must describe the metric evolution from the initial state, in which it is described by eq. 8 with a given set of powers (with "pl" &lt; 0); let "pl" = "р"1, "pm" = "р"2, "pn" = "р"3 so that Then where Λ is constant. Initial conditions for eq. 29 are redefined as Equations eq. 29 are easily integrated; the solution that satisfies the condition eq. 32 is where "b"0 and "c"0 are two more constants. It can easily be seen that the asymptotic of functions eq. 33 at "t" → 0 is eq. 30. The asymptotic expressions of these functions and the function "t"(τ) at τ → −∞ is formula_15 Expressing "a", "b", "c" as functions of "t", one has where Then The above shows that perturbation acts in such a way that it changes one Kasner mode with another Kasner mode, and in this process the negative power of "t" flips from direction l to direction m: if before it was "pl" &lt; 0, now it is "p'm" &lt; 0. During this change the function "a"("t") passes through a maximum and "b"("t") passes through a minimum; "b", which before was decreasing, now increases: "a" from increasing becomes decreasing; and the decreasing "c"("t") decreases further. The perturbation itself (λ2"a"4α in eq. 29), which before was increasing, now begins to decrease and die away. Further evolution similarly causes an increase in the perturbation from the terms with μ2 (instead of λ2) in eq. 26, next change of the Kasner mode, and so on. It is convenient to write the power substitution rule eq. 35 with the help of the parametrization eq. 5: The greater of the two positive powers remains positive. BKL call this flip of negative power between directions a "Kasner epoch". The key to understanding the character of metric evolution on approaching singularity is exactly this process of Kasner epoch alternation with flipping of powers "pl", "pm", "pn" by the rule eq. 37. The successive alternations eq. 
37 with flipping of the negative power "p"1 between directions l and m (Kasner epochs) continues by depleting the whole (integer) part of the initial "u" until the moment at which "u" < 1. The value "u" < 1 transforms into "u" > 1 according to eq. 6; at this moment the negative power is "pl" or "pm" while "pn" becomes the lesser of the two positive numbers ("pn" = "p"2). The next series of Kasner epochs then flips the negative power between directions n and l or between n and m. At an arbitrary (irrational) initial value of "u" this process of alternation continues indefinitely. In the exact solution of the Einstein equations, the powers "pl", "pm", "pn" lose their original precise sense. This circumstance introduces some "fuzziness" in the determination of these numbers (and, together with them, of the parameter "u") which, although small, makes meaningless the analysis of any definite (for example, rational) values of "u". Therefore, only those laws that concern arbitrary irrational values of "u" have any particular meaning. The larger periods in which the scales of space distances along two axes oscillate while distances along the third axis decrease monotonously are called "eras"; volumes decrease by a law close to ~ "t". On transition from one era to the next, the direction in which distances decrease monotonously flips from one axis to another. The order of these transitions acquires the asymptotic character of a random process. The same random order is also characteristic for the alternation of the lengths of successive eras (by era length, BKL understand the number of Kasner epochs that an era contains, and not a time interval). To each era ("s"-th era) corresponds a series of values of the parameter "u" starting from the greatest, formula_16, and through the values formula_16 − 1, formula_16 − 2, ..., down to the smallest, formula_17 < 1. Then that is, "k"("s") = [formula_16] where the brackets mean the whole part of the value. The number "k"("s") is the era length, measured by the number of Kasner epochs that the era contains. For the next era In the limitless series of numbers "u", composed by these rules, there are arbitrarily small (but never zero) values "x"("s") and correspondingly infinitely large lengths "k"("s"). The series of eras becomes denser on approaching "t" = 0. However, the natural variable for describing the time course of this evolution is not the world time "t", but its logarithm, ln "t", by which the whole process of reaching the singularity is extended to −∞. According to eq. 33, one of the functions "a", "b", "c", that passes through a maximum during a transition between Kasner epochs, at the peak of its maximum is where it is supposed that "a"max is large compared to "b"0 and "c"0; in eq. 38 "u" is the value of the parameter in the Kasner epoch before transition. It can be seen from here that the peaks of consecutive maxima during each era are gradually lowered. Indeed, in the next Kasner epoch this parameter has the value "u"' = "u" − 1, and Λ is substituted according to eq. 36 with Λ' = Λ(1 − 2|"p"1("u")|). Therefore, the ratio of two consecutive maxima is formula_18 and finally The above are solutions to the Einstein equations in vacuum. As for the pure Kasner mode, matter does not change the qualitative properties of this solution and can be written into it disregarding its reaction on the field. 
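The bookkeeping just described (the parameter "u" dropping by one per Kasner epoch, the rule "u" < 1 → 1/"u" opening a new era, and the exchange of the oscillating pair of axes) can be sketched in a few lines of Python; this is an illustration rather than code from the BKL papers, the function name run_eras and the axis labels are ad hoc, and the flip-count convention inside an era is schematic. The era lengths it records, "k"("s") = [formula_16], are exactly the denominators of the continued-fraction expansion of the initial value of "u", which is the observation underlying the statistical analysis further below.

# Illustrative sketch: alternation of Kasner epochs and eras.
import math

def run_eras(u0, n_eras=10):
    u = u0
    neg, osc, mono = "l", "m", "n"    # axis with p < 0, its oscillating partner, monotonic axis
    history = []
    for _ in range(n_eras):
        k = math.floor(u)             # era length k(s) = [u_max], the number of Kasner epochs
        if k % 2 == 1:                # after k flips of the negative power only the parity matters
            neg, osc = osc, neg
        history.append((k, mono))     # record era length and the monotonically contracting axis
        x = u - k                     # terminating value, 0 < x < 1
        assert x > 0, "u must be irrational for the alternation to continue"
        u = 1.0 / x                   # the rule u < 1 -> 1/u starts the next era
        osc, mono = mono, osc         # the formerly monotonic axis joins the oscillating pair
    return history

golden = (1 + math.sqrt(5)) / 2       # continued fraction [1; 1, 1, ...]
print(run_eras(golden, 8))            # every era has length 1
print(run_eras(math.pi, 6))           # era lengths 3, 7, 15, 1, 292, 1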
However, if matter is written in this way into the model under discussion, understood as an exact solution of the Einstein equations, the resulting picture of matter evolution would not have a general character and would be specific to the high symmetry inherent in the present model. Mathematically, this specificity is related to the fact that for the homogeneous space geometry discussed here, the Ricci tensor components formula_19 are identically zero and therefore the Einstein equations would not allow movement of matter (which gives non-zero stress energy-momentum tensor components formula_20). In other words, the synchronous frame must also be co-moving with respect to matter. If one substitutes in eq. 19 "u"α = 0, "u"0 = 1, it becomes ε ~ ("abc")−4/3 ~ "t"−4/3. This difficulty is avoided if one includes in the model only the major terms of the limiting (at "t" → 0) metric and writes into it a matter with arbitrary initial distribution of densities and velocities. Then the course of evolution of matter is determined by its general laws of movement eq. 17 and eq. 18 that result in eq. 21. During each Kasner epoch, density increases by the law where "p"3 is, as above, the greatest of the numbers "p"1, "p"2, "p"3. Matter density increases monotonously during all evolution towards the singularity. Metric evolution. Very large "u" values correspond to Kasner powers which are close to the values (0, 0, 1). Two values that are close to zero are also close to each other, and therefore the changes in two out of the three types of "perturbations" (the terms with λ, μ and ν in the right hand sides of eq. 26) are also very similar. If at the beginning of such a long era these terms are very close in absolute value at the moment of transition between two Kasner epochs (or are made artificially close by the choice of initial conditions) then they will remain close during the greatest part of the length of the whole era. In this case (BKL call this the case of "small oscillations"), analysis based on the action of one type of perturbation becomes incorrect; one must take into account the simultaneous effect of two perturbation types. Two perturbations. Consider a long era, during which two of the functions "a", "b", "c" (let them be "a" and "b") undergo small oscillations while the third function ("c") decreases monotonously. The latter function quickly becomes small; consider the solution just in the region where one can ignore "c" in comparison to "a" and "b". The calculations are first done for the Type IX space model by substituting accordingly λ = μ = ν = 1. After ignoring function "c", the first two equations eq. 26 give and eq. 28 can be used as a third equation, which takes the form The solution of eq. 44 is written in the form formula_21 where α0, ξ0 are positive constants, and τ0 is the upper limit of the era for the variable τ. It is convenient to introduce further a new variable (instead of τ) Then Equations eq. 45 and eq. 46 are transformed by introducing the variable χ = α − β: Decrease of τ from τ0 to −∞ corresponds to a decrease of ξ from ξ0 to 0. The long era with close "a" and "b" (that is, with small χ), considered here, is obtained if ξ0 is a very large quantity. Indeed, at large ξ the solution of eq. 49 in the first approximation by 1/ξ is where "A" is a constant; the multiplier formula_22 makes χ a small quantity so it can be substituted in eq. 49 by sh 2χ ≈ 2χ. From eq. 50 one obtains formula_23 After determining α and β from eq. 48 and eq. 
51 and expanding "e"α and "e"β in series according to the above approximation, one obtains finally: The relation between the variable ξ and time "t" is obtained by integration of the definition "dt" = "abc d"τ which gives The constant "c"0 (the value of "c" at ξ = ξ0) should now be "c"0 formula_24 α0. Let us now consider the domain ξ formula_24 1. Here the major terms in the solution of eq. 49 are: formula_25 where "k" is a constant in the range − 1 < "k" < 1; this condition ensures that the last term in eq. 49 is small (sh 2χ contains ξ2"k" and ξ−2"k"). Then, after determining α, β, and "t", one obtains This is again a Kasner mode with the negative "t" power present in the function "c"("t"). These results picture an evolution that is qualitatively similar to that described above. During a long period of time that corresponds to a large decreasing ξ value, the two functions "a" and "b" oscillate, remaining close in magnitude formula_26; at the same time, both functions "a" and "b" slowly (formula_27) decrease. The period of oscillations is constant in the variable ξ : Δξ = 2π (or, which is the same, with a constant period in logarithmic time: Δ ln "t" = 2π"A"2). The third function, "c", decreases monotonously by a law close to "c" = "c"0"t"/"t"0. This evolution continues until ξ ≈ 1 and formulas eq. 52 and eq. 53 are no longer applicable. Its time duration corresponds to a change of "t" from "t"0 to the value "t"1, related to ξ0 according to The relationship between ξ and "t" during this time can be presented in the form After that, as seen from eq. 55, the decreasing function "c" starts to increase while the functions "a" and "b" start to decrease. This Kasner epoch continues until the terms "c"2/"a"2"b"2 in eq. 22 become ~ "t"2 and the next series of oscillations begins. The law for density change during the long era under discussion is obtained by substitution of eq. 52 in eq. 20: When ξ changes from ξ0 to ξ ≈ 1, the density increases formula_28 times. It must be stressed that although the function "c"("t") changes by a law close to "c" ~ "t", the metric eq. 52 does not correspond to a Kasner metric with powers (0, 0, 1). The latter corresponds to an exact solution found by Taub which is allowed by eqs. 26–27 and in which where "p", δ1, δ2 are constants. In the asymptotic region τ → −∞, one can obtain from here "a" = "b" = const, "c" = const·"t" after the substitution "e""p"τ = "t". In this metric, the singularity at "t" = 0 is non-physical. Let us now describe the analogous study of the Type VIII model, substituting in eqs. 26–28 λ = −1, μ = ν = 1. If during the long era the monotonically decreasing function is "a", nothing changes in the foregoing analysis: ignoring "a"2 on the right side of equations 26 and 28 leads back to the same equations 49 and 50 (with altered notation). Some changes occur, however, if the monotonically decreasing function is "b" or "c"; let it be "c". As before, one has equation 49 with the same symbols, and, therefore, the former expressions eq. 52 for the functions "a"(ξ) and "b"(ξ), but equation 50 is replaced by The major term at large ξ now becomes formula_29 so that The value of "c" as a function of time "t" is again "c" = "c"0"t"/"t"0 but the time dependence of ξ changes. The length of a long era depends on ξ0 according to On the other hand, the value ξ0 determines the number of oscillations of the functions "a" and "b" during an era (equal to ξ0/2π). 
Given the length of an era in logarithmic time (i.e., with a given ratio "t"0/"t"1) the number of oscillations for Type VIII will be, generally speaking, less than for Type IX. For the period of oscillations one gets now Δ ln "t" = πξ/2; contrary to Type IX, the period is not constant throughout the long era, and slowly decreases along with ξ. The small-time domain. Long eras violate the "regular" course of evolution which makes it difficult to study the evolution of time intervals spanning several eras. It can be shown, however, that such "abnormal" cases appear in the spontaneous evolution of the model to a singular point in the asymptotically small times "t" at sufficiently large distances from a start point with arbitrary initial conditions. Even in long eras both oscillatory functions during transitions between Kasner epochs remain so different that the transition occurs under the influence of only one perturbation. All results in this section relate equally to models of the types VIII and IX. During each Kasner epoch "abc" = Λ"t", i.e. α + β + γ = ln Λ + ln "t". On changing over from one epoch (with a given value of the parameter "u") to the next epoch the constant Λ is multiplied by 1 + 2"p"1 = (1 – "u" + "u"2)/(1 + "u" + "u"2) < 1. Thus a systematic decrease in Λ takes place. But it is essential that the mean (with respect to the lengths "k" of eras) value of the entire variation of ln Λ during an era is finite. Actually the divergence of the mean value could be due only to a too rapid increase of this variation with increasing "k". For large values of the parameter "u", ln(1 + 2"p"1) ≈ −2/"u". For a large "k" the maximal value "u"(max) = "k" + "x" ≈ "k". Hence the entire variation of ln Λ during an era is given by a sum of the form formula_30 with only the terms that correspond to large values of "u" written down. When "k" increases this sum increases as ln "k". But the probability for the appearance of an era of a large length "k" decreases as 1/"k"2 according to eq. 76; hence the mean value of the sum above is finite. Consequently, the systematic variation of the quantity ln Λ over a large number of eras will be proportional to this number. But it is seen in eq. 85 that with "t" → 0 the number "s" increases merely as ln |ln "t"|. Thus in the asymptotic limit of arbitrarily small "t" the term ln Λ can indeed be neglected as compared to ln "t". In this approximation where Ω denotes the "logarithmic time" and the process of epoch transitions can be regarded as a series of brief time flashes. The magnitudes of the maxima of the oscillating scale functions are also subject to a systematic variation. From eq. 39 for u ≫ 1 it follows that formula_31. In the same way as was done above for the quantity ln Λ, one can hence deduce that the mean decrease in the height of the maxima during an era is finite and the total decrease over a large number of eras increases with "t" → 0 merely as ln Ω. At the same time the lowering of the minima, and by the same token the increase of the amplitude of the oscillations, proceed (eq. 77) proportionally to Ω. In correspondence with the adopted approximation the lowering of the maxima is neglected in comparison with the increase of the amplitudes, so that one can set αmax = 0, βmax = 0, γmax = 0 for the maximal values of all oscillating functions, and the quantities α, β, γ run only through negative values that are connected with one another at each instant of time by the relation eq. 63. 
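As a hedged illustration of the finiteness argument given above (not from the original papers), the per-epoch change ln(1 + 2"p"1) = ln((1 − "u" + "u"2)/(1 + "u" + "u"2)) can be summed exactly over an era of length "k", showing growth in magnitude like 2 ln "k", and weighting 2 ln "k" with a ~1/"k"2 tail as in eq. 76 gives a convergent truncated mean:

# Illustrative sketch: the change of ln(Lambda) accumulated over one era.
import math

def delta_ln_Lambda(k, x=0.5):
    """Sum of ln(1 + 2*p1(u)) over one era, u = k + x, k + x - 1, ..., x + 1."""
    total, u = 0.0, k + x
    for _ in range(k):
        total += math.log((1 - u + u * u) / (1 + u + u * u))
        u -= 1.0
    return total

for k in (5, 50, 500):
    # grows in magnitude like 2*ln(k), up to an x-dependent constant
    print(k, round(delta_ln_Lambda(k), 3), round(-2 * math.log(k), 3))

# weighting 2*ln(k) with a ~1/k**2 tail gives a finite mean contribution
print("truncated mean contribution:", round(sum(2 * math.log(k) / k**2 for k in range(2, 200_000)), 3))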
Considering such an instantaneous change of epochs (the epoch transitions regarded, as above, as brief flashes), the transition periods are ignored as small in comparison to the epoch length; this condition is actually fulfilled. Replacement of α, β, and γ maxima with zeros requires that the quantities ln (|"p"1|Λ) be small in comparison with the amplitudes of oscillations of the respective functions. As mentioned above, during transitions between eras |"p"1| values can become very small while their magnitude and probability for occurrence are not related to the oscillation amplitudes in the respective moment. Therefore, in principle, it is possible to reach so small |"p"1| values that the above condition (zero maxima) is violated. Such a drastic drop of αmax can lead to various special situations in which the transition between Kasner epochs by the rule eq. 37 becomes incorrect (including the situations described above). These "dangerous" situations could break the laws used for the statistical analysis below. As mentioned, however, the probability for such deviations converges asymptotically to zero; this issue will be discussed below. Consider an era that contains "k" Kasner epochs with a parameter "u" running through the values and let α and β be the oscillating functions during this era (Fig. 4). Initial moments of Kasner epochs with parameters "un" are Ω"n". In each initial moment, one of the values α or β is zero, while the other has a minimum. Values of α or β in consecutive minima, that is, in moments Ω"n", are (not distinguishing minima α and β). Values δ"n" that measure those minima in respective Ω"n" units can run between 0 and 1. Function γ monotonously decreases during this era; according to eq. 63 its value in moment Ω"n" is During the epoch starting at moment Ω"n" and ending at moment Ω"n"+1 one of the functions α or β increases from −δ"n"Ω"n" to zero while the other decreases from 0 to −δ"n"+1Ω"n"+1 by linear laws, respectively: formula_33 and formula_34 resulting in the recurrence relation and for the logarithmic epoch length where, for short, "f"("u") = 1 + "u" + "u"2. The sum of "n" epoch lengths is obtained by the formula It can be seen from eq. 68 that |α"n+1"| > |α"n"|, i.e., the oscillation amplitudes of the functions α and β increase during the whole era although the factors δ"n" may be small. If the minimum at the beginning of an era is deep, the next minima will not become shallower; in other words, the residue |α − β| at the moment of transition between Kasner epochs remains large. This assertion does not depend upon the era length "k" because transitions between epochs are determined by the common rule eq. 37 also for long eras. The last oscillation amplitude of the functions α or β in a given era is related to the amplitude of the first oscillation by the relationship |α"k"−1| = |α0| ("k" + "x") / (1 + "x"). Even at "k"'s as small as several units "x" can be ignored in comparison to "k" so that the increase of the α and β oscillation amplitudes becomes proportional to the era length. For the functions "a" = "e"α and "b" = "e"β this means that if the amplitude of their oscillations in the beginning of an era was "A"0, at the end of this era the amplitude will become formula_35. The length of Kasner epochs (in logarithmic time) also increases inside a given era; it is easy to calculate from eq. 69 that Δ"n"+1 > Δ"n". The total era length is (the term with 1/"x" arises from the last, "k"-th, epoch whose length is great at small "x"; cf. Fig. 2). 
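A small numerical illustration (hypothetical helper, not from the source) of the amplitude relation quoted above, |α"k"−1| = |α0| ("k" + "x") / (1 + "x"): for eras longer than a few epochs the final amplitude of the oscillating logarithmic scale functions grows roughly in proportion to the era length "k".

# Illustrative sketch: within-era growth of the oscillation amplitude of the
# logarithmic scale functions, using the relation quoted in the text.
def final_amplitude(alpha0, k, x):
    """Amplitude of the last oscillation in an era of k epochs ending with parameter x."""
    return abs(alpha0) * (k + x) / (1 + x)

alpha0 = 0.3
for k, x in [(2, 0.41), (7, 0.06), (40, 0.73)]:
    growth = final_amplitude(alpha0, k, x) / abs(alpha0)
    print(f"k = {k:3d}, x = {x:.2f}:  amplitude grows by a factor {growth:.1f}")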
Moment Ω"n" when the "k"-th epoch of a given era ends is at the same time moment Ω'0 of the beginning of the next era. In the first Kasner epoch of the new era function γ is the first to rise from the minimal value γ"k" = − Ω"k" (1 − δ"k") that it reached in the previous era; this value plays the role of a starting amplitude δ'0Ω'0 for the new series of oscillations. It is easily obtained that: It is obvious that δ'0Ω'0 &gt; δ0Ω0. Even at not very great "k" the amplitude increase is very significant: function "c" = "e"γ begins to oscillate from amplitude formula_36. The issue about the above-mentioned "dangerous" cases of drastic lowering of the upper oscillation limit is left aside for now. According to eq. 40 the increase in matter density during the first ("k" − 1) epochs is given by the formula formula_37 For the last "k" epoch of a given era, at "u" = "x" &lt; 1 the greatest power is "p"2("x") (not "p"3("x") ). Therefore, for the density increase over the whole era one obtains Therefore, even at not very great "k" values, formula_38. During the next era (with a length "k" ' ) density will increase faster because of the increased starting amplitude "A"0': formula_39, etc. These formulae illustrate the steep increase in matter density. Statistical analysis near the singularity. The sequence of era lengths "k"("s"), measured by the number of Kasner epochs contained in them, acquires asymptotically the character of a random process. The same pertains also to the sequence of the interchanges of the pairs of oscillating functions on going over from one era to the next (it depends on whether the numbers "k"("s") are even or odd). A source of this stochasticity is the rule eqs. 41–42 according to which the transition from one era to the next is determined in an infinite numerical sequence of "u" values. This rule states, in other words, that if the entire infinite sequence begins with a certain initial value formula_40, then the lengths of the eras "k"(0), "k"(1), ..., are the numbers in the continued fraction expansion This expansion corresponds to the mapping transformation of the interval [0, 1] onto itself by the formula "Tx" = {1/"x"}, i.e., "x""s"+1 = {1/"x""s"}. This transformation belongs to the so-called expanding transformations of the interval [0, 1], i.e., transformations "x" → "f"("x") with |"f′"("x")| &gt; 1. Such transformations possess the property of exponential instability: if we take initially two close points their mutual distance increases exponentially under the iterations of the transformations. It is well known that the exponential instability leads to the appearance of strong stochastic properties. It is possible to change over to a probabilistic description of such a sequence by considering not a definite initial value "x"(0) but the values "x"(0) = x distributed in the interval from 0 to 1 in accordance with a certain probabilistic distributional law "w"0("x"). Then the values of "x"(s) terminating each era will also have distributions that follow certain laws "ws(x)". Let "ws(x)dx" be the probability that the "s"-th era terminates with the value formula_41 lying in a specified interval "dx". The value "x(s)" = "x", which terminates the "s"-th era, can result from initial (for this era) values formula_42, where "k" = 1, 2, ...; these values of formula_43 correspond to the values "x"("s"–1) = 1/("k" + "x") for the preceding era. 
Noting this, one can write the following recurrence relation, which expresses the distribution of the probabilities "ws(x)" in terms of the distribution "w""s"–1("x"): formula_44 or If the distribution "w""s"("x") tends with increasing "s" to a stationary (independent of "s") limiting distribution "w"("x"), then the latter should satisfy an equation obtained from eq. 73c by dropping the indices of the functions "w""s"−1("x") and "w""s"("x"). This equation has a solution (normalized to unity and taken to the first order of "x"). In order for the "s"-th era to have a length "k", the preceding era must terminate with a number "x" in the interval between 1/("k" + 1) and 1/"k". Therefore, the probability that the era will have a length "k" is equal to (in the stationary limit) At large values of "k" In relating the statistical properties of the cosmological model with the ergodic properties of the transformation "x""s"+1 = {1/"x""s"} an important point must be mentioned. In an infinite sequence of numbers "x" constructed in accordance with this rule, arbitrarily small (but never vanishing) values of "x" will be observed corresponding to arbitrarily large lengths k. Such cases can (by no means necessarily!) give rise to certain specific situations when the notion of eras, as of sequences of Kasner epochs interchanging each other according to the rule eq. 37, loses its meaning (although the oscillatory mode of evolution of the model still persists). Such an "anomalous" situation can be manifested, for instance, in the necessity to retain in the right-hand side of eq. 26 terms not only with one of the functions "a", "b", "c" (say, "a"4), as is the case in the "regular" interchange of the Kasner epochs, but simultaneously with two of them (say, "a"4, "b"4, "a"2"b"2). On emerging from an "anomalous" series of oscillations a succession of regular eras is restored. Statistical analysis of the behavior of the model which is entirely based on regular iterations of the transformations eq. 42 is corroborated by an important theorem: the probability of the appearance of anomalous cases tends asymptotically to zero as the number of iterations "s" → ∞ (i.e., the time "t" → 0) which is proved at the end of this section. The validity of this assertion is largely due to a very rapid rate of increase of the oscillation amplitudes during every era and especially in transition from one era to the next one. The process of the relaxation of the cosmological model to the "stationary" statistical regime (with t → 0 starting from a given "initial instant") is less interesting, however, than the properties of this regime itself with due account taken for the concrete laws of the variation of the physical characteristics of the model during the successive eras. An idea of the rate at which the stationary distribution sets in is obtained from the following example. Let the initial values "x"(0) be distributed in a narrow interval of width δ"x"(0) about some definite number. From the recurrence relation eq. 73c (or directly from the expansion eq. 73a) it is easy to conclude that the widths of the distributions "w""s"("x") (about other definite numbers) will then be equal to (this expression is valid only so long as it defines quantities δ"x"(s) ≪ 1). The mean value formula_45, calculated from this distribution, diverges logarithmically. For a sequence, cut off at a very large, but still finite number "N", one has formula_46. 
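The following sketch (illustrative Python; it assumes that the stationary density of eq. 74 is the standard invariant measure of the map "x" → {1/"x"}, namely "w"("x") = 1/((1 + "x") ln 2)) iterates the map numerically, compares the empirical distribution of "x" with that density, tabulates the era-length probabilities with their ~1/"k"2 tail (eq. 76), and exhibits the slowly growing sample mean of "k" that underlies the instability discussed next.

# Illustrative sketch: statistics of the map Tx = {1/x} that generates the
# era lengths k and the terminating values x.
import math
import random

random.seed(0)
N = 200_000
xs, ks = [], []
x = random.random()
for _ in range(N):
    k = math.floor(1 / x)            # length of the era terminated by x
    ks.append(k)
    x = 1 / x - k                    # Tx = {1/x}
    if x <= 0.0:                     # guard against rare floating-point accidents
        x = random.random()
    xs.append(x)

# empirical distribution of x versus the Gauss measure w(x) = 1/((1 + x) ln 2)
for b in range(5):
    lo, hi = b / 5, (b + 1) / 5
    emp = sum(lo <= v < hi for v in xs) / N
    theo = (math.log(1 + hi) - math.log(1 + lo)) / math.log(2)
    print(f"x in [{lo:.1f}, {hi:.1f}):  empirical {emp:.3f}   w-measure {theo:.3f}")

# era-length probabilities W(k): note the ~1/k**2 decay at large k (eq. 76)
for k in (1, 2, 5, 10, 30):
    print(k, sum(v == k for v in ks) / N)

# the sample mean of k keeps growing with the sample size (logarithmic divergence)
for n in (1_000, 10_000, 100_000):
    print(n, sum(ks[:n]) / n)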
The usefulness of the mean value of "k" in this case is very limited because of its instability: because of the slow decrease of "W"("k"), fluctuations in "k" diverge faster than its mean. A more adequate characteristic of this sequence is the probability that a randomly chosen number from it belongs to an era of length "K" where "K" is large. This probability is ln "K" / ln "N". It is small if formula_47. In this respect one can say that a randomly chosen number from the given sequence belongs to a long era with a high probability. It is convenient to average expressions that depend simultaneously on "k"("s") and "x"("s"). Since both these quantities are derived from the same quantity "x"("s"–1) (which terminates the preceding era), in accordance with the formula "k"("s") + "x"("s") = 1/"x"("s"–1), their statistical distributions cannot be regarded as independent. The joint distribution "W""s"("k","x")"dx" of both quantities can be obtained from the distribution "w""s"–1("x")"dx" by making in the latter the substitution "x" → 1/("x" + "k"). In other words, the function "W""s"("k","x") is given by the very expression under the summation sign in the right side of eq. 73c. In the stationary limit, taking "w" from eq. 74, one obtains Summation of this distribution over "k" brings us back to eq. 74, and integration with respect to "dx" to eq. 75. The recurrent formulas defining transitions between eras are re-written with index "s" numbering the successive eras (not the Kasner epochs in a given era!), beginning from some era ("s" = 0) defined as initial. Ω("s") and ε("s") are, respectively, the initial moment and initial matter density in the "s"-th era; δ("s")Ω("s") is the initial oscillation amplitude of that pair of functions α, β, γ, which oscillates in the given era: "k"("s") is the length of the "s"-th era, and "x"("s") determines the length (number of Kasner epochs) of the next era according to "k"("s"+1) = [1/"x"("s")]. According to eqs. 71–73 (ξ("s") is introduced in eq. 77 to be used further on). The quantities δ("s") have a stable stationary statistical distribution "P"(δ) and a stable (small relative fluctuations) mean value. For their determination Khalatnikov and Lifshitz, in coauthorship with Ilya Lifshitz, the brother of Evgeny Lifshitz, used (with due reservations) an approximate method based on the assumption of statistical independence of the random quantity δ("s") and of the random quantities "k"("s"), "x"("s"). For the function "P"(δ) an integral equation was set up which expressed the fact that the quantities δ("s"+1) and δ("s"), interconnected by the relation eq. 78, have the same distribution; this equation was solved numerically. In a later work, Khalatnikov et al. showed that the distribution "P"(δ) can actually be found exactly by an analytical method (see Fig. 5). For the statistical properties in the stationary limit, it is reasonable to introduce the so-called natural extension of the transformation "Tx" = {1/"x"} by continuing it without limit to negative indices. Otherwise stated, this is a transition from a one-sided infinite sequence of the numbers ("x"0, "x"1, "x"2, ...), connected by the equalities "Tx" = {1/"x"}, to a "doubly infinite" sequence "X" = (..., "x"−1, "x"0, "x"1, "x"2, ...) of numbers which are connected by the same equalities for all –∞ < "s" < ∞. 
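Before continuing, the doubly infinite construction can be illustrated on a finite window of era lengths (an ad hoc sketch; the window, the helper cf_value and the tolerances are arbitrary): for each position "s" the forward quantity "x"+ = ["k""s"+1, "k""s"+2, ...] and the retrograde quantity "x"− = ["k""s", "k""s"−1, ...] are evaluated as finite continued fractions, and the shift of the sequence by one step is checked to act as the one-to-one pair map "x"+ → {1/"x"+}, "x"− → 1/("k""s"+1 + "x"−), which is the standard natural extension of "Tx" = {1/"x"}; the transformation written below as eq. 79e should reduce to this under the definitions given there.

# Illustrative sketch: forward and retrograde continued fractions over a
# finite window of era lengths, and the shift acting on the pair (x+, x-).
import math

def cf_value(digits):
    """Evaluate [d0, d1, ...] = 1/(d0 + 1/(d1 + ...)) from the innermost level."""
    value = 0.0
    for d in reversed(digits):
        value = 1.0 / (d + value)
    return value

k = [2, 1, 4, 1, 6, 3, 1, 2, 5, 1, 1, 7, 2, 3, 4, 1, 2, 8, 1, 3]   # arbitrary window

for s in range(5, 10):
    x_plus = cf_value(k[s + 1:])             # forward continued fraction
    x_minus = cf_value(k[s::-1])             # backward (retrograde) continued fraction
    x_plus_next = cf_value(k[s + 2:])
    x_minus_next = cf_value(k[s + 1::-1])
    # shift by one step: x+ -> {1/x+} (the digit k_{s+1} is its integer part),
    # x- -> 1/(k_{s+1} + x-)
    assert math.isclose(x_plus_next, 1 / x_plus - k[s + 1], rel_tol=1e-9)
    assert math.isclose(x_minus_next, 1 / (k[s + 1] + x_minus), rel_tol=1e-12)
    print(s, round(x_plus, 6), round(x_minus, 6))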
Of course, such an expansion is not unique in the literal meaning of the word (since "x""s"–1 is not determined uniquely by "x""s"), but all statistical properties of the extended sequence are uniform over its entire length, i.e., are invariant with respect to an arbitrary shift (and "x"0 loses its meaning of an "initial" condition). The sequence "X" is equivalent to a sequence of integers "K" = (..., "k"−1, "k"0, "k"1, "k"2, ...), constructed by the rule "k""s" = [1/"x""s"–1]. Inversely, every number of X is determined by the integers of K as an infinite continued fraction (the convenience of introducing the notation formula_48 with an index shifted by 1 will become clear in the following). For concise notation the continued fraction is denoted simply by enumeration (in square brackets) of its denominators; then the definition of formula_49 can be written as Reverse quantities are defined by a continued fraction with a retrograde (in the direction of diminishing indices) sequence of denominators The recurrence relation eq. 78 is transformed by introducing temporarily the notation "ηs" = (1 − δ"s")/δ"s". Then eq. 78 can be rewritten as formula_50 By iteration an infinite continued fraction is obtained formula_51 Hence formula_52 and finally This expression for δ"s" contains only two (instead of three) random quantities formula_49 and formula_53, each of which assumes values in the interval [0, 1]. It follows from the definition eq. 79c that formula_54. Hence the shift of the entire sequence "X" by one step to the right means a joint transformation of the quantities formula_49 and formula_53 according to This is a one-to-one mapping in the unit square. Thus we have now a one-to-one transformation of two quantities instead of the not one-to-one transformation "Tx" = {1/"x"} of one quantity. The quantities formula_49 and formula_53 have a joint stationary distribution "P"("x"+, "x"−). Since eq. 79e is a one-to-one transformation, the condition for the distribution to be stationary is expressed simply by a functional equation where "J" is the Jacobian of the transformation. A shift of the sequence "X" by one step gives rise to the following transformation "T" of the unit square: formula_55 (with formula_56, formula_57, cf. eq. 79e). The density "P"("x", "y") defines the invariant measure for this transformation. It is natural to suppose that "P"("x", "y") is a symmetric function of "x" and "y". This means that the measure is invariant with respect to the transformation "S"("x", "y") = ("y", "x") and hence with respect to the product "ST" with "ST"("x", "y") = ("x″", "y″") and formula_58 Evidently "ST" has a first integral "H" = 1/"x" + "y". On the line "H" = const ≡ "c" the transformation has the form formula_59 Hence the invariant measure density of "ST" must be of the form formula_60 Accounting for the symmetry "P"("x", "y") = "P"("y", "x"), this becomes "f"("c") = "c"−2 and hence (after normalization) (its integration over "x"+ or "x"– yields the function "w"("x") eq. 74). The reduction of the transformation to a one-to-one mapping was used already by Chernoff and Barrow and they obtained a formula of the form of eq. 79g but for other variables; their paper does not contain applications to the problems which are considered in Khalatnikov et al. The correctness of eq. 79g can be verified also by a direct calculation; the Jacobian of the transformation eq. 79e is formula_61 (in its calculation one must note that formula_62). Since by eq. 
79d δs is expressed in terms of the random quantities "x"+ and "x"−, the knowledge of their joint distribution makes it possible to calculate the statistical distribution "P"(δ) by integrating "P"("x"+, "x"−) over one of the variables at a constant value of δ. Due to symmetry of the function eq. 79g with respect to the variables "x"+ and "x"−, "P"(δ) = "P"(1 − δ), i.e., the function "P"(δ) is symmetrical with respect to the point δ = 1/2. Then formula_63 On evaluating this integral (for 0 ≤ δ ≤ 1/2, and then making use of the aforementioned symmetry), finally The mean value formula_64 = 1/2 follows already from the symmetry of the function "P"(δ). Thus the mean value of the initial (in every era) amplitude of oscillations of the functions α, β, γ increases as Ω/2. The statistical relation between large time intervals Ω and the number of eras "s" contained in them is found by repeated application of eq. 77: Direct averaging of this equation, however, does not make sense: because of the slow decrease of function "W"("k") eq. 76, the average values of the quantity exp ξ("s") are unstable in the above sense – the fluctuations increase even more rapidly than the mean value itself with increasing region of averaging. This instability is eliminated by taking the logarithm: the "doubly-logarithmic" time interval is expressed by the sum of quantities ξ("p") which have a stable statistical distribution. The mean value of τ is formula_65. To calculate formula_66 note that eq. 77 can be rewritten as For the stationary distribution formula_67, and by virtue of the symmetry of the function "P"(δ) also formula_68. Hence formula_69 ("w"("x") from eq. 74). Thus which determines the mean doubly-logarithmic time interval containing "s" successive eras. For large "s" the number of terms in the sum eq. 81 is large and according to general theorems of the ergodic theory the values of τs are distributed around formula_70 according to Gauss's law with the density Calculation of the variance "D"τ is more complicated since not only the knowledge of formula_66 and formula_71 is needed but also that of the correlations formula_72. The calculation can be simplified by rearranging the terms in the sum eq. 81. By using eq. 81a the sum can be rewritten as formula_73 The last two terms do not increase with increasing "s"; these terms can be omitted since only the limiting laws for large "s" are of interest. Then (the expression eq. 79d for δp is taken into account). To the same accuracy (i.e., up to the terms which do not increase with "s") the equality is valid. Indeed, by virtue of eq. 79e formula_74 and hence formula_75 By summing this identity over "p" eq. 82c is obtained. Finally, again with the same accuracy, formula_76 is replaced by "x""p" under the summation sign in order to represent τ"s" as The variance of this sum in the limit of large "s" is It is taken into account that by virtue of the statistical homogeneity of the sequence "X" the correlations formula_77 depend only on the differences |"p" − "p"′|. The mean value formula_78; the mean square formula_79 By taking into account also the values of correlations formula_80 with "p" = 1, 2, 3 (calculated numerically) the final result "D"τ"s" = (3.5 ± 0.1)"s" is obtained. With increasing "s" the relative fluctuation formula_81 tends to zero as "s"−1/2. In other words, the statistical relation eq. 82 becomes almost certain at large "s". 
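The numerical constants quoted in this paragraph can be recovered from simple integrals over the stationary density. The minimal sketch below assumes that eq. 74 is the Gauss–Kuzmin density "w"("x") = 1/[(1 + "x") ln 2], which reproduces both formula_69 and formula_79 (the "ξ(3)" appearing in the latter is presumably the Riemann zeta value ζ(3) ≈ 1.202); SciPy is used here purely for convenience. The correlations needed for the variance "D"τ"s" = 3.5"s" are not reproduced by this check.

```python
# Minimal check, assuming w(x) = 1 / ((1 + x) ln 2) for eq. 74:
#   xi_bar   = -2 * Int_0^1 w(x) ln x dx   = pi^2 / (6 ln 2)  ~ 2.37
#   eta2_bar =  4 * Int_0^1 w(x) ln^2 x dx = 6 zeta(3) / ln 2 ~ 10.40
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

w = lambda x: 1.0 / ((1.0 + x) * np.log(2.0))

xi_bar, _ = quad(lambda x: -2.0 * w(x) * np.log(x), 0.0, 1.0)
eta2_bar, _ = quad(lambda x: 4.0 * w(x) * np.log(x) ** 2, 0.0, 1.0)

print(xi_bar, np.pi ** 2 / (6.0 * np.log(2.0)))   # both ~ 2.374
print(eta2_bar, 6.0 * zeta(3) / np.log(2.0))      # both ~ 10.4
```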
This makes it possible to invert the relation, i.e., to represent it as the dependence of the average number of the eras "s"τ that are interchanged in a given interval τ of the double logarithmic time: The statistical distribution of the exact values of "s"τ around its average is also Gaussian with the variance formula_82 The respective statistical distribution is given by the same Gaussian distribution in which the random variable is now "s"τ at a given τ: From this point of view, the source of the statistical behavior is the arbitrariness in the choice of the starting point of the interval τ superimposed on the infinite sequence of the interchanging eras. As regards the matter density, eq. 79 can be re-written with account of eq. 80 in the form formula_83 and then, for the total energy change during "s" eras, The term with the sum over "p" gives the main contribution to this expression because it contains an exponent with a large power. Leaving only this term and averaging eq. 87, one gets on its right-hand side the expression formula_84 which coincides with eq. 82; all other terms in the sum (also terms with η"s" in their powers) lead only to corrections of a relative order 1/"s". Therefore, By virtue of the almost certain character of the relation between τ"s" and "s", eq. 88 can be written as formula_85 which determines the value of the double logarithm of the density increase averaged over given double-logarithmic time intervals τ or over a given number of eras "s". These stable statistical relationships exist specifically for doubly-logarithmic time intervals and for the density increase. For other characteristics, e.g., ln (ε("s")/ε(0)) or Ω(s) / Ω(0) = exp τs, the relative fluctuations increase exponentially with the increase of the averaging range, thereby depriving the notion of a mean value of any stable meaning. The origin of the statistical relationship eq. 88 can be traced already from the initial law governing the variation of the density during the individual Kasner epochs. According to eq. 21, during the entire evolution we have formula_86 with 1 − "p"3("t") changing from epoch to epoch, running through values in the interval from 0 to 1. The term ln Ω = ln ln (1/"t") increases monotonically; on the other hand, the term ln 2(1 − "p"3) can assume large values (comparable with ln Ω) only when values of "p"3 very close to unity appear (i.e., very small |"p"1|). These are precisely the "dangerous" cases that disturb the regular course of evolution expressed by the recurrent relationships eq. 77–eq. 79. It remains to show that such cases actually do not arise in the asymptotic limiting regime. The spontaneous evolution of the model starts at a certain instant at which definite initial conditions are specified in an arbitrary manner. Accordingly, by "asymptotic" is meant a regime sufficiently far away from the chosen initial instant. Dangerous cases are those in which excessively small values of the parameter "u" = "x" (and hence also |"p"1| ≈ "x") appear at the end of an era. A criterion for selection of such cases is the inequality where | α("s") | is the initial depth of the minima of the functions that oscillate in era "s" (it would be more appropriate to choose the final amplitude, but that would only strengthen the selection criterion). The value of "x"(0) in the first era is determined by the initial conditions. Dangerous are values in the interval δ"x"(0) ~ exp ( − |α(0)| ), and also in intervals that could result in dangerous cases in the next eras. 
In order for "x"("s") to fall in the dangerous interval δ"x"("s") ~ exp ( − | α("s") | ), the initial value "x"(0) should lie into an interval of a width δ"x"(0) ~ δ"x"("s") / "k"(1)^2 ... "k"("s")^2. Therefore, from a unit interval of all possible values of "x"(0), dangerous cases will appear in parts λ of this interval: (the inner sum is taken over all the values "k"(1), "k"(2), ... , "k"("s") from 1 to ∞). It is easy to show that this era converges to the value λ formula_24 1 whose order of magnitude is determined by the first term in eq. 90. This can be shown by a strong majoration of the era for which one substitutes | α("s") | = (s + 1) | α(0) |, regardless of the lengths of eras "k"(1), "k"(2), ... (In fact | α("s") | increase much faster; even in the most unfavorable case "k"(1) = "k"(2) = ... = 1 values of | α("s") | increase as "q""s" | α(0) | with "q" &gt; 1.) Noting that formula_87 one obtains formula_88 If the initial value of "x"(0) lies outside the dangerous region λ there will be no dangerous cases. If it lies inside this region dangerous cases occur, but upon their completion the model resumes a "regular" evolution with a new initial value which only occasionally (with a probability λ) may come into the dangerous interval. Repeated dangerous cases occur with probabilities λ2, λ3, ... , asymptotically converging to zero. General solution with small oscillations. In the above models, metric evolution near the singularity is studied on the example of homogeneous space metrics. It is clear from the characteristic of this evolution that the analytic construction of the general solution for a singularity of such type should be made separately for each of the basic evolution components: for the Kasner epochs, for the process of transitions between epochs caused by "perturbations", for long eras with two perturbations acting simultaneously. During a Kasner epoch (i.e. at small perturbations), the metric is given by eq. 7 without the condition λ = 0. BKL further developed a matter distribution-independent model (homogeneous or non-homogeneous) for long era with small oscillations. The time dependence of this solution turns out to be very similar to that in the particular case of homogeneous models; the latter can be obtained from the distribution-independent model by a special choice of the arbitrary functions contained in it. It is convenient, however, to construct the general solution in a system of coordinates somewhat different from synchronous reference frame: "g"0α = 0 as in the synchronous frame, but instead of "g"00 = 1 it is now "g"00 = −"g"33. Defining again the space metric tensor γαβ = −"g"αβ one has, therefore The special space coordinate is written as "x"3 = "z" and the time coordinate is written as "x"0 = ξ (as different from proper time "t"); it will be shown that ξ corresponds to the same variable defined in homogeneous models. Differentiation by ξ and "z" is designated, respectively, by dot and prime. Latin indices "a", "b", "c" take values 1, 2, corresponding to space coordinates "x"1, "x"2 which will be also written as "x", "y". Therefore, the metric is The required solution should satisfy the inequalities (these conditions specify that one of the functions "a"2, "b"2, "c"2 is small compared to the other two which was also the case with homogeneous models). Inequality eq. 
94 means that components γ"a"3 are small in the sense that at any ratio of the shifts "dxa" and "dz", terms with products "dxadz" can be omitted in the square of the spatial length element "dl"2. Therefore, the first approximation to a solution is a metric eq. 92 with γ"a"3 = 0: One can be easily convinced by calculating the Ricci tensor components formula_89, formula_90, formula_91, formula_92 using metric eq. 95 and the condition eq. 93 that all terms containing derivatives by coordinates "xa" are small compared to terms with derivatives by ξ and "z" (their ratio is ~ γ33 / γ"ab"). In other words, to obtain the equations of the main approximation, γ33 and γ"ab" in eq. 95 should be differentiated as if they do not depend on "xa". Designating one obtains the following equations: Index raising and lowering is done here with the help of γ"ab". The quantities formula_93 and λ are the contractions formula_94 and formula_95 whereby As to the Ricci tensor components formula_96, formula_97, by this calculation they are identically zero. In the next approximation (i.e., with account to small γ"a"3 and derivatives by "x", "y"), they determine the quantities γ"a"3 by already known γ33 and γ"ab". Contraction of eq. 97 gives formula_98, and, hence, Different cases are possible depending on the "G" variable. In the above case "g"00 = γ33 formula_32 γ"ab" and formula_99. The case "N" &gt; 0 (quantity "N" is time-like) leads to time singularities of interest. Substituting in eq. 101 "f"1 = 1/2 (ξ + "z") sin "y", "f"2 = 1/2 (ξ − "z") sin "y" results in "G" of type This choice does not diminish the generality of conclusions; it can be shown that generality is possible (in the first approximation) just on account of the remaining permissible transformations of variables. At "N" &lt; 0 (quantity "N" is space-like) one can substitute "G" = "z" which generalizes the well-known Einstein–Rosen metric. At "N" = 0 one arrives at the Robinson–Bondi wave metric that depends only on ξ + "z" or only on ξ − "z" (cf. ). The factor sin "y" in eq. 102 is put for convenient comparison with homogeneous models. Taking into account eq. 102, equations eq. 97 – eq. 99 become The principal equations are eq. 103 defining the γ"ab" components; then, function ψ is found by a simple integration of eq. 104–eq. 105. The variable ξ runs through the values from 0 to ∞. The solution of eq. 103 is considered at two boundaries, ξ formula_32 1 and formula_24 1. At large ξ values, one can look for a solution that takes the form of a 1 / √ξ decomposition: whereby (equation 107 needs condition 102 to be true). Substituting eq. 103 in eq. 106, one obtains in the first order where quantities "aac" constitute a matrix that is inverse to matrix "aac". The solution of eq. 108 has the form where "la", "ma", ρ, are arbitrary functions of coordinates "x", "y" bound by condition eq. 110 derived from eq. 107. To find higher terms of this decomposition, it is convenient to write the matrix of required quantities γ"ab" in the form where the symbol ~ means matrix transposition. Matrix "H" is symmetric and its trace is zero. Presentation eq. 111 ensures symmetry of γ"ab" and fulfillment of condition eq. 102. If exp "H" is substituted with 1, one obtains from eq. 111 γ"ab" = ξ"aab" with "aab" from eq. 109. In other words, the first term of γ"ab" decomposition corresponds to "H" = 0; higher terms are obtained by powers decomposition of matrix "H" whose components are considered small. 
The independent components of matrix "H" are written as σ and φ so that Substituting eq. 111 in eq. 103 and leaving only terms linear by "H", one derives for σ and φ formula_100 If one tries to find a solution to these equations as Fourier series by the "z" coordinate, then for the series coefficients, as functions of ξ, one obtains Bessel equations. The major asymptotic terms of the solution at large ξ are formula_101 formula_102 Coefficients "A" and "B" are arbitrary complex functions of coordinates "x", "y" and satisfy the necessary conditions for real σ and φ; the base frequency ω is an arbitrary real function of "x", "y". Now from eq. 104–eq. 105 it is easy to obtain the first term of the function ψ: (this term vanishes if ρ = 0; in this case the major term is the one linear for ξ from the decomposition: ψ = ξ"q" ("x", "y") where "q" is a positive function). Therefore, at large ξ values, the components of the metric tensor γ"ab" oscillate upon decreasing ξ on the background of a slow decrease caused by the decreasing ξ factor in eq. 111. The component γ33 = "e"ψ decreases quickly by a law close to exp (ρ2ξ2); this makes it possible for condition eq. 93. Next BKL consider the case ξ formula_24 1. The first approximation to a solution of eq. 103 is found by the assumption (confirmed by the result) that in these equations terms with derivatives by coordinates can be left out: This equation together with the condition eq. 102 gives where λ"a", μ"a", "s"1, "s"2 are arbitrary functions of all 3 coordinates "x", "y", "z", which are related with other conditions Equations eq. 104–eq. 105 give now The derivatives formula_103, calculated by eq. 118, contain terms ~ ξ4"s"1 − 2 and ~ ξ4"s"2 − 2 while terms left in eq. 117 are ~ ξ−2. Therefore, application of eq. 103 instead of eq. 117 is permitted on conditions "s"1 &gt; 0, "s"2 &gt; 0; hence 1 − formula_104 &gt; 0. Thus, at small ξ oscillations of functions γ"ab" cease while function γ33 begins to increase at decreasing ξ. This is a Kasner mode and when γ33 is compared to γ"ab", the above approximation is not applicable. In order to check the compatibility of this analysis, BKL studied the equations formula_8 = 0, formula_105 = 0, and, calculating from them the components γ"a"3, confirmed that the inequality eq. 94 takes place. This study showed that in both asymptotic regions the components γ"a"3 were ~ γ33. Therefore, correctness of inequality eq. 93 immediately implies correctness of inequality eq. 94. This solution contains, as it should for the general case of a field in vacuum, four arbitrary functions of the three space coordinates "x", "y", "z". In the region ξ formula_24 1 these functions are, e.g., λ1, λ2, μ1, "s"1. In the region ξ formula_32 1 the four functions are defined by the Fourier series by coordinate "z" from eq. 115 with coefficients that are functions of "x", "y"; although Fourier series decomposition (or integral?) characterizes a special class of functions, this class is large enough to encompass any finite subset of the set of all possible initial conditions. The solution contains also a number of other arbitrary functions of the coordinates "x", "y". Such "two-dimensional" arbitrary functions appear, generally speaking, because the relationships between three-dimensional functions in the solutions of the Einstein equations are differential (and not algebraic), leaving aside the deeper problem about the geometric meaning of these functions. 
BKL did not calculate the number of independent two-dimensional functions because in this case it is hard to make unambiguous conclusions, since the three-dimensional functions are defined by a set of two-dimensional functions. Finally, BKL go on to show that the general solution contains the particular solution obtained above for homogeneous models. Substituting the basis vectors for Bianchi Type IX homogeneous space in eq. 7, the space-time metric of this model takes the form When "c"2 formula_24 "a"2, "b"2, one can ignore "c"2 everywhere except in the term "c"2 "dz"2. To move from the synchronous frame used in eq. 121 to a frame with conditions eq. 91, the transformation "dt" = "c" "d"ξ/2 and the substitution "z" → "z"/2 are made. Assuming also that χ ≡ ln ("a"/"b") formula_24 1, one obtains from eq. 121 in the first approximation: Similarly, with the basis vectors of Bianchi Type VIII homogeneous space, one obtains According to the analysis of homogeneous spaces above, in both cases "ab" = ξ (simplifying formula_106 = ξ0) and χ is from eq. 51; the function "c" (ξ) is given by formulae eq. 53 and eq. 61, respectively, for models of Types IX and VIII. An identical metric for Type VIII is obtained from eq. 112, eq. 115, eq. 116 by choosing the two-dimensional vectors "la" and "ma" in the form and substituting To obtain the metric for Type IX, one should substitute This analysis was done for empty space. Including matter does not make the solution less general and does not change its qualitative characteristics. A limitation of great importance for the general solution is that all 3-dimensional functions contained in the metrics eq. 122 and eq. 123 should have a single and common characteristic change interval. Only this allows one to approximate, in the Einstein equations, all spatial derivatives of the metric components by simple products of these components with characteristic wave numbers, which results in ordinary differential equations of the type obtained for the Type IX homogeneous model. This is the reason for the coincidence between homogeneous and general solutions. It follows that both the Type IX model and its generalisation contain an oscillatory mode with a single spatial scale of arbitrary magnitude which is not singled out among others by any physical conditions. However, it is known that in non-linear systems with infinitely many degrees of freedom such a mode is unstable and partially dissipates into smaller oscillations. In the general case of small perturbations with an arbitrary spectrum, there will always be some whose amplitudes increase, feeding upon the total energy of the process. As a result, a complicated picture arises of multi-scale motions with a certain distribution of energy and an exchange of energy between oscillations of different scales. The only case in which this does not occur is when the development of small-scale oscillations is impossible because of physical conditions. For the latter, some natural physical length must exist which determines the minimal scale at which energy is drained out of the dynamical degrees of freedom of a system (which, for example, occurs in a liquid with a certain viscosity). However, there is no innate physical scale for a gravitational field in vacuum, and, therefore, there is no impediment to the development of oscillations of arbitrarily small scales. Conclusions. BKL describe singularities in the cosmological solution of the Einstein equations that have a complicated oscillatory character. 
Although these singularities have been studied primarily on spatially homogeneous models, there are convincing reasons to assume that singularities in the general solution of the Einstein equations have the same characteristics; this circumstance makes the BKL model important for cosmology. A basis for such a statement is the fact that the oscillatory mode in the approach to the singularity is caused by the single perturbation that also causes instability in the generalized Kasner solution. A confirmation of the generality of the model is the analytic construction for a long era with small oscillations. Although this latter behavior is not a necessary element of metric evolution close to the singularity, it has all the principal qualitative properties: oscillation of the metric in two spatial dimensions and monotonic change in the third dimension, with a certain perturbation of this mode at the end of some time interval. However, the transitions between Kasner epochs in the general case of a non-homogeneous spatial metric have not been elucidated in detail. The problem connected with the possible limitations upon space geometry caused by the singularity was left aside for further study. It is clear from the outset, however, that the original BKL model is applicable to both finite and infinite space; this is evidenced by the existence of oscillatory singularity models for both closed and open spacetimes. The oscillatory mode of the approach to the singularity gives a new aspect to the term 'finiteness of time'. Between any finite moment of the world time "t" and the moment "t" = 0 there is an infinite number of oscillations. In this sense, the process acquires an infinite character. Instead of time "t", a more adequate variable for its description is ln "t", by which the process is extended to formula_107. BKL consider metric evolution in the direction of decreasing time. The Einstein equations are symmetric with respect to the time sign so that a metric evolution in the direction of increasing time is equally possible. However, these two cases are fundamentally different because past and future are not equivalent in the physical sense. A future singularity can be physically meaningful only if it is possible for arbitrary initial conditions existing at a previous moment. The matter distribution and fields at some moment in the evolution of the Universe do not necessarily correspond to the specific conditions required for the existence of a given special solution to the Einstein equations. The choice of solutions corresponding to the real world is related to profound physical requirements which it is impossible to find using only the existing relativity theory and which can be found as a result of a future synthesis of physical theories. Thus, it may turn out that this choice singles out some special (e.g., isotropic) type of singularity. Nevertheless, it is more natural to assume that because of its general character, the oscillatory mode should be the main characteristic of the initial evolutionary stages. In this respect, of considerable interest is the property of the "Mixmaster" model shown by Misner, related to the propagation of light signals. In the isotropic model, a "light horizon" exists, meaning that for each moment of time there is some longest distance at which an exchange of light signals, and thus a causal connection, is impossible: the signal cannot cover such distances within the time elapsed since the singularity "t" = 0. Signal propagation is determined by the equation "ds" = 0. 
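The finiteness of the light horizon can be illustrated by integrating "ds" = 0 directly. The minimal sketch below assumes the near-singularity isotropic interval quoted in the next paragraph, formula_108, for which the comoving distance covered since the singularity is √(2"t") = η and therefore stays finite; for comparison it evaluates the same integral for scales changing as ~ "t" (the behaviour in the non-oscillating direction during a long era, discussed further below), where the integral grows without bound as the lower limit approaches "t" = 0. The lower cutoff "t"min is only a numerical regularisation and is not part of the BKL analysis.

```python
# Light-horizon sketch: comoving distance a signal covers between t_min and t,
# from ds = 0, for two scale-factor laws:
#   a(t) = sqrt(2 t)  -> isotropic model near the singularity (finite horizon)
#   a(t) = t          -> ~t direction during a long era (no horizon)
import numpy as np

def horizon(a_of_t, t_min, t, n=200_001):
    """Integrate dt'/a(t') on a log-spaced grid (robust near t -> 0)."""
    ts = np.logspace(np.log10(t_min), np.log10(t), n)
    return np.trapz(1.0 / a_of_t(ts), ts)

t = 1.0
for t_min in (1e-4, 1e-8, 1e-12):
    iso = horizon(lambda tp: np.sqrt(2.0 * tp), t_min, t)  # -> sqrt(2 t), converges
    osc = horizon(lambda tp: tp, t_min, t)                 # -> ln(t/t_min), diverges
    print(f"t_min={t_min:.0e}: isotropic {iso:.4f}   ~t direction {osc:.1f}")
```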
In the isotropic model near the singularity "t" = 0 the interval element is formula_108, where formula_109 is a time-independent spatial differential form. Substituting formula_110 yields The "distance" formula_111 reached by the signal is Since η, like "t", runs through values starting from 0, up to the "moment" η signals can propagate only over the distance formula_112 which fixes the farthest distance to the horizon. The existence of a light horizon in the isotropic model poses a problem in the understanding of the origin of the presently observed isotropy in the relic radiation. According to the isotropic model, the observed isotropy means isotropic properties of radiation that comes to the observer from regions of space that cannot be causally connected with each other. The situation in the oscillatory evolution model near the singularity can be different. For example, in the homogeneous model for Type IX space, a signal is propagated in a direction in which, for a long era, scales change by a law close to ~ "t". The square of the distance element in this direction is "dl"2 = "t"2formula_113, and the respective element of the four-dimensional interval is formula_114. The substitution formula_115 puts this in the form and for the signal propagation one has an equation of the type of eq. 128 again. The important difference is that the variable η now runs through values starting from formula_107 (if metric eq. 129 is valid for all "t" starting from "t" = 0). Therefore, for each given "moment" η, intervals Δη are found that are sufficient for the signal to cover any finite distance. In this way, during a long era a light horizon is opened in a given space direction. Although the duration of each long era is still finite, during the course of the world evolution eras change an infinite number of times in different space directions. This circumstance makes one expect that in this model a causal connection between events in the whole space is possible. Because of this property, Misner named this model the "Mixmaster universe", after a brand name of a dough-blending machine. As time passes and one goes away from the singularity, the effect of matter on metric evolution, which was insignificant at the early stages of evolution, gradually increases and eventually becomes dominant. It can be expected that this effect will lead to a gradual "isotropisation" of space, as a result of which its characteristics come closer to those of the Friedman model, which adequately describes the present state of the Universe. Finally, BKL pose the problem about the feasibility of considering a "singular state" of a world with infinitely dense matter on the basis of the existing relativity theory. The physical application of the Einstein equations in their present form under these conditions can be made clear only in the process of a future synthesis of physical theories, and in this sense the problem cannot be solved at present. It is important that the gravitational theory itself does not lose its logical cohesion (i.e., does not lead to internal contradictions) at any matter density. In other words, this theory does not impose conditions on itself that would make its application at very large densities logically inadmissible or contradictory; limitations could, in principle, appear only as a result of factors that are "external" to the gravitational theory. This circumstance makes the study of singularities in cosmological models formally acceptable and necessary in the framework of the existing theory. 
[ { "math_id": 0, "text": "(-\\frac {1}{3},\\frac{2}{3},\\frac {2}{3})" }, { "math_id": 1, "text": "\\left ( p_1 + p_2 + p_3 \\right )^2 = \\left ( p_1^2 + p_2^2 + p_3^2 \\right ) + \\left ( 2 p_1 p_2 + 2 p_2 p_3 + 2 p_1 p_3 \\right ) = 1" }, { "math_id": 2, "text": "\\left ( p_1^2 + p_2^2 + p_3^2 \\right )" }, { "math_id": 3, "text": "\\varkappa_{\\alpha}^{\\beta}=\\frac{2 \\dot a}{a}l_{\\alpha}l^{\\beta}+\\frac{2 \\dot b}{b}m_{\\alpha}m^{\\beta}+\\frac{2 \\dot c}{c}n_{\\alpha}n^{\\beta}" }, { "math_id": 4, "text": "\\frac{\\partial}{\\partial t} \\left (\\sqrt{-g} u_0 \\varepsilon^{\\frac{3}{4}} \\right ) = 0,\\ 4 \\varepsilon \\cdot \\frac{\\partial u_{\\alpha}}{\\partial t}+u_{\\alpha} \\cdot \\frac{\\partial \\varepsilon}{\\partial t} = 0," }, { "math_id": 5, "text": "u_0^2 \\approx u_n u^n = \\frac{u_n^2}{c^2}," }, { "math_id": 6, "text": "R_0^0 = T_0^0 - \\frac{1}{2}T,\\ R_{\\alpha}^{\\beta} = T_{\\alpha}^{\\beta}- \\frac{1}{2}\\delta_{\\alpha}^{\\beta}T," }, { "math_id": 7, "text": "R_{\\alpha}^0 = T_{\\alpha}^0" }, { "math_id": 8, "text": "R_{\\alpha}^0" }, { "math_id": 9, "text": "R_l^0" }, { "math_id": 10, "text": "R_m^0" }, { "math_id": 11, "text": "R_n^0" }, { "math_id": 12, "text": "R_l^m" }, { "math_id": 13, "text": "R_l^n" }, { "math_id": 14, "text": "R_m^n" }, { "math_id": 15, "text": "a \\sim e^{-\\Lambda p_1\\tau},\\ b \\sim e^{\\Lambda(p_2+2p_1)\\tau},\\ c \\sim e^{\\Lambda(p_3+2p_1)\\tau},\\ t \\sim e^{\\Lambda(1+2p_1)\\tau}." }, { "math_id": 16, "text": "u_{\\max}^{(s)}" }, { "math_id": 17, "text": "u_{\\min}^{(s)}" }, { "math_id": 18, "text": "\\frac{a'_\\max}{a_\\max}=\\left[\\frac{p_1(u-1)}{p_1(u)}\\left(1-2|p_1(u)|\\right)\\right]^{\\frac{1}{2}};" }, { "math_id": 19, "text": "R_\\alpha^0" }, { "math_id": 20, "text": "T_\\alpha^0" }, { "math_id": 21, "text": "\\alpha+\\beta=\\frac{2a_0^2}{\\xi_0}\\left(\\tau-\\tau_0\\right)+2\\ln a_0," }, { "math_id": 22, "text": "\\tfrac{1}{\\sqrt{\\xi}}" }, { "math_id": 23, "text": "\\gamma_\\xi=\\frac{1}{4}\\xi\\left(2\\chi_\\xi^2+\\chi^2\\right)=A^2,\\ \\gamma=A^2\\left(\\xi-\\xi_0\\right)+\\mathrm{const}." }, { "math_id": 24, "text": "\\ll" }, { "math_id": 25, "text": "\\chi=\\alpha-\\beta=k\\ln \\xi+\\mathrm{const},\\," }, { "math_id": 26, "text": "\\tfrac{a-b}{a} \\sim \\tfrac{1}{\\sqrt{\\xi}}" }, { "math_id": 27, "text": "\\sim \\sqrt{\\xi}" }, { "math_id": 28, "text": "\\xi^2_0" }, { "math_id": 29, "text": "\\gamma_{\\xi} \\approx \\frac{1}{8}\\xi \\cdot 2, \\quad \\gamma \\approx \\frac{1}{8} \\left (\\xi^2-\\xi_0^2 \\right )," }, { "math_id": 30, "text": "\\sum \\ln \\left ( 1 + 2p_1 \\right ) = \\dots + \\frac{1}{k-2} + \\frac{1}{k-1} + \\frac{1}{k}" }, { "math_id": 31, "text": "a_{\\max}^\\prime - a_{\\max} \\approx -1/2 u " }, { "math_id": 32, "text": "\\gg" }, { "math_id": 33, "text": "\\mathrm{const} + |p_1(u_n)|\\Omega \\," }, { "math_id": 34, "text": "\\mathrm{const} - p_2(u_n)\\Omega \\," }, { "math_id": 35, "text": "A_0^{k/(1+x)}" }, { "math_id": 36, "text": "A_0 ' \\sim A_0^{k^2}" }, { "math_id": 37, "text": "\\ln \\left ( \\frac{\\varepsilon_{n+1}}{\\varepsilon_n} \\right ) = 2 \\left [ 1 - p_3 ( u_n ) \\right ] \\Delta_{n+1}." 
}, { "math_id": 38, "text": "\\varepsilon_0' / \\varepsilon_0 \\sim A_0^{2k}" }, { "math_id": 39, "text": "\\varepsilon_0'' / \\varepsilon_0' \\sim A_0'^{2k''} \\sim A_0^{2k^2 k'}" }, { "math_id": 40, "text": "u_\\max^{(0)} = k^{(0)} + x^{(0)}" }, { "math_id": 41, "text": "u_\\max^{(s)} = x" }, { "math_id": 42, "text": "u_\\max^{(s)} = x + k" }, { "math_id": 43, "text": "u_\\max^{(s)}" }, { "math_id": 44, "text": "w_{s}(x)dx = \\sum_{k=1}^\\infty w_{s-1} \\left (\\frac{1}{k+x} \\right ) \\left\\vert d \\frac{1}{k+x} \\right\\vert" }, { "math_id": 45, "text": "\\bar k" }, { "math_id": 46, "text": "\\bar k \\sim \\ln N" }, { "math_id": 47, "text": "1 \\ll K \\ll N" }, { "math_id": 48, "text": "x_{s+1}^{+}" }, { "math_id": 49, "text": "x_{s}^{+}" }, { "math_id": 50, "text": "\\eta_{s+1} = \\frac{1}{\\eta_{s} x_{s-1} + k_{s}}" }, { "math_id": 51, "text": "\\eta_{s+1} x_s = \\left [ k_{s}, k_{s-1}, \\dots \\right ] = x_{s+1}^{-}" }, { "math_id": 52, "text": "\\eta_{s} = x_{s}^{-} / x_{s}^{+}" }, { "math_id": 53, "text": "x_{s}^{-}" }, { "math_id": 54, "text": "1/x_{s}^{-} = x_{s}^{-} + k_{s} = x_{s}^{-} + \\left [ 1 / x_{s}^{+} \\right ]" }, { "math_id": 55, "text": "x^{\\prime} = \\frac{1}{x}, \\quad y^{\\prime} = \\frac{1}{\\frac{1}{x} + y}" }, { "math_id": 56, "text": "x \\equiv x_0^{+}" }, { "math_id": 57, "text": "y \\equiv x_0^{-}" }, { "math_id": 58, "text": "x'' = \\frac{1}{\\frac{1}{x} + y}, \\quad y'' = \\frac{1}{x}" }, { "math_id": 59, "text": "\\frac{1}{x''} = \\left [ \\frac{1}{x} \\right ] + y = \\left [ \\frac{1}{x} \\right ] + c - \\frac{1}{x} = c - \\left \\{ \\frac{1}{x} \\right \\}" }, { "math_id": 60, "text": "f(c)\\ dc\\ d \\frac{1}{x} = f \\left ( \\frac{1}{x} + y \\right ) \\frac{1}{x^2} dx\\ dy" }, { "math_id": 61, "text": "J = \\frac{\\partial \\left ( x_{s+1}^{+}, x_{s+1}^{-} \\right )}{\\partial \\left ( x_{s}^{+}, x_{s}^{-} \\right )} = \\frac{\\partial x_{s+1}^{+}}{\\partial x_{s}^{+}} \\frac{\\partial x_{s+1}^{-}}{\\partial x_{s}^{-}} = \\left ( \\frac{x_{s+1}^{+}} {x_{s}^{+}} \\right )^2" }, { "math_id": 62, "text": " \\left [ 1/x_{s}^{+} \\right ] + \\left \\{ 1/x_{s}^{+} \\right \\} = 1/x_{s}^{+}" }, { "math_id": 63, "text": "P(\\delta)\\ d\\delta = d\\delta \\int_0^1 P \\left ( x^{+}, \\frac{x^{+} \\delta}{1 - \\delta} \\right ) \\left ( \\frac{\\partial x^{-}}{\\partial \\delta} \\right )_{x^{+}} d x^{+}" }, { "math_id": 64, "text": "\\bar{\\delta}" }, { "math_id": 65, "text": "\\bar{\\tau} = s \\bar{\\xi}" }, { "math_id": 66, "text": "\\bar{\\xi}" }, { "math_id": 67, "text": "\\overline{\\ln x_s} = \\overline{\\ln x_{s-1}}" }, { "math_id": 68, "text": "\\overline{\\ln \\delta_s} = \\overline{\\ln \\left ( \\delta_{s+1} \\right )}" }, { "math_id": 69, "text": "\\bar{\\xi} = -2 \\overline{\\ln x} = -2 \\int_0^1 w (x) \\ln x\\ dx = \\frac{\\pi^2}{6 \\ln 2} = 2.37 " }, { "math_id": 70, "text": "\\overline{\\tau_s}" }, { "math_id": 71, "text": "\\overline{\\xi^2}" }, { "math_id": 72, "text": "\\overline{\\xi_p \\xi_{p \\prime}}" }, { "math_id": 73, "text": "\\sum_{p=1}^s \\xi_p = \\ln \\prod_{p=1}^s \\frac{\\delta_p}{\\left (1 - \\delta_{p+1} \\right ) x_p x_{p-1}} = \\ln \\prod_{p=1}^s \\frac{\\delta_p}{\\left (1 - \\delta_{p} \\right ) x_{p-1}^2} + \\ln \\frac{x_0}{x_s} + \\ln \\frac{1 - \\delta_1}{1 - \\delta_{s+1}}" }, { "math_id": 74, "text": "x_{p+1}^{+} + \\frac{1}{x_{p+1}^{-}} = \\frac{1}{x_p^{+}} + x_p^{-}" }, { "math_id": 75, "text": "\\ln \\left ( 1 + x_{p+1}^{+} x_{p+1}^{-} \\right ) - \\ln x_{p+1}^{-} = \\ln \\left ( 1 + x_{p}^{+} x_{p}^{-} \\right ) 
- \\ln x_{p}^{+}" }, { "math_id": 76, "text": "x_p^{+}" }, { "math_id": 77, "text": "\\overline{\\eta_p \\eta_{p \\prime}}" }, { "math_id": 78, "text": "\\bar{\\eta} = \\bar{\\xi}" }, { "math_id": 79, "text": "\\overline{\\eta^2} = 4 \\int_0^1 w(x) \\ln^2 x\\ dx = \\frac{6 \\xi (3)}{\\ln 2} = 10.40" }, { "math_id": 80, "text": "\\overline{\\eta_0 \\eta_p}" }, { "math_id": 81, "text": "D_{{\\tau}_s} / \\overline{\\tau_s}" }, { "math_id": 82, "text": "D_{s_{\\tau}} = 3.5 \\frac{\\overline{s_{\\tau}}^3}{\\tau^2} = 0.26 \\tau" }, { "math_id": 83, "text": "\\ln \\ln \\frac{\\varepsilon^{(s+1)}}{\\varepsilon^{(s)}} = \\eta_s + \\sum_{p=0}^{s-1} \\xi_p, \\quad \\eta_s = \\ln \\left [ 2\\delta^{(s)} \\left ( k^{(s)} + x^{(s)} - 1 \\right ) \\Omega^{(0)} \\right ] " }, { "math_id": 84, "text": "s\\bar{\\xi}" }, { "math_id": 85, "text": "\\overline{\\ln \\ln \\left ( \\varepsilon_\\tau/\\varepsilon^{(0)} \\right )} = \\tau \\quad \\text{or} \\quad \\overline{\\ln \\ln \\left ( \\varepsilon^{(s)}/\\varepsilon^{(0)} \\right )} = 2.1 s," }, { "math_id": 86, "text": "\\ln \\ln \\varepsilon (t) = \\text{const} + \\ln \\Omega + \\ln 2 (1 - p_3 (t))," }, { "math_id": 87, "text": "\\sum_k \\frac{1}{k^{(1)^2} k^{(2)^2} \\dots k^{(s)^2}} = \\left ( \\pi^2 / 6 \\right )^s " }, { "math_id": 88, "text": "\\lambda = \\exp \\left ( \\left |-\\alpha^{(0)} \\right | \\right )\\sum_{s=0}^\\infty \\left [ \\left ( \\pi^2 / 6 \\right ) \\exp \\left ( \\left |-\\alpha^{(0)} \\right | \\right ) \\right ]^s \\approx \\exp \\left ( \\left |-\\alpha^{(0)} \\right | \\right )." }, { "math_id": 89, "text": "R_0^0" }, { "math_id": 90, "text": "R_3^0" }, { "math_id": 91, "text": "R_3^3" }, { "math_id": 92, "text": "R_a^b" }, { "math_id": 93, "text": "\\varkappa" }, { "math_id": 94, "text": "\\varkappa_a^a" }, { "math_id": 95, "text": "\\lambda_a^a" }, { "math_id": 96, "text": "R_a^0" }, { "math_id": 97, "text": "R_a^3" }, { "math_id": 98, "text": "G^{\\prime\\prime} + \\ddot G = 0" }, { "math_id": 99, "text": "N \\approx g^{00} \\left ( \\dot G \\right )^2 - \\gamma^{33} \\left ( G^\\prime \\right )^2 = 4 \\gamma^{33} \\dot{f}_1 \\dot{f}_2" }, { "math_id": 100, "text": "\\ddot{\\sigma}+\\xi^{-1}\\dot{\\sigma}-\\sigma^{\\prime\\prime}=0," }, { "math_id": 101, "text": "\\sigma = \\frac{1}{\\sqrt{\\xi}}\\sum_{n=-\\infty}^\\infty \\left ( A_{1n} e^{in\\omega\\xi}+B_{1n} e^{-in\\omega\\xi} \\right ) e^{in\\omega z}," }, { "math_id": 102, "text": "\\omega_n^2 = n^2\\omega^2+4\\rho^2. " }, { "math_id": 103, "text": "{\\lambda_a^b}^\\prime" }, { "math_id": 104, "text": "s_1^2 - s_2^2" }, { "math_id": 105, "text": "R_{\\alpha}^3" }, { "math_id": 106, "text": "a_0^2" }, { "math_id": 107, "text": "-\\infty" }, { "math_id": 108, "text": "ds^2 = dt^2 - 2 t d \\bar{l}^2" }, { "math_id": 109, "text": "d \\bar{l}^2" }, { "math_id": 110, "text": "t = \\eta^2 / 2" }, { "math_id": 111, "text": "\\Delta \\bar l" }, { "math_id": 112, "text": "\\Delta \\bar{l} \\le \\eta" }, { "math_id": 113, "text": "\\bar{l}^2" }, { "math_id": 114, "text": "ds^2 = dt^2 - t^2 \\bar{l}^2" }, { "math_id": 115, "text": "t = e^{\\eta}" } ]
https://en.wikipedia.org/wiki?curid=6620973
66213716
Solar radio emission
Radio waves produced by the Sun Solar radio emission refers to radio waves that are naturally produced by the Sun, primarily from the lower and upper layers of the atmosphere called the chromosphere and corona, respectively. The Sun produces radio emissions through four known mechanisms, each of which operates primarily by converting the energy of moving electrons into electromagnetic radiation. The four emission mechanisms are thermal bremsstrahlung (braking) emission, gyromagnetic emission, plasma emission, and electron-cyclotron maser emission. The first two are "incoherent" mechanisms, which means that they are the summation of radiation generated independently by many individual particles. These mechanisms are primarily responsible for the persistent "background" emissions that slowly vary as structures in the atmosphere evolve. The latter two processes are "coherent" mechanisms, which refers to special cases where radiation is efficiently produced at a particular set of frequencies. Coherent mechanisms can produce much larger brightness temperatures (intensities) and are primarily responsible for the intense spikes of radiation called solar radio bursts, which are byproducts of the same processes that lead to other forms of solar activity like solar flares and coronal mass ejections. History and observations. Radio emission from the Sun was first reported in the scientific literature by Grote Reber in 1944. Those were observations of 160 MHz frequency (2 meters wavelength) microwave emission emanating from the chromosphere. However, the earliest known observation was in 1942 during World War II by British radar operators who detected an intense low-frequency solar radio burst; that information was kept secret as potentially useful in evading enemy radar, but was later described in a scientific journal after the war. One of the most significant discoveries from early solar radio astronomers such as Joseph Pawsey was that the Sun produces much more radio emission than expected from standard black body radiation. The explanation for this was proposed by Vitaly Ginzburg in 1946, who suggested that thermal bremsstrahlung emission from a million-degree corona was responsible. The existence of such extraordinarily high temperatures in the corona had previously been indicated by optical spectroscopy observations, but the idea remained controversial until it was later confirmed by the radio data. Prior to 1950, observations were conducted mainly using antennas that recorded the intensity of the whole Sun at a single radio frequency. Observers such as Ruby Payne-Scott and Paul Wild used simultaneous observations at numerous frequencies to find that the onset times of radio bursts varied depending on frequency, suggesting that radio bursts were related to disturbances that propagate outward, away from the Sun, through different layers of plasma with different densities. These findings motivated the development of "radiospectrographs" that were capable of continuously observing the Sun over a range of frequencies. This type of observation is called a "dynamic spectrum", and much of the terminology used to describe solar radio emission relates to features observed in dynamic spectra, such as the classification of solar radio bursts. Examples of dynamic spectra are shown below in the radio burst section. Notable contemporary solar radiospectrographs include the Radio Solar Telescope Network, the e-CALLISTO network, and the WAVES instrument on-board the "Wind" spacecraft. 
Radiospectrographs do not produce images, however, and so they cannot be used to locate features spatially. This can make it very difficult to understand where a specific component of the solar radio emission is coming from and how it relates to features seen at other wavelengths. Producing a radio image of the Sun requires an interferometer, which in radio astronomy means an array of many telescopes that operate together as a single telescope to produce an image. This technique is a sub-type of interferometry called aperture synthesis. Beginning in the 1950s, a number of simple interferometers were developed that could provide limited tracking of radio bursts. This also included the invention of sea interferometry, which was used to associate radio activity with sunspots. Routine imaging of the radio Sun began in 1967 with the commissioning of the Culgoora Radioheliograph, which operated until 1986. A "radioheliograph" is simply an interferometer that is dedicated to observing the Sun. In addition to Culgoora, notable examples include the Clark Lake Radioheliograph, Nançay Radioheliograph, Nobeyama Radioheliograph, Gauribidanur Radioheliograph, Siberian Radioheliograph, and Chinese Spectral Radioheliograph. Additionally, interferometers that are used for other astrophysical observations can also be used to observe the Sun. General-purpose radio telescopes that also perform solar observations include the Very Large Array, Atacama Large Millimeter Array, Murchison Widefield Array, and Low-Frequency Array. The collage above shows antennas from several low-frequency radio telescopes used to observe the Sun. Mechanisms. All of the processes described below produce radio frequencies that depend on the properties of the plasma where the radiation originates, particularly electron density and magnetic field strength. Two plasma physics parameters are particularly important in this context: The electron plasma frequency, and the electron gyrofrequency, where formula_0 is the electron density in cm−3, formula_1 is the magnetic field strength in Gauss (G), formula_2 is the electron charge, formula_3 is the electron mass, and formula_4 is the speed of light. The relative sizes of these two frequencies largely determine which emission mechanism will dominate in a particular environment. For example, high-frequency gyromagnetic emission dominates in the chromosphere, where the magnetic field strengths are comparatively large, whereas low-frequency thermal bremsstrahlung and plasma emission dominates in the corona, where the magnetic field strengths and densities are generally lower than in the chromosphere. In the images below, the first four on the upper left are dominated by gyromagnetic emission from the chromosphere, transition region, and low-corona, while the three images on the right are dominated by thermal bremsstrahlung emission from the corona, with lower frequencies being generated at larger heights above the surface. Thermal bremsstrahlung emission. Bremsstrahlung emission, from the German "braking radiation", refers to electromagnetic waves produced when a charged particle accelerates and some of its kinetic energy is converted into radiation. "Thermal" bremsstrahlung refers to radiation from a plasma in thermal equilibrium and is primarily driven by Coulomb collisions where an electron is deflected by the electric field of an ion. 
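Because the emission mechanisms are organized around these two characteristic frequencies, it is useful to see representative numbers. The minimal sketch below uses the standard Gaussian-unit expressions for the electron plasma frequency and gyrofrequency, which are presumably what Equations 1 and 2 state; the example densities and field strengths are illustrative values only and are not taken from this article.

```python
# Minimal sketch of the two characteristic frequencies (Gaussian cgs units),
# assuming the standard forms of Equations 1 and 2:
#   nu_pe = (1/2pi) * sqrt(4 pi n_e e^2 / m_e)  ~ 8980 * sqrt(n_e) Hz   (n_e in cm^-3)
#   nu_ce = e * B / (2 pi m_e c)                ~ 2.8e6 * B Hz          (B in Gauss)
import math

E_CGS = 4.8032e-10   # electron charge [statC]
M_E   = 9.1094e-28   # electron mass [g]
C     = 2.9979e10    # speed of light [cm/s]

def plasma_frequency(n_e):
    """Electron plasma frequency in Hz for n_e in cm^-3."""
    return math.sqrt(4.0 * math.pi * n_e * E_CGS**2 / M_E) / (2.0 * math.pi)

def gyrofrequency(B):
    """Electron gyrofrequency in Hz for B in Gauss."""
    return E_CGS * B / (2.0 * math.pi * M_E * C)

# Illustrative values (not from the article):
print(plasma_frequency(1e8) / 1e6, "MHz")   # ~90 MHz for a low-corona density of 1e8 cm^-3
print(plasma_frequency(7.0) / 1e3, "kHz")   # ~24 kHz for solar-wind plasma near 1 AU
print(gyrofrequency(500.0) / 1e9, "GHz")    # ~1.4 GHz for a 500 G active-region field
```

Lower densities give lower plasma frequencies, which is why lower-frequency emission originates higher in the atmosphere, and the comparison of the two outputs shows directly which of the two frequencies dominates in a given environment.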
This is often referred to as "free-free" emission for a fully ionized plasma like the solar corona because it involves collisions of "free" particles, as opposed to electrons transitioning between bound states in an atom. This is the main source of quiescent background emission from the corona, where "quiescent" means outside of radio burst periods. The radio frequency of bremsstrahlung emission is related to a plasma's electron density through the electron plasma frequency (formula_5) from Equation 1. A plasma with a density formula_0 can produce emission only at or below the corresponding formula_5. Density in the corona generally decreases with height above the visible "surface", or photosphere, meaning that lower-frequency emission is produced higher in the atmosphere, and the Sun appears larger at lower frequencies. This type of emission is most prominent below 300 MHz due to typical coronal densities, but particularly dense structures in the corona and chromosphere can generate bremsstrahlung emission with frequencies into the GHz range. Gyromagnetic emission. Gyromagnetic emission is also produced from the kinetic energy of a charged particle, generally an electron. However, in this case, an external magnetic field causes the particle's trajectory to exhibit a spiral gyromotion, resulting in a centripetal acceleration that in turn produces the electromagnetic waves. Different terminology is used for the same basic phenomenon depending on how fast the particle is spiraling around the magnetic field, which is due to the different mathematics required to describe the physics. "Gyroresonance" emission refers to slower, non-relativistic speeds and is also called "magneto-bremsstrahlung" or "cyclotron" emission. "Gyrosynchrotron" corresponds to the mildly relativistic case, where the particles rotate at a small but significant fraction of light speed, and "synchrotron" emission refers to the relativistic case where the speeds approach that of light. Gyroresonance and gyrosynchrotron are most important in the solar context, although there may be special cases in which synchrotron emission also operates. For any sub-type, gyromagnetic emission occurs near the electron gyrofrequency (formula_6) from Equation 2 or one of its harmonics. This mechanism dominates when the magnetic field strengths are large such that formula_6 > formula_5. This is mainly true in the chromosphere, where gyroresonance emission is the primary source of quiescent (non-burst) radio emission, producing microwave radiation in the GHz range. Gyroresonance emission can also be observed from the densest structures in the corona, where it can be used to measure the coronal magnetic field strength. Gyrosynchrotron emission is responsible for certain types of microwave radio bursts from the chromosphere and is also likely responsible for certain types of coronal radio bursts. Plasma emission. Plasma emission refers to a set of related processes that partially convert the energy of Langmuir waves into radiation. It is the most common form of coherent radio emission from the Sun and is commonly accepted as the emission mechanism for most types of solar radio bursts, which can exceed the background radiation level by several orders of magnitude for brief periods. "Langmuir waves", also called "electron plasma waves" or simply "plasma oscillations", are electron density oscillations that occur when a plasma is perturbed so that a population of electrons is displaced relative to the ions. 
Once displaced, the Coulomb force pulls the electrons back toward and ultimately past the ions, leading them to oscillate back and forth. Langmuir waves are produced in the solar corona by a plasma instability that occurs when a beam of nonthermal (fast-moving) electrons moves through the ambient plasma. The electron beam may be accelerated either by magnetic reconnection, the process that underpins solar flares, or by a shock wave, and these two basic processes operate in different contexts to produce different types of solar radio bursts. The instability that generates Langmuir waves is the "two-stream instability", which is also called the "beam" or "bump-on-tail" instability in cases such as this where an electron beam is injected into a plasma, creating a "bump" on the high-energy tail of the plasma's particle velocity distribution. This bump facilitates exponential Langmuir wave growth in the ambient plasma through the transfer of energy from the electron beam into specific Langmuir wave modes. A small fraction of the Langmuir wave energy can then be converted into electromagnetic radiation through interactions with other wave modes, namely ion sound waves. A flowchart of the plasma emission stages is shown on the right. Depending on these wave interactions, coherent radio emission may be produced at the fundamental electron plasma frequency (formula_5; Equation 1) or its harmonic (2formula_5). Emission at formula_5 is often referred to as "fundamental plasma emission", while emission at 2formula_5 is called "harmonic plasma emission". This distinction is important because the two types have different observed properties and imply different plasma conditions. For example, fundamental plasma emission exhibits a much larger circular polarization fraction and originates from plasma that is four times denser than harmonic plasma emission. Electron-cyclotron maser emission. The final, and least common, solar radio emission mechanism is electron-cyclotron maser emission (ECME). "Maser" is an acronym for "microwave amplification by stimulated emission of radiation", which originally referred to a laboratory device that can produce intense radiation of a specific frequency through stimulated emission. Stimulated emission is a process by which a group of atoms are moved into higher energy levels (above thermal equilibrium) and then stimulated to release that extra energy all at once. Such population inversions can occur naturally to produce astrophysical masers, which are sources of very intense radiation of specific spectral lines. Electron-cyclotron maser emission, however, does not involve population inversions of atomic energy levels. The term "maser" was adopted here as an analogy and is somewhat of a misnomer. In ECME, the injection of nonthermal, semi-relativistic electrons into a plasma produces a population inversion analogous to that of a maser in the sense that a high-energy population was added to an equilibrium distribution. This is very similar to the beginning of the plasma emission process described in the previous section, but when the plasma density is low and/or the magnetic field strength is high such that formula_6 > formula_5 (Equations 1 and 2), energy from the nonthermal electrons cannot efficiently be converted into Langmuir waves. This leads instead to direct emission at formula_6 through a plasma instability that is expressed analytically as a negative absorption coefficient (i.e. 
positive growth rate) for a particular particle distribution, most famously the loss-cone distribution. ECME is the accepted mechanism for microwave spike bursts from the chromosphere and is sometimes invoked to explain features of coronal radio bursts that cannot be explained by plasma emission or gyrosynchrotron emission. Magnetoionic theory and polarization. Magnetoionic theory describes the propagation of electromagnetic waves in environments where an ionized plasma is subjected to an external magnetic field, such as the solar corona and Earth's ionosphere. The corona is generally treated with the "cold plasma approach," which assumes that the characteristic velocities of the waves are much faster than the thermal velocities of the plasma particles. This assumption allows thermal effects to be neglected, and most approaches also ignore the motions of ions and assume that the particles do not interact through collisions. Under these approximations, the dispersion equation for electromagnetic waves includes two free-space modes that can escape the plasma as radiation (radio waves). These are called the "ordinary" (formula_7) and "extraordinary" (formula_8) modes. The ordinary mode is "ordinary" in the sense that the plasma response is the same as if there were no magnetic field, while the "formula_8"-mode has a somewhat different refractive index. Importantly, each mode is polarized in opposite senses that depend on the angle with respect to the magnetic field. A quasi-circular approximation generally applies, in which case both modes are 100% circularly polarized with opposite senses. The formula_8- and formula_7-modes are produced at different rates depending on the emission mechanism and plasma parameters, which leads to a net circular polarization signal. For example, thermal bremsstrahlung slightly favors the formula_8-mode, while plasma emission heavily favors the formula_7-mode. This makes circular polarization an extremely important property for studies of solar radio emission, as it can be used to help understand how the radiation was produced. While circular polarization is most prevalent in solar radio observations, it is also possible to produce linear polarizations in certain circumstances. However, the presence of intense magnetic fields leads to Faraday rotation that distorts linearly-polarized signals, making them extremely difficult or impossible to detect. However, it is possible to detect linearly-polarized background astrophysical sources that are occulted by the corona, in which case the impact of Faraday rotation can be used to measure the coronal magnetic field strength. Propagation effects. The appearance of solar radio emission, particularly at low frequencies, is heavily influenced by propagation effects. A "propagation effect" is anything that impacts the path or state of an electromagnetic wave after it is produced. These effects therefore depend on whatever mediums the wave passed through before being observed. The most dramatic impacts to solar radio emission occur in the corona and in Earth's ionosphere. There are three primary effects: refraction, scattering, and mode coupling. Refraction is the bending of light's path as it enters a new medium or passes through a material with varying density. The density of the corona generally decreases with distance from the Sun, which causes radio waves to refract toward the radial direction. 
When solar radio emission enters Earth's ionosphere, refraction may also severely distort the source's apparent location depending on the viewing angle and ionospheric conditions. The formula_8- and formula_7-modes discussed in the previous section also have slightly different refractive indices, which can lead to separation of the two modes. The counterpart to refraction is reflection. A radio wave can be reflected in the solar atmosphere when it encounters a region of particularly high density compared to where it was produced, and such reflections can occur many times before a radio wave escapes the atmosphere. This process of many successive reflections is called "scattering", and it has many important consequences. Scattering increases the apparent size of the entire Sun and compact sources within it, which is called "angular broadening". Scattering increases the cone-angle over which directed emission can be observed, which can even allow for the observation of low-frequency radio bursts that occurred on the far-side of the Sun. Because the high-density fibers that are primarily responsible for scattering are not randomly aligned and are generally radial, random scattering against them may also systematically shift the observed location of a radio burst to a larger height than where it was actually produced. Finally, scattering tends to depolarize emission and is likely why radio bursts often exhibit much lower circular polarization fractions than standard theories predict. "Mode coupling" refers to polarization state changes of the formula_8- and formula_7-modes in response to different plasma conditions. If a radio wave passes through a region where the magnetic field orientation is nearly perpendicular to the direction of travel, which is called a quasi-transverse region, the polarization sign (i.e. left or right; positive or negative) may flip depending on the radio frequency and plasma parameters. This concept is crucial to interpreting polarization observations of solar microwave radiation and may also be important for certain low-frequency radio bursts. Solar radio bursts. Solar radio bursts are brief periods during which the Sun's radio emission is elevated above the background level. They are signatures of the same processes that lead to the more widely-known forms of solar activity such as sunspots, solar flares, and coronal mass ejections. Radio bursts can exceed the background radiation level only slightly or by several orders of magnitude (e.g. by 10 to 10,000 times) depending on a variety of factors that include the amount of energy released, the plasma parameters of the source region, the viewing geometry, and the mediums through which the radiation propagated before being observed. Most types of solar radio bursts are produced by the plasma emission mechanism operating in different contexts, although some are caused by (gyro)synchrotron and/or electron-cyclotron maser emission. Solar radio bursts are classified largely based on how they appear in dynamic spectrum observations from radiospectrographs. The first three types, shown in the image on the right, were defined by Paul Wild and Lindsay McCready in 1950 using the earliest radiospectrograph observations of metric (low-frequency) bursts. This classification scheme is based primarily on how a burst's frequency drifts over time. Types IV and V were added within a few years of the initial three, and a number of other types and sub-types have since been identified. Type I. 
Type I bursts are radiation spikes that last around one second and occur over a relatively narrow frequency range (formula_9) with little-to-no discernible drift in frequency. They tend to occur in groups called "noise storms" that are often superimposed on enhanced continuum (broad-spectrum) emission with the same frequency range. While each individual Type I burst does not drift in frequency, a chain of Type I bursts in a noise storm may slowly drift from higher to lower frequencies over a few minutes. Noise storms can last from hours to weeks, and they are generally observed at relatively low frequencies between around 50 and 500 MHz. Noise storms are associated with "active regions". Active regions are regions in the solar atmosphere with high concentrations of magnetic fields, and they include a sunspot at their base in the photosphere except in cases where the magnetic fields are fairly weak. The association with active regions has been known for decades, but the conditions required to produce noise storms are still mysterious. Not all active regions that produce other forms of activity such as flares generate noise storms, and unlike other types of solar radio bursts, it is often difficult to identify non-radio signatures of Type I bursts. The emission mechanism for Type I bursts is generally agreed to be fundamental plasma emission due to the high circular polarization fractions that are frequently observed. However, there is no consensus yet on what process accelerates the electrons needed to stimulate plasma emission. The leading ideas are minor magnetic reconnection events or shock waves driven by upward-propagating waves. Since the year 2000, different magnetic reconnection scenarios have generally been favored. One scenario involves reconnection between the open and closed magnetic fields at the boundaries of active regions, and another involves moving magnetic features in the photosphere. Type II. Type II bursts exhibit a relatively slow drift from high to low frequencies of around 0.05 MHz per second, typically over the course of a few minutes. They often exhibit two distinct bands of emission that correspond to fundamental and harmonic plasma emission emanating from the same region. Type II bursts are associated with coronal mass ejections (CMEs) and are produced at the leading edge of a CME, where a shock wave accelerates the electrons responsible for stimulating plasma emission. The frequency drifts from higher to lower values because it depends on the electron density, and the shock propagates outward away from the Sun through lower and lower densities. By using a model for the Sun's atmospheric density, the frequency drift rate can then be used to estimate the speed of the shock wave. This exercise typically results in speeds of around 1000 km/s, which matches that of CME shocks determined from other methods. While plasma emission is the accepted mechanism, Type II bursts do not exhibit significant amounts of circular polarization as would be expected by standard plasma emission theory. The reason for this is unknown, but a leading hypothesis is that the polarization level is suppressed by dispersion effects related to having an inhomogeneous magnetic field near a magnetohydrodynamic shock. Type II bursts sometimes exhibit fine structures called herringbone bursts that emanate from the main burst, as it appears in a dynamic spectrum, and extend to lower frequencies. 
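As a concrete illustration of the drift-rate-to-shock-speed exercise described above (before turning to the herringbone fine structure), here is a minimal numerical sketch. It is not taken from the literature cited in this article: it assumes fundamental plasma emission, the one-fold Newkirk coronal density model, and round illustrative numbers for the frequency range; only the ~0.05 MHz/s drift rate comes from the text.

```python
import numpy as np

R_SUN_KM = 6.957e5   # solar radius in km

def newkirk_density(r):
    """Electron density [cm^-3] at heliocentric distance r [solar radii], one-fold Newkirk model."""
    return 4.2e4 * 10.0 ** (4.32 / r)

def height_for_frequency(f_mhz):
    """Heliocentric distance [solar radii] where fundamental plasma emission occurs at f_mhz,
    using f_p [MHz] ~= 8.98e-3 * sqrt(n_e [cm^-3]) and inverting the Newkirk profile."""
    n_e = (f_mhz / 8.98e-3) ** 2
    return 4.32 / np.log10(n_e / 4.2e4)

# Illustrative Type II burst: fundamental band drifting from 30 MHz to 20 MHz
# at the ~0.05 MHz/s rate quoted in the text.
f_start, f_stop, drift = 30.0, 20.0, 0.05
r_start, r_stop = height_for_frequency(f_start), height_for_frequency(f_stop)
elapsed = (f_start - f_stop) / drift                    # seconds taken by the drift
speed = (r_stop - r_start) * R_SUN_KM / elapsed         # implied shock speed, km/s

print(f"densities: {newkirk_density(r_start):.1e} -> {newkirk_density(r_stop):.1e} cm^-3")
print(f"source height: {r_start:.2f} -> {r_stop:.2f} R_sun over {elapsed:.0f} s")
print(f"implied shock speed: {speed:.0f} km/s")         # roughly 1000 km/s for these inputs
```

With denser (multi-fold) density models, harmonic emission, or other frequency ranges, the same drift rate implies different speeds, which is why published shock-speed estimates always state the density model they assume.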
Herringbone structures are believed to result from shock-accelerated electrons that were able to escape far beyond the shock region to excite Langmuir waves in plasma of lower density than the primary burst region. Type III. Like Type II bursts, Type IIIs also drift from high to low frequencies and are widely attributed to the plasma emission mechanism. However, Type III bursts drift much more rapidly, around 100 MHz per second, and must therefore be related to disturbances that move more quickly than the shock waves responsible for Type IIs. Type III bursts are associated with electron beams that are accelerated to small fractions of light speed (formula_10 0.1 to 0.3 c) by magnetic reconnection, the process responsible for solar flares. In the image below, the chains of color contours show the locations of three Type III bursts at different frequencies. The progression from violet to red corresponds to the trajectories of electron beams moving away from the Sun and exciting lower and lower frequency plasma emission as they encounter lower and lower densities. Given that they are ultimately caused by magnetic reconnection, Type IIIs are strongly associated with X-ray flares and are indeed observed during nearly all large flares. However, small-to-moderate X-ray flares do not always exhibit Type III bursts and vice versa due to the somewhat different conditions that are required for the high- and low-energy emission to be produced and observed. Type III bursts can occur alone, in small groups, or in chains referred to as Type III storms that may last many minutes. They are often subdivided into two types, "coronal" and "interplanetary" Type III bursts. Coronal refers to the case in which an electron beam is traveling in the corona within a few solar radii of the photosphere. They typically start at frequencies in the hundreds of MHz and drift down to tens of MHz over a few seconds. The electron beams that excite radiation travel along specific magnetic field lines that may be closed or open to interplanetary space. Electron beams that escape into interplanetary space may excite Langmuir waves in the solar wind plasma to produce interplanetary Type III bursts that can extend down to 20 kHz and below for beams that reach 1 Astronomical Unit and beyond. The very low frequencies of interplanetary bursts are below the ionospheric cutoff (formula_10 10 MHz), meaning they are blocked by Earth's ionosphere and are observable only from space. Direct, in situ observations of the electrons and Langmuir waves (plasma oscillations) associated with interplanetary Type III bursts are among the most important pieces of evidence for the plasma emission theory of solar radio bursts. Type III bursts exhibit moderate levels of circular polarization, typically less than 50%. This is lower than expected from plasma emission and is likely due to depolarization from scattering by density inhomogeneities and other propagation effects. Type IV. Type IV bursts are spikes of broad-band continuum emission that include a few distinct sub-types associated with different phenomena and different emission mechanisms. The first type to be defined was the "moving" Type IV burst, which requires imaging observations (i.e. interferometry) to detect. They are characterized by an outward-moving continuum source that is often preceded by a Type II burst in association with a coronal mass ejection (CME). 
The emission mechanism for Type IV bursts is generally attributed to gyrosynchrotron emission, plasma emission, or some combination of both that results from fast-moving electrons trapped within the magnetic fields of an erupting CME. Stationary Type IV bursts are more common and are not associated with CMEs. They are broad-band continuum emissions associated with either solar flares or Type I bursts. Flare-associated Type IV bursts are also called "flare continuum" bursts, and they typically begin at or shortly after a flare's impulsive phase. Larger flares often include a "storm continuum" phase that follows the flare continuum. The storm continuum can last from hours to days and may transition into an ordinary Type I noise storm in long-duration events. Both flare and storm continuum Type IV bursts are attributed to plasma emission, but the storm continuum exhibits much larger degrees of circular polarization for reasons that are not fully known. Type V. Type V bursts are the least common of the standard five types. They are continuum emissions that last from one to a few minutes immediately after a group of Type III bursts, generally occurring below around 120 MHz. Type Vs are generally thought to be caused by harmonic plasma emission associated with the same streams of electrons responsible for the associated Type III bursts. They sometimes exhibit significant positional offsets from the Type III bursts, which may be due to the electrons traveling along somewhat different magnetic field structures. Type V bursts persist for much longer than Type IIIs because they are driven by a slower and less-collimated electron population, which produces broader-band emission and also leads to a reversal in the circular polarization sign from that of the associated Type III bursts due to the different Langmuir wave distribution. While plasma emission is the commonly-accepted mechanism, electron-cyclotron maser emission has also been proposed. Other types. In addition to the classic five types, there are a number of additional types of solar radio bursts. These include variations of the standard types, fine structure within another type, and entirely distinct phenomena. Variant examples include Types J and U bursts, which are Type III bursts for which the frequency drift reverses to go from lower to higher frequencies, suggesting that an electron beam first traveled away and then back toward the Sun along a closed magnetic field trajectory. Fine structure bursts include zebra patterns and fibre bursts that may be observed within Type IV bursts, along with the herringbone bursts that sometimes accompany Type IIs. Type S bursts, which last only milliseconds, are an example of a distinct class. There are also a variety of high-frequency microwave burst types, such as microwave Type IV bursts, impulsive bursts, postbursts, and spike bursts. Radio emission from other stars. Due to its proximity to Earth, the Sun is the brightest source of astronomical radio emission. Other stars, of course, also produce radio emission and may produce much more intense radiation in absolute terms than is observed from the Sun. For "normal" main sequence stars, the mechanisms that produce stellar radio emission are the same as those that produce solar radio emission. 
However, emission from "radio stars" may exhibit significantly different properties compared to the Sun, and the relative importance of the different mechanisms may change depending on the properties of the star, particularly with respect to size and rotation rate, the latter of which largely determines the strength of a star's magnetic field. Notable examples of stellar radio emission include quiescent steady emission from stellar chromospheres and coronae, radio bursts from flare stars, radio emission from massive stellar winds, and radio emission associated with close binary stars. Pre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission. Different radio emission processes also exist for certain pre-main-sequence stars, along with post-main sequence stars such as neutron stars. These objects have very high rotation rates, which leads to very intense magnetic fields that are capable of accelerating large amounts of particles to highly-relativistic speeds. Of particular interest is the fact that there is no consensus yet on the coherent radio emission mechanism responsible for pulsars, which cannot be explained by the two well-established coherent mechanisms discussed here, plasma emission and electron cyclotron maser emission. Proposed mechanisms for pulsar radio emission include coherent curvature emission, relativistic plasma emission, anomalous Doppler emission, and linear acceleration emission or free-electron maser emission. All of these processes still involve the transfer of energy from moving electrons into radiation. However, in this case the electrons are moving at nearly the speed of light, and the debate revolves around what process accelerates these electrons and how their energy is converted into radiation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n_e" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "e" }, { "math_id": 3, "text": "m_e" }, { "math_id": 4, "text": "c" }, { "math_id": 5, "text": "f_p" }, { "math_id": 6, "text": "f_B" }, { "math_id": 7, "text": "o" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "\\Delta{}f/f \\approx 0.025" }, { "math_id": 10, "text": "\\approx" } ]
https://en.wikipedia.org/wiki?curid=66213716
662174
Heterojunction
Interface between two layers or regions of dissimilar semiconductors A heterojunction is an interface between two layers or regions of dissimilar semiconductors. These semiconducting materials have unequal band gaps as opposed to a homojunction. It is often advantageous to engineer the electronic energy bands in many solid-state device applications, including semiconductor lasers, solar cells and transistors. The combination of multiple heterojunctions together in a device is called a heterostructure, although the two terms are commonly used interchangeably. The requirement that each material be a semiconductor with unequal band gaps is somewhat loose, especially on small length scales, where electronic properties depend on spatial properties. A more modern definition of heterojunction is the interface between any two solid-state materials, including crystalline and amorphous structures of metallic, insulating, fast ion conductor and semiconducting materials. Manufacture and applications. Heterojunction manufacturing generally requires the use of molecular beam epitaxy (MBE) or chemical vapor deposition (CVD) technologies in order to precisely control the deposition thickness and create a cleanly lattice-matched abrupt interface. A recent alternative under research is the mechanical stacking of layered materials into van der Waals heterostructures. Despite their expense, heterojunctions have found use in a variety of specialized applications where their unique characteristics are critical: "Catalysis": The use of heterojunctions as photocatalysts has been demonstrated to give better performance in CO2 photoreduction, H2 production and photodegradation of pollutants in water than single metal oxides. The performance of the heterojunction can be further improved by the incorporation of oxygen vacancies, crystal facet engineering, or the incorporation of carbonaceous materials. Energy band alignment. The behaviour of a semiconductor junction depends crucially on the alignment of the energy bands at the interface. Semiconductor interfaces can be organized into three types of heterojunctions: straddling gap (type I), staggered gap (type II) or broken gap (type III) as seen in the figure. Away from the junction, the band bending can be computed based on the usual procedure of solving Poisson's equation. Various models exist to predict the band alignment. The typical method for measuring band offsets is to calculate them from the exciton energies measured in the luminescence spectra. Effective mass mismatch. When a heterojunction is formed by two different semiconductors, a quantum well can be fabricated due to the difference in band structure. In order to calculate the static energy levels within the resulting quantum well, understanding the variation or mismatch of the effective mass across the heterojunction is essential. The quantum well defined in the heterojunction can be treated as a finite well potential with width formula_2. In addition, in 1966, Conley et al. and BenDaniel and Duke reported a boundary condition for the envelope function in a quantum well, known as the BenDaniel–Duke boundary condition. According to them, the envelope function in a fabricated quantum well must satisfy a boundary condition which states that formula_3 and formula_4 are both continuous in interface regions. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;Mathematical details worked out for quantum well example. 
Using the Schrödinger equation for a finite well with width formula_5 and center at 0, the equations for the resulting quantum well can be written as: formula_6 formula_7 formula_8 The solutions of the above equations are well known, only with different (modified) k and formula_9: formula_10. At z = formula_11, the even-parity solution can be obtained from formula_12. Taking the derivative of (5) and multiplying both sides by formula_13 gives formula_14. Dividing (6) by (5), the even-parity solution function is obtained: formula_15. Similarly, for the odd-parity solution, formula_16. For a numerical solution, taking the derivatives of (7) and (8) gives, for even parity: formula_17 and for odd parity: formula_18 where formula_19. The difference in effective mass between materials results in a larger difference in ground state energies. Nanoscale heterojunctions. In quantum dots the band energies are dependent on crystal size due to quantum size effects. This enables band offset engineering in nanoscale heterostructures. It is possible to use the same materials but change the type of junction, say from straddling (type I) to staggered (type II), by changing the size or thickness of the crystals involved. The most common nanoscale heterostructure system is ZnS on CdSe (CdSe@ZnS), which has a straddling gap (type I) offset. In this system, ZnS, with its much larger band gap, passivates the surface of the fluorescent CdSe core, thereby increasing the quantum efficiency of the luminescence. An added benefit is increased thermal stability, due to the stronger bonds in the ZnS shell suggested by its larger band gap. Since CdSe and ZnS both grow in the zincblende crystal phase and are closely lattice matched, core-shell growth is preferred. In other systems or under different growth conditions it may be possible to grow anisotropic structures such as the one seen in the image on the right. The driving force for charge transfer between conduction bands in these structures is the conduction band offset. By decreasing the size of CdSe nanocrystals grown on TiO2, Robel et al. found that electrons transferred faster from the higher CdSe conduction band into TiO2. In CdSe, the quantum size effect is much more pronounced in the conduction band than in the valence band because of the smaller conduction-band effective mass, and this is the case with most semiconductors. Consequently, engineering the conduction band offset is typically much easier with nanoscale heterojunctions. For staggered (type II) offset nanoscale heterojunctions, photoinduced charge separation can occur, since the lowest-energy state for holes may be on one side of the junction whereas the lowest-energy state for electrons is on the opposite side. It has been suggested that anisotropic staggered gap (type II) nanoscale heterojunctions may be used for photocatalysis, specifically for water splitting with solar energy. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
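Returning to the quantum-well matching conditions worked out above: the even-parity condition obtained by dividing (6) by (5) can be solved numerically for the ground-state energy. The sketch below is not from the original text; it uses the form (k/m_w*) tan(k l_w/2) = κ/m_b* of that condition (with the signs arranged so that both sides are positive) and illustrative, GaAs/AlGaAs-like parameter values (well width 100 Å, barrier height 0.23 eV, effective masses 0.067 and 0.092 in units of the free-electron mass) chosen purely for demonstration.

```python
import numpy as np

HBAR2_2M0 = 3.81  # hbar^2 / (2 m0) in eV * Angstrom^2

def even_parity_mismatch(E, V, lw, mw, mb):
    """BenDaniel-Duke even-parity matching condition, (k/mw) tan(k lw/2) - kappa/mb."""
    k = np.sqrt(mw * E / HBAR2_2M0)            # wavevector inside the well [1/Angstrom]
    kappa = np.sqrt(mb * (V - E) / HBAR2_2M0)  # decay constant in the barrier
    return (k / mw) * np.tan(k * lw / 2.0) - kappa / mb

def ground_state(V, lw, mw, mb, tol=1e-9):
    """Lowest even-parity level, found by bisection on the interval where tan() is well behaved."""
    e_lo = 1e-9
    e_hi = min(V, HBAR2_2M0 * (np.pi / lw) ** 2 / mw) - 1e-9   # keeps k*lw/2 below pi/2
    while e_hi - e_lo > tol:
        e_mid = 0.5 * (e_lo + e_hi)
        if even_parity_mismatch(e_mid, V, lw, mw, mb) > 0.0:
            e_hi = e_mid
        else:
            e_lo = e_mid
    return 0.5 * (e_lo + e_hi)

# Illustrative parameters loosely based on a GaAs well with AlGaAs barriers
# (effective masses in units of the free-electron mass, energies in eV, lengths in Angstrom).
E0 = ground_state(V=0.23, lw=100.0, mw=0.067, mb=0.092)
print(f"ground-state energy ~ {E0 * 1000:.1f} meV")
```

Because the matching function is monotonically increasing in energy on the interval where k l_w/2 < π/2, a simple bisection suffices for the lowest level; higher levels and the odd-parity states require bracketing the later branches of the tangent.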
[ { "math_id": 0, "text": "\\Delta E_C/\\Delta E_V" }, { "math_id": 1, "text": "\\Delta E_C / \\Delta E_V = 0.73/0.27" }, { "math_id": 2, "text": " l_w" }, { "math_id": 3, "text": " \\psi (z) " }, { "math_id": 4, "text": " {\\frac {1} {m^*} }{\\partial \\over {\\partial z}} \\psi (z) \\," }, { "math_id": 5, "text": "l_w " }, { "math_id": 6, "text": "-\\frac{\\hbar^2}{2m_b^*} \\frac{\\mathrm{d}^2 \\psi(z)}{\\mathrm{d}z^2} + V \\psi(z) = E \\psi(z) \\quad \\quad \\text{ for } z < - \\frac {l_w}{2} \\quad \\quad (1)" }, { "math_id": 7, "text": " \\quad \\quad -\\frac{\\hbar^2}{2m_w^*} \\frac{\\mathrm{d}^2 \\psi(z)}{\\mathrm{d}z^2} = E \\psi(z) \\quad \\quad \\text{ for } - \\frac {l_w}{2} < z < + \\frac {l_w}{2} \\quad \\quad (2)" }, { "math_id": 8, "text": "-\\frac{\\hbar^2}{2m_b^*} \\frac{\\mathrm{d}^2 \\psi(z)}{\\mathrm{d}z^2} + V \\psi(z) = E \\psi(z) \\quad \\text{ for } z > + \\frac {l_w}{2} \\quad \\quad (3)" }, { "math_id": 9, "text": "\\kappa " }, { "math_id": 10, "text": " k = \\frac {\\sqrt{2 m_w E}} {\\hbar} \\quad \\quad \\kappa = \\frac {\\sqrt{2 m_b (V-E)}} {\\hbar} \\quad \\quad (4)" }, { "math_id": 11, "text": " + \\frac {l_w} {2} " }, { "math_id": 12, "text": "<Math> A\\cos(\\frac {k l_w} {2}) = B \\exp(- \\frac {\\kappa l_w} {2}) \\quad \\quad (5)</math>" }, { "math_id": 13, "text": " \\frac {1} {m^*}" }, { "math_id": 14, "text": "<Math> -\\frac {kA} {m_w^*} \\sin(\\frac {k l_w} {2}) = -\\frac {\\kappa B} {m_b^*} \\exp(- \\frac {\\kappa l_w} {2}) \\quad \\quad (6)</math>" }, { "math_id": 15, "text": "<Math> f(E) = -\\frac {k} {m_w^*} \\tan(\\frac {k l_w} {2}) -\\frac {\\kappa } {m_b^*} = 0 \\quad \\quad (7)</math>" }, { "math_id": 16, "text": "<Math> f(E) = -\\frac {k} {m_w^*} \\cot(\\frac {k l_w} {2}) +\\frac {\\kappa } {m_b^*} = 0 \\quad \\quad (8)</math>" }, { "math_id": 17, "text": " \\frac {df}{dE} = \\frac {1}{m_w^*} \\frac {dk}{dE} \\tan(\\frac {k l_w} {2}) + \\frac {k} {m_w^*} \\sec^2(\\frac {k l_w} {2}) \\times \\frac {l_w} {2} \\frac {dk} {dE} - \\frac {1}{m_b^*} \\frac {d \\kappa} {dE} \\quad \\quad (9-1)" }, { "math_id": 18, "text": " \\frac {df}{dE} = \\frac {1}{m_w^*} \\frac {dk}{dE} \\cot(\\frac {k l_w} {2}) - \\frac {k} {m_w^*} \\csc^2(\\frac {k l_w} {2}) \\times \\frac {l_w} {2} \\frac {dk} {dE} + \\frac {1}{m_b^*} \\frac {d \\kappa} {dE} \\quad \\quad (9-2)" }, { "math_id": 19, "text": " \\frac {dk}{dE} = \\frac {\\sqrt {2 m_w^*}}{2 \\sqrt E \\hbar} \\quad \\quad \\quad \\frac {d \\kappa}{dE} = - \\frac {\\sqrt {2 m_b^*}}{2 \\sqrt {V-E} \\hbar}" } ]
https://en.wikipedia.org/wiki?curid=662174
6621919
Locally finite measure
In mathematics, a locally finite measure is a measure for which every point of the measure space has a neighbourhood of finite measure. Definition. Let formula_0 be a Hausdorff topological space and let formula_1 be a formula_2-algebra on formula_3 that contains the topology formula_4 (so that every open set is a measurable set, and formula_1 is at least as fine as the Borel formula_2-algebra on formula_3). A measure/signed measure/complex measure formula_5 defined on formula_1 is called locally finite if, for every point formula_6 of the space formula_7 there is an open neighbourhood formula_8 of formula_6 such that the formula_5-measure of formula_8 is finite. In more condensed notation, formula_5 is locally finite if and only if formula_9 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
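Two standard examples, not part of the original article but immediate from the definition, may help fix ideas:

```latex
% Lebesgue measure \lambda on \mathbb{R} is locally finite: every point p has the
% open neighbourhood N_p = (p - 1, p + 1), and
\lambda\bigl((p-1,\,p+1)\bigr) = 2 < +\infty .

% The counting measure \mu on \mathbb{R} (with its usual topology) is not locally finite:
% every open neighbourhood of any point is uncountable, so
\mu(N_p) = +\infty \quad \text{for every open } N_p \ni p .
```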
[ { "math_id": 0, "text": "(X, T)" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "X," }, { "math_id": 8, "text": "N_p" }, { "math_id": 9, "text": "\\text{for all } p \\in X, \\text{ there exists } N_p \\in T \\mbox{ such that } p \\in N_p \\mbox{ and } \\left|\\mu\\left(N_p\\right)\\right| < + \\infty." } ]
https://en.wikipedia.org/wiki?curid=6621919
66221186
Representations of classical Lie groups
In mathematics, the finite-dimensional representations of the complex classical Lie groups formula_0, formula_1, formula_2, formula_3, formula_4, can be constructed using the general representation theory of semisimple Lie algebras. The groups formula_1, formula_3, formula_4 are indeed simple Lie groups, and their finite-dimensional representations coincide with those of their maximal compact subgroups, respectively formula_5, formula_6, formula_7. In the classification of simple Lie algebras, the corresponding algebras are formula_8 However, since the complex classical Lie groups are linear groups, their representations are tensor representations. Each irreducible representation is labelled by a Young diagram, which encodes its structure and properties. General linear group, special linear group and unitary group. Weyl's construction of tensor representations. Let formula_9 be the defining representation of the general linear group formula_0. Tensor representations are the subrepresentations of formula_10 (these are sometimes called polynomial representations). The irreducible subrepresentations of formula_10 are the images of formula_11 by Schur functors formula_12 associated to integer partitions formula_13 of formula_14 into at most formula_15 integers, i.e. to Young diagrams of size formula_16 with formula_17. (If formula_18 then formula_19.) Schur functors are defined using Young symmetrizers of the symmetric group formula_20, which acts naturally on formula_10. We write formula_21. The dimensions of these irreducible representations are formula_22 where formula_23 is the hook length of the cell formula_24 in the Young diagram formula_13. Examples of tensor representations: General irreducible representations. Not all irreducible representations of formula_28 are tensor representations. In general, irreducible representations of formula_28 are mixed tensor representations, i.e. subrepresentations of formula_29, where formula_30 is the dual representation of formula_31 (these are sometimes called rational representations). In the end, the set of irreducible representations of formula_32 is labeled by non increasing sequences of formula_33 integers formula_34. If formula_35, we can associate to formula_36 the pair of Young tableaux formula_37. This shows that irreducible representations of formula_28 can be labeled by pairs of Young tableaux . Let us denote formula_38 the irreducible representation of formula_28 corresponding to the pair formula_39 or equivalently to the sequence formula_40. With these notations, formula_52 where formula_53. See for an interpretation as a product of n-dependent factors divided by products of hook lengths. Case of the special linear group. Two representations formula_54 of formula_0 are equivalent as representations of the special linear group formula_1 if and only if there is formula_55 such that formula_56. For instance, the determinant representation formula_57 is trivial in formula_1, i.e. it is equivalent to formula_58. In particular, irreducible representations of formula_59 can be indexed by Young tableaux, and are all tensor representations (not mixed). Case of the unitary group. The unitary group is the maximal compact subgroup of formula_28. The complexification of its Lie algebra formula_60 is the algebra formula_61. 
In Lie theoretic terms, formula_62 is the compact real form of formula_28, which means that complex linear, continuous irreducible representations of the latter are in one-to-one correspondence with complex linear, algebraic irreps of the former, via the inclusion formula_63. Tensor products. Tensor products of finite-dimensional representations of formula_0 are given by the following formula: formula_64 where formula_65 unless formula_66 and formula_67. Calling formula_68 the number of lines in a tableau, if formula_69, then formula_70 where the natural integers formula_71 are Littlewood-Richardson coefficients. Below are a few examples of such tensor products: In the case of tensor representations, 3-j symbols and 6-j symbols are known. Orthogonal group and special orthogonal group. "In addition to the Lie group representations described here, the orthogonal group formula_2 and special orthogonal group formula_3 have spin representations, which are projective representations of these groups, i.e. representations of their universal covering groups." Construction of representations. Since formula_2 is a subgroup of formula_0, any irreducible representation of formula_0 is also a representation of formula_2, which may however not be irreducible. In order for a tensor representation of formula_2 to be irreducible, the tensors must be traceless. Irreducible representations of formula_2 are parametrized by a subset of the Young diagrams associated to irreducible representations of formula_0: the diagrams such that the sum of the lengths of the first two columns is at most formula_15. The irreducible representation formula_72 that corresponds to such a diagram is a subrepresentation of the corresponding formula_0 representation formula_73. For example, in the case of symmetric tensors, formula_74 Case of the special orthogonal group. The antisymmetric tensor formula_75 is a one-dimensional representation of formula_2, which is trivial for formula_3. Then formula_76 where formula_77 is obtained from formula_13 by acting on the length of the first column as formula_78. For example, the irreducible representations of formula_82 correspond to Young diagrams of the types formula_83. The irreducible representations of formula_84 correspond to formula_85, and formula_86. On the other hand, the dimensions of the spin representations of formula_84 are even integers. Dimensions. The dimensions of irreducible representations of formula_3 are given by a formula that depends on the parity of formula_15: formula_87 formula_88 There is also an expression as a factorized polynomial in formula_15: formula_89 where formula_90 are respectively row lengths, column lengths and hook lengths. In particular, antisymmetric representations have the same dimensions as their formula_0 counterparts, formula_91, but symmetric representations do not, formula_92 Tensor products. In the stable range formula_93, the tensor product multiplicities that appear in the tensor product decomposition formula_94 are Newell-Littlewood numbers, which do not depend on formula_15. Beyond the stable range, the tensor product multiplicities become formula_15-dependent modifications of the Newell-Littlewood numbers. For example, for formula_95, we have formula_96 Branching rules from the general linear group. Since the orthogonal group is a subgroup of the general linear group, representations of formula_97 can be decomposed into representations of formula_98. 
The decomposition of a tensor representation is given in terms of Littlewood-Richardson coefficients formula_71 by the Littlewood restriction rule formula_99 where formula_100 is a partition into even integers. The rule is valid in the stable range formula_101. The generalization to mixed tensor representations is formula_102 Similar branching rules can be written for the symplectic group. Symplectic group. Representations. The finite-dimensional irreducible representations of the symplectic group formula_4 are parametrized by Young diagrams with at most formula_15 rows. The dimension of the corresponding representation is formula_103 There is also an expression as a factorized polynomial in formula_15: formula_104 Tensor products. Just like in the case of the orthogonal group, tensor product multiplicities are given by Newell-Littlewood numbers in the stable range, and modifications thereof beyond the stable range. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
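As a concrete check on the hook-content dimension formula for formula_0 quoted near the beginning of this article, the following short sketch (not part of the original text) evaluates the product over the cells of a Young diagram; exact rational arithmetic is used so that the product is guaranteed to reduce to an integer.

```python
from fractions import Fraction

def hook_length(shape, i, j):
    """Hook length of cell (i, j) (1-indexed) in the Young diagram 'shape'."""
    arm = shape[i - 1] - j                              # cells to the right in the same row
    leg = sum(1 for row in shape[i:] if row >= j)       # cells below in the same column
    return arm + leg + 1

def dim_gl(shape, n):
    """Dimension of the irreducible GL(n, C) representation with Young diagram 'shape',
    via the hook-content formula: product over cells of (n - i + j) / hook length."""
    result = Fraction(1)
    for i, row_len in enumerate(shape, start=1):
        for j in range(1, row_len + 1):
            result *= Fraction(n - i + j, hook_length(shape, i, j))
    return int(result)

# Sanity checks against familiar answers for n = 4:
print(dim_gl([1, 1], 4))   # antisymmetric square: C(4, 2) = 6
print(dim_gl([2], 4))      # symmetric square:     C(5, 2) = 10
print(dim_gl([2, 1], 4))   # mixed symmetry:       20
```

The three printed values, 6, 10 and 20, are the familiar dimensions of the antisymmetric square, the symmetric square and the mixed-symmetry representation of GL(4, C).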
[ { "math_id": 0, "text": "GL(n,\\mathbb{C})" }, { "math_id": 1, "text": "SL(n,\\mathbb{C})" }, { "math_id": 2, "text": "O(n,\\mathbb{C})" }, { "math_id": 3, "text": "SO(n,\\mathbb{C})" }, { "math_id": 4, "text": "Sp(2n,\\mathbb{C})" }, { "math_id": 5, "text": "SU(n)" }, { "math_id": 6, "text": "SO(n)" }, { "math_id": 7, "text": "Sp(n)" }, { "math_id": 8, "text": "\n\\begin{align}\nSL(n,\\mathbb{C})&\\to A_{n-1} \n\\\\\nSO(n_\\text{odd},\\mathbb{C})&\\to B_{\\frac{n-1}{2}}\n\\\\\nSO(n_\\text{even},\\mathbb{C}) &\\to D_{\\frac{n}{2}}\n\\\\\nSp(2n,\\mathbb{C})&\\to C_n \n\\end{align}\n" }, { "math_id": 9, "text": "V=\\mathbb{C}^n" }, { "math_id": 10, "text": "V^{\\otimes k}" }, { "math_id": 11, "text": "V" }, { "math_id": 12, "text": "\\mathbb{S}^\\lambda" }, { "math_id": 13, "text": "\\lambda" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "\\lambda_1+\\cdots + \\lambda_n = k" }, { "math_id": 17, "text": "\\lambda_{n+1}=0" }, { "math_id": 18, "text": "\\lambda_{n+1}>0" }, { "math_id": 19, "text": "\\mathbb{S}^\\lambda(V)=0" }, { "math_id": 20, "text": "S_k" }, { "math_id": 21, "text": "V_\\lambda = \\mathbb{S}^\\lambda(V)" }, { "math_id": 22, "text": "\n\\dim V_\\lambda = \\prod_{1\\leq i < j \\leq n}\\frac{\\lambda_i-\\lambda_j +j-i}{j-i} \n= \\prod_{(i,j)\\in \\lambda} \\frac{n-i+j}{h_\\lambda(i,j)}\n" }, { "math_id": 23, "text": "h_\\lambda(i,j)" }, { "math_id": 24, "text": "(i,j)" }, { "math_id": 25, "text": "\n\\chi_\\lambda(g) = s_\\lambda(x_1,\\dots, x_n)\n" }, { "math_id": 26, "text": "x_1,\\dots ,x_n" }, { "math_id": 27, "text": "g\\in GL(n,\\mathbb{C})" }, { "math_id": 28, "text": " GL(n,\\mathbb C) " }, { "math_id": 29, "text": " V^{\\otimes r} \\otimes (V^*)^{\\otimes s}" }, { "math_id": 30, "text": " V^* " }, { "math_id": 31, "text": " V " }, { "math_id": 32, "text": " GL(n,\\mathbb C)" }, { "math_id": 33, "text": " n " }, { "math_id": 34, "text": " \\lambda_1\\geq \\dots \\geq \\lambda_n " }, { "math_id": 35, "text": " \\lambda_k \\geq 0, \\lambda_{k+1} \\leq 0 " }, { "math_id": 36, "text": " (\\lambda_1, \\dots ,\\lambda_n) " }, { "math_id": 37, "text": " ([\\lambda_1\\dots\\lambda_k],[-\\lambda_n,\\dots,-\\lambda_{k+1}]) " }, { "math_id": 38, "text": " V_{\\lambda\\mu} = V_{\\lambda_1,\\dots,\\lambda_n} " }, { "math_id": 39, "text": "(\\lambda,\\mu)" }, { "math_id": 40, "text": " (\\lambda_1,\\dots,\\lambda_n) " }, { "math_id": 41, "text": "V_{\\lambda}=V_{\\lambda()}, V = V_{(1)()}" }, { "math_id": 42, "text": " (V_{\\lambda\\mu})^* = V_{\\mu\\lambda} " }, { "math_id": 43, "text": " k \\in \\mathbb Z " }, { "math_id": 44, "text": " D_k " }, { "math_id": 45, "text": " (\\det)^k " }, { "math_id": 46, "text": " V_{\\lambda_1,\\dots,\\lambda_n} = V_{\\lambda_1+k,\\dots,\\lambda_n+k} \\otimes D_{-k} " }, { "math_id": 47, "text": " k " }, { "math_id": 48, "text": " \\lambda_n + k \\geq 0 " }, { "math_id": 49, "text": " V_{\\lambda_1, \\dots,\\lambda_n} " }, { "math_id": 50, "text": " V_{\\lambda\\mu} " }, { "math_id": 51, "text": " \\lambda = (\\lambda_1,\\dots,\\lambda_r), \\mu=(\\mu_1,\\dots,\\mu_s) " }, { "math_id": 52, "text": " \\dim(V_{\\lambda\\mu}) = d_\\lambda d_\\mu \\prod_{i=1}^r \\frac{(1-i-s+n)_{\\lambda_i}}{(1-i+r)_{\\lambda_i}} \\prod_{j=1}^s \\frac{(1-j-r+n)_{\\mu_i}}{(1-j+s)_{\\mu_i}}\\prod_{i=1}^r \\prod_{j=1}^s \\frac{n+1 + \\lambda_i + \\mu_j - i- j }{n+1 -i -j }\n" }, { "math_id": 53, "text": " d_\\lambda = \\prod_{1 \\leq i < j \\leq r} \\frac{\\lambda_i - \\lambda_j + j - i}{j-i} " }, { "math_id": 54, "text": 
"V_{\\lambda},V_{\\lambda'}" }, { "math_id": 55, "text": "k\\in\\mathbb{Z}" }, { "math_id": 56, "text": "\\forall i,\\ \\lambda_i-\\lambda'_i=k" }, { "math_id": 57, "text": "V_{(1^n)}" }, { "math_id": 58, "text": "V_{()}" }, { "math_id": 59, "text": " SL(n,\\mathbb C) " }, { "math_id": 60, "text": "\\mathfrak u(n) = \\{a \\in \\mathcal M(n,\\mathbb C), a^\\dagger + a = 0\\}" }, { "math_id": 61, "text": "\\mathfrak{gl}(n,\\mathbb C)" }, { "math_id": 62, "text": " U(n) " }, { "math_id": 63, "text": " U(n) \\rightarrow GL(n,\\mathbb C) " }, { "math_id": 64, "text": " \nV_{\\lambda_1\\mu_1} \\otimes V_{\\lambda_2\\mu_2} = \\bigoplus_{\\nu,\\rho} V_{\\nu\\rho}^{\\oplus \\Gamma^{\\nu\\rho}_{\\lambda_1\\mu_1,\\lambda_2\\mu_2}},\n" }, { "math_id": 65, "text": " \\Gamma^{\\nu\\rho}_{\\lambda_1\\mu_1,\\lambda_2\\mu_2} = 0 " }, { "math_id": 66, "text": " |\\nu| \\leq |\\lambda_1| + |\\lambda_2|" }, { "math_id": 67, "text": " |\\rho| \\leq |\\mu_1| + |\\mu_2|" }, { "math_id": 68, "text": " l(\\lambda)" }, { "math_id": 69, "text": " l(\\lambda_1) + l(\\lambda_2) + l(\\mu_1) + l(\\mu_2) \\leq n " }, { "math_id": 70, "text": "\n \\Gamma^{\\nu\\rho}_{\\lambda_1\\mu_1,\\lambda_2\\mu_2} = \\sum_{\\alpha,\\beta,\\eta,\\theta} \\left(\\sum_\\kappa c^{\\lambda_1}_{\\kappa,\\alpha} c^{\\mu_2}_{\\kappa,\\beta}\\right)\\left(\\sum_\\gamma c^{\\lambda_2}_{\\gamma,\\eta}c^{\\mu_1}_{\\gamma,\\theta}\\right)c^{\\nu}_{\\alpha,\\theta}c^{\\rho}_{\\beta,\\eta},\n" }, { "math_id": 71, "text": "c_{\\lambda,\\mu}^\\nu" }, { "math_id": 72, "text": "U_\\lambda" }, { "math_id": 73, "text": "V_\\lambda" }, { "math_id": 74, "text": "\nV_{(k)} = U_{(k)} \\oplus V_{(k-2)}\n" }, { "math_id": 75, "text": "U_{(1^n)}" }, { "math_id": 76, "text": "U_{(1^n)}\\otimes U_\\lambda = U_{\\lambda'}" }, { "math_id": 77, "text": "\\lambda'" }, { "math_id": 78, "text": "\\tilde{\\lambda}_1\\to n-\\tilde{\\lambda}_1" }, { "math_id": 79, "text": "\\tilde{\\lambda}_1\\leq\\frac{n-1}{2}" }, { "math_id": 80, "text": "\\tilde{\\lambda}_1\\leq\\frac{n}{2}-1" }, { "math_id": 81, "text": "\\tilde{\\lambda}_1=\\frac{n}{2}" }, { "math_id": 82, "text": "O(3,\\mathbb{C})" }, { "math_id": 83, "text": "(k\\geq 0),(k\\geq 1,1),(1,1,1)" }, { "math_id": 84, "text": "SO(3,\\mathbb{C})" }, { "math_id": 85, "text": "(k\\geq 0)" }, { "math_id": 86, "text": "\\dim U_{(k)}=2k+1" }, { "math_id": 87, "text": "\n(n\\text{ even}) \\qquad \\dim U_\\lambda = \\prod_{1\\leq i<j\\leq \\frac{n}{2}} \\frac{\\lambda_i-\\lambda_j-i+j}{-i+j}\\cdot \\frac{\\lambda_i+\\lambda_j+n-i-j}{n-i-j}\n" }, { "math_id": 88, "text": "\n(n\\text{ odd}) \\qquad \\dim U_\\lambda = \\prod_{1\\leq i<j\\leq \\frac{n-1}{2}} \\frac{\\lambda_i-\\lambda_j-i+j}{-i+j}\n\\prod_{1\\leq i\\leq j\\leq \\frac{n-1}{2}} \\frac{\\lambda_i+\\lambda_j+n-i-j}{n-i-j}\n" }, { "math_id": 89, "text": "\n\\dim U_\\lambda = \\prod_{(i,j)\\in \\lambda,\\ i\\geq j}\n \\frac{n+\\lambda_i+\\lambda_j-i-j}{h_\\lambda(i,j)} \n \\prod_{(i,j)\\in \\lambda,\\ i< j}\n \\frac{n-\\tilde{\\lambda}_i-\\tilde{\\lambda}_j+i+j-2}{h_\\lambda(i,j)}\n" }, { "math_id": 90, "text": "\\lambda_i,\\tilde{\\lambda}_i,h_\\lambda(i,j)" }, { "math_id": 91, "text": "\\dim U_{(1^k)}=\\dim V_{(1^k)}" }, { "math_id": 92, "text": "\n\\dim U_{(k)} = \\dim V_{(k)} - \\dim V_{(k-2)} = \\binom{n+k-1}{k}- \\binom{n+k-3}{k}\n" }, { "math_id": 93, "text": "|\\mu|+|\\nu|\\leq \\left[\\frac{n}{2}\\right]" }, { "math_id": 94, "text": "U_\\lambda\\otimes U_\\mu = \\oplus_\\nu N_{\\lambda,\\mu,\\nu} U_\\nu" }, { "math_id": 95, "text": "n\\geq 12" }, { "math_id": 96, 
"text": "\n\\begin{align} {}\n [1]\\otimes [1] &= [2] + [11] + [] \n \\\\ {}\n [1]\\otimes [2] &= [21] + [3] + [1]\n \\\\ {}\n [1]\\otimes [11] &= [111] + [21] + [1] \n \\\\ {}\n [1]\\otimes [21] &= [31]+[22]+[211]+ [2] + [11] \n \\\\ {}\n [1] \\otimes [3] &= [4]+[31]+[2] \n \\\\ {}\n [2]\\otimes [2] &= [4]+[31]+[22]+[2]+[11]+[] \n \\\\ {}\n [2]\\otimes [11] &= [31]+[211] + [2]+[11] \n \\\\ {}\n [11]\\otimes [11] &= [1111] + [211] + [22] + [2] + [11] + []\n \\\\ {}\n [21]\\otimes [3] &=[321]+[411]+[42]+[51]+ [211]+[22]+2[31]+[4]+ [11]+[2]\n \\end{align}\n" }, { "math_id": 97, "text": "GL(n)" }, { "math_id": 98, "text": "O(n)" }, { "math_id": 99, "text": "\nV_\\nu^{GL(n)} = \\sum_{\\lambda,\\mu} c_{\\lambda,2\\mu}^\\nu U_\\lambda^{O(n)}\n" }, { "math_id": 100, "text": "2\\mu" }, { "math_id": 101, "text": "2|\\nu|,\\tilde{\\lambda}_1+\\tilde{\\lambda}_2\\leq n " }, { "math_id": 102, "text": "\nV_{\\lambda\\mu}^{GL(n)} = \\sum_{\\alpha,\\beta,\\gamma,\\delta} c_{\\alpha,2\\gamma}^\\lambda c_{\\beta,2\\delta}^\\mu c_{\\alpha,\\beta}^\\nu U_\\nu^{O(n)}\n" }, { "math_id": 103, "text": "\n\\dim W_\\lambda = \\prod_{i=1}^n \\frac{\\lambda_i+n-i+1}{n-i+1} \\prod_{1\\leq i<j\\leq n} \\frac{\\lambda_i-\\lambda_j+j-i}{j-i} \\cdot \\frac{\\lambda_i+\\lambda_j+2n-i-j+2}{2n-i-j+2}\n" }, { "math_id": 104, "text": "\n\\dim W_\\lambda = \\prod_{(i,j)\\in \\lambda,\\ i> j}\n \\frac{n+\\lambda_i+\\lambda_j-i-j+2}{h_\\lambda(i,j)} \n \\prod_{(i,j)\\in \\lambda,\\ i\\leq j}\n \\frac{n-\\tilde{\\lambda}_i-\\tilde{\\lambda}_j+i+j}{h_\\lambda(i,j)}\n" } ]
https://en.wikipedia.org/wiki?curid=66221186
662256
Von Neumann conjecture
Disproven mathematical theory concerning Banach-Tarski and amenable groups In mathematics, the von Neumann conjecture stated that a group "G" is non-amenable if and only if "G" contains a subgroup that is a free group on two generators. The conjecture was disproved in 1980. In 1929, during his work on the Banach–Tarski paradox, John von Neumann defined the concept of amenable groups and showed that no amenable group contains a free subgroup of rank 2. The suggestion that the converse might hold, that is, that every non-amenable group contains a free subgroup on two generators, was made by a number of different authors in the 1950s and 1960s. Although von Neumann's name is popularly attached to the conjecture, its first written appearance seems to be due to Mahlon Marsh Day in 1957. The Tits alternative is a fundamental theorem which, in particular, establishes the conjecture within the class of linear groups. The historically first potential counterexample is Thompson group "F". While its amenability is a wide-open problem, the general conjecture was shown to be false in 1980 by Alexander Ol'shanskii; he demonstrated that Tarski monster groups, constructed by him, which are easily seen not to have free subgroups of rank 2, are not amenable. Two years later, Sergei Adian showed that certain Burnside groups are also counterexamples. None of these counterexamples are finitely presented, and for some years it was considered possible that the conjecture held for finitely presented groups. However, in 2003, Alexander Ol'shanskii and Mark Sapir exhibited a collection of finitely presented groups which do not satisfy the conjecture. In 2013, Nicolas Monod found an easy counterexample to the conjecture. Given by piecewise projective homeomorphisms of the line, the group is remarkably simple to understand. Even though it is not amenable, it shares many known properties of amenable groups in a straightforward way. In 2013, Yash Lodha and Justin Tatch Moore isolated a finitely presented non-amenable subgroup of Monod's group. This provides the first torsion-free finitely presented counterexample, and admits a presentation with 3 generators and 9 relations. Lodha later showed that this group satisfies the property formula_0, which is a stronger finiteness property.
[ { "math_id": 0, "text": "F_{\\infty}" } ]
https://en.wikipedia.org/wiki?curid=662256
66228158
Numerical analytic continuation
In many-body physics, the problem of analytic continuation is that of numerically extracting the spectral density of a Green function given its values on the imaginary axis. It is a necessary post-processing step for calculating dynamical properties of physical systems from Quantum Monte Carlo simulations, which often compute Green function values only at imaginary times or Matsubara frequencies. Mathematically, the problem reduces to solving a Fredholm integral equation of the first kind with an ill-conditioned kernel. As a result, it is an ill-posed inverse problem with no unique solution, in which small noise on the input leads to large errors in the unregularized solution. There are different methods for solving this problem, including the maximum entropy method, the average spectrum method, and Padé approximation methods. Examples. A common analytic continuation problem is obtaining the spectral function formula_0 at real frequencies formula_1 from the Green function values formula_2 at Matsubara frequencies formula_3 by numerically inverting the integral equation formula_4 where formula_5 for fermionic systems or formula_6 for bosonic ones and formula_7 is the inverse temperature. This relation is an example of a Kramers–Kronig relation. The spectral function can also be related to the imaginary-time Green function formula_8 by applying the inverse Fourier transform to the above equation: formula_9 with formula_10. Evaluating the summation over Matsubara frequencies gives the desired relation formula_11 where the upper sign is for fermionic systems and the lower sign is for bosonic ones. Another example of analytic continuation is calculating the optical conductivity formula_12 from the current-current correlation function values formula_13 at Matsubara frequencies. The two are related as follows: formula_14 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
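The ill-conditioning described above is easy to exhibit numerically. The following minimal sketch, not taken from any of the cited methods, discretizes the fermionic imaginary-time kernel from the relation formula_11 on coarse grids, prints the condition number of the resulting matrix, and then shows how a naive, essentially unregularized least-squares inversion amplifies even a tiny amount of noise; regularized approaches such as the maximum entropy method exist precisely to control this step. All grid sizes and the test spectrum are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

beta = 10.0                                   # inverse temperature (illustrative)
tau = np.linspace(0.0, beta, 51)              # imaginary-time grid
omega = np.linspace(-6.0, 6.0, 121)           # real-frequency grid
domega = omega[1] - omega[0]

# Discretized fermionic kernel from the tau-domain relation above:
#   G(tau) = integral dw/(2 pi) * (-e^{-tau w}) / (1 + e^{-beta w}) * A(w)
kernel = -np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega)) * domega / (2 * np.pi)

print(f"condition number of the discretized kernel: {np.linalg.cond(kernel):.2e}")

# Forward problem: a smooth two-peak spectral function, then G(tau) with tiny added noise.
spectrum = np.exp(-(omega - 1.5) ** 2) + np.exp(-(omega + 1.5) ** 2)
g_tau = kernel @ spectrum
g_noisy = g_tau + 1e-6 * rng.standard_normal(g_tau.size)

# Naive least-squares inversion (no physical regularization) amplifies the noise enormously.
recovered, *_ = np.linalg.lstsq(kernel, g_noisy, rcond=None)
print(f"max |A| in the input spectrum:       {np.abs(spectrum).max():.2f}")
print(f"max |A| in the naive reconstruction: {np.abs(recovered).max():.2e}")
```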
[ { "math_id": 0, "text": "A(\\omega)" }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "\\mathcal{G}(i\\omega_n)" }, { "math_id": 3, "text": "\\omega_n" }, { "math_id": 4, "text": "\\mathcal{G}(i\\omega_n) = \\int_{-\\infty}^{\\infty} \\frac{d\\omega}{2\\pi} \\frac{1}{i\\omega_n - \\omega}\\; A(\\omega)" }, { "math_id": 5, "text": "\\omega_n = (2n+1) \\pi/\\beta" }, { "math_id": 6, "text": "\\omega_n = 2n \\pi/\\beta" }, { "math_id": 7, "text": "\\beta=1/ T" }, { "math_id": 8, "text": "\\mathcal{G}(\\tau)" }, { "math_id": 9, "text": "\\mathcal{G}(\\tau)\\ \\colon = \\frac{1}{\\beta}\\sum_{\\omega_n} e^{-i\\omega_n \\tau} \\mathcal{g}(i\\omega_n) = \\int_{-\\infty}^{\\infty} \\frac{d\\omega}{2\\pi} A(\\omega) \\frac{1}{\\beta}\\sum_{\\omega_n} \\frac{e^{-i\\omega_n \\tau} }{i\\omega_n - \\omega}" }, { "math_id": 10, "text": "\\tau \\in [0,\\beta]" }, { "math_id": 11, "text": "\\mathcal{G}(\\tau) = \\int_{-\\infty}^{\\infty} \\frac{d\\omega}{2\\pi} \\frac{-e^{-\\tau \\omega}}{1\\pm e^{-\\beta\\omega}} A(\\omega)" }, { "math_id": 12, "text": "\\sigma(\\omega)" }, { "math_id": 13, "text": "\\Pi(i\\omega_n)" }, { "math_id": 14, "text": "\\Pi(i\\omega_n) = \\int_{0}^{\\infty} \\frac{d\\omega}{\\pi} \\frac{2 \\omega^2}{\\omega_n^2 +\\omega^2}\\; A(\\omega)" } ]
https://en.wikipedia.org/wiki?curid=66228158
66228478
Cost distance analysis
Spatial analysis techniques for minimizing cost In spatial analysis and geographic information systems, cost distance analysis or cost path analysis is a method for determining one or more optimal routes of travel through unconstrained (two-dimensional) space. The optimal solution is that which minimizes the total cost of the route, based on a field of cost density (cost per linear unit) that varies over space due to local factors. It is thus based on the fundamental geographic principle of friction of distance. It is an optimization problem with multiple deterministic algorithm solutions, implemented in most GIS software. The various problems, algorithms, and tools of cost distance analysis operate over an unconstrained two-dimensional space, meaning that a path could be of any shape. Similar cost optimization problems can also arise in a constrained space, especially a one-dimensional linear network such as a road or telecommunications network. Although they are similar in principle, the problems in network space require very different (usually simpler) algorithms to solve, largely adopted from graph theory. The collection of GIS tools for solving these problems is called "network analysis". History. Humans seem to have an innate desire to travel with minimal effort and time. Historic, even ancient, roads show patterns similar to what modern computational algorithms would generate, traveling straight across flat spaces, but curving around mountains, canyons, and thick vegetation. However, it was not until the 20th century that geographers developed theories to explain this route optimization, and algorithms to reproduce it. In 1957, during the Quantitative revolution in Geography, with its propensity to adopt principles or mathematical formalisms from the "hard" sciences (known as social physics), William Warntz used refraction as an analogy for how minimizing travel cost will make transportation routes change direction at the boundary between two landscapes with very different friction of distance (e.g., emerging from a forest into a prairie). His principle of "parsimonious movement," changing direction to minimize cost, was widely accepted, but the refraction analogy and mathematics (Snell's law) were not, largely because it does not scale well to normally complex geographic situations. Warntz and others then adopted another analogy that proved much more successful in the common situation where travel cost varies continuously over space, by comparing it to terrain. They compared the cost rate (i.e., cost per unit distance, the inverse of velocity if the cost is time) to the slope of a terrain surface (i.e., elevation change per unit distance), both being mathematical derivatives of an accumulated function or field: total elevation above a vertical datum (sea level) in the case of terrain. Integrating the cost rate field from a given starting point would create an analogous surface of total accumulated cost of travel from that point. In the same way that a stream follows the path of least resistance downhill, the streamline on the cost accumulation surface from any point "down" to the source will be the minimum-cost path. Additional lines of research in the 1960s further developed the nature of the cost rate field as a manifestation of the concept of friction of distance, studying how it was affected by various geographic features. At the time, this solution was only theoretical, lacking the data and computing power for the continuous solution. 
Raster GIS provided the first feasible platform for implementing the theoretical solution by converting the continuous integration into a discrete summation procedure. Dana Tomlin implemented cost distance analysis in his Map Analysis Package by 1986, and Ronald Eastman added it to IDRISI by 1989, with a more efficient "pushbroom" cost accumulation algorithm. Douglas (1994) further refined the accumulation algorithm, which is basically what is implemented in most current GIS software. Cost raster. The primary data set used in cost distance analysis is the "cost raster", sometimes called the cost-of-passage surface, the friction image, the cost-rate field, or cost surface. In most implementations, this is a raster grid, in which the value of each cell represents the cost (i.e., expended resources, such as time, money, or energy) of a route crossing the cell in a horizontal or vertical direction. It is thus a discretization of a field of cost rate (cost per linear unit), a spatially intensive property. This cost is a manifestation of the principle of friction of distance. A number of different types of cost may be relevant in a given routing problem: Some of these costs are easily quantifiable and measurable, such as transit time, fuel consumption, and construction costs, thus naturally lending themselves to computational solutions. That said, there may be significant uncertainty in predicting the cost prior to implementing the route. Other costs are much more difficult to measure due to their qualitative or subjective nature, such as political protest or ecological impact; these typically require operationalization through the creation of a scale. In many situations, multiple types of cost may be simultaneously relevant, and the total cost is a combination of them. Because different costs are expressed in different units (or, in the case of scales, no units at all), they usually cannot be directly summed, but must be combined by creating an index. A common type of index is created by scaling each factor to a consistent range (say, [0,1]), then combining them using weighted linear combination. An important part of the creation of an index model like this is Calibration (statistics), adjusting the parameters of the formula(s) to make the modeled relative cost match real-world costs, using methods such as the Analytic hierarchy process. The index model formula is typically implemented in a raster GIS using map algebra tools from raster grids representing each cost factor, resulting in a single cost raster grid. Directional cost. One limitation of the traditional method is that the cost field is "isotropic" or omni-directional: the cost at a given location does not depend on the direction of traversal. This is appropriate in many situations, but not others. For example, if one is flying in a windy location, an airplane flying in the direction of the wind incurs a much lower cost than an airplane flying against it. Some research has been done on extending cost distance analysis algorithms to incorporate directional cost, but it is not yet widely implemented in GIS software. IDRISI has some support for anisotropy. Least-cost-path algorithm. The most common cost distance task is to determine the single path through the space between a given source location and a destination location that has the least total accumulated cost. 
The typical solution algorithm is a discrete raster implementation of the cost integration strategy of Warntz and Lindgren, which is a deterministic optimization closely related to Dijkstra's shortest-path algorithm (a minimal sketch of this accumulation step appears below). Corridor analysis. A slightly different version of the least-cost path problem, which could be considered a fuzzy version of it, is to look for "corridors" more than one cell in width, thus providing some flexibility in applying the results. Corridors are commonly used in transportation planning and in wildlife management. The solution to this problem is to compute, for every cell in the study space, the total accumulated cost of the optimal path between a given source and destination that passes through that cell. Thus, every cell in the optimal path derived above would have the same minimum value. Cells near this path would be reached by paths deviating only slightly from the optimal path, so they would have relatively low cost values, collectively forming a corridor with fuzzy edges as more distant cells have increasing cost values. This corridor field is derived by generating two cost accumulation grids: one using the source as described above, and a second generated by repeating the algorithm with the destination as the source. Then these two grids are added using map algebra. This works because for each cell, the optimal source-destination path passing through that cell is the optimal path from that cell to the source, added to the optimal path from that cell to the destination. This can be accomplished using the cost accumulation tool above, along with a map algebra tool, although ArcGIS provides a Corridor tool that automates the process. Cost-based allocation. Another use of the cost accumulation algorithm is to partition space among multiple sources, with each cell assigned to the source it can reach with the lowest cost, creating a series of regions in which each source is the "nearest". In the terrain analogy, these would correspond to watersheds (one could thus call these "cost-sheds," but this term is not in common usage). They are directly related to a Voronoi diagram, which is essentially an allocation over a space with constant cost. They are also conceptually (if not computationally) similar to location-allocation tools for network analysis. A cost-based allocation can be created using two methods. The first is to use a modified version of the cost accumulation algorithm, which substitutes an allocation grid for the backlink grid, in which each cell is assigned the same source identifier as its lowest-cost neighbor, causing the domain of each source to gradually grow until they meet each other. This is the approach taken in ArcGIS Pro. The second solution is to first run the basic accumulation algorithm, then use the backlink grid to determine the source into which each cell "flows." GRASS GIS uses this approach; in fact, the same tool that computes watersheds from terrain is used. Implementations. Cost distance tools are available in most raster GIS software: Applications. Cost distance analysis has found applications in a wide range of geography-related disciplines, including archaeology and landscape ecology. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
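A minimal sketch of the accumulation and corridor steps described above, written in Python purely for illustration (it is not the code of any particular GIS package): two made-up factor grids are combined into a cost raster by weighted linear combination, a Dijkstra-style accumulation is run from a source cell, and the corridor surface is formed by adding the accumulation grids computed from the source and from the destination. Moves between neighbouring cells are charged the mean of the two cell cost rates times the step length (1 orthogonally, formula_0 diagonally), a common convention in raster implementations.

```python
import heapq
import math
import numpy as np

def accumulate_cost(cost, source):
    """Dijkstra-style cost accumulation over a raster of per-cell cost rates."""
    rows, cols = cost.shape
    acc = np.full(cost.shape, np.inf)
    acc[source] = 0.0
    heap = [(0.0, source)]
    steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > acc[r, c]:
            continue                                    # stale queue entry
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                length = math.sqrt(2.0) if dr and dc else 1.0
                nd = d + 0.5 * (cost[r, c] + cost[nr, nc]) * length
                if nd < acc[nr, nc]:
                    acc[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return acc

# Toy cost raster built by weighted linear combination of two normalized factors,
# e.g. 70% slope and 30% land-cover friction (entirely made-up values).
rng = np.random.default_rng(1)
slope, cover = rng.random((40, 40)), rng.random((40, 40))
cost_raster = 0.7 * slope + 0.3 * cover + 0.1      # small offset keeps all costs positive

src, dst = (2, 2), (37, 35)
acc_from_src = accumulate_cost(cost_raster, src)
acc_from_dst = accumulate_cost(cost_raster, dst)
corridor = acc_from_src + acc_from_dst             # corridor surface described above
print(f"least accumulated cost from source to destination: {acc_from_src[dst]:.2f}")
print(f"minimum of the corridor surface (same value):      {corridor.min():.2f}")
```

The two printed numbers coincide, reflecting the point made above that every cell on the optimal path carries the same minimum value of the corridor surface.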
[ { "math_id": 0, "text": "\\sqrt{2}" } ]
https://en.wikipedia.org/wiki?curid=66228478
66228643
Elementary Number Theory, Group Theory and Ramanujan Graphs
Elementary Number Theory, Group Theory and Ramanujan Graphs is a book in mathematics whose goal is to make the construction of Ramanujan graphs accessible to undergraduate-level mathematics students. In order to do so, it covers several other significant topics in graph theory, number theory, and group theory. It was written by Giuliana Davidoff, Peter Sarnak, and Alain Valette, and published in 2003 by the Cambridge University Press, as volume 55 of the London Mathematical Society Student Texts book series. Background. In graph theory, expander graphs are undirected graphs with high connectivity: every small-enough subset of vertices has many edges connecting it to the remaining parts of the graph. Sparse expander graphs have many important applications in computer science, including the development of error correcting codes, the design of sorting networks, and the derandomization of randomized algorithms. For these applications, the graph must be constructed explicitly, rather than merely having its existence proven. One way to show that a graph is an expander is to study the eigenvalues of its adjacency matrix. For an formula_0-regular graph, these are real numbers in the interval formula_1, and the largest eigenvalue (corresponding to the all-1s eigenvector) is exactly formula_0. The spectral expansion of the graph is defined from the difference between the largest and second-largest eigenvalues, the "spectral gap", which controls how quickly a random walk on the graph settles to its stable distribution; by the Alon–Boppana bound, the second-largest eigenvalue of a large formula_0-regular graph can be no smaller, up to a vanishing error term, than formula_2. The Ramanujan graphs are defined as the graphs that are optimal from the point of view of spectral expansion: they are formula_0-regular graphs whose eigenvalues, apart from formula_0 itself (and its negative, in the bipartite case), are at most formula_2 in absolute value. Although Ramanujan graphs with high degree, such as the complete graphs, are easy to construct, expander graphs of low degree are needed for the applications of these graphs. Several constructions of low-degree Ramanujan graphs are now known, the first of which were given by Lubotzky, Phillips, and Sarnak and, independently, by Margulis. Reviewer Jürgen Elstrod writes that "while the description of these graphs is elementary, the proof that they have the desired properties is not". "Elementary Number Theory, Group Theory and Ramanujan Graphs" aims to make as much of this theory accessible at an elementary level as possible. Topics. Its authors have divided "Elementary Number Theory, Group Theory and Ramanujan Graphs" into four chapters. The first of these provides background in graph theory, including material on the girth of graphs (the length of the shortest cycle), on graph coloring, and on the use of the probabilistic method to prove the existence of graphs for which both the girth and the number of colors needed are large. This provides additional motivation for the construction of Ramanujan graphs, as the ones constructed in the book provide explicit examples of the same phenomenon. This chapter also provides the expected material on spectral graph theory, needed for the definition of Ramanujan graphs. Chapter 2, on number theory, includes the sum of two squares theorem characterizing the positive integers that can be represented as sums of two squares of integers (closely connected to the norms of Gaussian integers), Lagrange's four-square theorem according to which all positive integers can be represented as sums of four squares (proved using the norms of Hurwitz quaternions), and quadratic reciprocity. 
Chapter 3 concerns group theory, and in particular the theory of the projective special linear groups formula_3 and projective linear groups formula_4 over the finite fields whose order is a prime number formula_5, and the representation theory of finite groups. The final chapter constructs the Ramanujan graph formula_6 for two prime numbers formula_7 and formula_5 as a Cayley graph of the group formula_3 or formula_4 (depending on quadratic reciprocity) with generators defined by taking modulo formula_5 a set of formula_8 quaternions coming from representations of formula_7 as a sum of four squares. These graphs are automatically formula_9-regular. The chapter provides formulas for their numbers of vertices, and estimates of their girth. While not fully proving that these graphs are Ramanujan graphs, the chapter proves that they are spectral expanders, and describes how the claim that they are Ramanujan graphs follows from Pierre Deligne's proof of the Ramanujan conjecture (the connection to Ramanujan from which the name of these graphs was derived). Audience and reception. This book is intended for advanced undergraduates who have already seen some abstract algebra and real analysis. Reviewer Thomas Shemanske suggests using it as the basis of a senior seminar, as a quick path to many important topics and an interesting example of how these seemingly-separate topics join forces in this application. On the other hand, Thomas Pfaff thinks it would be difficult going even for most senior-level undergraduates, but could be a good choice for independent study or an elective graduate course. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
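The spectral criterion described in the Background section can be checked numerically for small graphs. The following is a minimal illustrative sketch, assuming NumPy and NetworkX are available; the helper and the example graphs are illustrative choices, not the constructions from the book.

# Illustrative check of the condition from the Background section: an r-regular
# graph is Ramanujan when every adjacency eigenvalue other than the trivial one
# (r itself, and -r for bipartite graphs) has absolute value at most 2*sqrt(r-1).
import numpy as np
import networkx as nx

def is_ramanujan(G, r):
    A = nx.adjacency_matrix(G).toarray().astype(float)
    eig = np.sort(np.linalg.eigvalsh(A))[::-1]   # real, sorted descending
    bound = 2.0 * np.sqrt(r - 1)
    # drop the trivial eigenvalue r; keep -r only if the graph is not bipartite
    nontrivial = [x for x in eig[1:] if not (nx.is_bipartite(G) and np.isclose(x, -r))]
    return all(abs(x) <= bound + 1e-9 for x in nontrivial)

# The complete graph K_6 is 5-regular and Ramanujan (eigenvalues 5 and -1).
print(is_ramanujan(nx.complete_graph(6), 5))                    # True
# A random 3-regular graph is usually a good expander but need not be Ramanujan.
print(is_ramanujan(nx.random_regular_graph(3, 50, seed=1), 3))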
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "[-r,r]" }, { "math_id": 2, "text": "2\\sqrt{r-1}" }, { "math_id": 3, "text": "PSL(2,\\mathbb{F}_q)" }, { "math_id": 4, "text": "PGL(2,\\mathbb{F}_q)" }, { "math_id": 5, "text": "q" }, { "math_id": 6, "text": "X^{p,q}" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "p+1" }, { "math_id": 9, "text": "(p+1)" } ]
https://en.wikipedia.org/wiki?curid=66228643
662349
Metabolic theory of ecology
Theory concerning metabolism and observed patterns in ecology The metabolic theory of ecology (MTE) is the ecological component of the more general Metabolic Scaling Theory and Kleiber's law. It posits that the metabolic rate of organisms is the fundamental biological rate that governs most observed patterns in ecology. MTE is part of a larger set of theory known as metabolic scaling theory that attempts to provide a unified theory for the importance of metabolism in driving pattern and process in biology from the level of cells all the way to the biosphere. MTE is based on an interpretation of the relationships between body size, body temperature, and metabolic rate across all organisms. Small-bodied organisms tend to have higher mass-specific metabolic rates than larger-bodied organisms. Furthermore, organisms that operate at warm temperatures through endothermy or by living in warm environments tend towards higher metabolic rates than organisms that operate at colder temperatures. This pattern is consistent from the unicellular level up to the level of the largest animals and plants on the planet. In MTE, this relationship is considered to be the primary constraint that influences biological processes (via their rates and times) at all levels of organization (from individual up to ecosystem level). MTE is a macroecological theory that aims to be universal in scope and application. Fundamental concepts in MTE. Metabolism. Metabolic pathways consist of complex networks, which are responsible for the processing of both energy and material. The metabolic rate of a heterotroph is defined as the rate of respiration in which energy is obtained by oxidation of a carbon compound. The rate of photosynthesis, on the other hand, indicates the metabolic rate of an autotroph. According to MTE, both body size and temperature affect the metabolic rate of an organism. Metabolic rate scales as the 3/4 power of body size, and its relationship with temperature is described by the Van’t Hoff-Arrhenius equation over the range of 0 to 40 °C. Stoichiometry. From the ecological perspective, stoichiometry is concerned with the proportion of elements in both living organisms and their environment. In order to survive and maintain metabolism, an organism must be able to obtain crucial elements and excrete waste products. As a result, the elemental composition of an organism would be different from the exterior environment. Through metabolism, body size can affect stoichiometry. For example, small organisms tend to store most of their phosphorus in rRNA due to their high metabolic rate, whereas large organisms mostly invest this element inside the skeletal structure. Thus, the concentration of elements can to some extent limit the rate of biological processes. Inside an ecosystem, the rate of flux and turnover of elements by inhabitants, combined with the influence of abiotic factors, determines the concentration of elements. Theoretical background. Metabolic rate scales with the mass of an organism of a given species according to Kleiber's law where "B" is whole organism metabolic rate (in watts or other unit of power), "M" is organism mass (in kg), and "B"o is a mass-independent normalization constant (given in units of power divided by units of mass raised to the 3/4 power; in this case, watts per kilogram to the 3/4 power): formula_0 At increased temperatures, chemical reactions proceed faster.
This relationship is described by the Boltzmann factor, where "E" is activation energy in electronvolts or joules, "T" is absolute temperature in kelvins, and "k" is the Boltzmann constant in eV/K or J/K: formula_1 While "B"o in the previous equation is mass-independent, it is not explicitly independent of temperature. To explain the relationship between body mass and temperature, building on earlier work showing that the effects of both body mass and temperature could be combined multiplicatively in a single equation, the two equations above can be combined to produce the primary equation of the MTE, where "b"o is a normalization constant that is independent of body size or temperature: formula_2 According to this relationship, metabolic rate is a function of an organism's body mass and body temperature. By this equation, large organisms have higher metabolic rates (in watts) than small organisms, and organisms at high body temperatures have higher metabolic rates than those that exist at low body temperatures. However, specific metabolic rate (SMR, in watts/kg) is given by formula_3 Hence the SMR for large organisms is lower than for small organisms. Past debate over mechanisms and the allometric exponent. Researchers have debated two main aspects of this theory: the pattern and the mechanism. Past debates have focused on the question of whether metabolic rate scales to the power of &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄4 or &lt;templatestyles src="Fraction/styles.css" /&gt;2⁄3, or whether either of these can even be considered a universal exponent. In addition to debates concerning the exponent, some researchers also disagree about the underlying mechanisms generating the scaling exponent. Various authors have proposed at least eight different types of mechanisms that predict an allometric scaling exponent of either &lt;templatestyles src="Fraction/styles.css" /&gt;2⁄3 or &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄4. The majority view is that while the &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄4 exponent is indeed the mean observed exponent within and across taxa, there is intra- and interspecific variability in the exponent that can include shallower exponents such as &lt;templatestyles src="Fraction/styles.css" /&gt;2⁄3. Past debates on the exact value of the exponent are settled in part because the observed variability in the metabolic scaling exponent is consistent with a 'relaxed' version of metabolic scaling theory where additional selective pressures lead to a constrained set of variation around the predicted optimal &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄4 exponent. Much of the past debate has focused on two particular types of mechanisms. One of these assumes energy or resource transport across the "external" surface area of three-dimensional organisms is the key factor driving the relationship between metabolic rate and body size. The surface area in question may be skin, lungs, intestines, or, in the case of unicellular organisms, cell membranes. In general, the surface area (SA) of a three dimensional object scales with its volume (V) as "SA = cV"&lt;templatestyles src="Fraction/styles.css" /&gt;2⁄3, where c is a proportionality constant. The Dynamic Energy Budget model predicts exponents that vary between &lt;templatestyles src="Fraction/styles.css" /&gt;2⁄3 and 1, depending on the organism's developmental stage, basic body plan and resource density. DEB is an alternative to metabolic scaling theory, developed before the MTE.
DEB also provides a basis for population, community and ecosystem level processes to be studied based on energetics of the constituent organisms. In this theory, the biomass of the organism is separated into structure (what is built during growth) and reserve (a pool of polymers generated by assimilation). DEB is based on the first principles dictated by the kinetics and thermodynamics of energy and material fluxes, has a similar number of parameters per process as MTE, and the parameters have been estimated for over 3000 animal species. While some of these alternative models make several testable predictions, others are less comprehensive, and of these proposed models only DEB can make as many predictions with a minimal set of assumptions as metabolic scaling theory. In contrast, the arguments for a &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄4 scaling factor are based on resource transport network models, where the limiting resources are distributed via some optimized network to all resource consuming cells or organelles. These models are based on the assumption that metabolism is proportional to the rate at which an organism's distribution networks (such as circulatory systems in animals or xylem and phloem in plants) deliver nutrients and energy to body tissues. Larger organisms are necessarily less efficient because more resource is in transport at any one time than in smaller organisms: the size of the organism and the length of the network impose an inefficiency due to size. It therefore takes somewhat longer for large organisms to distribute nutrients throughout the body and thus they have a slower mass-specific metabolic rate. An organism that is twice as large cannot metabolize twice the energy—it simply has to run more slowly because more energy and resources are wasted being in transport, rather than being processed. Nonetheless, natural selection appears to have minimized this inefficiency by favoring resource transport networks that maximize rate of delivery of resources to the end points such as cells and organelles. This selection to maximize metabolic rate and energy dissipation results in the allometric exponent that tends to "D"/("D"+1), where "D" is the primary dimension of the system. A three dimensional system, such as an individual, tends to scale to the 3/4 power, whereas a two dimensional network, such as a river network in a landscape, tends to scale to the 2/3 power. Despite past debates over the value of the exponent, the implications of metabolic scaling theory and of its extensions to ecology (the metabolic theory of ecology) might remain true regardless of its precise numerical value. Implications of the theory. The metabolic theory of ecology's main implication is that metabolic rate, and the influence of body size and temperature on metabolic rate, provide the fundamental constraints by which ecological processes are governed. If this holds true from the level of the individual up to ecosystem level processes, then life history attributes, population dynamics, and ecosystem processes could be explained by the relationship between metabolic rate, body size, and body temperature. While different underlying mechanisms make somewhat different predictions, the following provides an example of some of the implications of the metabolism of individuals. Organism level. Small animals tend to grow fast, breed early, and die young. According to MTE, these patterns in life history traits are constrained by metabolism.
An organism's metabolic rate determines its rate of food consumption, which in turn determines its rate of growth. This increased growth rate produces trade-offs that accelerate senescence. For example, metabolic processes produce free radicals as a by-product of energy production. These in turn cause damage at the cellular level, which promotes senescence and ultimately death. Selection favors organisms which best propagate given these constraints. As a result, smaller, shorter lived organisms tend to reproduce earlier in their life histories. Population and community level. MTE has profound implications for the interpretation of population growth and community diversity. Classically, species are thought of as being either "r" selected (where populations tend to grow exponentially, and are ultimately limited by extrinsic factors) or "K" selected (where population size is limited by density-dependence and carrying capacity). MTE explains this diversity of reproductive strategies as a consequence of the metabolic constraints of organisms. Small organisms and organisms that exist at high body temperatures tend to be "r" selected, which fits with the prediction that "r" selection is a consequence of metabolic rate. Conversely, larger and cooler bodied animals tend to be "K" selected. The relationship between body size and rate of population growth has been demonstrated empirically, and in fact has been shown to scale to "M"−1/4 across taxonomic groups. The optimal population growth rate for a species is therefore thought to be determined by the allometric constraints outlined by the MTE, rather than strictly as a life history trait that is selected for based on environmental conditions. Regarding density, MTE predicts the carrying capacity of populations to scale as "M"−3/4, and to decrease exponentially with increasing temperature. The fact that larger organisms reach carrying capacity sooner than smaller ones is intuitive; however, temperature can also decrease carrying capacity because, in warmer environments, the higher metabolic rate of organisms demands a higher rate of supply. Empirical evidence in terrestrial plants also suggests that density scales as the −3/4 power of body size. Observed patterns of diversity can be similarly explained by MTE. It has long been observed that there are more small species than large species. In addition, there are more species in the tropics than at higher latitudes. Classically, the latitudinal gradient in species diversity has been explained by factors such as higher productivity or reduced seasonality. In contrast, MTE explains this pattern as being driven by the kinetic constraints imposed by temperature on metabolism. The rate of molecular evolution scales with metabolic rate, such that organisms with higher metabolic rates show a higher rate of change at the molecular level. If a higher rate of molecular evolution causes increased speciation rates, then adaptation and ultimately speciation may occur more quickly in warm environments and in small bodied species, ultimately explaining observed patterns of diversity across body size and latitude. MTE's ability to explain patterns of diversity remains controversial. For example, researchers analyzed patterns of diversity of New World coral snakes to see whether the geographical distribution of species fit within the predictions of MTE (i.e. more species in warmer areas).
They found that the observed pattern of diversity could not be explained by temperature alone, and that other spatial factors such as primary productivity, topographic heterogeneity, and habitat factors better predicted the observed pattern. Extensions of metabolic theory to diversity that include eco-evolutionary theory show that an elaborated metabolic theory can account for differences in diversity gradients by including feedbacks between ecological interactions (size-dependent competition and predation) and evolutionary rates (speciation and extinction). Ecosystem processes. At the ecosystem level, MTE explains the relationship between temperature and production of total biomass. The average production to biomass ratio of organisms is higher in small organisms than in large ones. This relationship is further regulated by temperature, and the rate of production increases with temperature. As production consistently scales with body mass, MTE provides a framework to assess the relative importance of organismal size, temperature, functional traits, soil and climate on variation in rates of production within and across ecosystems. Metabolic theory shows that variation in ecosystem production is characterized by a common scaling relationship, suggesting that global change models can incorporate the mechanisms governing this relationship to improve predictions of future ecosystem function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
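As a purely numerical illustration of the primary MTE equation given above, formula_2, the following sketch evaluates the whole-organism rate and the specific metabolic rate for two body masses at the same body temperature. The normalization constant and the activation energy value (0.65 eV, a figure often quoted in the metabolic scaling literature) are assumptions for illustration, not values taken from this article.

# Numerical sketch of B = b0 * M**(3/4) * exp(-E/(k*T)) and SMR = B/M.
# b0 and E below are illustrative assumptions, not fitted values from the article.
import math

k = 8.617e-5     # Boltzmann constant in eV/K
E = 0.65         # assumed activation energy in eV
b0 = 1.0e8       # arbitrary normalization constant for illustration

def metabolic_rate(mass_kg, temp_K):
    return b0 * mass_kg ** 0.75 * math.exp(-E / (k * temp_K))

def specific_metabolic_rate(mass_kg, temp_K):
    return metabolic_rate(mass_kg, temp_K) / mass_kg

# A 10 g organism versus a 100 kg organism at the same body temperature (300 K):
for m in (0.01, 100.0):
    print(m, metabolic_rate(m, 300.0), specific_metabolic_rate(m, 300.0))
# The larger organism has the higher whole-organism rate B, but the smaller
# organism has the higher mass-specific rate B/M, as stated in the text.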
[ { "math_id": 0, "text": "B = B_oM ^ {3/4}\\," }, { "math_id": 1, "text": "e^{-\\frac{E}{k\\,T}}" }, { "math_id": 2, "text": "B = b_oM^{3/4}e^{-\\frac{E}{k\\,T}}" }, { "math_id": 3, "text": "SMR = (B/M) = b_oM^{-1/4}e^{-\\frac{E}{k\\,T}}" } ]
https://en.wikipedia.org/wiki?curid=662349
66241271
Ergodicity economics
Theory that attempts to blend economics and ergodic theory Ergodicity economics is a research programme aimed at reworking the theoretical foundations of economics in the context of ergodic theory. The project's main goal is to understand how traditional economic theory, framed in terms of the "expectation values" of ensembles, changes when replacing expectation value averages with time averages. In particular, the programme is interested in understanding the effect of non-ergodic processes in economics, that is, processes where the expectation value of an observable does not equal its time average. Background. Ergodicity economics questions whether expected value is a useful indicator of performance over time. In doing so it builds on existing critiques of the use of expected value in the modeling of economic decisions. Such critiques started soon after the introduction of expected value in 1654. For instance, expected-utility theory was proposed in 1738 by Daniel Bernoulli as a way of modeling behavior which is inconsistent with expected-value maximization. In 1956, John Kelly devised the Kelly criterion by optimizing the use of available information, and Leo Breiman later noted that this is equivalent to optimizing time-average performance, as opposed to expected value. The ergodicity economics research programme originates in two 2011 papers by Ole Peters, a theoretical physicist and current external professor at the Santa Fe Institute. The first studied the problem of optimal leverage in finance and how this may be achieved by considering the non-ergodic properties of geometric Brownian motion. The second paper applied principles of non-ergodicity to propose a possible solution for the St. Petersburg paradox. More recent work has suggested possible solutions for the equity premium puzzle, the insurance puzzle, gamble-selection, probability weighting, and has provided insights into the dynamics of income inequality. Relation to ergodic theory. Ergodic theory is a branch of mathematics which investigates the relationship between time averages and expected values (or, equivalently, ensemble averages) in dynamical systems and [[stochastic process|stochastic processes]]. Ergodicity economics inherits from this branch the probing of this relationship in [[stochastic processes]] used as economic models. Early economic theory was developed at a time when the [[expected value]] had been invented but its relation to the time average was unclear. No clear distinction was made between the two mathematical objects, which amounts to an implicit assumption of ergodicity. Ergodicity economics explores what aspects of economics can be informed by avoiding this implicit assumption. Critique of expected value. Mean values and expected values are used extensively in economic theory, most commonly as a summary statistic. One common critique of this practice is the sensitivity of mean values to outliers. Ergodicity economics focuses on a different critique and emphasizes the physical meaning of expected values as [[Ensemble average (statistical mechanics)|averages]] across a [[Statistical ensemble (mathematical physics)|statistical ensemble]] of parallel systems. It insists on a physical justification when expected values are used. In essence, at least one of two conditions must hold: either the observable in question is ergodic, so that its expected value coincides with its time average, or the expected value corresponds to an average over a real ensemble of many parallel systems that is relevant to the decision-maker. In ergodicity economics, expected values are replaced, where necessary, by averages that account for the ergodicity or non-ergodicity of the observables involved. Decision theory.
Ergodicity economics emphasizes what happens to an agent's wealth formula_0 over time formula_1. From this follows a possible decision theory where agents maximize the time-average growth rate of wealth. The functional form of the growth rate, formula_2, depends on the wealth process formula_0. In general, a growth rate takes the form formula_3, where the function formula_4, linearizes formula_0, such that growth rates evaluated at different times can be meaningfully compared. Growth processes formula_0 generally violate ergodicity, but their growth rates may nonetheless be ergodic. In this case, the time-average growth rate, formula_5 can be computed as the rate of change of the expected value of formula_4, i.e. formula_6. (1) In this context, formula_4 is called the ergodicity transformation. Relation to classic decision theory. An influential class of models for economic decision-making is known as [[Expected_utility_hypothesis|expected utility theory]]. The following specific model can be mapped to the growth-rate optimization highlighted by ergodicity economics. Here, agents evaluate monetary wealth formula_7 according to a utility function formula_8, and it is postulated that decisions maximize the expected value of the change in utility, formula_9. (2) This model was proposed as an improvement of expected-value maximization, where agents maximize formula_10. A non-linear utility function allows the encoding of behavioral patterns not represented in expected-value maximization. Specifically, expected-utility maximizing agents can have idiosyncratic risk preferences. An agent specified by a convex utility function is more risk-seeking than an expected wealth maximizer, and a concave utility function implies greater risk aversion. Comparing (2) to (1), we can identify the utility function formula_8 with the linearization formula_4, and make the two expressions identical by dividing (2) by formula_11. Division by formula_11 simply implements a preference for faster utility growth in the expected-utility-theory decision protocol. This mapping shows that the two models will yield identical predictions if the utility function applied under expected-utility theory is the same as the ergodicity transformation, needed to compute an ergodic growth rate. Ergodicity economics thus emphasizes the dynamic circumstances under which a decision is made, whereas expected-utility theory emphasizes idiosyncratic preferences to explain behavior. Different ergodicity transformations indicate different types of wealth dynamics, whereas different utility functions indicate different personal preferences. The mapping highlights the relationship between the two approaches, showing that differences in personal preferences can arise purely as a result of different dynamic contexts of decision makers. Continuous example: Geometric Brownian motion. A simple example for an agent's wealth process, formula_0, is [[geometric Brownian motion]] (GBM), commonly used in [[mathematical finance]] and other fields. formula_0 is said to follow GBM if it satisfies the [[stochastic differential equation]] formula_12, (3) where formula_13 is the increment in a [[Wiener process]], and formula_14 ('drift') and formula_15 ('volatility') are constants. Solving (3) gives formula_16. (4) In this case the ergodicity transformation is formula_17, as is easily verified: formula_18 grows linearly in time. Following the recipe laid out above, this leads to the time-average growth rate formula_19. 
(5) It follows that for geometric Brownian motion, maximizing the rate of change in the [[Utility|logarithmic utility]] function, formula_20, is equivalent to maximizing the time-average growth rate of wealth, i.e. what happens to the agent's wealth over time. Stochastic processes other than (3) possess different ergodicity transformations, where growth-optimal agents maximize the expected value of utility functions other than the logarithm. Trivially, replacing (3) with additive dynamics implies a linear ergodicity transformation, and many similar pairs of dynamics and transformations can be derived. Discrete example: multiplicative coin toss. A popular illustration of non-ergodicity in economic processes is a repeated multiplicative coin toss, an instance of the binomial multiplicative process. It demonstrates how an expected-value analysis can indicate that a gamble is favorable although the gambler is guaranteed to lose over time. Definition. In this thought experiment, a person participates in a simple game where they toss a fair coin. If the coin lands heads, the person gains 50% on their current wealth; if it lands tails, the person loses 40%. The game shows the difference between the expected value of an investment, or bet, and the time-average or real-world outcome of repeatedly engaging in that bet over time. Calculation of Expected Value. Denoting current wealth by formula_21, and the time when the payout is received by formula_22, we find that wealth after one round is given by the random variable formula_23, which takes the values formula_24 (for heads) and formula_25 (for tails), each with probability formula_26. The expected value of the gambler's wealth after one round is therefore formula_27 By induction, after formula_28 rounds expected wealth is formula_29, increasing exponentially at 5% per round in the game. This calculation shows that the game is favorable in expectation—its expected value increases with each round played. Calculation of Time-Average. The time-average performance indicates what happens to the wealth of a single gambler who plays repeatedly, reinvesting their entire wealth every round. Due to compounding, after formula_28 rounds the wealth will be formula_30 where we have written formula_31 to denote the realized random factor by which wealth is multiplied in the formula_32 round of the game (either formula_33, for heads, or formula_34, for tails). Averaged over time, wealth has grown per round by a factor formula_35 Introducing the notation formula_36 for the number of heads in a sequence of coin tosses, we re-write this as formula_37 For any finite formula_28, the time-average per-round growth factor, formula_38, is a random variable. The long-time limit, found by letting the number of rounds diverge, formula_39, provides a characteristic scalar which can be compared with the per-round growth factor of the expected value. The proportion of heads tossed then converges to the probability of heads (namely 1/2), and the time-average growth factor is formula_40 Discussion. The comparison between expected value and time-average performance illustrates an effect of broken ergodicity: over time, with probability one, wealth "decreases" by about 5% per round, in contrast to the increase by 5% per round of the expected value. Coverage in the wider media.
In December 2020, [[Bloomberg news]] published an article titled "Everything We’ve Learned About Modern Economic Theory Is Wrong" discussing the implications of ergodicity in economics following the publication of a review of the subject in [[Nature Physics]]. [[Morningstar, Inc.|Morningstar]] covered the story to discuss the investment case for [[stock]] diversification. In the book "[[Skin in the Game (book)|Skin in the Game]]", [[Nassim Nicholas Taleb]] suggests that the ergodicity problem requires a rethinking of how economists use [[probability|probabilities]]. A summary of the arguments was published by Taleb in a [[Medium (website)|Medium]] article in August 2017. In the book "[[The End of Theory (book)|The End of Theory]]", [[Richard Bookstaber]] lists non-ergodicity as one of four characteristics of our economy that are part of financial crises, that conventional economics fails to adequately account for, and that any model of such crises needs to take adequate account of. The other three are: computational irreducibility, emergent phenomena, and radical uncertainty. In the book "[[The Ergodic Investor and Entrepreneur (book)|The Ergodic Investor and Entrepreneur]]", Boyd and Reardon tackle the practical implications of non-ergodic capital growth for investors and entrepreneurs, especially for those with a sustainability, circular economy, net positive, or regenerative focus. James White and [[Victor Haghani]] discuss the field of ergodicity economics in their book "[[The Missing Billionaires]]". Criticisms. It has been claimed that expected utility theory implicitly assumes ergodicity in the sense that it optimizes an expected value which is only relevant to the long-term benefit of the decision-maker if the relevant observable is ergodic. Doctor, Wakker, and Tang argue that this is wrong because such assumptions are “outside the scope of expected utility theory as a static theory”. They further argue that ergodicity economics overemphasizes the importance of long-term growth as “the primary factor that explains economic phenomena,” and downplays the importance of individual preferences. They also caution against optimizing long-term growth inappropriately. An example is given of a short-term decision between A) a great loss incurred with certainty and B) a gain enjoyed with almost-certainty paired with an even greater loss at negligible probability. In the example the long-term growth rate favors the certain loss and seems an inappropriate criterion for the short-term decision horizon. Finally, an experiment by Meder and colleagues claims to find that individual risk preferences change with dynamical conditions in ways predicted by ergodicity economics. Doctor, Wakker, and Tang criticize the experiment for being confounded by differences in ambiguity and the complexity of probability calculations. Further, they criticize the analysis for applying static expected utility theory models to a context where dynamic versions are more appropriate. In support of this, Goldstein claims to show that multi-period EUT predicts a similar change in risk preferences as observed in the experiment. References. &lt;templatestyles src="Reflist/styles.css" /&gt; [[Category:Paradoxes in economics]] [[Category:Behavioral finance]] [[Category:Mathematical economics]] [[Category:Coin flipping]] [[Category:Economic theories]] [[Category:Ergodic theory]]
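The multiplicative coin toss described above lends itself to a short simulation. The sketch below assumes NumPy and uses the 1.5 and 0.6 payout factors from the example; it contrasts the 5% per-round growth of the expected value with the roughly 5% per-round decay of a single simulated trajectory, and is an illustration rather than material from the cited literature.

# Simulation of the multiplicative coin toss from the text: heads multiplies
# wealth by 1.5, tails by 0.6, each with probability 1/2.
# Expected wealth grows by 5% per round, while the time-average growth factor
# of a single trajectory converges to sqrt(1.5 * 0.6), roughly 0.95 per round.
import numpy as np

rng = np.random.default_rng(0)
T = 10_000                                   # number of rounds for one gambler
factors = rng.choice([1.5, 0.6], size=T)     # fair coin: each factor with probability 1/2

# Time-average per-round growth factor of this single trajectory (geometric mean)
time_avg = np.exp(np.mean(np.log(factors)))

# Ensemble average of the per-round growth factor (the expected-value picture)
ensemble_avg = 0.5 * 1.5 + 0.5 * 0.6

print(f"expected growth factor per round: {ensemble_avg:.3f}")   # 1.050
print(f"time-average growth factor:       {time_avg:.3f}")       # close to 0.949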
[ { "math_id": 0, "text": "x(t)" }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "g" }, { "math_id": 3, "text": "g=\\frac{\\Delta v(x)}{\\Delta t}" }, { "math_id": 4, "text": "v(x)" }, { "math_id": 5, "text": "g_t" }, { "math_id": 6, "text": " g_t= \\frac{E[\\Delta v(x)]}{\\Delta t} " }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "u(x)" }, { "math_id": 9, "text": " E[\\Delta u(x)] " }, { "math_id": 10, "text": " E[\\Delta x] " }, { "math_id": 11, "text": "\\Delta t" }, { "math_id": 12, "text": " dx = x(t)(\\mu\\,dt + \\sigma \\,dW_t) " }, { "math_id": 13, "text": " dW_t " }, { "math_id": 14, "text": " \\mu " }, { "math_id": 15, "text": " \\sigma " }, { "math_id": 16, "text": " x(t) = x(0)\\exp\\left( \\left(\\mu - \\frac{\\sigma^2}{2} \\right)t + \\sigma W_t\\right) " }, { "math_id": 17, "text": "v(x)=\\ln(x)" }, { "math_id": 18, "text": " \\ln x(t) = \\ln x(0) + \\left(\\mu - \\frac{\\sigma^2}{2} \\right)t + \\sigma W_t " }, { "math_id": 19, "text": " g_t= \\frac{E[\\Delta v(x)]}{\\Delta t} = \\mu - \\frac{\\sigma^2}{2} " }, { "math_id": 20, "text": " u(x) = \\ln(x) " }, { "math_id": 21, "text": " x(t)" }, { "math_id": 22, "text": " t+\\delta t" }, { "math_id": 23, "text": "x(t+\\delta t)" }, { "math_id": 24, "text": "1.5 \\times x(t)" }, { "math_id": 25, "text": "0.6 \\times x(t)" }, { "math_id": 26, "text": "p_{\\text{H}}=p_{\\text{T}}=1/2" }, { "math_id": 27, "text": "\\begin{align}\nE[x(t+\\delta t)]&= p_{\\text{H}} \\times 1.5 x(t) +p_{\\text{T}} \\times 0.6 x(t) \\\\\n&= 1.05 x(t).\n\\end{align}\n" }, { "math_id": 28, "text": "T" }, { "math_id": 29, "text": "E[x(t+T \\delta t)]=1.05^T x(t)" }, { "math_id": 30, "text": "x(t+T \\delta t)=\\prod_{\\tau=1}^T r_{\\tau} x(t)," }, { "math_id": 31, "text": "r_{\\tau}" }, { "math_id": 32, "text": "\\tau^{\\text{th}}" }, { "math_id": 33, "text": "r_{\\tau}=r_{\\text{H}}=1.5" }, { "math_id": 34, "text": "r_{\\tau}=r_{\\text{T}}=0.6" }, { "math_id": 35, "text": "\n\\bar{r}_T=\\left(\\frac{x(t+T\\delta t)}{x(t)}\\right)^{1/T}.\n" }, { "math_id": 36, "text": "n_{\\text{H}}" }, { "math_id": 37, "text": " \n\\bar{r}_T = \\left(r_{\\text{H}}^{n_{\\text{H}}} r_{\\text{T}}^{T-n_{\\text{H}}}\\right)^{1/T}=r_{\\text{H}}^{n_{\\text{H}}/T} r_{\\text{T}}^{(T-n_{\\text{H}})/T}.\n" }, { "math_id": 38, "text": "\\bar{r}_T" }, { "math_id": 39, "text": "T\\to\\infty" }, { "math_id": 40, "text": "\n\\lim_{T\\to\\infty}\\bar{r}_T= \\left(r_{\\text{H}} r_{\\text{T}}\\right)^{\\frac{1}{2}}\\approx 0.95.\n" } ]
https://en.wikipedia.org/wiki?curid=66241271
66254229
1 Chronicles 2
First Book of Chronicles, chapter 2 1 Chronicles 2 is the second chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter and two subsequent ones focus on the descendants of Judah, where chapter 2 deals with the tribe of Judah in general, chapter 3 lists the sons of David in particular, and chapter 4 concerns the remaining families in the tribe of Judah and the tribe of Simeon. These chapters belong to the section focusing on the list of genealogies from Adam to the lists of the people returning from exile in Babylon ( to 9:34). Text. This chapter was originally written in the Hebrew language. It is divided into 55 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Structure. Verses 1–2 are part of the introduction to establish 'Israel's worldwide context' by listing the ancestors from Adam to Israel's twelve sons (–2:2). The remaining verses belong to an arrangement comprising 1 Chronicles 2:3–8:40 with the king-producing tribes of Judah (David; 2:3–4:43) and Benjamin (Saul; 8:1–40) bracketing the series of lists as the priestly tribe of Levi (6:1–81) anchors the center, in the following order: A David's royal tribe of Judah (2:3–4:43) B Northern tribes east of Jordan (5:1–26) X The priestly tribe of Levi (6:1–81) B' Northern tribes west of Jordan (7:1–40) A' Saul's royal tribe of Benjamin (8:1–40) Another concentric arrangement focuses on David's royal tribe of Judah (2:3–4:23), centering on the family of Hezron, Judah's grandson, through his three sons: Jerahmeel, Ram, and Chelubai (Caleb), as follows: A Descendants of Judah: Er, Onan, and Shelah (2:3–8) B Descendants of Ram up to David (2:9–17) C Descendants of Caleb (2:18–24) D Descendants of Jerahmeel (2:25–33) D' Descendants of Jerahmeel (2:34–41) C' Descendants of Caleb (2:42–55) B' Descendants of Ram following David [David's descendants] (3:1–24) A' Descendants of Shelah, Judah's only surviving son (4:21–23) "1 These are the sons of Israel; Reuben, Simeon, Levi, and Judah, Issachar, and Zebulun," "2 Dan, Joseph, and Benjamin, Naphtali, Gad, and Asher." The family of Israel (2:1–2). The twelve sons of Israel are not listed by birth order (cf. –; ), but arranged based on as follows: Dan is placed before the sons of Rachel (cf. ) perhaps in reference to Rachel's wish that the son of her maid Bilhah be accounted her own (). The subsequent parts mention every tribe with the exception of Zebulun and Dan, without any explanation of the omission. Nonetheless, Zebulun is mentioned in the Levitical town lists (, ) and in some narratives (, , , , ), whereas Dan is mentioned in Chronicles only in three places (, , ). From Judah to David (2:3–17). The family of Judah has the largest genealogy among the tribes of Israel, about 100 verses in 3 chapters, with the house of David as the main focus.
Verses 3–5 are related mainly to Genesis 38, as well as to Genesis 46:12 and Numbers 26:19–22, whereas verse 5 is also tied to Ruth 4:18. The list of Hezron's descendants started in verse 9 with the emphasis on the family of Ram ben Hezron down to David and his siblings (verse 17). Verses 10–12 contain the line from Ram to Jesse, whose seven sons are listed in verses 13–17, and the last of these is David, as the climax of the chapter. These verses (including verse 9) are linked to Ruth 4:19–22 (cf. 1 Samuel 16:6–10; 17:13). David was the seventh son in verse 15, whereas 1 Samuel 16:10–11; 17:12 assumes eight sons of Jesse. Nethaneel, Raddai, and Ozem are not mentioned in other texts. David's sisters are mentioned in verses 16–17 (cf. 2 Samuel 17:25). "And the sons of Carmi; Achar, the troubler of Israel, who transgressed in the thing accursed." "And Hur begot Uri, and Uri begot Bezalel." The family of Jerahmeel ben Hezron (2:25–41). Verses 34–35 display special attitude of the Chronicler towards foreigners: because Sheshan had no sons, his line would continue through his daughters and an Egyptian servant. The family of Caleb ben Hezron (2:42–55). The other descendants of Caleb are enumerated in 1 Chronicles 2:42-49, of which the two latter, 1 Chronicles 2:46-55, are the descendants from his concubines. "Now the sons of Caleb the brother of Jerahmeel were Mesha his firstborn, which was the father of Ziph; and the sons of Mareshah the father of Hebron." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66254229
66256
Proportional–integral–derivative controller
Control loop feedback mechanism A proportional–integral–derivative controller (PID controller or three-term controller) is a control loop mechanism employing feedback that is widely used in industrial control systems and a variety of other applications requiring continuously modulated control. A PID controller continuously calculates an "error value" formula_0 as the difference between a desired setpoint (SP) and a measured process variable (PV) and applies a correction based on proportional, integral, and derivative terms (denoted "P", "I", and "D" respectively), hence the name. PID systems automatically apply accurate and responsive correction to a control function. An everyday example is the cruise control on a car, where ascending a hill would lower speed if constant engine power were applied. The controller's PID algorithm restores the measured speed to the desired speed with minimal delay and overshoot by increasing the power output of the engine in a controlled manner. The first theoretical analysis and practical application of PID was in the field of automatic steering systems for ships, developed from the early 1920s onwards. It was then used for automatic process control in the manufacturing industry, where it was widely implemented in pneumatic and then electronic controllers. The PID concept has been used widely in applications requiring accurate and optimized automatic control. Fundamental operation. The distinguishing feature of the PID controller is the ability to use the three "control terms" of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. The block diagram on the right shows the principles of how these terms are generated and applied. It shows a PID controller, which continuously calculates an "error value" formula_0 as the difference between a desired setpoint formula_1 and a measured process variable formula_2: formula_3, and applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimize the error over time by adjustment of a "control variable" formula_4, such as the opening of a control valve, to a new value determined by a weighted sum of the control terms. In this model, the proportional term responds to the present value of the error, the integral term accounts for the accumulated past error, and the derivative term responds to the error's current rate of change; each is described in detail under Controller theory below. Tuning – The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the physical system, external to the controller. These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by introducing a setpoint change and observing the system response. Control action – The mathematical model and practical loop above both use a "direct" control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. This is because the "error" term is not the deviation from the setpoint (actual-desired) but is in fact the correction needed (desired-actual). The system is called "reverse" acting if it is necessary to apply negative corrective action. For instance, the valve in a flow loop might be 100–0% valve opening for 0–100% control output, meaning that the controller action has to be reversed.
Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening. Mathematical form. The overall control function formula_5 where formula_6, formula_7, and formula_8, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted "P", "I", and "D"). In the "standard form" of the equation (see later in article), formula_7 and formula_8 are respectively replaced by formula_9 and formula_10; the advantage of this being that formula_11 and formula_12 have some understandable physical meaning, as they represent an integration time and a derivative time respectively. formula_10 is the time constant with which the controller will attempt to approach the set point. formula_9 determines how long the controller will tolerate the output being consistently above or below the set point. formula_13 Selective use of control terms. Although a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value. Applicability. The use of the PID algorithm does not guarantee optimal control of the system or its control stability (&lt;templatestyles src="Crossreference/styles.css" /&gt;). Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these cases lead–lag compensation is required to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the system overshoots a setpoint, and the degree of any system oscillation. But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process. History. Origins. Continuous control, before PID controllers were fully understood and implemented, has one of its origins in the centrifugal governor, which uses rotating weights to control a process. This was invented by Christiaan Huygens in the 17th century to regulate the gap between millstones in windmills depending on the speed of rotation, and thereby compensate for the variable speed of grain feed. With the invention of the low-pressure stationary steam engine there was a need for automatic speed control, and James Watt's self-designed "conical pendulum" governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. This was based on the millstone-gap control concept. Rotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described by James Clerk Maxwell in 1868 in his now-famous paper "On Governors". 
He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem. The problem was examined further in 1874 by Edward Routh, Charles Sturm, and in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria. In subsequent applications, speed governors were further refined, notably by American scientist Willard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor. About this time, the invention of the Whitehead torpedo posed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become the pendulum-and-hydrostat control. Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerable instability of depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth. This development (named by Whitehead as "The Secret" to give no clue to its action) was around 1868. Another early example of a PID-type controller was developed by Elmer Sperry in 1911 for ship steering, though his work was intuitive rather than mathematically-based. It was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky. His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control. Trials were carried out on the USS "New Mexico", with the controllers controlling the "angular velocity" (not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve. The Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others in the 1930s. Industrial control. The wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept of negative feedback. This had been developed in telephone engineering electronics by Harold Black in the late 1920s, but not published until 1934. Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. 
This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the "Stabilog" controller which gave both proportional and integral functions using feedback bellows. The integral term was called "Reset". Later the derivative term was added by a further bellows and adjustable orifice. From about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple, low-maintenance devices that operated well in harsh industrial environments and did not present explosion risks in hazardous locations. They were the industry standard for many decades until the advent of discrete electronic controllers and distributed control systems (DCSs). With these controllers, a pneumatic industry signaling standard of 3–15 psi was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0–100%. In the 1950s, when high gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10–50 mA and 4–20 mA current loop signals (the latter became the industry standard). Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments. Most modern PID controls in industry are implemented as computer software in DCSs, programmable logic controllers (PLCs), or discrete compact controllers. Electronic analog controllers. Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Discrete electronic analog controllers have been largely replaced by digital controllers using microcontrollers or FPGAs to implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers. Control loop example. Consider a robotic arm that can be moved and positioned by a control loop. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, external forces on the arm such as a load to lift or work to be done on an external object. By measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV). Proportional. The obvious method is proportional control: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the down side, but a smaller force if the error is low on the upside. That's where the integral and derivative terms play their part. Integral. An integral term increases action in relation not only to the error but also the time for which it has persisted.
So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure "I" controller could bring the error to zero, but it would be both slow reacting at the start (because the action would be small at the beginning, depending on time to get significant) and brutal at the end (the action increases as long as the error is positive, even if the error has started to approach zero). Applying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the amplitude of the oscillations increases with time, the system is unstable. If they decrease, the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable. Derivative. A derivative term does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force). Control damping. In the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to be critically damped. A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position. The setpoint itself may be generated by an external system, such as a PLC or other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities. Response to disturbances. If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process. Applications. In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, force, feed rate, flow rate, chemical composition (component concentrations), weight, position, speed, and practically every other variable for which a measurement exists. "This section describes the parallel or non-interacting form of the PID controller. For other forms, see later in the article." Controller theory. The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller.
Defining formula_4 as the controller output, the final form of the PID algorithm is formula_14 where formula_6 is the proportional gain, a tuning parameter, formula_7 is the integral gain, a tuning parameter, formula_8 is the derivative gain, a tuning parameter, formula_15 is the error (SP is the setpoint, and PV("t") is the process variable), formula_16 is the time or instantaneous time (the present), formula_17 is the variable of integration (takes on values from time 0 to the present formula_16). Equivalently, the transfer function in the Laplace domain of the PID controller is formula_18 where formula_19 is the complex frequency. Proportional term. The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant "K"p, called the proportional gain constant. The proportional term is given by formula_20 A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change. Steady-state error. The steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. Steady-state error (SSE) is proportional to the process gain and inversely proportional to proportional gain. SSE may be mitigated by adding a compensating bias term to the setpoint AND output or corrected dynamically by adding an integral term. Integral term. The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain ("K"i) and added to the controller output. The integral term is given by formula_21 The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (see the section on loop tuning). Derivative term. The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain "K"d. The magnitude of the contribution of the derivative term to the overall control action is termed the derivative gain, "K"d. The derivative term is given by formula_22 Derivative action predicts system behavior and thus improves settling time and stability of the system. An ideal derivative is not causal, so that implementations of PID controllers include an additional low-pass filtering for the derivative term to limit the high-frequency gain and noise. 
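The parallel-form algorithm above translates directly into a short discrete-time routine. The following sketch is illustrative rather than a reference implementation: the gains, sample time, filter coefficient, and the toy process model are placeholder values, and the simple first-order low-pass filter on the derivative term stands in for the additional filtering mentioned above.

# Discrete-time sketch of the parallel (non-interacting) PID law
#   u(t) = Kp*e(t) + Ki*integral(e) + Kd*de/dt,
# with a first-order low-pass filter on the derivative term, since an ideal
# derivative would amplify measurement noise. Gains and sample time are
# placeholders, not tuned values.
class PID:
    def __init__(self, kp, ki, kd, dt, alpha=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.alpha = alpha          # derivative filter coefficient, 0 < alpha <= 1
        self.integral = 0.0
        self.prev_error = 0.0
        self.d_filtered = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement            # e(t) = SP - PV
        self.integral += error * self.dt          # accumulate the I term
        d_raw = (error - self.prev_error) / self.dt
        # low-pass filter the raw derivative to limit high-frequency gain
        self.d_filtered += self.alpha * (d_raw - self.d_filtered)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * self.d_filtered)

# Example: drive a crude first-order stand-in process toward a setpoint of 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
pv = 0.0
for _ in range(1000):
    u = pid.update(1.0, pv)
    pv += (u - pv) * 0.01          # stand-in process dynamics, not a real plant
print(round(pv, 3))                # should settle near 1.0

Practical implementations usually add further details omitted here, such as output saturation and integral anti-windup.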
Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers – because of its variable impact on system stability in real-world applications. Loop tuning. "Tuning" a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another. Even though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within the limitations of PID control. Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning. Designing and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired. Some processes have a degree of nonlinearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected by gain scheduling (using different parameters in different operating regions). Stability. If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Instability is caused by "excess" gain, particularly in the presence of significant lag. Generally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired. Mathematically, the origins of instability can be seen in the Laplace domain. The closed-loop transfer function is formula_23 where formula_24 is the PID transfer function, and formula_25 is the plant transfer function. A system is "unstable" where the closed-loop transfer function diverges for some formula_19. This happens in situations where formula_26. In other words, this happens when formula_27 with a 180° phase shift. Stability is guaranteed when formula_28 for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion. Optimal behavior. The optimal behavior on a process change or setpoint change varies depending on the application. Two basic requirements are "regulation" (disturbance rejection – staying at a given setpoint) and "command tracking" (implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking include rise time and settling time. 
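The sketch below shows how these criteria, together with overshoot, can be read off a simulated step response; the underdamped second-order response used here is an arbitrary stand-in for a real closed loop, and the damping ratio and natural frequency are example values.

import math

def step_response(zeta=0.4, wn=2.0, dt=0.001, t_end=10.0):
    """Unit-step response of a standard underdamped second-order system."""
    ts, ys = [], []
    wd = wn * math.sqrt(1.0 - zeta * zeta)
    for k in range(int(t_end / dt) + 1):
        t = k * dt
        y = 1.0 - math.exp(-zeta * wn * t) * (
            math.cos(wd * t) + zeta / math.sqrt(1.0 - zeta * zeta) * math.sin(wd * t))
        ts.append(t)
        ys.append(y)
    return ts, ys

def metrics(ts, ys, band=0.02):
    rise = next(t for t, y in zip(ts, ys) if y >= 1.0)   # 0-100% rise time
    overshoot = (max(ys) - 1.0) * 100.0
    settle = 0.0
    for t, y in zip(ts, ys):
        if abs(y - 1.0) > band:
            settle = t               # last time the response leaves the +/-2% band
    return rise, overshoot, settle

if __name__ == "__main__":
    ts, ys = step_response()
    rise, overshoot, settle = metrics(ts, ys)
    print(f"rise time ~ {rise:.2f} s, overshoot ~ {overshoot:.1f}%, settling time (2%) ~ {settle:.2f} s")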
Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint. Overview of tuning methods. There are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times. The choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters. Manual tuning. If the system must remain online, one tuning method is to first set formula_29 and formula_30 values to zero. Increase the formula_31 until the output of the loop oscillates; then set formula_31 to approximately half that value for a "quarter amplitude decay"-type response. Then increase formula_29 until any offset is corrected in sufficient time for the process, but not until too great a value causes instability. Finally, increase formula_30, if required, until the loop is acceptably quick to reach its reference after a load disturbance. Too much formula_31 causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a formula_31 setting significantly less than half that of the formula_31 setting that was causing oscillation. Ziegler–Nichols method. Another heuristic tuning method is known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the formula_29 and formula_30 gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain formula_32 at which the output of the loop starts to oscillate constantly. formula_32 and the oscillation period formula_33 are used to set the gains as follows: The oscillation frequency is often measured instead, and the reciprocals of each multiplication yields the same result. These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative gains formula_29 and formula_30 are dependent on the oscillation period formula_33. Cohen–Coon parameters. This method was developed in 1953 and is based on a first-order + time delay model. Similar to the Ziegler–Nichols method, a set of tuning parameters were developed to yield a closed-loop response with a decay ratio of formula_34. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable. Relay (Åström–Hägglund) method. Published in 1984 by Karl Johan Åström and Tore Hägglund, the relay method temporarily operates the process using bang-bang control and measures the resultant oscillations. The output is switched (as if by a relay, hence the name) between two values of the control variable. 
The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided. As long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method. Specifically, the ultimate period formula_33 is assumed to be equal to the observed period, and the ultimate gain is computed as formula_35 where a is the amplitude of the process variable oscillation, and b is the amplitude of the control output change which caused it. There are numerous variants on the relay method. First-order model with dead time. The transfer function for a first-order process with dead time is formula_36 where "k"p is the process gain, "τ"p is the time constant, "θ" is the dead time, and "u"("s") is a step change input. Converting this transfer function to the time domain results in formula_37 using the same parameters found above. It is important when using this method to apply a large enough step-change input that the output can be measured; however, too large of a step change can affect the process stability. Additionally, a larger step change ensures that the output does not change due to a disturbance (for best results, try to minimize disturbances when performing the step test). One way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain ("k"p) is equal to the change in output divided by the change in input. The dead time "θ" is the amount of time between when the step change occurred and when the output first changed. The time constant ("τ"p) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants. Tuning software. Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes. Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values. Another approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients. Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules. 
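The following sketch strings together the relay-test relation for the ultimate gain given above with the commonly quoted classic Ziegler–Nichols constants (0.6 Ku, Tu/2 and Tu/8 for the standard-form Kp, Ti and Td); the measured amplitudes and period are made-up example numbers, not data from any real loop.

import math

def ultimate_gain(a, b):
    """Relay-test estimate Ku = 4b / (pi * a): a is the amplitude of the
    process-variable oscillation, b the amplitude of the control output."""
    return 4.0 * b / (math.pi * a)

def ziegler_nichols_pid(ku, tu):
    """Commonly quoted classic Ziegler-Nichols 'PID' settings."""
    kp = 0.6 * ku
    ti = 0.5 * tu              # integral time
    td = 0.125 * tu            # derivative time
    return kp, ti, td, kp / ti, kp * td   # also return parallel-form Ki, Kd

if __name__ == "__main__":
    # made-up measurements from a hypothetical relay experiment
    a, b, tu = 0.8, 5.0, 12.0        # PV amplitude, output amplitude, period in s
    ku = ultimate_gain(a, b)
    kp, ti, td, ki, kd = ziegler_nichols_pid(ku, tu)
    print(f"Ku = {ku:.2f}, Tu = {tu:.1f} s")
    print(f"Kp = {kp:.2f}, Ti = {ti:.1f} s, Td = {td:.1f} s (Ki = {ki:.3f}, Kd = {kd:.2f})")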
Advances in automated PID loop tuning software also deliver algorithms for tuning PID Loops in a dynamic or non-steady state (NSS) scenario. The software models the dynamics of a process, through a disturbance, and calculate PID control parameters in response. Limitations. While PID controllers are applicable to many control problems, and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide "optimal" control. The fundamental difficulty with PID control is that it is a feedback control system, with "constant" parameters, and no direct knowledge of the process, and thus overall performance is reactive and a compromise. While PID control is the best controller for an observer without a model of the process, better performance can be obtained by overtly modeling the actor of the process without resorting to an observer. PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade-off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances. The most significant improvement is to incorporate feed-forward control with knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers. Linearity and symmetry. PID controllers work best when the loop to be controlled is linear and symmetric. Thus, their performance in non-linear and asymmetric systems is degraded. A non-linear valve, for instance, in a flow control application, will result in variable loop sensitivity, requiring dampened action to prevent instability. One solution is the use of the valve's non-linear characteristic in the control algorithm to compensate for this. An asymmetric application, for example, is temperature control in HVAC systems using only active heating (via a heating element), where there is only passive cooling available. When it is desired to lower the controlled temperature the heating output is off, but there is no active cooling due to control output. Any overshoot of rising temperature can therefore only be corrected slowly; it cannot be forced downward by the control output. In this case the PID controller could be tuned to be over-damped, to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the set point. The inherent degradation of control quality in this application could be solved by application of active cooling. Noise in derivative term. A problem with the derivative term is that it amplifies higher frequency measurement or process noise that can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. 
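A small sketch of the low-pass filtering idea described above: the measurement is smoothed with a first-order filter before the finite-difference derivative is formed. The filter constant, noise level, and test signal are all invented for the illustration.

import math
import random

def lowpass(prev, sample, alpha):
    """First-order low-pass; smaller alpha means heavier filtering."""
    return prev + alpha * (sample - prev)

def spread(values):
    return max(values) - min(values)

if __name__ == "__main__":
    random.seed(1)
    dt, alpha = 0.01, 0.1
    prev_pv = prev_filt = filt = 0.0
    raw_d, filt_d = [], []
    for k in range(1000):
        pv = math.sin(0.5 * k * dt) + random.gauss(0.0, 0.01)   # noisy measurement
        filt = lowpass(filt, pv, alpha)
        raw_d.append((pv - prev_pv) / dt)
        filt_d.append((filt - prev_filt) / dt)
        prev_pv, prev_filt = pv, filt
    print("peak-to-peak of raw derivative     :", round(spread(raw_d), 1))
    print("peak-to-peak of filtered derivative:", round(spread(filt_d), 1))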
A nonlinear median filter may be used, which improves the filtering efficiency and practical performance. In some cases, the differential band can be turned off with little loss of control. This is equivalent to using the PID controller as a PI controller. Modifications to the algorithm. The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form. Integral windup. One common problem resulting from the ideal PID implementations is integral windup. Following a large change in setpoint the integral term can accumulate an error larger than the maximal value for the regulation variable (windup), thus the system overshoots and continues to increase until this accumulated error is unwound. This problem can be addressed by: Overshooting from known disturbances. For example, a PID loop is used to control the temperature of an electric resistance furnace where the system has stabilized. Now when the door is opened and something cold is put into the furnace the temperature drops below the setpoint. The integral function of the controller tends to compensate for error by introducing another error in the positive direction. This overshoot can be avoided by freezing of the integral function after the opening of the door for the time the control loop typically needs to reheat the furnace. PI controller. A PI controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used. The controller output is given by formula_38 where formula_39 is the error or deviation of actual measured value (PV) from the setpoint (SP). formula_40 A PI controller can be modelled easily in software such as Simulink or Xcos using a "flow chart" box involving Laplace operators: formula_41 where formula_42 = proportional gain formula_43 = integral gain Setting a value for formula_44 is often a trade off between decreasing overshoot and increasing settling time. The lack of derivative action may make the system more steady in the steady state in the case of noisy data. This is because derivative action is more sensitive to higher-frequency terms in the inputs. Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be. Deadband. Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or backlash in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change. Setpoint step change. The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. 
As a result, some PID algorithms incorporate some of the following modifications: In this modification, the setpoint is gradually moved from its old value to a newly specified value using a linear or first-order differential ramp function. This avoids the discontinuity present in a simple step change. In this case the PID controller measures the derivative of the measured PV, rather than the derivative of the error. This quantity is always continuous (i.e., never has a step change as a result of changed setpoint). This modification is a simple case of setpoint weighting. Setpoint weighting adds adjustable factors (usually between 0 and 1) to the setpoint in the error in the proportional and derivative element of the controller. The error in the integral term must be the true control error to avoid steady-state control errors. These two extra parameters do not affect the response to load disturbances and measurement noise and can be tuned to improve the controller's setpoint response. Feed-forward. The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or "error" remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward. For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system. Bumpless operation. PID controllers are often implemented with a "bumpless" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes. A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains. Other improvements. In addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic, or computational verb logic. 
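To make some of these modifications concrete, here is a sketch of a single update step combining setpoint weighting on the proportional term, derivative taken on the measured PV, and clamping of the integrator as a basic anti-windup measure; the weights, gains, and output limits are arbitrary example values, and real implementations differ in detail.

def pid_step(state, setpoint, pv, dt,
             kp=2.0, ki=0.5, kd=0.1,
             b=0.6,                      # setpoint weight for the P term
             out_min=0.0, out_max=100.0):
    # the integral uses the true error, so no steady-state offset is introduced
    error = setpoint - pv
    state["integral"] += ki * error * dt
    # clamp the stored integral so it cannot exceed the output range (anti-windup)
    state["integral"] = max(out_min, min(out_max, state["integral"]))

    p = kp * (b * setpoint - pv)                 # setpoint-weighted proportional
    d = -kd * (pv - state["prev_pv"]) / dt       # derivative on measurement
    state["prev_pv"] = pv

    u = p + state["integral"] + d
    return max(out_min, min(out_max, u))         # saturate the final output

if __name__ == "__main__":
    state = {"integral": 0.0, "prev_pv": 0.0}
    # a setpoint step from 0 to 10: the derivative term stays quiet because it
    # acts on the measurement, not on the stepped error
    for pv in (0.0, 0.5, 1.0, 2.0, 4.0):
        print(round(pid_step(state, setpoint=10.0, pv=pv, dt=0.1), 2))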
Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another method for improving the PID controller is to increase its degrees of freedom by using fractional orders; the order of the integrator and differentiator adds flexibility to the controller. Cascade control. One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the setpoint of the other. A PID controller acts as the outer loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner loop controller, which reads the output of the outer loop controller as its setpoint, and usually controls a more rapidly changing parameter such as flow rate or acceleration. It can be mathematically proven that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers. For example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its own thermocouple temperature sensor. The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and the measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint. The proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the system "it" controls – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response. Alternative nomenclature and forms. Standard versus parallel (ideal) form. The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the "standard form". In this form the formula_31 gain is applied to the formula_45 and formula_46 terms, yielding: formula_47 where formula_48 is the "integral time" and formula_49 is the "derivative time". In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which is compensated for future and past errors. The proportional error term is the current error. The derivative component attempts to predict the error value at formula_49 seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in formula_48 seconds (or samples). 
The resulting compensated single error value is then scaled by the single gain formula_31 to compute the control variable. In the parallel form, shown in the controller theory section formula_50 the gain parameters are related to the parameters of the standard form through formula_51 and formula_52. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors and is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry. Reciprocal gain, a.k.a. proportional band. In many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain formula_31 not as "output per degree", but rather in the reciprocal form of a "proportional band" formula_53, which is "degrees per full output": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain. Basing derivative action on PV. In most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances. Basing proportional action on PV. Most commercial control systems offer the "option" of also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances. Basing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step. formula_54 King describes an effective chart-based method. Laplace form. Sometimes it is useful to write the PID regulator in Laplace transform form: formula_55 Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system. Series/interacting form. Another representation of the PID controller is the series, or "interacting" form formula_56 where the parameters are related to the parameters of the standard form through formula_57, formula_58, and formula_59 with formula_60. This form essentially consists of a PD and PI controller in series. As the integral is required to calculate the controller's bias this form provides the ability to track an external bias value which is required to be used for proper implementation of multi-controller advanced control schemes. Discrete implementation. 
The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be "discretized". Approximations for first-order derivatives are made by backward finite differences. formula_4 and formula_0 are discretized with a sampling period formula_61, where k is the sample index. Differentiating both sides of the PID equation using Newton's notation gives: formula_62 The derivative terms are approximated as formula_63 So, formula_64 Applying the backward difference again gives formula_65 By simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in an MCU is finally obtained: formula_66 or: formula_67 with formula_68 Note: This method in fact solves formula_69 where formula_70 is a constant independent of t. This constant is useful for start and stop control of the regulation loop. For instance, setting Kp, Ki, and Kd to 0 will keep u(t) constant. Likewise, when starting a regulation on a system where the error is already close to 0 and u(t) is nonzero, it prevents the output from being driven to 0. Pseudocode. Here is a simple, explicit pseudocode implementation:
previous_error := 0
integral := 0
loop:
error := setpoint − measured_value
proportional := error
integral := integral + error × dt
derivative := (error - previous_error) / dt
output := Kp × proportional + Ki × integral + Kd × derivative
previous_error := error
wait(dt)
goto loop
The pseudocode below illustrates how to implement a PID controller treated as an IIR filter. The Z-transform of a PID can be written as (formula_71 is the sampling time): formula_72 and expressed in an IIR form (in agreement with the discrete implementation shown above): formula_73 We can then deduce the recursive iteration often found in FPGA implementations: formula_74
A0 := Kp + Ki*dt + Kd/dt
A1 := -Kp - 2*Kd/dt
A2 := Kd/dt
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]
wait(dt)
goto loop
Here, Kp is a dimensionless number, Ki is expressed in formula_75, and Kd is expressed in s. When performing a regulation where the actuator and the measured value are not in the same unit (e.g. temperature regulation using a motor controlling a valve), Kp, Ki, and Kd may be corrected by a unit conversion factor. It may also be useful to use Ki in its reciprocal form (integration time). The above implementation also allows an I-only controller, which may be useful in some cases. In the real world, this output is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation; the program then waits until dt seconds have passed since the start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error. Note that for real code, the use of "wait(dt)" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any pre-emption delaying the algorithm. 
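For comparison with the pseudocode above, here is a runnable Python sketch of the same incremental form, u[k] = u[k-1] + A0*e[k] + A1*e[k-1] + A2*e[k-2], driving a small simulated first-order process; the plant model and the gains are invented example values.

def make_incremental_pid(kp, ki, kd, dt, u0=0.0):
    """Build a stepping function for the incremental (difference) PID above."""
    a0 = kp + ki * dt + kd / dt
    a1 = -kp - 2.0 * kd / dt
    a2 = kd / dt
    e = [0.0, 0.0, 0.0]       # e[0] = e(t), e[1] = e(t-1), e[2] = e(t-2)
    u = [u0]                  # previous output, held in a list so the closure can update it

    def step(setpoint, pv):
        e[2], e[1] = e[1], e[0]
        e[0] = setpoint - pv
        u[0] = u[0] + a0 * e[0] + a1 * e[1] + a2 * e[2]
        return u[0]

    return step

if __name__ == "__main__":
    dt = 0.05
    pid = make_incremental_pid(kp=1.5, ki=0.8, kd=0.05, dt=dt)
    pv, tau = 0.0, 1.0                       # simple first-order plant, example values
    for _ in range(400):                     # 20 seconds of simulated time
        u = pid(setpoint=1.0, pv=pv)
        pv += (u - pv) / tau * dt            # plant: d(pv)/dt = (u - pv) / tau
    print("PV after 20 s:", round(pv, 3))    # settles near the setpoint of 1.0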
A common issue when using formula_30 is the controller's response to the derivative of a rising or falling edge of the setpoint. A typical workaround is to filter the derivative action using a low-pass filter of time constant formula_76, where formula_77. A variant of the above algorithm using an infinite impulse response (IIR) filter for the derivative:
A0 := Kp + Ki*dt
A1 := -Kp
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
A0d := Kd/dt
A1d := -2.0*Kd/dt
A2d := Kd/dt
N := 5
tau := Kd / (Kp*N) // IIR filter time constant
alpha := dt / (2*tau)
d0 := 0
d1 := 0
fd0 := 0
fd1 := 0
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
// PI
output := output + A0 * error[0] + A1 * error[1]
// Filtered D
d1 := d0
d0 := A0d * error[0] + A1d * error[1] + A2d * error[2]
fd1 := fd0
fd0 := ((alpha) / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1
output := output + fd0
wait(dt)
goto loop
Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "e(t)" }, { "math_id": 1, "text": "\\text{SP} = r(t)" }, { "math_id": 2, "text": "\\text{PV} = y(t)" }, { "math_id": 3, "text": "e(t) = r(t) - y(t)" }, { "math_id": 4, "text": "u(t)" }, { "math_id": 5, "text": "u(t) = K_\\text{p} e(t) + K_\\text{i} \\int_0^t e(\\tau) \\,\\mathrm{d}\\tau + K_\\text{d} \\frac{\\mathrm{d}e(t)}{\\mathrm{d}t}," }, { "math_id": 6, "text": "K_\\text{p}" }, { "math_id": 7, "text": "K_\\text{i}" }, { "math_id": 8, "text": "K_\\text{d}" }, { "math_id": 9, "text": "K_\\text{p}/T_\\text{i}" }, { "math_id": 10, "text": "K_\\text{p} T_\\text{d}" }, { "math_id": 11, "text": "T_\\text{i}" }, { "math_id": 12, "text": "T_\\text{d}" }, { "math_id": 13, "text": "u(t) = K_\\text{p}\\left(e(t) + \\frac{1}{T_\\text{i}} \\int_0^t e(\\tau) \\,\\mathrm{d}\\tau + T_\\text{d} \\frac{\\mathrm{d}e(t)}{\\mathrm{d}t}\\right)" }, { "math_id": 14, "text": "u(t) = \\mathrm{MV}(t) = K_\\text{p} e(t) + K_\\text{i} \\int_0^t e(\\tau) \\,d\\tau + K_\\text{d} \\frac{de(t)}{dt}," }, { "math_id": 15, "text": "e(t) = \\mathrm{SP} - \\mathrm{PV}(t)" }, { "math_id": 16, "text": "t" }, { "math_id": 17, "text": "\\tau" }, { "math_id": 18, "text": "L(s) = K_\\text{p} + K_\\text{i}/s + K_\\text{d} s," }, { "math_id": 19, "text": "s" }, { "math_id": 20, "text": "P_\\text{out} = K_\\text{p} e(t)." }, { "math_id": 21, "text": "I_\\text{out} = K_\\text{i} \\int_0^t e(\\tau) \\,d\\tau." }, { "math_id": 22, "text": "D_\\text{out} = K_\\text{d} \\frac{de(t)}{dt}." }, { "math_id": 23, "text": "H(s) = \\frac{K(s)G(s)}{1 + K(s)G(s)}," }, { "math_id": 24, "text": "K(s)" }, { "math_id": 25, "text": "G(s)" }, { "math_id": 26, "text": "K(s)G(s) = -1" }, { "math_id": 27, "text": "|K(s)G(s)| = 1" }, { "math_id": 28, "text": "K(s)G(s) < 1" }, { "math_id": 29, "text": "K_i" }, { "math_id": 30, "text": "K_d" }, { "math_id": 31, "text": "K_p" }, { "math_id": 32, "text": "K_u" }, { "math_id": 33, "text": "T_u" }, { "math_id": 34, "text": "\\tfrac{1}{4}" }, { "math_id": 35, "text": "K_u = 4b/\\pi a," }, { "math_id": 36, "text": "y(s) = \\frac{k_\\text{p} e^{-\\theta s}}{\\tau_\\text{p} s + 1} u(s)," }, { "math_id": 37, "text": "y(t) = k_\\text{p} \\Delta u \\left(1 - e^{\\frac{-t - \\theta}{\\tau_\\text{p}}}\\right)," }, { "math_id": 38, "text": "K_P \\Delta + K_I \\int \\Delta\\,dt" }, { "math_id": 39, "text": "\\Delta" }, { "math_id": 40, "text": "\\Delta = SP - PV." 
}, { "math_id": 41, "text": "C=\\frac{G(1+\\tau s)}{\\tau s}" }, { "math_id": 42, "text": "G = K_P" }, { "math_id": 43, "text": "\\frac G \\tau = K_I" }, { "math_id": 44, "text": "G" }, { "math_id": 45, "text": "I_{\\mathrm{out}}" }, { "math_id": 46, "text": "D_{\\mathrm{out}}" }, { "math_id": 47, "text": "u(t) = K_p \\left( e(t) + \\frac{1}{T_i}\\int_0^t e(\\tau)\\,d\\tau + T_d\\frac{d}{dt}e(t) \\right)" }, { "math_id": 48, "text": "T_i" }, { "math_id": 49, "text": "T_d" }, { "math_id": 50, "text": "u(t) = K_p e(t) + K_i \\int_0^t e(\\tau)\\,d\\tau + K_d\\frac{d}{dt}e(t)" }, { "math_id": 51, "text": "K_i = K_p/T_i" }, { "math_id": 52, "text": "K_d = K_p T_d" }, { "math_id": 53, "text": "100/K_p" }, { "math_id": 54, "text": "\\mathrm{MV(t)}=K_p\\left(\\,{-PV(t)} + \\frac{1}{T_i}\\int_{0}^{t}{e(\\tau)}\\,{d\\tau} - T_d\\frac{d}{dt}PV(t)\\right)" }, { "math_id": 55, "text": "G(s)=K_p + \\frac{K_i}{s} + K_d{s}=\\frac{K_d{s^2} + K_p{s} + K_i}{s}" }, { "math_id": 56, "text": "G(s) = K_c (\\frac{1}{\\tau_i{s}}+1) (\\tau_d{s}+1)" }, { "math_id": 57, "text": "K_p = K_c \\cdot \\alpha" }, { "math_id": 58, "text": "T_i = \\tau_i \\cdot \\alpha" }, { "math_id": 59, "text": "T_d = \\frac{\\tau_d}{\\alpha}" }, { "math_id": 60, "text": "\\alpha = 1 + \\frac{\\tau_d}{\\tau_i}" }, { "math_id": 61, "text": "\\Delta t" }, { "math_id": 62, "text": "\\dot{u}(t) = K_p\\dot{e}(t) + K_ie(t) + K_d\\ddot{e}(t)" }, { "math_id": 63, "text": "\\dot{f}(t_k) = \\dfrac{df(t_k)}{dt}=\\dfrac{f(t_{k})-f(t_{k-1})}{\\Delta t}" }, { "math_id": 64, "text": "\\frac{u(t_{k})-u(t_{k-1})}{\\Delta t} = K_p\\frac{e(t_{k})-e(t_{k-1})}{\\Delta t} + K_i e(t_{k}) + K_d \\frac{\\dot{e}(t_{k}) - \\dot{e}(t_{k-1})}{\\Delta t}" }, { "math_id": 65, "text": "\\frac{u(t_{k})-u(t_{k-1})}{\\Delta t} = K_p\\frac{e(t_{k})-e(t_{k-1})}{\\Delta t} + K_i e(t_{k}) + K_d \\frac{ \\frac{e(t_{k})-e(t_{k-1})}{\\Delta t} - \\frac{e(t_{k-1})-e(t_{k-2})}{\\Delta t} }{\\Delta t}" }, { "math_id": 66, "text": "u(t_{k})=u(t_{k-1})+\\left(K_p+K_i\\Delta t+\\dfrac{K_d}{\\Delta t}\\right) e(t_{k})+\\left(-K_p-\\dfrac{2K_d}{\\Delta t}\\right) e(t_{k-1}) + \\dfrac{K_d}{\\Delta t}e(t_{k-2})" }, { "math_id": 67, "text": "u(t_k)=u(t_{k-1})+K_p\\left[\\left(1+\\dfrac{\\Delta t}{T_i}+\\dfrac{T_d}{\\Delta t}\\right) e(t_k)+\\left(-1-\\dfrac{2T_d}{\\Delta t}\\right)e(t_{k-1}) + \\dfrac{T_d}{\\Delta t}e(t_{k-2})\\right]" }, { "math_id": 68, "text": " T_i = K_p/K_i, T_d = K_d/K_p" }, { "math_id": 69, "text": "u(t) = K_\\text{p} e(t) + K_\\text{i} \\int_0^t e(\\tau) \\,\\mathrm{d}\\tau + K_\\text{d} \\frac{\\mathrm{d}e(t)}{\\mathrm{d}t} + u_0" }, { "math_id": 70, "text": "u_0" }, { "math_id": 71, "text": "\\Delta_t" }, { "math_id": 72, "text": "C(z)= K_p + K_i\\Delta_t \\frac{z}{z-1} + \\frac{K_d}{\\Delta_t} \\frac{z-1}{z}" }, { "math_id": 73, "text": "C(z)=\\frac{\\left(K_p+K_i\\Delta_t+\\dfrac{K_d}{\\Delta_t}\\right)+\\left(-K_p-\\dfrac{2K_d}{\\Delta_t}\\right) z^{-1} + \\dfrac{K_d}{\\Delta_t}z^{-2}}{1-z^{-1}}" }, { "math_id": 74, "text": "u[n] = u[n-1] + \\left(K_p+ K_i\\Delta_t+\\dfrac{K_d}{\\Delta_t}\\right)\\epsilon[n] + \\left(-K_p-\\dfrac{2K_d}{\\Delta_t}\\right)\\epsilon[n-1] + \\dfrac{K_d}{\\Delta_t}\\epsilon[n-2]" }, { "math_id": 75, "text": "s^{-1}" }, { "math_id": 76, "text": "\\tau_d/N" }, { "math_id": 77, "text": "3<=N<=10" } ]
https://en.wikipedia.org/wiki?curid=66256
662624
Noncommutative topology
In mathematics, noncommutative topology is a term used for the relationship between topological and C*-algebraic concepts. The term has its origins in the Gelfand–Naimark theorem, which implies the duality of the category of locally compact Hausdorff spaces and the category of commutative C*-algebras. Noncommutative topology is related to analytic noncommutative geometry. Examples. The premise behind noncommutative topology is that a noncommutative C*-algebra can be treated like the algebra of complex-valued continuous functions on a 'noncommutative space' which does not exist classically. Several topological properties can be formulated as properties for the C*-algebras without making reference to commutativity or the underlying space, and so have an immediate generalization. Among these are: Individual elements of a commutative C*-algebra correspond with continuous functions. And so certain types of functions can correspond to certain properties of a C*-algebra. For example, self-adjoint elements of a commutative C*-algebra correspond to real-valued continuous functions. Also, projections (i.e. self-adjoint idempotents) correspond to indicator functions of clopen sets. Categorical constructions lead to some examples. For example, the coproduct of spaces is the disjoint union and thus corresponds to the direct sum of algebras, which is the product of C*-algebras. Similarly, product topology corresponds to the coproduct of C*-algebras, the tensor product of algebras. In a more specialized setting, compactifications of topologies correspond to unitizations of algebras. So the one-point compactification corresponds to the minimal unitization of C*-algebras, the Stone–Čech compactification corresponds to the multiplier algebra, and corona sets correspond with corona algebras. There are certain examples of properties where multiple generalizations are possible and it is not clear which is preferable. For example, probability measures can correspond either to states or tracial states. Since all states are vacuously tracial states in the commutative case, it is not clear whether the tracial condition is necessary to be a useful generalization. K-theory. One of the major examples of this idea is the generalization of topological K-theory to noncommutative C*-algebras in the form of operator K-theory. A further development in this is a bivariant version of K-theory called KK-theory, which has a composition product formula_0 of which the ring structure in ordinary K-theory is a special case. The product gives the structure of a category to KK. It has been related to correspondences of algebraic varieties. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "KK(A,B)\\times KK(B,C)\\rightarrow KK(A,C)" } ]
https://en.wikipedia.org/wiki?curid=662624
662787
Elliptic partial differential equation
Class of second-order linear partial differential equations Second-order linear partial differential equations (PDEs) are classified as either elliptic, hyperbolic, or parabolic. Any second-order linear PDE in two variables can be written in the form formula_0 where "A", "B", "C", "D", "E", "F", and "G" are functions of "x" and "y" and where formula_1, formula_2 and similarly for formula_3. A PDE written in this form is elliptic if formula_4 with this naming convention inspired by the equation for a planar ellipse. Equations with formula_5 are termed parabolic while those with formula_6 are hyperbolic. The simplest examples of elliptic PDEs are the Laplace equation, formula_7, and the Poisson equation, formula_8 In a sense, any other elliptic PDE in two variables can be considered to be a generalization of one of these equations, as it can always be put into the canonical form formula_9 through a change of variables. Qualitative behavior. Elliptic equations have no real characteristic curves, curves along which it is not possible to eliminate at least one second derivative of formula_10 from the conditions of the Cauchy problem. Since characteristic curves are the only curves along which solutions to partial differential equations with smooth parameters can have discontinuous derivatives, solutions to elliptic equations cannot have discontinuous derivatives anywhere. This means elliptic equations are well suited to describe equilibrium states, where any discontinuities have already been smoothed out. For instance, we can obtain Laplace's equation from the heat equation formula_11 by setting formula_12. This means that Laplace's equation describes a steady state of the heat equation. In parabolic and hyperbolic equations, characteristics describe lines along which information about the initial data travels. Since elliptic equations have no real characteristic curves, there is no meaningful sense of information propagation for elliptic equations. This makes elliptic equations better suited to describe static, rather than dynamic, processes. Derivation of canonical form. We derive the canonical form for elliptic equations in two variables, formula_13. formula_14 and formula_15. If formula_16, applying the chain rule once gives formula_17 and formula_18, a second application gives formula_19 formula_20 and formula_21 We can replace our PDE in x and y with an equivalent equation in formula_22 and formula_23 formula_24 where formula_25 formula_26 and formula_27 To transform our PDE into the desired canonical form, we seek formula_22 and formula_23 such that formula_28 and formula_29. This gives us the system of equations formula_30 formula_31 Adding formula_32 times the second equation to the first and setting formula_33 gives the quadratic equation formula_34 Since the discriminant formula_35, this equation has two distinct solutions, formula_36 which are complex conjugates. Choosing either solution, we can solve for formula_37, and recover formula_22 and formula_23 with the transformations formula_38 and formula_39. Since formula_23 and formula_22 will satisfy formula_40 and formula_29, so with a change of variables from x and y to formula_23 and formula_22 will transform the PDE formula_0 into the canonical form formula_41 as desired. In higher dimensions. A general second-order partial differential equation in "n" variables takes the form formula_42 This equation is considered elliptic if there are no characteristic surfaces, i.e. 
surfaces along which it is not possible to eliminate at least one second derivative of "u" from the conditions of the Cauchy problem. Unlike the two-dimensional case, this equation cannot in general be reduced to a simple canonical form. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
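As a computational footnote to the classification rule above, the sketch below labels a few familiar operators by the sign of B^2 - AC in the (A, B, C) convention used in the article; the example coefficient sets are standard illustrations chosen here, not part of the derivation.

def classify(A, B, C):
    """Classify A*u_xx + 2*B*u_xy + C*u_yy + (lower-order terms) = 0."""
    disc = B * B - A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

if __name__ == "__main__":
    # for the heat and wave equations, y plays the role of t and u_t is lower order
    examples = [
        ("Laplace equation u_xx + u_yy = 0", 1.0, 0.0, 1.0),
        ("Heat equation    u_xx - u_t  = 0", 1.0, 0.0, 0.0),
        ("Wave equation    u_xx - u_tt = 0", 1.0, 0.0, -1.0),
    ]
    for name, A, B, C in examples:
        print(f"{name}  ->  {classify(A, B, C)}")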
[ { "math_id": 0, "text": "Au_{xx} + 2Bu_{xy} + Cu_{yy} + Du_x + Eu_y + Fu +G= 0,\\," }, { "math_id": 1, "text": "u_x=\\frac{\\partial u}{\\partial x}" }, { "math_id": 2, "text": "u_{xy}=\\frac{\\partial^2 u}{\\partial x \\partial y}" }, { "math_id": 3, "text": " u_{xx},u_y,u_{yy}" }, { "math_id": 4, "text": "B^2-AC<0," }, { "math_id": 5, "text": "B^2 - AC = 0" }, { "math_id": 6, "text": "B^2 - AC > 0" }, { "math_id": 7, "text": "\\Delta u=u_{xx}+u_{yy}=0" }, { "math_id": 8, "text": "\\Delta u=u_{xx}+u_{yy}=f(x,y)." }, { "math_id": 9, "text": "u_{xx}+u_{yy}+\\text{ (lower-order terms)}=0 " }, { "math_id": 10, "text": "u" }, { "math_id": 11, "text": "u_t=\\Delta u " }, { "math_id": 12, "text": "u_t=0" }, { "math_id": 13, "text": "u_{xx}+u_{xy}+u_{yy}+\\text{ (lower-order terms)}=0 " }, { "math_id": 14, "text": "\\xi =\\xi (x,y)" }, { "math_id": 15, "text": "\\eta=\\eta(x,y) " }, { "math_id": 16, "text": "u(\\xi, \\eta)=u[\\xi(x, y), \\eta(x,y)]" }, { "math_id": 17, "text": "u_{x}=u_\\xi \\xi_x+u_\\eta \\eta_x" }, { "math_id": 18, "text": "u_{y}=u_\\xi \\xi_y+u_\\eta \\eta_y" }, { "math_id": 19, "text": "u_{xx}=u_{\\xi\\xi} {\\xi^2}_x+u_{\\eta\\eta} {\\eta^2}_x+2u_{\\xi\\eta}\\xi_x\\eta_x+u_{\\xi}\\xi_{xx}+u_{\\eta}\\eta_{xx}," }, { "math_id": 20, "text": "u_{yy}=u_{\\xi\\xi} {\\xi^2}_y+u_{\\eta\\eta} {\\eta^2}_y+2u_{\\xi\\eta}\\xi_y\\eta_y+u_{\\xi}\\xi_{yy}+u_{\\eta}\\eta_{yy}," }, { "math_id": 21, "text": "u_{xy}=u_{\\xi\\xi} \\xi_x\\xi_y+u_{\\eta\\eta} \\eta_x\\eta_y+u_{\\xi\\eta}(\\xi_x\\eta_y+\\xi_y\\eta_x)+u_{\\xi}\\xi_{xy}+u_{\\eta}\\eta_{xy}." }, { "math_id": 22, "text": "\\xi" }, { "math_id": 23, "text": "\\eta" }, { "math_id": 24, "text": "au_{\\xi\\xi} + 2bu_{\\xi\\eta} + cu_{\\eta\\eta} \\text{ + (lower-order terms)}= 0,\\," }, { "math_id": 25, "text": "a=A{\\xi_x}^2+2B\\xi_x\\xi_y+C{\\xi_y}^2," }, { "math_id": 26, "text": "b=2A\\xi_x\\eta_x+2B(\\xi_x\\eta_y+\\xi_y\\eta_x) +2C\\xi_y\\eta_y ," }, { "math_id": 27, "text": "c=A{\\eta_x}^2+2B\\eta_x\\eta_y+C{\\eta_y}^2." }, { "math_id": 28, "text": "a=c" }, { "math_id": 29, "text": "b=0" }, { "math_id": 30, "text": "a-c=A({\\xi_x}^2-{\\eta_x}^2)+2B(\\xi_x\\xi_y-\\eta_x\\eta_y)+C({\\xi_y}^2-{\\eta_y}^2)=0" }, { "math_id": 31, "text": "b=0=2A\\xi_x\\eta_x+2B(\\xi_x\\eta_y+\\xi_y\\eta_x) +2C\\xi_y\\eta_y ," }, { "math_id": 32, "text": "i" }, { "math_id": 33, "text": "\\phi=\\xi+ i \\eta" }, { "math_id": 34, "text": "A{\\phi_x}^2+2B\\phi_x\\phi_y+C{\\phi_y}^2=0." }, { "math_id": 35, "text": " B^2-AC<0" }, { "math_id": 36, "text": "{\\phi_x},{\\phi_y}=\\frac{B\\pm i\\sqrt{AC-B^2}}{A} " }, { "math_id": 37, "text": "\\phi(x,y)" }, { "math_id": 38, "text": "\\xi=\\operatorname{Re} \\phi " }, { "math_id": 39, "text": "\\eta=\\operatorname{Im}\\phi" }, { "math_id": 40, "text": "a-c=0" }, { "math_id": 41, "text": "u_{\\xi\\xi}+u_{\\eta\\eta}+\\text{ (lower-order terms)}=0, " }, { "math_id": 42, "text": "\\sum_{i=1}^n\\sum_{j=1}^n a_{i,j} \\frac{\\partial^2 u}{\\partial x_i \\partial x_j} \\; \\text{ + (lower-order terms)} = 0." } ]
https://en.wikipedia.org/wiki?curid=662787
66279104
Non-degenerate two-photon absorption
Simultaneous absorption of two photons of differing energies by a molecule In atomic physics, non-degenerate two-photon absorption (ND-TPA or ND-2PA) or two-color two-photon excitation is a type of two-photon absorption (TPA) where two photons with different energies are (almost) simultaneously absorbed by a molecule, promoting a molecular electronic transition from a lower energy state to a higher energy state. The sum of the energies of the two photons is equal to, or larger than, the total energy of the transition. The probability of ND-TPA is quantified as the non-degenerate two-photon absorption cross section (ND-TPACS) and is an inherent property of molecules. ND-TPACS has been measured using Z-scan (pump-probe) techniques, which measure the laser intensity decrease due to absorption, and fluorescence-based techniques, which measure the fluorescence generated by the fluorophores upon ND-TPA. In ND-TPA, by absorbing the first photon, the molecule makes a transition to a virtual state and stays in the virtual state for an extremely short period of time (virtual state lifetime, VSL). If a second photon is absorbed during the VSL, the molecule makes a transition to the excited electronic state, otherwise it will relax back to the ground state. Therefore, the two photons are "almost" simultaneously absorbed in two-photon absorption. Based on the time–energy uncertainty relation, VSL is inversely proportional to the energy difference between the virtual state and the nearest real electronic state (i.e. the ground or a nearby excited state). Therefore, the closer the virtual state to the real state, the longer the VSL and the higher the probability of TPA. This means that in comparison to degenerate TPA, where the virtual state is in the middle of the ground and the excited state, ND-TPA has a larger absorption cross-section. This phenomenon is known as the resonance enhancement and is the main mechanism behind the observed increase in ND-TPACS of semiconductors and fluorophores in comparison to their degenerate TPA cross-sections. ND-TPA has also been explored in two-photon microscopy for decreasing out-of-focus excitation, increasing penetration depth, increasing spatial resolution, and extending the excitation wavelength range. Theory. The following discussion of techniques for quantitatively obtaining important parameters for use in ND-TPA is a summary of concepts discussed in Yang et. al. Beer's law describes the decay in intensity due to one-photon absorption: formula_0 where z is the distance that the photon travels in a sample, "I"("z") is the light intensity after traveling a distance z in the sample and α is the one-photon absorption coefficient of the sample. In ND-TPA, two different color photons come together, providing the following adaptation of the previous equation, and using a near-infrared (NIR) and short-wavelength infrared (SWIR) photon for ease of interpretation: formula_1 where A is a combined term describing the absorption cross section, collection efficiency, fluorophore concentration and quantum efficiency. For fluorescence with a non-uniform flux, as exists in ND-TPA, the following equation qualifies: formula_2 where K is the product of the quantum yield of the fluorophore, geometry of the imaging system and the fluorophore concentration and is assumed to be independent of the excitation regime, and σ is the absorption cross section. 
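As a purely numerical illustration of the attenuation relation above, the sketch below evaluates the relative two-color signal at a few depths; the absorption coefficients, input intensities, and the prefactor A are invented example values, not measured ones.

import math

def two_color_signal(z, i_nir0, i_swir0, alpha_nir, alpha_swir, prefactor=1.0):
    """Evaluate I(z) = A * I_NIR(0) * I_SWIR(0) * exp(-z * (a_NIR + a_SWIR))."""
    return prefactor * i_nir0 * i_swir0 * math.exp(-z * (alpha_nir + alpha_swir))

if __name__ == "__main__":
    alpha_nir, alpha_swir = 1.0, 0.4      # made-up attenuation coefficients, 1/mm
    for z in (0.0, 0.5, 1.0, 2.0):        # depth in mm
        s = two_color_signal(z, 1.0, 1.0, alpha_nir, alpha_swir)
        print(f"z = {z:.1f} mm  ->  relative signal {s:.3f}")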
Note that the desynchronization level of the two laser pulses, as shown through the time and spatial delay in "I"SWIR, affects the overall fluorescence of a given volume within a specimen. Also important to note is that photon beam fluxes can be combined in this fashion, allowing for one photon flux to be increased proportionally to the decrease in flux experienced by another photon due to scattering effects, as in biological tissue. Advantages of ND-TPA. The near-simultaneous injection of different-energy photons into a specimen poses advantages over the traditional method of same-energy degenerate two photon excitation. These advantages can be explained by the enhanced VSL and, thus, larger absorption cross-section. Brighter Fluorescence. Because of the longer VSL, there is a higher likelihood of promotion of an electron to an excited singlet state by the second photon, compared to degenerate two-photon excitation. Instead of increased Rayleigh or Raman scattering taking place from the virtual state, more electrons in a given excitation plane are likely to promote to the excited state, followed by larger rates of emission to the ground state. This larger amount of emission events translates to higher fluorescence intensity in a sample at a given spot, increasing signal-to-noise ratio and decreasing the effects of out-of-focus excitation. Depth of Penetration. Even though same-energy two-photon excitation microscopy provides a better depth of penetration than confocal microscopy, it is still confined to ~1mm depths, which is approximately the transport mean free path of biological tissue. Due to the increased VSL, absorption cross-section, and thus fluorescent intensity, ND-TPA provides a larger depth of penetration than degenerate two-photon microscopy, allowing for fluorescent emission deeper in a sample. At every depth location in a sample, ND-TPA provides brighter fluorescence than traditional two-photon absorption, thus allowing for visualizable fluorescence at depths impossible for traditional two-photon microscopy. Due to the high scattering nature of higher energy photons, and the ability of beam fluxes to be combined multiplicatively, beam fluxes can simply be tuned so that lower-energy photons are administered at a higher fluence rate, thus accounting for the loss in higher-energy photons at larger depths within a sample. Longer Excitation Wavelength Range. The combination of two photons of different wavelengths allows for a larger absorption cross-section, thereby accommodating for larger ranges of excitation than degenerate two-photon microscopy. Traditional degenerate two-photon microscopy is confined to photons with energies that, when doubled, account for the energy difference between ground and excited electron states, however, in biological tissue, this confines degenerate two-photon excitation to the near-infrared wavelength optical window, due to enhanced depth of penetration and energy requirements. With ND-TPA, virtually any wavelength may be used, so long as the second photon accounts for the remaining energy difference between the virtual state and the excited singlet state. This combined two-photon excitation has been demonstrated for fluorophores requiring equivalent one-photon excitation wavelengths of 266nm and 1013nm. Enhanced Spatial Resolution. When combined with degenerate two-photon absorption in microscopy settings, ND-TPA can provide better spatial resolution and axial sectioning. 
Because of the requirement for laser beams to be approximately synchronized in ND-TPA, a desynchronization event can "turn off" entire fluorophores, while the degenerately-excited fluorophores remain "on". This allows for the overlay of degenerate and non-degenerate two-photon microscopy images to pinpoint locations of specific structures like genes or sub cellular components. The use of further optical or reconstructive additions, like a shaded ring filter or beam-shaping techniques, enables further resolution optimization. Development of a Non-degenerate Two Photon Microscope. To implement non-degenerate two photon excitation microscopy, two photon pulses of differing energies must be synchronized to interact with a specimen at the sample plane near-simultaneously. Due to the enhanced absorption cross section and VSL, more time is possible for excitation to occur, and thus perfect synchronization is unnecessary. However, close synchronization of pulses, within ~10ns is preferred. For this and other logistical reasons, a single Ti:Sapphire femtosecond laser is used to create a single laser pulse train. After passing through a half wave plate to rotate the plane of polarization, the laser beam passes through a polarization beam splitter and is separated into two beams. One beam passes into an optical parametric oscillator (OPO), which splits the incoming high frequency beam into lower frequency components, and the resulting beam is of longer wavelength and lower energy than the incoming beam; this beam also passes through an automated defocuser. The second beam is redirected through a delay line, with mirrors optimized to near-perfectly synchronize the higher energy laser pulse with the lower energy output of the OPO. Both beams pass through half wave plates once again before meeting at a dichroic mirror which allows preselected low wavelength laser beams to pass through, while high wavelength beams are reflected orthogonally to meet and mix with the lower-wavelength beam. This mixed beam, consisting of two different-wavelength beams, passes through another dichroic mirror before being focused by an objective onto the specimen. The resulting incoherent fluorescence is partially redirected through the objective and reflected off the second dichroic mirror into another dichroic mirror, which again reflects the beam into a band-pass filter before it passes into a photomultiplier tube (PMT). This signal is then imaged. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "I(z) = I_0 e^{-\\alpha z} " }, { "math_id": 1, "text": "I(z) = A \\, I_\\mathrm{NIR}(0) \\, I_\\mathrm{SWIR}(0) \\, e^{-z (\\alpha_\\mathrm{NIR} + \\, \\alpha_\\mathrm{SWIR})} \\," }, { "math_id": 2, "text": " F = K \\iint \\sigma \\, I_\\mathrm{NIR}(t,r,z) \\, I_\\mathrm{SWIR}(t-t_0,r-r_0,z) \\, dVdt" } ]
https://en.wikipedia.org/wiki?curid=66279104
662889
Persistent data structure
Data structure that always preserves the previous version of itself when it is modified In computing, a persistent data structure or non-ephemeral data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure. The term was introduced in Driscoll, Sarnak, Sleator, and Tarjan's 1986 article. A data structure is partially persistent if all versions can be accessed but only the newest version can be modified. The data structure is fully persistent if every version can be both accessed and modified. If there is also a meld or merge operation that can create a new version from two previous versions, the data structure is called confluently persistent. Structures that are not persistent are called "ephemeral". These types of data structures are particularly common in logical and functional programming, as languages in those paradigms discourage (or fully forbid) the use of mutable data. Partial versus full persistence. In the partial persistence model, a programmer may query any previous version of a data structure, but may only update the latest version. This implies a linear ordering among the versions of the data structure. In the fully persistent model, both updates and queries are allowed on any version of the data structure. In some cases the performance characteristics of querying or updating older versions of a data structure may be allowed to degrade, as is true with the rope data structure. In addition, a data structure can be referred to as confluently persistent if, in addition to being fully persistent, two versions of the same data structure can be combined to form a new version which is still fully persistent. Partially persistent data structure. A partially persistent data structure lets the user query any version of the structure but update only the latest version. An ephemeral data structure can be converted to a partially persistent data structure using a few techniques. One such technique uses a randomized version of a van Emde Boas tree, created using dynamic perfect hashing. This data structure is created as follows: The size of this data structure is bounded by the number of elements stored in the structure, that is, O(m). The insertion of a new maximal element is done in constant O(1) expected and amortized time. Finally, a query to find an element can be done in this structure in O(log log n) worst-case time. Techniques for preserving previous versions. Copy-on-write. One method for creating a persistent data structure is to use a platform-provided ephemeral data structure, such as an array, to store the data, and to copy the entirety of that data structure using copy-on-write semantics for any updates to it. This is an inefficient technique because the entire backing data structure must be copied for each write, leading to worst-case O(n·m) performance characteristics for m modifications of an array of size n. Fat node. The fat node method is to record all changes made to node fields in the nodes themselves, without erasing old values of the fields. This requires that nodes be allowed to become arbitrarily “fat”. In other words, each fat node contains the same information and pointer fields as an ephemeral node, along with space for an arbitrary number of extra field values. 
Each extra field value has an associated field name and a version stamp which indicates the version in which the named field was changed to have the specified value. In addition, each fat node has its own version stamp, indicating the version in which the node was created. The only purpose of nodes having version stamps is to make sure that each node only contains one value per field name per version. In order to navigate through the structure, each original field value in a node has a version stamp of zero. Complexity of fat node. The fat node method requires O(1) space for every modification: just store the new data. Each modification takes O(1) additional time to store the modification at the end of the modification history. This is an amortized time bound, assuming the modification history is stored in a growable array. At access time, the right version at each node must be found as the structure is traversed. If "m" modifications were to be made, then each access operation would have O(log m) slowdown resulting from the cost of finding the nearest modification in the array. Path copying. With the path copying method, a copy of all nodes is made on the path to any node which is about to be modified. These changes must then be cascaded back through the data structure: all nodes that pointed to the old node must be modified to point to the new node instead. These modifications cause more cascading changes, and so on, until the root node is reached. Complexity of path copying. With m modifications, this costs O(log m) additive lookup time. Modification time and space are bounded by the size of the longest path in the data structure and the cost of the update in the ephemeral data structure. In a balanced binary search tree without parent pointers the worst-case modification time complexity is O(log n + update cost). However, in a linked list the worst-case modification time complexity is O(n + update cost). A combination. Driscoll, Sarnak, Sleator, and Tarjan devised a way to combine the techniques of fat nodes and path copying, achieving O(1) access slowdown and O(1) modification space and time complexity. In each node, one modification box is stored. This box can hold one modification to the node—either a modification to one of the pointers, or to the node's key, or to some other piece of node-specific data—and a timestamp for when that modification was applied. Initially, every node's modification box is empty. Whenever a node is accessed, the modification box is checked, and its timestamp is compared against the access time. (The access time specifies the version of the data structure being considered.) If the modification box is empty, or the access time is before the modification time, then the modification box is ignored and only the normal part of the node is considered. On the other hand, if the access time is after the modification time, then the value in the modification box is used, overriding that value in the node. Modifying a node works like this. (It is assumed that each modification touches one pointer or similar field.) If the node's modification box is empty, then it is filled with the modification. Otherwise, the modification box is full. A copy of the node is made, but using only the latest values. The modification is performed directly on the new node, without using the modification box. (One of the new node's fields is overwritten and its modification box stays empty.) Finally, this change is cascaded to the node's parent, just like path copying. 
(This may involve filling the parent's modification box, or making a copy of the parent recursively. If the node has no parent (that is, it is the root), the new root is added to a sorted array of roots.) With this algorithm, given any time t, at most one modification box exists in the data structure with time t. Thus, a modification at time t splits the tree into three parts: one part contains the data from before time t, one part contains the data from after time t, and one part was unaffected by the modification. Complexity of the combination. Time and space for modifications require amortized analysis. A modification takes O(1) amortized space and O(1) amortized time. To see why, use a potential function ϕ, where ϕ(T) is the number of full live nodes in T. The live nodes of T are just the nodes that are reachable from the current root at the current time (that is, after the last modification). The full live nodes are the live nodes whose modification boxes are full. Each modification involves some number of copies, say k, followed by 1 change to a modification box. Consider each of the k copies. Each costs O(1) space and time, but decreases the potential function by one. (First, the node to be copied must be full and live, so it contributes to the potential function. The potential function will only drop, however, if the old node isn't reachable in the new tree. But it is known that it isn't reachable in the new tree—the next step in the algorithm will be to modify the node's parent to point at the copy. Finally, it is known that the copy's modification box is empty. Thus, a full live node has been replaced with an empty live node, and ϕ goes down by one.) The final step fills a modification box, which costs O(1) time and increases ϕ by one. Putting it all together, the change in ϕ is Δϕ = 1 − k. Thus, the algorithm takes O(k + Δϕ) = O(1) space and O(k + Δϕ + 1) = O(1) time. Generalized form of persistence. Path copying is one of the simpler methods for achieving persistence in a particular data structure such as a binary search tree, but it is useful to have a general strategy for implementing persistence that works with any given data structure. In order to achieve that, we consider a directed graph G. We assume that each vertex v in G has a constant number c of outgoing edges that are represented by pointers. Each vertex has a label representing the data. We consider that a vertex has a bounded number d of edges leading into it, which we define as inedges(v). We allow the following operations on G: CREATE-NODE, CHANGE-EDGE and CHANGE-LABEL, described below. Any of these operations is performed at a specific time, and the purpose of the persistent graph representation is to be able to access any version of G at any given time. For this purpose we define a table for each vertex v in G. The table contains c columns and formula_0 rows. Each row contains, in addition to the pointers for the outgoing edges, a label which represents the data at the vertex and the time t at which the operation was performed. In addition to that, there is an array inedges(v) that keeps track of all the incoming edges to v. When a table is full, a new table with formula_0 rows can be created. The old table becomes inactive and the new table becomes the active table. CREATE-NODE. A call to CREATE-NODE creates a new table and sets all the references to null. CHANGE-EDGE. If we assume that CHANGE-EDGE(v, i, u) is called, then there are two cases to consider. CHANGE-LABEL. 
It works exactly the same as CHANGE-EDGE except that instead of changing the edge of the vertex, we change the label. Efficiency of the generalized persistent data structure. To analyze the efficiency of the scheme proposed above, we use an argument based on a credit scheme. The credit represents a currency. For example, the credit can be used to pay for a table. The argument states the following: The credit scheme should always satisfy the following invariant: each row of each active table stores one credit, and the table has the same number of credits as the number of rows. Let us confirm that the invariant applies to all three operations CREATE-NODE, CHANGE-EDGE and CHANGE-LABEL. In summary, we conclude that formula_2 calls to CREATE-NODE and formula_3 calls to CHANGE-EDGE result in the creation of formula_4 tables. Since each table has size formula_5 without taking into account the recursive calls, filling in a table requires formula_6, where the additional factor of d comes from updating the inedges at other nodes. Therefore, the amount of work required to complete a sequence of operations is bounded by the number of tables created multiplied by formula_6. Each access operation can be done in formula_7, and there are formula_8 edge and label operations, so these require formula_9. We conclude that there exists a data structure that can complete any sequence of formula_10 CREATE-NODE, CHANGE-EDGE and CHANGE-LABEL operations in formula_11. Applications of persistent data structures. Next element search or point location. One of the useful applications that can be solved efficiently using persistence is the next element search. Assume that there are formula_10 non-intersecting line segments parallel to the x-axis. We want to build a data structure that can be queried with a point formula_12 and return the segment above formula_12 (if any). We will start by solving the next element search using the naïve method, then show how to solve it using the persistent data structure method. Naïve method. We start with a vertical line segment that starts off at infinity and sweep the line segments from left to right. We pause every time we encounter an endpoint of one of these segments. The vertical lines split the plane into vertical strips. If there are formula_10 line segments, we get formula_13 vertical strips, since each segment has formula_14 endpoints. No segment begins and ends inside a strip: every segment either does not touch a strip or completely crosses it. Within a strip, we can think of the segments as objects in sorted order from top to bottom. What we care about is where the query point fits in this order. We sort the endpoints of the segments by their formula_15 coordinate. For each strip formula_16, we store the subset of segments that cross formula_16 in a dictionary. As the vertical line sweeps across the segments, whenever it passes the left endpoint of a segment we add that segment to the dictionary, and when it passes the right endpoint of the segment, we remove it from the dictionary. At every endpoint, we save a copy of the dictionary, and we store all the copies sorted by the formula_15 coordinates. Thus we have a data structure that can answer any query. To find the segment above a point formula_12, we can look at the formula_15 coordinate of formula_12 to know which copy or strip it belongs to. Then we can look at the formula_17 coordinate to find the segment above it. 
Thus we need two binary searches, one on the formula_15 coordinate to find the strip or copy, and another on the formula_17 coordinate to find the segment above the point. Thus the query time is formula_18. In this data structure, space is the issue: if we assume that the segments are arranged so that every segment starts before any other segment ends, then the space required to build the structure using the naïve method is formula_19. Let us see how we can build another persistent data structure with the same query time but better space. Persistent data structure method. Notice that what is really expensive in the naïve method is that whenever we move from one strip to the next, we take a snapshot of whatever data structure we are using to keep the segments in sorted order. Notice also that once we have the segments that intersect formula_16, when we move to formula_20 either one segment leaves or one segment enters. If the difference between what is in formula_16 and what is in formula_20 is only one insertion or deletion, then it is not a good idea to copy everything from formula_16 to formula_20. The trick is that since each copy differs from the previous one by only one insertion or deletion, we need to copy only the parts that change. Let us assume that we have a tree rooted at formula_21. When we insert a key formula_22 into the tree, we create a new leaf containing formula_22. Performing rotations to rebalance the tree will only modify the nodes on the path from formula_22 to formula_21. Before inserting the key formula_22 into the tree, we copy all the nodes on the path from formula_22 to formula_21. Now we have two versions of the tree: the original one, which does not contain formula_22, and the new tree, which contains formula_22 and whose root is a copy of the root of formula_21. Since copying the path from formula_22 to formula_21 does not increase the insertion time by more than a constant factor, insertion in the persistent data structure takes formula_18 time. For deletion, we need to find which nodes will be affected by the deletion. For each node formula_23 affected by the deletion, we copy the path from the root to formula_23. This provides a new tree whose root is a copy of the root of the original tree. Then we perform the deletion on the new tree. We end up with two versions of the tree: the original one, which contains formula_22, and the new one, which does not contain formula_22. Since any deletion only modifies the path from the root to formula_23, and any appropriate deletion algorithm runs in formula_18, deletion in the persistent data structure also takes formula_18. Every sequence of insertions and deletions causes the creation of a sequence of dictionaries, or versions, or trees formula_24, where each formula_25 is the result of the operations performed so far. If each formula_25 contains formula_8 elements, then a search in each formula_25 takes formula_26. Using this persistent data structure we can solve the next element search problem in formula_18 query time and formula_27 space instead of formula_19. Source code for an example related to the next element search problem can be found in the Code section below. Examples of persistent data structures. Perhaps the simplest persistent data structure is the singly linked list or "cons"-based list, a simple list of objects formed by each carrying a reference to the next in the list. 
This is persistent because the "tail" of the list can be taken, meaning the last "k" items for some "k", and new nodes can be added in front of it. The tail will not be duplicated, instead becoming shared between both the old list and the new list. So long as the contents of the tail are immutable, this sharing will be invisible to the program. Many common reference-based data structures, such as red–black trees, stacks, and treaps, can easily be adapted to create a persistent version. Some others need slightly more effort, for example: queues, deques, and extensions including min-deques (which have an additional "O"(1) operation "min" returning the minimal element) and random access deques (which have an additional operation of random access with sub-linear, most often logarithmic, complexity). There also exist persistent data structures which use destructive operations, making them impossible to implement efficiently in purely functional languages (like Haskell outside specialized monads like state or IO), but possible in languages like C or Java. These types of data structures can often be avoided with a different design. One primary advantage to using purely persistent data structures is that they often behave better in multi-threaded environments. Linked lists. Singly linked lists are the bread-and-butter data structure in functional languages. Some ML-derived languages, like Haskell, are purely functional: once a node in the list has been allocated, it cannot be modified, only copied, referenced, or destroyed by the garbage collector when nothing refers to it. (Note that ML itself is not purely functional, but supports a non-destructive subset of list operations; the same is true of the Lisp (LISt Processing) functional language dialects like Scheme and Racket.) Consider the two lists: xs = [0, 1, 2] ys = [3, 4, 5] These would be represented in memory as chains of nodes, where a circle indicates a node in the list (the outgoing arrow representing the second element of the node, which is a pointer to another node). Now concatenating the two lists: zs = xs ++ ys results in a memory structure in which the nodes in list xs have been copied, but the nodes in ys are shared. As a result, the original lists (xs and ys) persist and have not been modified. The reason for the copy is that the last node in xs (the node containing the original value 2) cannot be modified to point to the start of ys, because that would change the value of xs. Trees. Consider a binary search tree, where every node in the tree has the recursive invariant that all subnodes contained in the left subtree have a value that is less than or equal to the value stored in the node, and subnodes contained in the right subtree have a value that is greater than the value stored in the node. For instance, the set of data xs = [a, b, c, d, f, g, h] might be represented by a binary search tree. A function which inserts data into the binary tree and maintains the invariant is:
fun insert (x, E) = T (E, x, E)
  | insert (x, s as T (a, y, b)) =
      if x < y then T (insert (x, a), y, b)
      else if x > y then T (a, y, insert (x, b))
      else s
After executing ys = insert ("e", xs), a new version of the tree is produced. Notice two points: first, the original tree (xs) persists. Second, many common nodes are shared between the old tree and the new tree. 
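The same path-copying insertion can be sketched in Python (an illustrative sketch only, not taken from any of the libraries or repositories mentioned in this article, and without the rebalancing a production implementation would perform); only the nodes on the search path are rebuilt, so the old and new versions share every untouched subtree:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    left: Optional["Node"]
    key: str
    right: Optional["Node"]

def insert(node: Optional[Node], key: str) -> Node:
    # Return the root of a new version containing key; the argument
    # version is left untouched, and untouched subtrees are shared.
    if node is None:
        return Node(None, key, None)
    if key < node.key:
        return Node(insert(node.left, key), node.key, node.right)
    if key > node.key:
        return Node(node.left, node.key, insert(node.right, key))
    return node  # key already present: reuse the existing node

xs = None
for k in ["d", "b", "f", "a", "c", "g", "h"]:
    xs = insert(xs, k)
ys = insert(xs, "e")      # new version containing "e"; xs is unchanged
assert ys is not xs and ys.left is xs.left  # the left subtree is shared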
Such persistence and sharing is difficult to manage without some form of garbage collection (GC) to automatically free up nodes which have no live references, and this is why GC is a feature commonly found in functional programming languages. Code. GitHub repo containing implementations of persistent BSTs using Fat Nodes, Copy-on-Write, and Path Copying Techniques. To use the persistent BST implementations, simply clone the repository and follow the instructions provided in the README file. Link: https://github.com/DesaultierMAKK/PersistentBST Persistent hash array mapped trie. A persistent hash array mapped trie is a specialized variant of a hash array mapped trie that will preserve previous versions of itself on any updates. It is often used to implement a general-purpose persistent map data structure. Hash array mapped tries were originally described in a 2001 paper by Phil Bagwell entitled "Ideal Hash Trees". This paper presented a mutable hash table where "Insert, search and delete times are small and constant, independent of key set size, operations are O(1). Small worst-case times for insert, search and removal operations can be guaranteed and misses cost less than successful searches". This data structure was then modified by Rich Hickey to be fully persistent for use in the Clojure programming language. Conceptually, hash array mapped tries work similarly to any generic tree in that they store nodes hierarchically and retrieve them by following a path down to a particular element. The key difference is that hash array mapped tries first use a hash function to transform their lookup key into a (usually 32- or 64-bit) integer. The path down the tree is then determined by using slices of the binary representation of that integer to index into a sparse array at each level of the tree. The leaf nodes of the tree behave similarly to the buckets used to construct hash tables and may or may not contain multiple candidates depending on hash collisions. Most implementations of persistent hash array mapped tries use a branching factor of 32 in their implementation. This means that in practice while insertions, deletions, and lookups into a persistent hash array mapped trie have a computational complexity of "O"(log "n"), for most applications they are effectively constant time, as it would require an extremely large number of entries to make any operation take more than a dozen steps. Usage in programming languages. Haskell. Haskell is a pure functional language and therefore does not allow for mutation. Therefore, all data structures in the language are persistent, as it is impossible to not preserve the previous state of a data structure with functional semantics. This is because any change to a data structure that would render previous versions of a data structure invalid would violate referential transparency. In its standard library, Haskell has efficient persistent implementations of linked lists, Maps (implemented as size-balanced trees), and Sets, among others. Clojure. Like many programming languages in the Lisp family, Clojure contains an implementation of a linked list, but unlike other dialects its implementation of a linked list has enforced persistence instead of being persistent by convention. Clojure also has efficient implementations of persistent vectors, maps, and sets based on persistent hash array mapped tries. These data structures implement the mandatory read-only parts of the Java collections framework. 
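The hash-slicing step described above can be illustrated with a short sketch (a simplified illustration only; it is not the implementation used by Clojure or any other library, and it ignores collisions and the bitmap-compressed child arrays that real implementations use):

def hamt_path(key, levels=7, bits=5):
    # Split a 32-bit view of the key's hash into 5-bit slices, one per
    # level of a trie with branching factor 2**bits = 32. Each slice is
    # the index of the child to follow at that level.
    h = hash(key) & 0xFFFFFFFF
    mask = (1 << bits) - 1
    return [(h >> (bits * level)) & mask for level in range(levels)]

# Example: the child indices followed at each level for one key.
print(hamt_path("example-key"))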
The designers of the Clojure language advocate the use of persistent data structures over mutable data structures because they have value semantics, which makes them freely shareable between threads via cheap aliases, easy to fabricate, and language independent. These data structures form the basis of Clojure's support for parallel computing since they allow for easy retries of operations to sidestep data races and atomic compare-and-swap semantics. Elm. The Elm programming language is purely functional like Haskell, which makes all of its data structures persistent by necessity. It contains persistent implementations of linked lists as well as persistent arrays, dictionaries, and sets. Elm uses a custom virtual DOM implementation that takes advantage of the persistent nature of Elm data. As of 2016 it was reported by the developers of Elm that this virtual DOM allows the Elm language to render HTML faster than the popular JavaScript frameworks React, Ember, and Angular. Java. The Java programming language is not particularly functional. Despite this, the core JDK package java.util.concurrent includes CopyOnWriteArrayList and CopyOnWriteArraySet, which are persistent structures, implemented using copy-on-write techniques. The usual concurrent map implementation in Java, ConcurrentHashMap, is not persistent, however. Fully persistent collections are available in third-party libraries or other JVM languages. JavaScript. The popular JavaScript frontend framework React is frequently used along with a state management system that implements the Flux architecture, a popular implementation of which is the JavaScript library Redux. The Redux library is inspired by the state management pattern used in the Elm programming language, meaning that it mandates that users treat all data as persistent. As a result, the Redux project recommends that in certain cases users make use of libraries for enforced and efficient persistent data structures. This reportedly allows for greater performance than when comparing or making copies of regular JavaScript objects. One such library of persistent data structures, Immutable.js, is based on the data structures made available and popularized by Clojure and Scala. It is mentioned by the documentation of Redux as being one of the possible libraries that can provide enforced immutability. Mori.js brings data structures similar to those in Clojure to JavaScript. Immer.js brings an interesting approach where one "creates the next immutable state by mutating the current one". Immer.js uses native JavaScript objects rather than efficient persistent data structures, which might cause performance issues when the data size is big. Prolog. Prolog terms are naturally immutable, and therefore data structures are typically persistent. Their performance depends on sharing and garbage collection offered by the Prolog system. Extensions to non-ground Prolog terms are not always feasible because of search space explosion. Delayed goals might mitigate the problem. Some Prolog systems nevertheless do provide destructive operations like setarg/3, which might come in different flavors, with/without copying and with/without backtracking of the state change. There are cases where setarg/3 is used to provide a new declarative layer, such as a constraint solver. Scala. The Scala programming language promotes the use of persistent data structures for implementing programs using "Object-Functional Style". 
Scala contains implementations of many persistent data structures, including linked lists and red–black trees, as well as persistent hash array mapped tries as introduced in Clojure. Garbage collection. Because persistent data structures are often implemented in such a way that successive versions of a data structure share underlying memory, ergonomic use of such data structures generally requires some form of automatic garbage collection, such as reference counting or mark-and-sweep. On some platforms where persistent data structures are used, it is an option not to use garbage collection; while this can lead to memory leaks, it can in some cases have a positive impact on the overall performance of an application. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d+1" }, { "math_id": 1, "text": "d+2" }, { "math_id": 2, "text": "n_{1}" }, { "math_id": 3, "text": "n_{2}" }, { "math_id": 4, "text": "2\\cdot n_{1}+n_{2}" }, { "math_id": 5, "text": "O(d)" }, { "math_id": 6, "text": "O(d^{2})" }, { "math_id": 7, "text": "O(Log(d))" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "m\\cdot O(Log(d))" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "O(n\\cdot d^{2})+m\\cdot O(Log(d))" }, { "math_id": 12, "text": "p" }, { "math_id": 13, "text": "2\\cdot n+1" }, { "math_id": 14, "text": "2" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "s_{i}" }, { "math_id": 17, "text": "y" }, { "math_id": 18, "text": "O(Log(n))" }, { "math_id": 19, "text": "O(n^{2})" }, { "math_id": 20, "text": "s_{i+1}" }, { "math_id": 21, "text": "T" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "v" }, { "math_id": 24, "text": "S_{1}, S_{2}, \\dots S_{i}" }, { "math_id": 25, "text": "S_{i}" }, { "math_id": 26, "text": "O(Log(m))" }, { "math_id": 27, "text": "O(n\\cdot Log(n))" } ]
https://en.wikipedia.org/wiki?curid=662889
66294
Reinforcement learning
Field of machine learning &lt;templatestyles src="Machine learning/styles.css"/&gt; Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent ought to take actions in a dynamic environment in order to maximize the cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Q-learning at its simplest stores data in tables. This approach falters with increasing numbers of states/actions since the likelihood of the agent visiting a particular state and performing a particular action is increasingly small. Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge) with the goal of maximizing the long-term reward, whose feedback might be incomplete or delayed. The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the Markov decision process and they target large Markov decision processes where exact methods become infeasible. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Introduction. Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called "approximate dynamic programming," or "neuro-dynamic programming." The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. Basic reinforcement learning is modeled as a Markov decision process consisting of: a set of environment and agent states (the state space), formula_0; a set of actions (the action space), formula_1, available to the agent; the probability formula_2 of transitioning at time formula_3 from state formula_4 to state formula_5 under action formula_6; and the immediate reward formula_7 received after the transition from formula_4 to formula_5 under action formula_6. The purpose of reinforcement learning is for the agent to learn an optimal, or nearly optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. (See Reinforcement.) For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals can learn to engage in behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning. A basic reinforcement learning AI agent interacts with its environment in discrete time steps. At each time t, the agent receives the current state formula_8 and reward formula_9. It then chooses an action formula_10 from the set of available actions, which is subsequently sent to the environment. 
The environment moves to a new state formula_11 and the reward formula_12 associated with the "transition" formula_13 is determined. The goal of a reinforcement learning agent is to learn a "policy": formula_14, formula_15 that maximizes the expected cumulative reward. Formulating the problem as a Markov decision process assumes the agent directly observes the current environmental state; in this case the problem is said to have "full observability". If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have "partial observability", and formally the problem must be formulated as a partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed. When the agent's performance is compared to that of an agent that acts optimally, the difference in performance gives rise to the notion of "regret". In order to act near optimally, the agent must reason about the long-term consequences of its actions (i.e., maximize future income), although the immediate reward associated with this might be negative. Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including energy storage operation, robot control, photovoltaic generator dispatch, backgammon, checkers, Go (AlphaGo), and autonomous driving systems. Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. Thanks to these two key components, reinforcement learning can be used in large environments in the following situations: when a model of the environment is known but an analytic solution is not available; when only a simulation model of the environment is given; or when the only way to collect information about the environment is to interact with it. The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems. Exploration. The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space Markov decision processes in Burnetas and Katehakis (1997). Reinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical. One such method is formula_16-greedy, where formula_17 is a parameter controlling the amount of exploration vs. exploitation. With probability formula_18, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability formula_16, exploration is chosen, and the action is chosen uniformly at random. formula_16 is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics. 
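As a minimal illustration of the formula_16-greedy rule just described, the following sketch chooses an action index from a table of estimated action values (the function and its inputs are illustrative only and are not drawn from any particular reinforcement learning library):

import random

def epsilon_greedy(q_values, epsilon):
    # With probability epsilon, explore: pick an action uniformly at random.
    # Otherwise, exploit: pick an action with the highest estimated value,
    # breaking ties uniformly at random.
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    best = max(q_values)
    return random.choice([i for i, q in enumerate(q_values) if q == best])

# Example: three actions with estimated values and 10% exploration.
action = epsilon_greedy([0.2, 0.5, 0.5], epsilon=0.1)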
Algorithms for control learning. Even if the issue of exploration is disregarded and even if the state were observable (assumed hereafter), the problem remains to use past experience to find out which actions lead to higher cumulative rewards. Criterion of optimality. Policy. The agent's action selection is modeled as a map called "policy": formula_19 formula_20 The policy map gives the probability of taking action formula_6 when in state formula_4. There are also deterministic policies. State-value function. The state-value function formula_21 is defined as the "expected discounted return" starting from state formula_4, i.e. formula_22, and successively following policy formula_23. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state. formula_24 where the random variable formula_25 denotes the discounted return, and is defined as the sum of future discounted rewards: formula_26 where formula_12 is the reward for transitioning from state formula_8 to formula_11 and formula_27 is the discount rate. formula_28 is less than 1, so rewards in the distant future are weighted less than rewards in the immediate future. The algorithm must find a policy with maximum expected discounted return. From the theory of Markov decision processes it is known that, without loss of generality, the search can be restricted to the set of so-called "stationary" policies. A policy is "stationary" if the action-distribution returned by it depends only on the last state visited (from the agent's observation history). The search can be further restricted to "deterministic" stationary policies. A "deterministic stationary" policy deterministically selects actions based on the current state. Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality. Brute force. The brute force approach entails two steps: sampling returns while following each possible policy, and then choosing the policy with the largest expected discounted return. One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the discounted return of each policy. These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search. Value function. Value function approaches attempt to find a policy that maximizes the discounted return by maintaining a set of estimates of expected discounted returns formula_29 for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one). These methods rely on the theory of Markov decision processes, where optimality is defined in a sense stronger than the one above: A policy is optimal if it achieves the best expected discounted return from "any" initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies. To define optimality in a formal manner, define the state-value of a policy formula_23 by formula_30 where formula_25 stands for the discounted return associated with following formula_23 from the initial state formula_4. Defining formula_31 as the maximum possible state-value of formula_32, where formula_23 is allowed to change, formula_33 A policy that achieves these optimal state-values in each state is called "optimal". 
Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since formula_34, where formula_4 is a state randomly sampled from the distribution formula_35 of initial states (so formula_36). Although state-values suffice to define optimality, it is useful to define action-values. Given a state formula_4, an action formula_6 and a policy formula_23, the action-value of the pair formula_37 under formula_23 is defined by formula_38 where formula_25 now stands for the random discounted return associated with first taking action formula_6 in state formula_4 and following formula_23 thereafter. The theory of Markov decision processes states that if formula_39 is an optimal policy, we act optimally (take the optimal action) by choosing the action from formula_40 with the highest action-value at each state, formula_4. The "action-value function" of such an optimal policy (formula_41) is called the "optimal action-value function" and is commonly denoted by formula_42. In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally. Assuming full knowledge of the Markov decision process, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions formula_43 (formula_44) that converge to formula_42. Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) Markov decision processes. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces. Monte Carlo methods. Monte Carlo methods can be used in an algorithm that mimics policy iteration. Policy iteration consists of two steps: "policy evaluation" and "policy improvement". Monte Carlo is used in the policy evaluation step. In this step, given a stationary, deterministic policy formula_23, the goal is to compute the function values formula_45 (or a good approximation to them) for all state-action pairs formula_37. Assume (for simplicity) that the Markov decision process is finite, that sufficient memory is available to accommodate the action-values, and that the problem is episodic, with a new episode starting from some random initial state after each one ends. Then, the estimate of the value of a given state-action pair formula_37 can be computed by averaging the sampled returns that originated from formula_37 over time. Given sufficient time, this procedure can thus construct a precise estimate formula_46 of the action-value function formula_47. This finishes the description of the policy evaluation step. In the policy improvement step, the next policy is obtained by computing a "greedy" policy with respect to formula_46: Given a state formula_4, this new policy returns an action that maximizes formula_48. In practice, lazy evaluation can defer the computation of the maximizing actions to when they are needed. Problems with this procedure include: (1) the procedure may spend too much time evaluating a suboptimal policy; (2) it uses samples inefficiently, in that a long trajectory improves the estimate only of the single state-action pair that started the trajectory; (3) when the returns along the trajectories have high variance, convergence is slow; (4) it works in episodic problems only; and (5) it works in small, finite Markov decision processes only. Temporal difference methods. The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of "generalized policy iteration" algorithms. Many "actor-critic" methods belong to this category. 
The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods, which are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue. Another problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called formula_49 parameter formula_50 that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in mitigating this issue. Function approximation methods. In order to address the fifth issue, "function approximation methods" are used. "Linear function approximation" starts with a mapping formula_51 that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair formula_37 are obtained by linearly combining the components of formula_52 with some "weights" formula_53: formula_54 The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored. Value iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants, including deep Q-learning methods, in which a neural network is used to represent Q, with various applications in stochastic search problems. The problem with using action-values is that they may need highly precise estimates of the competing action values that can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency. Direct policy search. An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods. Gradient-based methods ("policy gradient methods") start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector formula_53, let formula_55 denote the policy associated to formula_53. Defining the performance function by formula_56, under mild conditions this function will be differentiable as a function of the parameter vector formula_53. If the gradient of formula_57 were known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. 
Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams' REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature). A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum. Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, "actor–critic methods" have been proposed and performed well on various problems. Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search). Model-based algorithms. Finally, all of the above methods can be combined with algorithms that first learn a model of the Markov Decision Process, the probability of each next state given an action taken from an existing state. For instance, the Dyna algorithm learns a model from experience, and uses that to provide more modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to use of non-parametric models, such as when the transitions are simply stored and 'replayed' to the learning algorithm. Model-based methods can be more computationally intensive than model-free approaches, and their utility can be limited by the extent to which the Markov Decision Process can be learnt. There are other ways to use models than to update a value function. For instance, in model predictive control the model is used to update the behavior directly. Theory. Both the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known. Efficient exploration of Markov decision processes is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations. For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation). Research. Research topics include: Comparison of key algorithms. Associative reinforcement learning. Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment. Deep reinforcement learning. This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning. Adversarial deep reinforcement learning. Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. 
In this research area, some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, in the most recent studies it has been shown that these proposed solutions are far from providing an accurate representation of current vulnerabilities of deep reinforcement learning policies. Fuzzy reinforcement learning. By introducing fuzzy inference in reinforcement learning, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF-THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language. Extending FRL with Fuzzy Rule Interpolation allows the use of reduced-size sparse fuzzy rule bases to emphasize cardinal rules (most important state-action values). Inverse reinforcement learning. In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal. One popular IRL paradigm is named maximum entropy inverse reinforcement learning (MaxEnt IRL). MaxEnt IRL estimates the parameters of a linear model of the reward function by maximizing the entropy of the probability distribution of observed trajectories subject to constraints related to matching expected feature counts. Recently, it has been shown that MaxEnt IRL is a particular case of a more general framework named random utility inverse reinforcement learning (RU-IRL). RU-IRL is based on random utility theory and Markov decision processes. While prior IRL approaches assume that the apparent random behavior of an observed agent is due to it following a random policy, RU-IRL assumes that the observed agent follows a deterministic policy but randomness in observed behavior is due to the fact that an observer only has partial access to the features the observed agent uses in decision making. The utility function is modeled as a random variable to account for the ignorance of the observer regarding the features the observed agent actually considers in its utility function. Safe reinforcement learning. Safe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes. An alternative approach is risk-averse reinforcement learning, where instead of the "expected" return, a "risk-measure" of the return is optimized, such as the Conditional Value at Risk (CVaR). In addition to mitigating risk, the CVaR objective increases robustness to model uncertainties. However, CVaR optimization in risk-averse RL requires special care to prevent gradient bias and blindness to success. Statistical comparison of reinforcement learning algorithms. Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems. To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since the performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other. After the training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. 
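A minimal sketch of such a comparison is shown below (the return values are placeholder numbers, and Welch's t-test from SciPy is just one of the standard tools discussed next):

from scipy import stats

# Hypothetical episodic returns collected from two trained agents on the
# same set of test episodes (placeholder numbers for illustration).
returns_a = [12.0, 9.5, 11.2, 13.1, 10.4, 12.6]
returns_b = [10.1, 8.7, 9.9, 11.0, 9.2, 10.5]

# Welch's t-test (does not assume equal variances) on the episodic returns.
t_stat, p_value = stats.ttest_ind(returns_a, returns_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")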
Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the t-test and the permutation test. This requires accumulating all the rewards within an episode into a single number, the episodic return. However, this causes a loss of information, as different time-steps are averaged together, possibly with different levels of noise. Whenever the noise level varies across the episode, the statistical power can be improved significantly by weighting the rewards according to their estimated noise. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{S}" }, { "math_id": 1, "text": "\\mathcal{A}" }, { "math_id": 2, "text": "P_a(s,s')=\\Pr(S_{t+1}=s'\\mid S_t=s, A_t=a)" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": "s'" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "R_a(s,s')" }, { "math_id": 8, "text": "S_t" }, { "math_id": 9, "text": "R_t" }, { "math_id": 10, "text": "A_t" }, { "math_id": 11, "text": "S_{t+1}" }, { "math_id": 12, "text": "R_{t+1}" }, { "math_id": 13, "text": "(S_t,A_t,S_{t+1})" }, { "math_id": 14, "text": "\\pi: \\mathcal{S} \\times \\mathcal{A} \\rightarrow [0,1] " }, { "math_id": 15, "text": "\\pi(s,a) = \\Pr(A_t = a\\mid S_t =s)" }, { "math_id": 16, "text": "\\varepsilon" }, { "math_id": 17, "text": "0 < \\varepsilon < 1" }, { "math_id": 18, "text": "1-\\varepsilon" }, { "math_id": 19, "text": "\\pi: \\mathcal{A} \\times \\mathcal{S} \\rightarrow [0,1]" }, { "math_id": 20, "text": "\\pi(a,s) = \\Pr(A_t = a \\mid S_t = s)" }, { "math_id": 21, "text": "V_\\pi(s)" }, { "math_id": 22, "text": "S_0 = s" }, { "math_id": 23, "text": "\\pi" }, { "math_id": 24, "text": "V_\\pi(s) = \\operatorname \\mathbb{E}[G\\mid S_0 = s] = \\operatorname \\mathbb{E}\\left[\\sum_{t=0}^\\infty \\gamma^t R_{t+1}\\mid S_0 = s\\right]," }, { "math_id": 25, "text": "G" }, { "math_id": 26, "text": "G=\\sum_{t=0}^\\infty \\gamma^t R_{t+1}=R_1 + \\gamma R_2 + \\gamma^2 R_3 + \\dots," }, { "math_id": 27, "text": "0 \\le \\gamma<1" }, { "math_id": 28, "text": "\\gamma" }, { "math_id": 29, "text": "\\operatorname \\mathbb{E}[G]" }, { "math_id": 30, "text": " V^{\\pi} (s) = \\operatorname \\mathbb{E}[G\\mid s,\\pi]," }, { "math_id": 31, "text": "V^*(s)" }, { "math_id": 32, "text": "V^\\pi(s)" }, { "math_id": 33, "text": "V^*(s) = \\max_\\pi V^\\pi(s)." }, { "math_id": 34, "text": "V^*(s) = \\max_\\pi \\mathbb{E}[G\\mid s,\\pi]" }, { "math_id": 35, "text": "\\mu" }, { "math_id": 36, "text": "\\mu(s) = \\Pr(S_0 = s)" }, { "math_id": 37, "text": "(s,a)" }, { "math_id": 38, "text": "Q^\\pi(s,a) = \\operatorname \\mathbb{E}[G\\mid s,a,\\pi],\\," }, { "math_id": 39, "text": "\\pi^*" }, { "math_id": 40, "text": "Q^{\\pi^*}(s,\\cdot)" }, { "math_id": 41, "text": "Q^{\\pi^*}" }, { "math_id": 42, "text": "Q^*" }, { "math_id": 43, "text": "Q_k" }, { "math_id": 44, "text": "k=0,1,2,\\ldots" }, { "math_id": 45, "text": "Q^\\pi(s,a)" }, { "math_id": 46, "text": "Q" }, { "math_id": 47, "text": "Q^\\pi" }, { "math_id": 48, "text": "Q(s,\\cdot)" }, { "math_id": 49, "text": "\\lambda" }, { "math_id": 50, "text": "(0\\le \\lambda\\le 1)" }, { "math_id": 51, "text": "\\phi" }, { "math_id": 52, "text": "\\phi(s,a)" }, { "math_id": 53, "text": "\\theta" }, { "math_id": 54, "text": "Q(s,a) = \\sum_{i=1}^d \\theta_i \\phi_i(s,a)." }, { "math_id": 55, "text": "\\pi_\\theta" }, { "math_id": 56, "text": "\\rho(\\theta) = \\rho^{\\pi_\\theta}" }, { "math_id": 57, "text": "\\rho" } ]
https://en.wikipedia.org/wiki?curid=66294
66294087
Furry's theorem
Theorem in quantum physics In quantum electrodynamics, Furry's theorem states that if a Feynman diagram consists of a closed loop of fermion lines with an odd number of vertices, its contribution to the amplitude vanishes. As a corollary, a single photon cannot arise from the vacuum or be absorbed by it. The theorem was first derived by Wendell H. Furry in 1937, as a direct consequence of the conservation of energy and charge conjugation symmetry. Theory. Quantum electrodynamics has a number of symmetries, one of them being the discrete symmetry of charge conjugation. This acts on fields through a unitary charge conjugation operator formula_0 which anticommutes with the photon field formula_1 as formula_2, while leaving the vacuum state invariant formula_3. Considering the simplest case of the correlation function of a single photon operator gives formula_4 so this correlation function must vanish. For formula_5 photon operators, this argument shows that under charge conjugation the correlation function picks up a factor of formula_6 and thus vanishes when formula_5 is odd. More generally, since the charge conjugation operator also anticommutes with the vector current formula_7, Furry's theorem states that the correlation function of any odd number of on-shell or off-shell photon fields and/or currents must vanish in quantum electrodynamics. Since the theorem holds at the non-perturbative level, it must also hold at each order in perturbation theory. At leading order this means that any fermion loop with an odd number of vertices must have a vanishing contribution to the amplitude. An explicit calculation of these diagrams reveals that this is because the diagram with a fermion going clockwise around the loop cancels against the second diagram, in which the fermion goes anticlockwise. The vanishing of the three-vertex loop can also be seen as a consequence of the renormalizability of quantum electrodynamics, since the bare Lagrangian does not have any counterterms involving three photons. Applications and limitations. Furry's theorem allows for the simplification of a number of amplitude calculations in quantum electrodynamics. In particular, since the result also holds when photons are off-shell, all Feynman diagrams which have at least one internal fermion loop with an odd number of vertices have a vanishing contribution to the amplitude and can be ignored. Historically, the theorem was important in showing that the scattering of photons by an external field, known as Delbrück scattering, does not proceed via a triangle diagram and must instead proceed through a box diagram. In the presence of a background charge density or a nonzero chemical potential, Furry's theorem is broken, although if both of these vanish then it does hold at nonzero as well as zero temperature. It also does not apply in the presence of a strong background magnetic field, where photon splitting interactions formula_8 are allowed, a process that may be detected in astrophysical settings such as around neutron stars. The theorem also does not hold when Weyl fermions are involved in the loops rather than Dirac fermions, resulting in non-vanishing diagrams with an odd number of vertices. In particular, the non-vanishing of the triangle diagram with Weyl fermions gives rise to the chiral anomaly, with the sum of these anomaly contributions having to cancel for a quantum theory to be consistent. While the theorem has been formulated in quantum electrodynamics, a version of it holds more generally. 
For example, while the Standard Model is not charge conjugation invariant due to weak interactions, the fermion loop diagrams with an odd number of photons attached will still vanish, since these are equivalent to purely quantum electrodynamical diagrams. Similarly, any diagram involving such loops as sub-diagrams will also vanish. It is, however, no longer true that all diagrams with an odd number of photons must vanish. For example, relaxing the requirement of charge conjugation and parity invariance of quantum electrodynamics, as occurs when weak interactions are included, allows for a three-photon vertex term. While this term does give rise to formula_9 interactions, they only occur if two of the photons are virtual; searching for such interactions must be done indirectly, such as through bremsstrahlung experiments from electron–positron collisions. In non-Abelian Yang–Mills theories, Furry's theorem does not hold, since these involve noncommuting color charges. For example, the quark triangle diagrams with three external gluons are proportional to two different generator traces formula_10 and so they do not cancel. However, charge conjugation arguments can still be applied in limited cases, such as to deduce that the triangle diagram formula_11 for a color-neutral spin formula_12 boson vanishes.
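The non-cancellation of the two loop orientations in the non-Abelian case can be checked numerically; a minimal sketch for SU(3), where the choice of generator indices 1, 2, 3 is purely illustrative and only the color factors (not the full diagrams) are compared:

```python
import numpy as np

# First three Gell-Mann matrices; the SU(3) generators are T^a = lambda^a / 2.
lam1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
lam2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
lam3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
T1, T2, T3 = lam1 / 2, lam2 / 2, lam3 / 2

tr_abc = np.trace(T1 @ T2 @ T3)    # color factor of one orientation of the quark loop
tr_acb = np.trace(T1 @ T3 @ T2)    # color factor of the reversed orientation
print(tr_abc, tr_acb)              # 0.25j and -0.25j for this choice of indices
print(np.isclose(tr_abc, tr_acb))  # False: the traces differ, so the mechanism that
                                   # cancels the two orientations in QED fails here
```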
[ { "math_id": 0, "text": "C" }, { "math_id": 1, "text": "A_\\mu(x)" }, { "math_id": 2, "text": "CA^\\mu(x) C^\\dagger = -A^\\mu(x)" }, { "math_id": 3, "text": "C|\\Omega\\rangle = |\\Omega\\rangle" }, { "math_id": 4, "text": "\n\\langle \\Omega|A^\\mu(x)|\\Omega\\rangle = \\langle \\Omega|C^\\dagger C A^\\mu(x) C^\\dagger C|\\Omega\\rangle = - \\langle \\Omega|A^\\mu(x)|\\Omega\\rangle,\n" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "(-1)^n" }, { "math_id": 7, "text": "j^\\mu(x)" }, { "math_id": 8, "text": "\\gamma \\rightarrow \\gamma \\gamma" }, { "math_id": 9, "text": " \\gamma \\rightarrow \\gamma \\gamma " }, { "math_id": 10, "text": "\\text{tr}[T^aT^bT^c] \\neq \\text{tr}[T^aT^cT^b]" }, { "math_id": 11, "text": "gg \\rightarrow X" }, { "math_id": 12, "text": "1^-" } ]
https://en.wikipedia.org/wiki?curid=66294087
66300206
Dual Steenrod algebra
In algebraic topology, an algebraic operation (dualization) associates to the noncommutative Steenrod algebras a commutative algebra called the dual Steenrod algebra. This dual algebra has a number of surprising benefits, such as being commutative and providing technical tools for computing the Adams spectral sequence in many cases (such as formula_0) with relative ease. Definition. Recall that the Steenrod algebra formula_1 (also denoted formula_2) is a graded noncommutative Hopf algebra which is cocommutative, meaning its comultiplication is unchanged by switching the two tensor factors. This implies that if we take the dual Hopf algebra, denoted formula_3, or just formula_4, then this gives a graded-commutative algebra which has a noncommutative comultiplication. We can summarize this duality through dualizing a commutative diagram of the Steenrod algebra's Hopf algebra structure: formula_5 If we dualize, we get maps formula_6 giving the main structure maps for the dual Hopf algebra. It turns out that there is a nice structure theorem for the dual Hopf algebra, with cases separated by whether the prime is formula_7 or odd. Case of p=2. In this case, the dual Steenrod algebra is a graded commutative polynomial algebra formula_8 where the degrees are given by formula_9. Then, the coproduct map is given by formula_10 sending formula_11 where formula_12. General case of p &gt; 2. For all other prime numbers, the dual Steenrod algebra is slightly more complex and involves a graded-commutative exterior algebra in addition to a graded-commutative polynomial algebra. If we let formula_13 denote an exterior algebra over formula_14 with generators formula_15 and formula_16, then the dual Steenrod algebra has the presentation formula_17 where formula_18 In addition, it has the comultiplication formula_10 defined by formula_19 where again formula_12. Rest of Hopf algebra structure in both cases. The rest of the Hopf algebra structure can be described in exactly the same way in both cases. There are both a unit map formula_20 and a counit map formula_21 formula_22 which are both isomorphisms in degree formula_23: these come from the original Steenrod algebra. In addition, there is also a conjugation map formula_24 defined recursively by the equations formula_25 In addition, we will write formula_26 for the kernel of the counit map formula_21; it is isomorphic to formula_4 in degrees formula_27. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
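To make the coproduct at the prime 2 concrete, unpacking the stated sum for the first two generators (using formula_12) gives

```latex
\begin{align}
\Delta(\xi_1) &= \xi_1 \otimes 1 + 1 \otimes \xi_1, \\
\Delta(\xi_2) &= \xi_2 \otimes 1 + \xi_1^{2} \otimes \xi_1 + 1 \otimes \xi_2,
\end{align}
```

with the terms of the two lines homogeneous of degree 1 and 3 respectively, consistent with the grading formula_9.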
[ { "math_id": 0, "text": "\\pi_*(MU)" }, { "math_id": 1, "text": "\\mathcal{A}_p^*" }, { "math_id": 2, "text": "\\mathcal{A}^*" }, { "math_id": 3, "text": "\\mathcal{A}_{p,*}" }, { "math_id": 4, "text": "\\mathcal{A}_*" }, { "math_id": 5, "text": "\\mathcal{A}_p^* \\xrightarrow{\\psi^*}\n\\mathcal{A}_p^* \\otimes \\mathcal{A}_p^* \\xrightarrow{\\phi^*}\n\\mathcal{A}_p^*" }, { "math_id": 6, "text": "\\mathcal{A}_{p,*} \\xleftarrow{\\psi_*}\n\\mathcal{A}_{p,*} \\otimes \\mathcal{A}_{p,*}\\xleftarrow{\\phi_*}\n\\mathcal{A}_{p,*}" }, { "math_id": 7, "text": "2" }, { "math_id": 8, "text": "\\mathcal{A}_* = \\mathbb{Z}/2[\\xi_1,\\xi_2,\\ldots]" }, { "math_id": 9, "text": "\\deg(\\xi_n) = 2^n-1" }, { "math_id": 10, "text": "\\Delta:\\mathcal{A}_* \\to \\mathcal{A}_*\\otimes\\mathcal{A}_*" }, { "math_id": 11, "text": "\\Delta\\xi_n = \\sum_{0 \\leq i \\leq n} \\xi_{n-i}^{2^i}\\otimes \\xi_i" }, { "math_id": 12, "text": "\\xi_0 = 1" }, { "math_id": 13, "text": "\\Lambda(x,y)" }, { "math_id": 14, "text": "\\mathbb{Z}/p" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "y" }, { "math_id": 17, "text": "\\mathcal{A}_* = \\mathbb{Z}/p[\\xi_1,\\xi_2,\\ldots]\\otimes \\Lambda(\\tau_0,\\tau_1,\\ldots)" }, { "math_id": 18, "text": "\\begin{align}\n\\deg(\\xi_n) &= 2(p^n - 1) \\\\\n\\deg(\\tau_n) &= 2p^n - 1\n\\end{align}" }, { "math_id": 19, "text": "\\begin{align}\n\\Delta(\\xi_n) &= \\sum_{0 \\leq i \\leq n} \\xi_{n-i}^{p^i}\\otimes \\xi_i \\\\\n\\Delta(\\tau_n) &= \\tau_n\\otimes 1 + \\sum_{0 \\leq i \\leq n}\\xi_{n-i}^{p^i}\\otimes \\tau_i \n\\end{align}" }, { "math_id": 20, "text": "\\eta" }, { "math_id": 21, "text": "\\varepsilon" }, { "math_id": 22, "text": "\\begin{align}\n\\eta&: \\mathbb{Z}/p \\to \\mathcal{A}_* \\\\\n\\varepsilon&: \\mathcal{A}_* \\to \\mathbb{Z}/p\n\\end{align}" }, { "math_id": 23, "text": "0" }, { "math_id": 24, "text": "c: \\mathcal{A}_* \\to \\mathcal{A}_*" }, { "math_id": 25, "text": "\\begin{align}\nc(\\xi_0) &= 1 \\\\\n\\sum_{0 \\leq i \\leq n} \\xi_{n-i}^{p^i}c(\\xi_i)& = 0\n\n\\end{align}" }, { "math_id": 26, "text": "\\overline{\\mathcal{A}_*}" }, { "math_id": 27, "text": "> 1" } ]
https://en.wikipedia.org/wiki?curid=66300206
663023
Linear speedup theorem
Speeding up Turing machines by increasing tape symbol complexity In computational complexity theory, the linear speedup theorem for Turing machines states that given any real "c" &gt; 0 and any "k"-tape Turing machine solving a problem in time "f"("n"), there is another "k"-tape machine that solves the same problem in time at most "f"("n")/"c" + 2"n" + 3, where "k" &gt; 1. If the original machine is non-deterministic, then the new machine is also non-deterministic. The constants 2 and 3 in 2"n" + 3 can be lowered, for example, to "n" + 2. Proof. The construction is based on packing several tape symbols of the original machine "M" into one tape symbol of the new machine "N". It has an effect similar to using longer words and commands in processors: it speeds up the computations but increases the machine size. How many old symbols are packed into a new symbol depends on the desired speed-up. Suppose the new machine packs three old symbols into a new symbol. Then the alphabet of the new machine is formula_0: it consists of the original symbols and the packed symbols. The new machine has the same number "k" &gt; 1 of tapes. A state of "N" consists of the following components: the current state of "M"; for each tape, the packed symbol scanned by "N" together with its left and right neighbours; and, for each tape, the position of "M"'s head within the scanned packed symbol. The new machine "N" starts by encoding the given input into the new alphabet (that is why its alphabet must include formula_1). For example, suppose the input to the 2-tape machine "M" is written on its first tape; then during the encoding the new machine packs three old symbols (e.g., the blank symbol "_", the symbol "a", and the symbol "b") into a new symbol (here (_,"a","b")) and copies it to the second tape, while erasing the first tape. At the end of the initialization, the new machine directs its head to the beginning. Overall, this takes 2"n" + 3 steps. After the initialization, the state of "N" is formula_2, where the symbol formula_3 means that it will be filled in by the machine later; the symbol formula_4 means that the head of the original machine points to the first symbol inside formula_5 and inside formula_6. Now the machine starts simulating "m" = 3 transitions of "M" using six of its own transitions (in this concrete case, there will be no speed-up, but in general "m" can be much larger than six). Let the configurations of "M" and "N" be: where the bold symbols indicate the head position. The state of "N" is formula_7. Now the following happens: "N" moves each of its heads one packed cell to the left, then two cells to the right, and then one cell back to the left, recording in its state the three packed symbols surrounding each head; at this point its state is formula_8, with all the previously unknown entries filled in. Since "N" now knows every cell that "M" can visit during its next "m" = 3 steps, it computes the outcome of those steps and writes the resulting changes, which affect at most two adjacent packed cells per tape, using at most two further transitions. Thus, the state of "N" becomes formula_9. Complexity. Initialization requires 2"n" + 3 steps. In the simulation, 6 steps of "N" simulate "m" steps of "M". Choosing "m" &gt; 6"c" makes the running time at most formula_10 Machines with a read-only input tape. The theorem as stated above also holds for Turing machines with a one-way, read-only input tape and formula_11 work tapes. Single-tape machines. For single-tape Turing machines, linear speedup holds for machines with execution time at least formula_12. It provably does not hold for machines with time formula_13. Dependence on tape compression. The proof of the speedup theorem clearly hinges on the capability to compress storage by replacing the alphabet with a larger one. Geffert showed that for nondeterministic single-tape Turing machines of time complexity formula_14, linear speedup can be achieved without increasing the alphabet. Dependence on the shape of storage. Regan considered a property of a computational model called information vicinity. 
This property is related to the memory structure: a Turing machine has linear vicinity, while a Kolmogorov-Uspenskii machine and other pointer machines have an exponential one. Regan’s thesis is that the existence of linear speedup has to do with having a polynomial information vicinity. The salient point in this claim is that a model with exponential vicinity will not have speedup even if changing the alphabet is allowed (for models with a discrete memory that stores symbols). Regan did not, however, prove any general theorem of this kind. Hühne proved that if we require the speedup to be obtained by an on-line simulation (which is the case for the speedup on ordinary Turing machines), then linear speedup does not exist on machines with tree storage. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Sigma \\cup \\Sigma^3" }, { "math_id": 1, "text": "\\Sigma" }, { "math_id": 2, "text": "(q_0; ~~~?, (\\_,\\_,\\_), ?; ~~~?, (\\_,a,b), ?; ~~~ [1,1])" }, { "math_id": 3, "text": "?" }, { "math_id": 4, "text": "[1,1]" }, { "math_id": 5, "text": "(\\_,\\_,\\_)" }, { "math_id": 6, "text": "(\\_,a,b)" }, { "math_id": 7, "text": "(q; ~~~?, (\\_,\\_,b), ?; ~~~?, (b,\\_,\\_), ?; ~~~ [3,1])" }, { "math_id": 8, "text": "(q; ~~~\\#, (\\_,\\_,b), (b,a,b); ~~~(b,a,b), (b,\\_,\\_), (\\_,\\_,\\_); ~~~ [3,1])" }, { "math_id": 9, "text": "(q'; ~~~?, (\\_,\\_,b), ?; ~~~?, (b,\\_,\\_), ?; ~~~ [3,1])" }, { "math_id": 10, "text": "f(n)/c + 2n + 3." }, { "math_id": 11, "text": "k\\ge 1" }, { "math_id": 12, "text": "n^2" }, { "math_id": 13, "text": "t(n)\\in \\Omega(n\\log n)\\cap o(n^2)" }, { "math_id": 14, "text": "T(n) \\ge n^2" } ]
https://en.wikipedia.org/wiki?curid=663023
66303034
DALL-E
Image-generating deep-learning model DALL·E, DALL·E 2, and DALL·E 3 are text-to-image models developed by OpenAI using deep learning methodologies to generate digital images from natural language descriptions known as "prompts". The first version of DALL-E was announced in January 2021. In the following year, its successor DALL-E 2 was released. DALL·E 3 was released natively into ChatGPT for ChatGPT Plus and ChatGPT Enterprise customers in October 2023, with availability via OpenAI's API and "Labs" platform provided in early November. Microsoft implemented the model in Bing's Image Creator tool and plans to implement it in its Designer app. History and background. DALL·E was revealed by OpenAI in a blog post on 5 January 2021, and uses a version of GPT-3 modified to generate images. On 6 April 2022, OpenAI announced DALL·E 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles". On 20 July 2022, DALL·E 2 entered a beta phase with invitations sent to 1 million waitlisted individuals; users could generate a certain number of images for free every month and could purchase more. Access had previously been restricted to pre-selected users for a research preview due to concerns about ethics and safety. On 28 September 2022, DALL·E 2 was opened to everyone and the waitlist requirement was removed. In September 2023, OpenAI announced their latest image model, DALL·E 3, capable of understanding "significantly more nuance and detail" than previous iterations. In early November 2022, OpenAI released DALL·E 2 as an API, allowing developers to integrate the model into their own applications. Microsoft unveiled their implementation of DALL·E 2 in their Designer app and Image Creator tool included in Bing and Microsoft Edge. The API operates on a cost-per-image basis, with prices varying depending on image resolution. Volume discounts are available to companies working with OpenAI's enterprise team. The software's name is a portmanteau of the names of the animated robot Pixar character WALL-E and the Catalan surrealist artist Salvador Dalí. In February 2024, OpenAI began adding watermarks to DALL-E generated images, containing metadata in the C2PA (Coalition for Content Provenance and Authenticity) standard promoted by the Content Authenticity Initiative. Technology. The first generative pre-trained transformer (GPT) model was developed by OpenAI in 2018, using a Transformer architecture. The first iteration, GPT-1, was scaled up to produce GPT-2 in 2019; in 2020, it was scaled up again to produce GPT-3, with 175 billion parameters. DALL·E's model is a multimodal implementation of GPT-3 with 12 billion parameters which "swaps text for pixels," trained on text–image pairs from the Internet. In detail, the input to the Transformer model is a tokenized image caption followed by tokenized image patches. The image caption is in English, tokenized by byte pair encoding (vocabulary size 16384), and can be up to 256 tokens long. Each image is a 256×256 RGB image, divided into a 32×32 grid of patches, each 8×8 pixels. Each patch is then converted by a discrete variational autoencoder to a token (vocabulary size 8192). DALL·E was developed and announced to the public in conjunction with CLIP (Contrastive Language-Image Pre-training). CLIP is a separate model, capable of zero-shot image classification, that was trained on 400 million image–caption pairs scraped from the Internet. 
Its role is to "understand and rank" DALL·E's output by predicting which caption from a list of 32,768 captions randomly selected from the dataset (of which one was the correct answer) is most appropriate for an image. This model is used to filter a larger initial list of images generated by DALL·E to select the most appropriate outputs. DALL·E 2 uses 3.5 billion parameters, a smaller number than its predecessor. DALL·E 2 uses a diffusion model conditioned on CLIP image embeddings, which, during inference, are generated from CLIP text embeddings by a prior model. Contrastive Language-Image Pre-training (CLIP). Contrastive Language-Image Pre-training is a technique for training a pair of models. One model takes in a piece of text and outputs a single vector. Another takes in an image and outputs a single vector. To train such a pair of models, one would start by preparing a large dataset of image-caption pairs, then sample batches of size formula_0. Let the outputs from the text and image models be respectively formula_1. The loss incurred on this batch is:formula_2In words, it is the total sum of cross-entropy loss across every column and every row of the matrix formula_3. The models released were trained on a dataset "WebImageText," containing 400 million pairs of image-captions. The total number of words is similar to WebText, which contains about 40 GB of text. Capabilities. DALL·E can generate imagery in multiple styles, including photorealistic imagery, paintings, and emoji. It can "manipulate and rearrange" objects in its images, and can correctly place design elements in novel compositions without explicit instruction. Thom Dunn writing for "BoingBoing" remarked that "For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL·E often draws the handkerchief, hands, and feet in plausible locations." DALL·E showed the ability to "fill in the blanks" to infer appropriate details without specific prompts, such as adding Christmas imagery to prompts commonly associated with the celebration, and appropriately placed shadows to images that did not mention them. Furthermore, DALL·E exhibits a broad understanding of visual and design trends. DALL·E can produce images for a wide variety of arbitrary descriptions from various viewpoints with only rare failures. Mark Riedl, an associate professor at the Georgia Tech School of Interactive Computing, found that DALL-E could blend concepts (described as a key element of human creativity). Its visual reasoning ability is sufficient to solve Raven's Matrices (visual tests often administered to humans to measure intelligence). DALL·E 3 follows complex prompts with more accuracy and detail than its predecessors, and is able to generate more coherent and accurate text. DALL·E 3 is integrated into ChatGPT Plus. Image modification. Given an existing image, DALL·E 2 can produce "variations" of the image as individual outputs based on the original, as well as edit the image to modify or expand upon it. DALL·E 2's "inpainting" and "outpainting" use context from an image to fill in missing areas using a medium consistent with the original, following a given prompt. For example, this can be used to insert a new subject into an image, or expand an image beyond its original borders. According to OpenAI, "Outpainting takes into account the image’s existing visual elements — including shadows, reflections, and textures — to maintain the context of the original image." Technical limitations. 
DALL·E 2's language understanding has limits. It is sometimes unable to distinguish "A yellow book and a red vase" from "A red book and a yellow vase" or "A panda making latte art" from "Latte art of a panda". It generates images of "an astronaut riding a horse" when presented with the prompt "a horse riding an astronaut". It also fails to generate the correct images in a variety of circumstances. Requesting more than three objects, negation, numbers, and connected sentences may result in mistakes, and object features may appear on the wrong object. Additional limitations include handling text (which, even with legible lettering, almost invariably results in dream-like gibberish) and its limited capacity to address scientific information, such as astronomy or medical imagery. Ethical concerns. DALL·E 2's reliance on public datasets influences its results and leads to algorithmic bias in some cases, such as generating higher numbers of men than women for requests that do not mention gender. DALL·E 2's training data was filtered to remove violent and sexual imagery, but this was found to increase bias in some cases, such as reducing the frequency of women being generated. OpenAI hypothesized that this may be because women were more likely to be sexualized in the training data, which caused the filter to influence results. In September 2022, OpenAI confirmed to "The Verge" that DALL·E invisibly inserts phrases into user prompts to address bias in results; for instance, "black man" and "Asian woman" are inserted into prompts that do not specify gender or race. A concern about DALL·E 2 and similar image generation models is that they could be used to propagate deepfakes and other forms of misinformation. In an attempt to mitigate this, the software rejects prompts involving public figures and uploads containing human faces. Prompts containing potentially objectionable content are blocked, and uploaded images are analyzed to detect offensive material. A disadvantage of prompt-based filtering is that it is easy to bypass by using alternative phrases that result in a similar output. For example, the word "blood" is filtered, but "ketchup" and "red liquid" are not. Another concern about DALL·E 2 and similar models is that they could cause technological unemployment for artists, photographers, and graphic designers due to their accuracy and popularity. DALL·E 3 is designed to block users from generating art in the style of currently living artists. In 2023, Microsoft pitched the United States Department of Defense on using DALL-E models to train battlefield management systems. In January 2024, OpenAI removed its blanket ban on military and warfare use from its usage policies. Reception. Most coverage of DALL·E focuses on a small subset of "surreal" or "quirky" outputs. DALL-E's output for "an illustration of a baby daikon radish in a tutu walking a dog" was mentioned in pieces from "Input", NBC, "Nature", and other publications. Its output for "an armchair in the shape of an avocado" was also widely covered. "ExtremeTech" stated "you can ask DALL·E for a picture of a phone or vacuum cleaner from a specified period of time, and it understands how those objects have changed". "Engadget" also noted its unusual capacity for "understanding how telephones and other objects change over time". According to "MIT Technology Review", one of OpenAI's objectives was to "give language models a better grasp of the everyday concepts that humans use to make sense of things". 
Wall Street investors have responded positively to DALL·E 2, with some firms thinking it could represent a turning point for a future multi-trillion-dollar industry. By mid-2019, OpenAI had already received over $1 billion in funding from Microsoft and Khosla Ventures, and in January 2023, following the launch of DALL·E 2 and ChatGPT, received an additional $10 billion in funding from Microsoft. Japan's anime community has had a negative reaction to DALL·E 2 and similar models. Two arguments are typically presented by artists against the software. The first is that AI art is not art because it is not created by a human with intent. "The juxtaposition of AI-generated images with their own work is degrading and undermines the time and skill that goes into their art. AI-driven image generation tools have been heavily criticized by artists because they are trained on human-made art scraped from the web." The second concerns copyright law and the data that text-to-image models are trained on. OpenAI has not released information about what dataset(s) were used to train DALL·E 2, prompting concern from some that the work of artists has been used for training without permission. Copyright laws surrounding these topics are inconclusive at the moment. After integrating DALL·E 3 into Bing Chat and ChatGPT, Microsoft and OpenAI faced criticism for excessive content filtering, with critics saying DALL·E had been "lobotomized." The flagging of images generated by prompts such as "man breaks server rack with sledgehammer" was cited as evidence. Over the first days of its launch, filtering was reportedly increased to the point where images generated by some of Bing's own suggested prompts were being blocked. "TechRadar" argued that leaning too heavily on the side of caution could limit DALL·E's value as a creative tool. Open-source implementations. Since OpenAI has not released source code for any of the three models, there have been several attempts to create open-source models offering similar capabilities. Released in 2022 on Hugging Face's Spaces platform, Craiyon (formerly DALL·E Mini until a name change was requested by OpenAI in June 2022) is an AI model based on the original DALL·E that was trained on unfiltered data from the Internet. It attracted substantial media attention in mid-2022, shortly after its release, due to its capacity for producing humorous imagery. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "v_1, ..., v_N, w_1, ..., w_N " }, { "math_id": 2, "text": "-\\sum_{i} \\ln\\frac{e^{v_i \\cdot w_i}}{\\sum_j e^{v_i \\cdot w_j}} -\\sum_{j} \\ln\\frac{e^{v_j \\cdot w_j}}{\\sum_i e^{v_i \\cdot w_j}} " }, { "math_id": 3, "text": "[v_i \\cdot w_j]_{i, j}" } ]
https://en.wikipedia.org/wiki?curid=66303034
663041
Greatest element and least element
Element ≥ (or ≤) each other element In mathematics, especially in order theory, the greatest element of a subset formula_2 of a partially ordered set (poset) is an element of formula_2 that is greater than every other element of formula_2. The term least element is defined dually, that is, it is an element of formula_2 that is smaller than every other element of formula_3 Definitions. Let formula_4 be a preordered set and let formula_5 An element formula_6 is said to be a greatest element of formula_2 if formula_7 and if it also satisfies: formula_8 for all formula_9 By switching the side of the relation that formula_10 is on in the above definition, the definition of a least element of formula_2 is obtained. Explicitly, an element formula_11 is said to be a least element of formula_2 if formula_12 and if it also satisfies: formula_13 for all formula_9 If formula_4 is also a partially ordered set then formula_2 can have at most one greatest element and it can have at most one least element. Whenever a greatest element of formula_2 exists and is unique, this element is called the greatest element of formula_2. The terminology the least element of formula_2 is defined similarly. If formula_4 has a greatest element (resp. a least element) then this element is also called a top (resp. a bottom) of formula_14 Relationship to upper/lower bounds. Greatest elements are closely related to upper bounds. Let formula_4 be a preordered set and let formula_5 An upper bound of formula_2 in formula_4 is an element formula_15 such that formula_16 and formula_17 for all formula_9 Importantly, an upper bound of formula_2 in formula_0 is not required to be an element of formula_3 If formula_6 then formula_18 is a greatest element of formula_2 if and only if formula_18 is an upper bound of formula_2 in formula_4 and formula_19 In particular, any greatest element of formula_2 is also an upper bound of formula_2 (in formula_0) but an upper bound of formula_2 in formula_0 is a greatest element of formula_2 if and only if it belongs to formula_3 In the particular case where formula_20 the definition of "formula_15 is an upper bound of formula_2 in formula_2" becomes: formula_15 is an element such that formula_21 and formula_17 for all formula_22 which is completely identical to the definition of a greatest element given before. Thus formula_18 is a greatest element of formula_2 if and only if formula_18 is an upper bound of formula_2 in formula_2. If formula_15 is an upper bound of formula_2 in formula_0 that is not an upper bound of formula_2 in formula_2 (which can happen if and only if formula_23) then formula_15 cannot be a greatest element of formula_2 (however, it may be possible that some other element is a greatest element of formula_2). In particular, it is possible for formula_2 to simultaneously not have a greatest element and for there to exist some upper bound of formula_2 in formula_0. Even if a set has some upper bounds, it need not have a greatest element, as shown by the example of the negative real numbers. This example also demonstrates that the existence of a least upper bound (the number 0 in this case) does not imply the existence of a greatest element either. Contrast to maximal elements and local/absolute maximums. A greatest element of a subset of a preordered set should not be confused with a maximal element of the set, that is, an element that is not strictly smaller than any other element in the set. 
Let formula_4 be a preordered set and let formula_5 An element formula_24 is said to be a maximal element of formula_2 if the following condition is satisfied: whenever formula_25 satisfies formula_26 then necessarily formula_27 If formula_4 is a partially ordered set then formula_24 is a maximal element of formula_2 if and only if there does not exist any formula_25 such that formula_28 and formula_29 A maximal element of formula_4 is defined to mean a maximal element of the subset formula_30 A set can have several maximal elements without having a greatest element. Like upper bounds and maximal elements, greatest elements may fail to exist. In a totally ordered set the maximal element and the greatest element coincide; and it is also called maximum; in the case of function values it is also called the absolute maximum, to avoid confusion with a local maximum. The dual terms are minimum and absolute minimum. Together they are called the absolute extrema. Similar conclusions hold for least elements. One of the most important differences between a greatest element formula_18 and a maximal element formula_31 of a preordered set formula_4 has to do with what elements they are comparable to. Two elements formula_32 are said to be comparable if formula_33 or formula_34; they are called incomparable if they are not comparable. Because preorders are reflexive (which means that formula_35 is true for all elements formula_1), every element formula_1 is always comparable to itself. Consequently, the only pairs of elements that could possibly be incomparable are distinct pairs. In general, however, preordered sets (and even directed partially ordered sets) may have elements that are incomparable. By definition, an element formula_6 is a greatest element of formula_4 if formula_36 for every formula_37; so by its very definition, a greatest element of formula_4 must, in particular, be comparable to every element in formula_38 This is not required of maximal elements. Maximal elements of formula_4 are not required to be comparable to every element in formula_38 This is because unlike the definition of "greatest element", the definition of "maximal element" includes an important if statement. The defining condition for formula_39 to be a maximal element of formula_4 can be reworded as: For all formula_40 IF formula_28 (so elements that are incomparable to formula_31 are ignored) then formula_27 Suppose that formula_2 is a set containing at least two (distinct) elements and define a partial order formula_41 on formula_2 by declaring that formula_42 if and only if formula_43 If formula_44 belong to formula_2 then neither formula_42 nor formula_45 holds, which shows that all pairs of distinct (i.e. non-equal) elements in formula_2 are incomparable. Consequently, formula_46 can not possibly have a greatest element (because a greatest element of formula_2 would, in particular, have to be comparable to every element of formula_2 but formula_2 has no such element). However, every element formula_24 is a maximal element of formula_46 because there is exactly one element in formula_2 that is both comparable to formula_31 and formula_47 that element being formula_31 itself (which of course, is formula_48). 
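The example just given can be checked mechanically; a minimal sketch in which the helper names and the extra divisibility example are illustrative additions:

```python
def maximal_elements(S, leq):
    """Elements m of S for which no s in S satisfies m <= s with s != m."""
    return [m for m in S if not any(leq(m, s) and s != m for s in S)]

def greatest_elements(S, leq):
    """Elements g of S with s <= g for every s in S (at most one in a poset)."""
    return [g for g in S if all(leq(s, g) for s in S)]

# The discrete partial order of the example above: i <= j only when i == j.
S = {"a", "b", "c"}
discrete = lambda i, j: i == j
print(maximal_elements(S, discrete))   # every element is maximal
print(greatest_elements(S, discrete))  # [] -- no greatest element exists

# For contrast, divisibility on {1, 2, 4} has 4 as its greatest element.
D = {1, 2, 4}
divides = lambda i, j: j % i == 0
print(greatest_elements(D, divides))   # [4]
```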
In contrast, if a preordered set formula_4 does happen to have a greatest element formula_18 then formula_18 will necessarily be a maximal element of formula_4 and moreover, as a consequence of the greatest element formula_18 being comparable to every element of formula_49 if formula_4 is also partially ordered then it is possible to conclude that formula_18 is the only maximal element of formula_14 However, the uniqueness conclusion is no longer guaranteed if the preordered set formula_4 is not also partially ordered. For example, suppose that formula_50 is a non-empty set and define a preorder formula_41 on formula_50 by declaring that formula_42 always holds for all formula_51 The directed preordered set formula_52 is partially ordered if and only if formula_50 has exactly one element. All pairs of elements from formula_50 are comparable and every element of formula_50 is a greatest element (and thus also a maximal element) of formula_53 So in particular, if formula_50 has at least two elements then formula_52 has multiple distinct greatest elements. Properties. Throughout, let formula_4 be a partially ordered set and let formula_5 Top and bottom. The least and greatest element of the whole partially ordered set play a special role and are also called bottom (⊥) and top (⊤), or zero (0) and unit (1), respectively. If both exist, the poset is called a bounded poset. The notation of 0 and 1 is used preferably when the poset is a complemented lattice, and when no confusion is likely, i.e. when one is not talking about partial orders of numbers that already contain elements 0 and 1 different from bottom and top. The existence of least and greatest elements is a special completeness property of a partial order. Further introductory information is found in the article on order theory. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "S." }, { "math_id": 4, "text": "(P, \\leq)" }, { "math_id": 5, "text": "S \\subseteq P." }, { "math_id": 6, "text": "g \\in P" }, { "math_id": 7, "text": "g \\in S" }, { "math_id": 8, "text": "s \\leq g" }, { "math_id": 9, "text": "s \\in S." }, { "math_id": 10, "text": "s" }, { "math_id": 11, "text": "l \\in P" }, { "math_id": 12, "text": "l \\in S" }, { "math_id": 13, "text": "l \\leq s" }, { "math_id": 14, "text": "(P, \\leq)." }, { "math_id": 15, "text": "u" }, { "math_id": 16, "text": "u \\in P" }, { "math_id": 17, "text": "s \\leq u" }, { "math_id": 18, "text": "g" }, { "math_id": 19, "text": "g \\in S." }, { "math_id": 20, "text": "P = S," }, { "math_id": 21, "text": "u \\in S" }, { "math_id": 22, "text": "s \\in S," }, { "math_id": 23, "text": "u \\not\\in S" }, { "math_id": 24, "text": "m \\in S" }, { "math_id": 25, "text": "s \\in S" }, { "math_id": 26, "text": "m \\leq s," }, { "math_id": 27, "text": "s \\leq m." }, { "math_id": 28, "text": "m \\leq s" }, { "math_id": 29, "text": "s \\neq m." }, { "math_id": 30, "text": "S := P." }, { "math_id": 31, "text": "m" }, { "math_id": 32, "text": "x, y \\in P" }, { "math_id": 33, "text": "x \\leq y" }, { "math_id": 34, "text": "y \\leq x" }, { "math_id": 35, "text": "x \\leq x" }, { "math_id": 36, "text": "s \\leq g," }, { "math_id": 37, "text": "s \\in P" }, { "math_id": 38, "text": "P." }, { "math_id": 39, "text": "m \\in P" }, { "math_id": 40, "text": "s \\in P," }, { "math_id": 41, "text": "\\,\\leq\\," }, { "math_id": 42, "text": "i \\leq j" }, { "math_id": 43, "text": "i = j." }, { "math_id": 44, "text": "i \\neq j" }, { "math_id": 45, "text": "j \\leq i" }, { "math_id": 46, "text": "(S, \\leq)" }, { "math_id": 47, "text": "\\geq m," }, { "math_id": 48, "text": "\\leq m" }, { "math_id": 49, "text": "P," }, { "math_id": 50, "text": "R" }, { "math_id": 51, "text": "i, j \\in R." }, { "math_id": 52, "text": "(R, \\leq)" }, { "math_id": 53, "text": "(R, \\leq)." }, { "math_id": 54, "text": "g." }, { "math_id": 55, "text": "S = \\{ 1, 2, 4 \\}" }, { "math_id": 56, "text": "\\mathbb{R}" }, { "math_id": 57, "text": "\\{ a, b, c, d \\}" }, { "math_id": 58, "text": "a \\leq c," }, { "math_id": 59, "text": "a \\leq d," }, { "math_id": 60, "text": "b \\leq c," }, { "math_id": 61, "text": "b \\leq d." }, { "math_id": 62, "text": "\\{ a, b \\}" }, { "math_id": 63, "text": "c" }, { "math_id": 64, "text": "d," }, { "math_id": 65, "text": "\\mathbb{R}," }, { "math_id": 66, "text": "\\mathbb{R}^2" }, { "math_id": 67, "text": "(x, y)" }, { "math_id": 68, "text": "0 < x < 1" }, { "math_id": 69, "text": "(1, 0)." } ]
https://en.wikipedia.org/wiki?curid=663041
663047
Cook–Levin theorem
Boolean satisfiability is NP-complete and therefore that NP-complete problems exist In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem. The theorem is named after Stephen Cook and Leonid Levin. The proof is due to Richard Karp, based on an earlier proof (using a different notion of reducibility) by Cook. An important consequence of this theorem is that if there exists a deterministic polynomial-time algorithm for solving Boolean satisfiability, then every NP problem can be solved by a deterministic polynomial-time algorithm. The question of whether such an algorithm for Boolean satisfiability exists is thus equivalent to the P versus NP problem, which is still widely considered the most important unsolved problem in theoretical computer science. Contributions. The concept of NP-completeness was developed in the late 1960s and early 1970s in parallel by researchers in North America and the USSR. In 1971, Stephen Cook published his paper "The complexity of theorem proving procedures" in conference proceedings of the newly founded ACM Symposium on Theory of Computing. Richard Karp's subsequent paper, "Reducibility among combinatorial problems", generated renewed interest in Cook's paper by providing a list of 21 NP-complete problems. Karp also introduced the notion of completeness used in the current definition of NP-completeness (i.e., by polynomial-time many-one reduction). Cook and Karp each received a Turing Award for this work. The theoretical interest in NP-completeness was also enhanced by the work of Theodore P. Baker, John Gill, and Robert Solovay who showed, in 1975, that solving NP-problems in certain oracle machine models requires exponential time. That is, there exists an oracle "A" such that, for all subexponential deterministic-time complexity classes T, the relativized complexity class NP"A" is not a subset of T"A". In particular, for this oracle, P"A" ≠ NP"A". In the USSR, a result equivalent to Baker, Gill, and Solovay's was published in 1969 by M. Dekhtiar. Later Leonid Levin's paper, "Universal search problems", was published in 1973, although it was mentioned in talks and submitted for publication a few years earlier. Levin's approach was slightly different from Cook's and Karp's in that he considered search problems, which require finding solutions rather than simply determining existence. He provided six such NP-complete search problems, or "universal problems". Additionally he found for each of these problems an algorithm that solves it in optimal time (in particular, these algorithms run in polynomial time if and only if P = NP). Definitions. A decision problem is "in NP" if it can be decided by a non-deterministic Turing machine in polynomial time. An "instance of the Boolean satisfiability problem" is a Boolean expression that combines Boolean variables using Boolean operators. Such an expression is "satisfiable" if there is some assignment of truth values to the variables that makes the entire expression true. Idea. Given any decision problem in NP, construct a non-deterministic machine that solves it in polynomial time. 
Then for each input to that machine, build a Boolean expression that expresses whether, when that specific input is passed to the machine, the machine runs correctly, halts, and answers "yes". Then the expression can be satisfied if and only if there is a way for the machine to run correctly and answer "yes", so the satisfiability of the constructed expression is equivalent to asking whether or not the machine will answer "yes". Proof. "This proof is based on the one given by ." There are two parts to proving that the Boolean satisfiability problem (SAT) is NP-complete. One is to show that SAT is an NP problem. The other is to show that every NP problem can be reduced to an instance of a SAT problem by a polynomial-time many-one reduction. SAT is in NP because any assignment of Boolean values to Boolean variables that is claimed to satisfy the given expression can be "verified" in polynomial time by a deterministic Turing machine. (The statements "verifiable in polynomial time by a deterministic Turing machine" and "solvable in polynomial time by a non-deterministic Turing machine" are equivalent, and the proof can be found in many textbooks, for example Sipser's "Introduction to the Theory of Computation", section 7.3, as well as in the Wikipedia article on NP). Now suppose that a given problem in NP can be solved by the nondeterministic Turing machine formula_1, where formula_2 is the set of states, formula_3 is the alphabet of tape symbols, formula_4 is the initial state, formula_5 is the set of accepting states, and formula_6 is the transition relation. Suppose further that formula_0 accepts or rejects an instance of the problem after at most formula_7 computation steps, where formula_8 is the size of the instance and formula_9 is a polynomial function. For each input, formula_10, specify a Boolean expression formula_11 that is satisfiable if and only if the machine formula_0 accepts formula_10. The Boolean expression uses the variables set out in the following table. Here, formula_12 is a machine state, formula_13 is a tape position, formula_14 is a tape symbol, and formula_15 is the number of a computation step. Informally, formula_16 expresses that tape cell formula_13 contains the symbol formula_14 at step formula_15 of the computation, formula_18 expresses that the machine's head is at tape position formula_13 at step formula_15, and formula_21 expresses the state the machine is in at step formula_15. Define the Boolean expression formula_11 to be the conjunction of the sub-expressions in the following table, for all formula_13 and formula_15: If there is an accepting computation for formula_0 on input formula_10, then formula_11 is satisfiable by assigning formula_16, formula_18 and formula_21 their intended interpretations. On the other hand, if formula_11 is satisfiable, then there is an accepting computation for formula_0 on input formula_10 that follows the steps indicated by the assignments to the variables. There are formula_17 Boolean variables, each encodable in space formula_22. The number of clauses is formula_20 so the size of formula_11 is formula_23. Thus the transformation is certainly a polynomial-time many-one reduction, as required. Only the first table row (formula_19) actually depends on the input string formula_10. The remaining lines depend only on the input length formula_8 and on the machine formula_0; they formalize a generic computation of formula_0 for up to formula_7 steps. The transformation makes extensive use of the polynomial formula_7. As a consequence, the above proof is not constructive: even if formula_0 is known, witnessing the membership of the given problem in NP, the transformation cannot be effectively computed, unless an upper bound formula_7 of formula_0's time complexity is also known. Complexity. 
While the above method encodes a non-deterministic Turing machine in complexity formula_24, the literature describes more sophisticated approaches in complexity formula_25. The quasilinear result first appeared seven years after Cook's original publication. The use of SAT to prove the existence of an NP-complete problem can be extended to other computational problems in logic, and to completeness for other complexity classes. The quantified Boolean formula problem (QBF) involves Boolean formulas extended to include nested universal quantifiers and existential quantifiers for its variables. The QBF problem can be used to encode computation with a Turing machine limited to polynomial space complexity, proving that there exists a problem (the recognition of true quantified Boolean formulas) that is PSPACE-complete. Analogously, dependency quantified boolean formulas encode computation with a Turing machine limited to logarithmic space complexity, proving that there exists a problem that is NL-complete. Consequences. The proof shows that every problem in NP can be reduced in polynomial time (in fact, logarithmic space suffices) to an instance of the Boolean satisfiability problem. This means that if the Boolean satisfiability problem could be solved in polynomial time by a deterministic Turing machine, then all problems in NP could be solved in polynomial time, and so the complexity class NP would be equal to the complexity class P. The significance of NP-completeness was made clear by the publication in 1972 of Richard Karp's landmark paper, "Reducibility among combinatorial problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its intractability, are NP-complete. Karp showed each of his problems to be NP-complete by reducing another problem (already shown to be NP-complete) to that problem. For example, he showed the problem 3SAT (the Boolean satisfiability problem for expressions in conjunctive normal form (CNF) with exactly three variables or negations of variables per clause) to be NP-complete by showing how to reduce (in polynomial time) any instance of SAT to an equivalent instance of 3SAT. Garey and Johnson presented more than 300 NP-complete problems in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness", and new problems are still being discovered to be within that complexity class. Although many practical instances of SAT can be solved by heuristic methods, the question of whether there is a deterministic polynomial-time algorithm for SAT (and consequently all other NP-complete problems) is still a famous unsolved problem, despite decades of intense effort by complexity theorists, mathematical logicians, and others. For more details, see the article P versus NP problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
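The reduction from SAT to 3SAT mentioned above can be sketched in a few lines; the DIMACS-style clause encoding and the helper name are illustrative choices, and clauses with fewer than three literals would additionally be padded (for example by repeating a literal) to reach exactly three literals per clause:

```python
def cnf_to_3sat(clauses, n_vars):
    """Split CNF clauses into an equisatisfiable set of clauses with at most 3 literals.

    Clauses are lists of nonzero ints (k means x_k, -k means NOT x_k); the fresh
    variables introduced by the splitting are numbered above n_vars.
    """
    out, next_var = [], n_vars
    for c in clauses:
        if len(c) <= 3:
            out.append(list(c))
            continue
        # (l1 v l2 v ... v lk) becomes (l1 v l2 v y1)(~y1 v l3 v y2)...(~y v l_{k-1} v lk)
        next_var += 1
        out.append([c[0], c[1], next_var])
        for lit in c[2:-2]:
            out.append([-next_var, lit, next_var + 1])
            next_var += 1
        out.append([-next_var, c[-2], c[-1]])
    return out, next_var

print(cnf_to_3sat([[1, -2, 3, 4, -5]], 5))
# -> ([[1, -2, 6], [-6, 3, 7], [-7, 4, -5]], 7)
```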
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "M = (Q, \\Sigma, s, F, \\delta)" }, { "math_id": 2, "text": "Q" }, { "math_id": 3, "text": "\\Sigma" }, { "math_id": 4, "text": "s \\in Q" }, { "math_id": 5, "text": "F \\subseteq Q" }, { "math_id": 6, "text": "\\delta \\subseteq ((Q \\setminus F) \\times \\Sigma) \\times (Q \\times \\Sigma \\times \\{-1, +1\\})" }, { "math_id": 7, "text": "p(n)" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "p" }, { "math_id": 10, "text": "I" }, { "math_id": 11, "text": "B" }, { "math_id": 12, "text": "q \\in Q" }, { "math_id": 13, "text": "-p(n) \\leq i \\leq p(n)" }, { "math_id": 14, "text": "j \\in \\Sigma" }, { "math_id": 15, "text": "0 \\leq k \\leq p(n)" }, { "math_id": 16, "text": "T_{i,j,k}" }, { "math_id": 17, "text": "O(p(n)^2)" }, { "math_id": 18, "text": "H_{i,k}" }, { "math_id": 19, "text": "T_{i,j,0}" }, { "math_id": 20, "text": "O(p(n)^3)" }, { "math_id": 21, "text": "Q_{i,k}" }, { "math_id": 22, "text": "O(\\log p(n))" }, { "math_id": 23, "text": "O(\\log(p(n)) p(n)^3)" }, { "math_id": 24, "text": "O(\\log(p(n))p(n)^3)" }, { "math_id": 25, "text": "O(p(n)\\log(p(n)))" } ]
https://en.wikipedia.org/wiki?curid=663047
663050
Space hierarchy theorem
Both deterministic and nondeterministic machines can solve more problems given more space In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space "n" log "n" than in space "n". The somewhat weaker analogous theorems for time are the time hierarchy theorems. The foundation for the hierarchy theorems lies in the intuition that with either more time or more space comes the ability to compute more functions (or decide more languages). The hierarchy theorems are used to demonstrate that the time and space complexity classes form a hierarchy, where classes with tighter bounds contain fewer languages than those with more relaxed bounds. Here we define and prove the space hierarchy theorem. The space hierarchy theorems rely on the concept of space-constructible functions. The deterministic and nondeterministic space hierarchy theorems state that for all space-constructible functions "f"("n"), formula_0, where SPACE stands for either DSPACE or NSPACE, and o refers to the little o notation. Statement. Formally, a function formula_1 is space-constructible if formula_2 and there exists a Turing machine which computes the function formula_3 in space formula_4 when starting with an input formula_5, where formula_5 represents a string of "n" consecutive 1s. Most of the common functions that we work with are space-constructible, including polynomials, exponentials, and logarithms. For every space-constructible function formula_6, there exists a language L that is decidable in space formula_4 but not in space formula_7. Proof. The goal is to define a language that can be decided in space formula_4 but not space formula_7. The language is defined as L: formula_8 For any machine M that decides a language in space formula_7, L will differ in at least one spot from the language of M. Namely, for some large enough k, M will use space formula_9 on formula_10, and L will therefore differ from the language of M at that input. On the other hand, L is in formula_11. The algorithm for deciding the language L is as follows: (1) On an input x, check whether x has the form formula_13 for some Turing machine M; if not, reject. (2) Using the space constructibility of f, mark off formula_12 cells; if the simulation below ever attempts to use more than the marked-off cells, reject. (3) Simulate M on the input x for at most formula_14 steps, within the marked-off space; if the simulation attempts to exceed the space bound or the step bound, reject. (4) If M accepted x during this simulation, reject; otherwise, accept. Note on step 3: Execution is limited to formula_14 steps in order to avoid the case where M does not halt on the input x. That is, the case where M consumes space of only formula_15 as required, but runs for infinite time. The above proof holds for the case of PSPACE, but some changes need to be made for the case of NPSPACE. The crucial point is that while on a deterministic TM, acceptance and rejection can be inverted (crucial for step 4), this is not possible on a non-deterministic machine.&lt;br&gt; For the case of NPSPACE, L needs to be redefined first: formula_16 Now, the algorithm needs to be changed to accept L by modifying step 4 to: (4') If M accepted x during this simulation, accept; otherwise, reject. L cannot be decided by a TM using formula_7 cells. Assume that L can be decided by some TM M using formula_7 cells. Then, by the Immerman–Szelepcsényi theorem, formula_17 can also be decided by a TM (called formula_18) using formula_7 cells. Here lies the contradiction, therefore the assumption must be false: consider the input formula_19 for a large enough k, chosen so that formula_18 uses at most the allowed space on this input (possible because formula_18 uses only formula_7 cells). If this input is in L then, by the definition of L, formula_18 accepts it; but formula_18 decides the complement of L, so the input cannot be in L. If instead the input is not in L then, since formula_18 respects the space bound, the definition of L implies that formula_18 does not accept it; but formula_18 decides the complement of L, so it must accept every input outside L. Either way a contradiction is reached. Comparison and improvements. The space hierarchy theorem is stronger than the analogous time hierarchy theorems in several ways: it only requires the space bound to be at least log "n" rather than at least "n"; it can separate classes with any asymptotic difference, whereas the time hierarchy theorem requires them to be separated by a logarithmic factor; and it only requires the bounding function to be space-constructible, not time-constructible. It seems to be easier to separate classes in space than in time. 
Indeed, whereas the time hierarchy theorem has seen little remarkable improvement since its inception, the nondeterministic space hierarchy theorem has seen at least one important improvement by Viliam Geffert in his 2003 paper "Space hierarchy theorem revised". This paper made several generalizations of the theorem. Refinement of space hierarchy. If space is measured as the number of cells used regardless of alphabet size, then constant factors in the space bound make no difference, because one can achieve any linear compression by switching to a larger alphabet. However, by measuring space in bits, a much sharper separation is achievable for deterministic space. Instead of being defined up to a multiplicative constant, space is now defined up to an additive constant. However, because any constant amount of external space can be saved by storing the contents into the internal state, we still have the same classes for space bounds that differ only by an additive constant. Assume that f is space-constructible. SPACE is deterministic. The proof is similar to the proof of the space hierarchy theorem, but with two complications: the universal Turing machine has to be space-efficient, and the reversal has to be space-efficient. One can generally construct universal Turing machines with a constant-factor space overhead, and under appropriate assumptions, with only an additive-constant space overhead (which may depend on the machine being simulated). For the reversal, the key issue is how to detect if the simulated machine rejects by entering an infinite (space-constrained) loop. Simply counting the number of steps taken would increase space consumption by about f(n) bits, since the number of configurations that fit in the space bound is exponential in f(n). At the cost of a potentially exponential time increase, loops can be detected space-efficiently as follows: Modify the machine to erase everything and go to a specific configuration A on success. Use depth-first search to determine whether A is reachable in the space bound from the starting configuration. The search starts at A and goes over configurations that lead to A. Because of determinism, this can be done in place and without going into a loop. It can also be determined whether the machine exceeds a space bound (as opposed to looping within the space bound) by iterating over all configurations about to exceed the space bound and checking (again using depth-first search) whether the initial configuration leads to any of them. Corollaries. Corollary 1. For any two functions formula_20, formula_21, where formula_22 is formula_23 and formula_24 is space-constructible, formula_25. This corollary lets us separate various space complexity classes. For any natural number k, the function formula_26 is space-constructible. Therefore, for any two natural numbers formula_27, we can prove formula_28. NL ⊊ PSPACE. Corollary 2. Proof. Savitch's theorem shows that formula_29, while the space hierarchy theorem shows that formula_30. This gives the corollary, along with the fact that TQBF ∉ NL, since TQBF is PSPACE-complete. This could also be proven using the non-deterministic space hierarchy theorem to show that NL ⊊ NPSPACE, and using Savitch's theorem to show that PSPACE = NPSPACE. PSPACE ⊊ EXPSPACE. Corollary 3. This last corollary shows the existence of decidable problems that are intractable. In other words, their decision procedures must use more than polynomial space. Corollary 4. There are problems in PSPACE requiring an arbitrarily large exponent to solve; therefore PSPACE does not collapse to DSPACE("n""k") for any constant "k". SPACE("n") ≠ PTIME. Corollary 5. 
To see this, assume the contrary: then any problem decided in space formula_31 is decided in time formula_32, and any problem formula_33 decided in space formula_34 is decided in time formula_35. Now formula_36, thus P is closed under such a change of bound, that is formula_37, so formula_38. This implies that for all formula_39, but the space hierarchy theorem implies that formula_40, and the corollary follows. Note that this argument proves neither that formula_41 nor that formula_42: to reach a contradiction we used the negation of both statements, that is, we used both inclusions, and can only deduce that at least one of them fails. It is currently unknown which fail(s), but it is conjectured that both do, that is, that formula_43 and formula_44 are incomparable, at least for deterministic space. This question is related to that of the time complexity of (nondeterministic) linear bounded automata, which accept the complexity class formula_45 (also known as the class of context-sensitive languages, CSL); so by the above, CSL is not known to be decidable in polynomial time; see also Kuroda's two problems on LBA. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
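The space-efficient loop detection used in the refinement above can be illustrated with a small, purely illustrative sketch: because a deterministic machine's configuration graph has out-degree one, the configurations that reach the canonical accepting configuration A form an in-tree rooted at A, so a backward depth-first search from A terminates without any visited-set bookkeeping. The configuration labels, the `step` table and the helper names below are hypothetical, and the toy materializes the predecessor lists for clarity, whereas the argument in the text enumerates predecessors on the fly to stay within the space bound.

```python
# Toy illustration (not the article's formal proof): decide whether a
# deterministic, space-bounded machine reaches its canonical accepting
# configuration A by searching backwards from A instead of counting steps.
# Configurations are abstract labels; `step` maps each configuration to its
# unique successor (out-degree one, because the machine is deterministic).

def reaches_accept(step, start, accept):
    # Invert the deterministic transition function to get predecessor lists.
    preds = {}
    for c, nxt in step.items():
        preds.setdefault(nxt, []).append(c)

    found = False

    def backward_dfs(c):
        # The ancestors of `accept` form an in-tree, so plain recursion
        # terminates even though no "visited" marks are kept.
        nonlocal found
        if c == start:
            found = True
        for p in preds.get(c, []):
            backward_dfs(p)

    backward_dfs(accept)
    return found

if __name__ == "__main__":
    # A machine that accepts: s0 -> s1 -> s2 -> A
    accepting_run = {"s0": "s1", "s1": "s2", "s2": "A"}
    # A machine that loops forever within its space bound: t0 -> t1 -> t2 -> t1 ...
    looping_run = {"t0": "t1", "t1": "t2", "t2": "t1"}
    print(reaches_accept(accepting_run, "s0", "A"))   # True
    print(reaches_accept(looping_run, "t0", "A"))     # False
```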
[ { "math_id": 0, "text": "\\mathsf{SPACE}\\left(o(f(n))\\right) \\subsetneq \\mathsf{SPACE}(f(n))" }, { "math_id": 1, "text": "f:\\mathbb{N} \\longrightarrow \\mathbb{N}" }, { "math_id": 2, "text": "f(n) \\ge \\log~n" }, { "math_id": 3, "text": "f(n)" }, { "math_id": 4, "text": "O(f(n))" }, { "math_id": 5, "text": "1^n" }, { "math_id": 6, "text": "f:\\mathbb{N} \\longrightarrow\n\\mathbb{N}" }, { "math_id": 7, "text": "o(f(n))" }, { "math_id": 8, "text": "L = \\{~ (\\langle M \\rangle, 10^k): M \\mbox{ uses space } \\le f(|\\langle M \\rangle, 10^k|) \\mbox{ and time } \\le 2^{f(|\\langle M \\rangle, 10^k|)} \\mbox{ and } M \\mbox{ does not accept } (\\langle M \\rangle,\n10^k) ~ \\}" }, { "math_id": 9, "text": "<math>\\le f(|\\langle M \\rangle, 10^k|)</Math>" }, { "math_id": 10, "text": "<math>(\\langle M \\rangle, 10^k)</Math>" }, { "math_id": 11, "text": "\\mathsf{SPACE}(f(n))" }, { "math_id": 12, "text": "f(|x|)" }, { "math_id": 13, "text": "\\langle M \\rangle, 10^k" }, { "math_id": 14, "text": "2^{f(|x|)}" }, { "math_id": 15, "text": "O(f(x))" }, { "math_id": 16, "text": "L = \\{~ (\\langle M \\rangle, 10^k): M \\mbox{ uses space } \\le f(|\\langle M \\rangle, 10^k|) \\mbox{ and } M \\mbox{ accepts } (\\langle M \\rangle,\n10^k) ~ \\}" }, { "math_id": 17, "text": "\\overline L" }, { "math_id": 18, "text": "\\overline M" }, { "math_id": 19, "text": "w = (\\langle \\overline M \\rangle, 10^k)" }, { "math_id": 20, "text": "f_1" }, { "math_id": 21, "text": "f_2: \\mathbb{N} \\longrightarrow\n\\mathbb{N}" }, { "math_id": 22, "text": "f_1(n)" }, { "math_id": 23, "text": "o(f_2(n))" }, { "math_id": 24, "text": "f_2" }, { "math_id": 25, "text": "\\mathsf{SPACE}(f_1(n)) \\subsetneq \\mathsf{SPACE}(f_2(n))" }, { "math_id": 26, "text": "n^k" }, { "math_id": 27, "text": "k_1 < k_2" }, { "math_id": 28, "text": "\\mathsf{SPACE}(n^{k_1}) \\subsetneq \\mathsf{SPACE}(n^{k_2})" }, { "math_id": 29, "text": "\\mathsf{NL} \\subseteq \\mathsf{SPACE}(\\log^2n)" }, { "math_id": 30, "text": "\\mathsf{SPACE}(\\log^2n) \\subsetneq \\mathsf{SPACE}(n)" }, { "math_id": 31, "text": "O(n)" }, { "math_id": 32, "text": "O(n^c)" }, { "math_id": 33, "text": "L" }, { "math_id": 34, "text": "O(n^b)" }, { "math_id": 35, "text": "O((n^b)^c)=O(n^{bc})" }, { "math_id": 36, "text": "\\mathsf{P}:=\\bigcup_{k\\in\\mathbb N}\\mathsf{DTIME}(n^k)" }, { "math_id": 37, "text": "\\bigcup_{k\\in\\mathbb N}\\mathsf{DTIME}(n^{bk})\\subseteq\\mathsf{P}" }, { "math_id": 38, "text": "L\\in\\mathsf{P}" }, { "math_id": 39, "text": "b, \\mathsf{SPACE}(n^b)\\subseteq\\mathsf{P}\\subseteq\\mathsf{SPACE}(n)" }, { "math_id": 40, "text": "\\mathsf{SPACE}(n^2)\\not\\subseteq\\mathsf{SPACE}(n)" }, { "math_id": 41, "text": "\\mathsf{P}\\not\\subseteq\\mathsf{SPACE}(n)" }, { "math_id": 42, "text": "\\mathsf{SPACE}(n)\\not\\subseteq\\mathsf{P}" }, { "math_id": 43, "text": "\\mathsf{SPACE}(n)" }, { "math_id": 44, "text": "\\mathsf{P}" }, { "math_id": 45, "text": "\\mathsf{NSPACE}(n)" } ]
https://en.wikipedia.org/wiki?curid=663050
66307448
Neural network quantum states
Neural Network Quantum States (NQS or NNQS) is a general class of variational quantum states parameterized in terms of an artificial neural network. It was first introduced in 2017 by the physicists Giuseppe Carleo and Matthias Troyer to approximate wave functions of many-body quantum systems. Given a many-body quantum state formula_0 comprising formula_1 degrees of freedom and a choice of associated quantum numbers formula_2, an NQS parameterizes the wave-function amplitudes formula_3 where formula_4 is an artificial neural network of parameters (weights) formula_5, formula_1 input variables (formula_2) and one complex-valued output corresponding to the wave-function amplitude. This variational form is used in conjunction with specific stochastic learning approaches to approximate quantum states of interest. Learning the Ground-State Wave Function. One common application of NQS is to find an approximate representation of the ground state wave function of a given Hamiltonian formula_6. The learning procedure in this case consists in finding the best neural-network weights that minimize the variational energy formula_7 Since, for a general artificial neural network, computing the expectation value is an exponentially costly operation in formula_1, stochastic techniques based, for example, on the Monte Carlo method are used to estimate formula_8, analogously to what is done in Variational Monte Carlo (see the references for a review). More specifically, a set of formula_9 samples formula_10, with formula_11, is generated such that they are distributed according to the Born probability density formula_12. Then it can be shown that the sample mean of the so-called "local energy" formula_13 is a statistical estimate of the quantum expectation value formula_8, i.e. formula_14 Similarly, it can be shown that the gradient of the energy with respect to the network weights formula_5 is also approximated by a sample mean formula_15 where formula_16 and can be efficiently computed, in deep networks, through backpropagation. The stochastic approximation of the gradients is then used to minimize the energy formula_8, typically using a stochastic gradient descent approach. When the neural-network parameters are updated at each step of the learning procedure, a new set of samples formula_17 is generated, in an iterative procedure similar to what is done in unsupervised learning. Connection with Tensor Networks. Neural-network representations of quantum wave functions share some similarities with variational quantum states based on tensor networks. For example, connections with matrix product states have been established. These studies have shown that NQS support volume-law scaling for the entropy of entanglement. In general, given an NQS with fully-connected weights, it corresponds, in the worst case, to a matrix product state of exponentially large bond dimension in formula_1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
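A minimal, self-contained sketch of the stochastic estimators described above is given below, under several simplifying assumptions that are not from the article: the Hamiltonian is a small transverse-field Ising chain, the ansatz is an RBM-style amplitude with real weights, and samples are drawn exactly from the Born distribution by enumerating all configurations (a real calculation would use Markov chain Monte Carlo and, in general, complex weights). The model sizes and helper names are illustrative; only the sample-mean energy estimate and the covariance-style gradient estimator are demonstrated, not a full optimization loop.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M_HIDDEN, J, H_FIELD = 6, 4, 1.0, 0.7   # sites, hidden units, couplings

# RBM-type amplitude  F(s; W) = exp(a.s) * prod_j 2 cosh(b_j + sum_i W_ji s_i)
a = 0.01 * rng.standard_normal(N)
b = 0.01 * rng.standard_normal(M_HIDDEN)
W = 0.01 * rng.standard_normal((M_HIDDEN, N))

def log_amplitude(s):
    theta = b + W @ s
    return a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

def log_derivatives(s):
    """O_k(s) = d log F / d W_k, stacked for the parameters (a, b, W)."""
    t = np.tanh(b + W @ s)
    return np.concatenate([s, t, np.outer(t, s).ravel()])

def local_energy(s):
    """E_loc(s) = <s|H|Psi>/<s|Psi> for H = -J sum sz_i sz_{i+1} - h sum sx_i."""
    e = -J * np.sum(s * np.roll(s, -1))            # diagonal (zz) part
    logF = log_amplitude(s)
    for i in range(N):                             # off-diagonal (x) part
        flipped = s.copy(); flipped[i] *= -1
        e += -H_FIELD * np.exp(log_amplitude(flipped) - logF)
    return e

# Enumerate all 2^N configurations and sample exactly from |F|^2 (feasible
# only for small N; Metropolis sampling would be used in practice).
configs = np.array([[1 if (c >> i) & 1 else -1 for i in range(N)]
                    for c in range(2 ** N)], dtype=float)
logF = np.array([log_amplitude(s) for s in configs])
p = np.exp(2.0 * (logF - logF.max())); p /= p.sum()

samples = configs[rng.choice(len(configs), size=4000, p=p)]
e_loc = np.array([local_energy(s) for s in samples])
o = np.array([log_derivatives(s) for s in samples])

energy = e_loc.mean()                                   # sample mean of E_loc
grad = ((e_loc - energy)[:, None] * o).mean(axis=0)     # covariance-style gradient estimator
print("estimated E(W):", energy)
print("gradient norm :", np.linalg.norm(grad))
```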
[ { "math_id": 0, "text": " |\\Psi\\rangle " }, { "math_id": 1, "text": " N " }, { "math_id": 2, "text": " s_1 \\ldots s_N " }, { "math_id": 3, "text": "\n\\langle s_1 \\ldots s_N |\\Psi; W \\rangle = F(s_1 \\ldots s_N; W),\n" }, { "math_id": 4, "text": " F(s_1 \\ldots s_N; W) " }, { "math_id": 5, "text": " W " }, { "math_id": 6, "text": " \\hat{H} " }, { "math_id": 7, "text": "\nE(W) = \\langle \\Psi; W | \\hat{H}|\\Psi; W \\rangle .\n" }, { "math_id": 8, "text": " E(W) " }, { "math_id": 9, "text": " M " }, { "math_id": 10, "text": " S^{(1)}, S^{(2)} \\ldots S^{(M)} " }, { "math_id": 11, "text": " S^{(i)}=s^{(i)}_1\\ldots s^{(i)}_N " }, { "math_id": 12, "text": " P(S) \\propto |F(s_1 \\ldots s_N; W)|^2 " }, { "math_id": 13, "text": " E_{\\mathrm{loc}}(S) = \\langle S|\\hat{H}|\\Psi\\rangle/ \\langle S|\\Psi\\rangle " }, { "math_id": 14, "text": "\nE(W) \\simeq \\frac{1}{M} \\sum_i^M E_{\\mathrm{loc}}(S^{(i)}). \n" }, { "math_id": 15, "text": "\n\\frac{\\partial E(W)}{\\partial W_k} \\simeq \\frac{1}{M} \\sum_i^M (E_{\\mathrm{loc}}(S^{(i)}) - E(W)) O^\\star_k(S^{(i)}), \n" }, { "math_id": 16, "text": " O(S^{(i)})= \\frac{\\partial \\log F(S^{(i)};W)}{\\partial W_k}" }, { "math_id": 17, "text": " S^{(i)} " } ]
https://en.wikipedia.org/wiki?curid=66307448
66308569
Doubly triangular number
Type of triangular number In mathematics, the doubly triangular numbers are the numbers that appear within the sequence of triangular numbers, in positions that are also triangular numbers. That is, if formula_0 denotes the formula_1th triangular number, then the doubly triangular numbers are the numbers of the form formula_2. Sequence and formula. The doubly triangular numbers form the sequence 0, 1, 6, 21, 55, 120, 231, 406, 666, 1035, 1540, 2211, ... The formula_1th doubly triangular number is given by the quartic formula formula_3 The sums of row sums of Floyd's triangle give the doubly triangular numbers. Another way of expressing this fact is that the sum of all of the numbers in the first formula_1 rows of Floyd's triangle is the formula_1th doubly triangular number. In combinatorial enumeration. Doubly triangular numbers arise naturally as numbers of unordered pairs of unordered pairs of objects, including pairs where both objects are the same: When pairs with both objects the same are excluded, a different sequence arises, the "tritriangular numbers" formula_4 which are given by the formula formula_5. In numerology. Some numerologists and biblical studies scholars consider it significant that 666, the number of the beast, is a doubly triangular number. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
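The closed form above can be checked directly against the definition; the short, illustrative script below also lists the first few tritriangular numbers formula_4 given by formula_5.

```python
from math import comb

def triangular(n):
    return n * (n + 1) // 2

def doubly_triangular(n):
    # quartic closed form n(n+1)(n^2+n+2)/8
    return n * (n + 1) * (n * n + n + 2) // 8

print([triangular(triangular(n)) for n in range(12)])
# [0, 1, 6, 21, 55, 120, 231, 406, 666, 1035, 1540, 2211]
print(all(doubly_triangular(n) == triangular(triangular(n)) for n in range(200)))
print([comb(comb(n, 2), 2) for n in range(3, 8)])   # tritriangular: 3, 15, 45, 105, 210
```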
[ { "math_id": 0, "text": "T_n=n(n+1)/2" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "T_{T_n}" }, { "math_id": 3, "text": "T_{T_n} = \\frac{n(n+1)(n^2+n+2)}{8}." }, { "math_id": 4, "text": "3,15,45,105,\\dots" }, { "math_id": 5, "text": "\\binom{\\binom{n}{2}}{2}" } ]
https://en.wikipedia.org/wiki?curid=66308569
6631661
Transportation theory (mathematics)
Study of optimal transportation and allocation of resources In mathematics and economics, transportation theory or transport theory is a name given to the study of optimal transportation and allocation of resources. The problem was formalized by the French mathematician Gaspard Monge in 1781. In the 1920s A.N. Tolstoi was one of the first to study the transportation problem mathematically. In 1930, in the collection "Transportation Planning Volume I" for the National Commissariat of Transportation of the Soviet Union, he published a paper "Methods of Finding the Minimal Kilometrage in Cargo-transportation in space". Major advances were made in the field during World War II by the Soviet mathematician and economist Leonid Kantorovich. Consequently, the problem as it is stated is sometimes known as the Monge–Kantorovich transportation problem. The linear programming formulation of the transportation problem is also known as the Hitchcock–Koopmans transportation problem. Motivation. Mines and factories. Suppose that we have a collection of formula_0 mines mining iron ore, and a collection of formula_1 factories which use the iron ore that the mines produce. Suppose for the sake of argument that these mines and factories form two disjoint subsets formula_2 and formula_3 of the Euclidean plane formula_4. Suppose also that we have a "cost function" formula_5, so that formula_6 is the cost of transporting one shipment of iron from formula_7 to formula_8. For simplicity, we ignore the time taken to do the transporting. We also assume that each mine can supply only one factory (no splitting of shipments) and that each factory requires precisely one shipment to be in operation (factories cannot work at half- or double-capacity). Having made the above assumptions, a "transport plan" is a bijection formula_9. In other words, each mine formula_10 supplies precisely one target factory formula_11 and each factory is supplied by precisely one mine. We wish to find the "optimal transport plan", the plan formula_12 whose "total cost" formula_13 is the least of all possible transport plans from formula_2 to formula_3. This motivating special case of the transportation problem is an instance of the assignment problem. More specifically, it is equivalent to finding a minimum weight matching in a bipartite graph. Moving books: the importance of the cost function. The following simple example illustrates the importance of the cost function in determining the optimal transport plan. Suppose that we have formula_1 books of equal width on a shelf (the real line), arranged in a single contiguous block. We wish to rearrange them into another contiguous block, but shifted one book-width to the right. Two obvious candidates for the optimal transport plan present themselves: If the cost function is proportional to Euclidean distance (formula_14 for some formula_15) then these two candidates are "both" optimal. If, on the other hand, we choose the strictly convex cost function proportional to the square of Euclidean distance (formula_16 for some formula_15), then the "many small moves" option becomes the unique minimizer. Note that the above cost functions consider only the horizontal distance traveled by the books, not the horizontal distance traveled by a device used to pick each book up and move the book into position. 
If the latter is considered instead, then, of the two transport plans, the second is always optimal for the Euclidean distance, while, provided there are at least 3 books, the first transport plan is optimal for the squared Euclidean distance. Hitchcock problem. The following transportation problem formulation is credited to F. L. Hitchcock: Suppose there are formula_0 sources formula_17 for a commodity, with formula_18 units of supply at formula_19 and formula_1 sinks formula_20 for the commodity, with the demand formula_21 at formula_22. If formula_23 is the unit cost of shipment from formula_19 to formula_22, find a flow that satisfies demand from supplies and minimizes the flow cost. This challenge in logistics was taken up by D. R. Fulkerson and in the book "Flows in Networks" (1962) written with L. R. Ford Jr. Tjalling Koopmans is also credited with formulations of transport economics and allocation of resources. Abstract formulation of the problem. Monge and Kantorovich formulations. The transportation problem as it is stated in modern or more technical literature looks somewhat different because of the development of Riemannian geometry and measure theory. The mines-factories example, simple as it is, is a useful reference point when thinking of the abstract case. In this setting, we allow the possibility that we may not wish to keep all mines and factories open for business, and allow mines to supply more than one factory, and factories to accept iron from more than one mine. Let formula_24 and formula_25 be two separable metric spaces such that any probability measure on formula_24 (or formula_25) is a Radon measure (i.e. they are Radon spaces). Let formula_26 be a Borel-measurable function. Given probability measures formula_27 on formula_24 and formula_28 on formula_25, Monge's formulation of the optimal transportation problem is to find a transport map formula_29 that realizes the infimum formula_30 where formula_31 denotes the push forward of formula_27 by formula_12. A map formula_12 that attains this infimum ("i.e." makes it a minimum instead of an infimum) is called an "optimal transport map". Monge's formulation of the optimal transportation problem can be ill-posed, because sometimes there is no formula_12 satisfying formula_32: this happens, for example, when formula_27 is a Dirac measure but formula_28 is not. We can improve on this by adopting Kantorovich's formulation of the optimal transportation problem, which is to find a probability measure formula_33 on formula_34 that attains the infimum formula_35 where formula_36 denotes the collection of all probability measures on formula_34 with marginals formula_27 on formula_24 and formula_28 on formula_25. It can be shown that a minimizer for this problem always exists when the cost function formula_37 is lower semi-continuous and formula_38 is a tight collection of measures (which is guaranteed for Radon spaces formula_24 and formula_25). (Compare this formulation with the definition of the Wasserstein metric formula_39 on the space of probability measures.) A gradient descent formulation for the solution of the Monge–Kantorovich problem was given by Sigurd Angenent, Steven Haker, and Allen Tannenbaum. Duality formula. The minimum of the Kantorovich problem is equal to formula_40 where the supremum runs over all pairs of bounded and continuous functions formula_41 and formula_42 such that formula_43 Economic interpretation. The economic interpretation is clearer if signs are flipped. 
Let formula_44 stand for the vector of characteristics of a worker, formula_45 for the vector of characteristics of a firm, and formula_46 for the economic output generated by worker formula_47 matched with firm formula_48. Setting formula_49 and formula_50, the Monge–Kantorovich problem rewrites: formula_51 which has dual : formula_52 where the infimum runs over bounded and continuous function formula_53 and formula_54. If the dual problem has a solution, one can see that: formula_55 so that formula_56 interprets as the equilibrium wage of a worker of type formula_47, and formula_57 interprets as the equilibrium profit of a firm of type formula_48. Solution of the problem. Optimal transportation on the real line. For formula_58, let formula_59 denote the collection of probability measures on formula_60 that have finite formula_61-th moment. Let formula_62 and let formula_63, where formula_64 is a convex function. formula_68 The proof of this solution appears in Rachev &amp; Rüschendorf (1998). Discrete version and linear programming formulation. In the case where the margins formula_69 and formula_70 are discrete, let formula_71 and formula_72 be the probability masses respectively assigned to formula_73 and formula_74, and let formula_75 be the probability of an formula_76 assignment. The objective function in the primal Kantorovich problem is then formula_77 and the constraint formula_78 expresses as formula_79 and formula_80 In order to input this in a linear programming problem, we need to vectorize the matrix formula_81 by either stacking its columns or its rows, we call formula_82 this operation. In the column-major order, the constraints above rewrite as formula_83 and formula_84 where formula_85 is the Kronecker product, formula_86 is a matrix of size formula_87 with all entries of ones, and formula_88 is the identity matrix of size formula_89. As a result, setting formula_90, the linear programming formulation of the problem is formula_91 which can be readily inputted in a large-scale linear programming solver (see chapter 3.4 of Galichon (2016)). Semi-discrete case. In the semi-discrete case, formula_92 and formula_69 is a continuous distribution over formula_93, while formula_94 is a discrete distribution which assigns probability mass formula_95 to site formula_96. In this case, we can see that the primal and dual Kantorovich problems respectively boil down to: formula_97 for the primal, where formula_98 means that formula_99 and formula_100, and: formula_101 for the dual, which can be rewritten as: formula_102 which is a finite-dimensional convex optimization problem that can be solved by standard techniques, such as gradient descent. In the case when formula_103, one can show that the set of formula_73 assigned to a particular site formula_104 is a convex polyhedron. The resulting configuration is called a power diagram. Quadratic normal case. Assume the particular case formula_105, formula_106, and formula_107 where formula_108 is invertible. One then has formula_109 formula_110 formula_111 The proof of this solution appears in Galichon (2016). Separable Hilbert spaces. Let formula_24 be a separable Hilbert space. Let formula_112 denote the collection of probability measures on formula_24 that have finite formula_61-th moment; let formula_113 denote those elements formula_114 that are Gaussian regular: if formula_115 is any strictly positive Gaussian measure on formula_24 and formula_116, then formula_117 also. Let formula_118, formula_119, formula_120 for formula_121. 
Then the Kantorovich problem has a unique solution formula_122, and this solution is induced by an optimal transport map: i.e., there exists a Borel map formula_123 such that formula_124 Moreover, if formula_28 has bounded support, then formula_125 for formula_27-almost all formula_126 for some locally Lipschitz, formula_37-concave and maximal Kantorovich potential formula_127. (Here formula_128 denotes the Gateaux derivative of formula_127.) Entropic regularization. Consider a variant of the discrete problem above, where we have added an entropic regularization term to the objective function of the primal problem formula_129 One can show that the dual regularized problem is formula_130 where, compared with the unregularized version, the "hard" constraint in the former dual (formula_131) has been replaced by a "soft" penalization of that constraint (the sum of the formula_132 terms ). The optimality conditions in the dual problem can be expressed as Eq. 5.1: formula_133 Eq. 5.2: formula_134 Denoting formula_108 as the formula_135 matrix of term formula_136, solving the dual is therefore equivalent to looking for two diagonal positive matrices formula_137 and formula_138 of respective sizes formula_139 and formula_140, such that formula_141 and formula_142. The existence of such matrices generalizes Sinkhorn's theorem and the matrices can be computed using the Sinkhorn–Knopp algorithm, which simply consists of iteratively looking for formula_143 to solve Equation 5.1, and formula_144 to solve Equation 5.2. Sinkhorn–Knopp's algorithm is therefore a coordinate descent algorithm on the dual regularized problem. Applications. The Monge–Kantorovich optimal transport has found applications in wide range in different fields. Among them are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
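The discrete and entropically regularized formulations above can be made concrete with a small numerical sketch (the sample points, cost function, and parameter values below are arbitrary illustrations, not from the article): the exact linear program is assembled with the Kronecker-product constraint matrices in column-major order, and the regularized problem is solved with Sinkhorn–Knopp iterations. For a small regularization parameter the Sinkhorn cost lies close to, but slightly above, the exact optimal cost.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_x, n_y = 5, 6
x, y = rng.uniform(0, 1, size=n_x), rng.uniform(0, 1, size=n_y)
mu = np.full(n_x, 1.0 / n_x)                 # source masses
nu = np.full(n_y, 1.0 / n_y)                 # target masses
C = (x[:, None] - y[None, :]) ** 2           # cost matrix c_xy

# Exact solution: linear program in vec(gamma), column-major order.
A_eq = np.vstack([np.kron(np.ones((1, n_y)), np.eye(n_x)),   # row sums = mu
                  np.kron(np.eye(n_y), np.ones((1, n_x)))])  # column sums = nu
b_eq = np.concatenate([mu, nu])
res = linprog(C.ravel(order="F"), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
gamma_lp = res.x.reshape((n_x, n_y), order="F")

# Entropic regularization: Sinkhorn-Knopp fixed-point iterations.
eps = 0.05
K = np.exp(-C / eps)
u, v = np.ones(n_x), np.ones(n_y)
for _ in range(2000):
    u = mu / (K @ v)
    v = nu / (K.T @ u)
gamma_sink = u[:, None] * K * v[None, :]

print("LP cost      :", np.sum(gamma_lp * C))
print("Sinkhorn cost:", np.sum(gamma_sink * C))
```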
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "F" }, { "math_id": 4, "text": "\\mathbb{R}^2" }, { "math_id": 5, "text": "c : \\mathbb{R}^2 \\times \\mathbb{R}^2 \\to [0, \\infty)" }, { "math_id": 6, "text": "c(x, y)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "T: M \\to F" }, { "math_id": 10, "text": "m \\in M" }, { "math_id": 11, "text": "T(m) \\in F" }, { "math_id": 12, "text": "T" }, { "math_id": 13, "text": "c(T) := \\sum_{m \\in M} c(m, T(m))" }, { "math_id": 14, "text": "c(x, y) = \\alpha \\|x - y\\|" }, { "math_id": 15, "text": "\\alpha > 0" }, { "math_id": 16, "text": "c(x, y) = \\alpha \\|x - y\\|^2" }, { "math_id": 17, "text": "x_1, \\ldots, x_m" }, { "math_id": 18, "text": "a(x_i)" }, { "math_id": 19, "text": "x_i" }, { "math_id": 20, "text": "y_1, \\ldots, y_n" }, { "math_id": 21, "text": "b(y_j)" }, { "math_id": 22, "text": "y_j" }, { "math_id": 23, "text": "a(x_i,\\ y_j)" }, { "math_id": 24, "text": "X" }, { "math_id": 25, "text": "Y" }, { "math_id": 26, "text": "c : X \\times Y \\to [0, \\infty]" }, { "math_id": 27, "text": "\\mu" }, { "math_id": 28, "text": "\\nu" }, { "math_id": 29, "text": "T : X \\to Y" }, { "math_id": 30, "text": "\\inf \\left\\{ \\left. \\int_X c(x, T(x)) \\, \\mathrm{d} \\mu (x) \\;\\right| \\; T_* (\\mu) = \\nu \\right\\}," }, { "math_id": 31, "text": "T_*(\\mu)" }, { "math_id": 32, "text": "T_*(\\mu) = \\nu " }, { "math_id": 33, "text": "\\gamma" }, { "math_id": 34, "text": "X \\times Y" }, { "math_id": 35, "text": "\\inf \\left\\{ \\left. \\int_{X \\times Y} c(x, y) \\, \\mathrm{d} \\gamma (x, y) \\right| \\gamma \\in \\Gamma (\\mu, \\nu) \\right\\}," }, { "math_id": 36, "text": " \\Gamma (\\mu, \\nu) " }, { "math_id": 37, "text": "c" }, { "math_id": 38, "text": "\\Gamma(\\mu, \\nu)" }, { "math_id": 39, "text": "W_p" }, { "math_id": 40, "text": "\\sup \\left( \\int_X \\varphi (x) \\, \\mathrm{d} \\mu (x) + \\int_Y \\psi (y) \\, \\mathrm{d} \\nu (y) \\right)," }, { "math_id": 41, "text": "\\varphi : X \\rightarrow \\mathbb{R}" }, { "math_id": 42, "text": "\\psi : Y \\rightarrow \\mathbb{R}" }, { "math_id": 43, "text": "\\varphi (x) + \\psi (y) \\leq c(x, y)." 
}, { "math_id": 44, "text": " x \\in X" }, { "math_id": 45, "text": " y \\in Y" }, { "math_id": 46, "text": " \\Phi(x,y) =-c(x,y)" }, { "math_id": 47, "text": " x" }, { "math_id": 48, "text": " y" }, { "math_id": 49, "text": " u(x) = -\\varphi(x)" }, { "math_id": 50, "text": " v(y) =-\\psi(y)" }, { "math_id": 51, "text": "\\sup \\left\\{ \\int_{X\\times Y}\\Phi(x,y) d\\gamma(x,y) ,\\gamma \\in \\Gamma(\\mu,\\nu) \\right\\}" }, { "math_id": 52, "text": " \\inf \\left\\{ \\int_X u(x) \\,d\\mu(x) +\\int_Y v(y) \\, d\\nu (y) :u(x) +v(y) \\geq \\Phi(x,y) \\right\\}" }, { "math_id": 53, "text": " u:X\\rightarrow \\mathbb{R}" }, { "math_id": 54, "text": " v:Y\\rightarrow \\mathbb{R}" }, { "math_id": 55, "text": "v(y) =\\sup_x \\left\\{ \\Phi(x,y) - u(x)\\right\\}" }, { "math_id": 56, "text": " u(x)" }, { "math_id": 57, "text": " v(y)" }, { "math_id": 58, "text": "1 \\leq p < \\infty" }, { "math_id": 59, "text": "\\mathcal{P}_p(\\mathbb{R})" }, { "math_id": 60, "text": "\\mathbb{R}" }, { "math_id": 61, "text": "p" }, { "math_id": 62, "text": "\\mu, \\nu \\in \\mathcal{P}_p(\\mathbb{R})" }, { "math_id": 63, "text": "c(x, y) = h(x-y)" }, { "math_id": 64, "text": "h:\\mathbb{R} \\rightarrow [0,\\infty)" }, { "math_id": 65, "text": "F_\\mu : \\mathbb{R}\\rightarrow[0,1]" }, { "math_id": 66, "text": "F_{\\nu}^{-1} \\circ F_{\\mu} : \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 67, "text": "h" }, { "math_id": 68, "text": "\\min_{\\gamma \\in \\Gamma(\\mu, \\nu)} \\int_{\\mathbb{R}^2} c(x, y) \\, \\mathrm{d} \\gamma (x, y) = \\int_0^1 c \\left( F_{\\mu}^{-1} (s), F_{\\nu}^{-1} (s) \\right) \\, \\mathrm{d} s." }, { "math_id": 69, "text": " \\mu " }, { "math_id": 70, "text": " \\nu " }, { "math_id": 71, "text": " \\mu_x " }, { "math_id": 72, "text": " \\nu_y " }, { "math_id": 73, "text": " x\\in \\mathbf{X}" }, { "math_id": 74, "text": " y\\in \\mathbf{Y} " }, { "math_id": 75, "text": " \\gamma _{xy} " }, { "math_id": 76, "text": " xy " }, { "math_id": 77, "text": " \\sum_{x\\in \\mathbf{X},y\\in \\mathbf{Y}} \\gamma_{xy}c_{xy}" }, { "math_id": 78, "text": " \\gamma \\in \\Gamma \\left( \\mu ,\\nu \\right)" }, { "math_id": 79, "text": "\n\\sum_{y\\in \\mathbf{Y}}\\gamma_{xy}=\\mu_x,\\forall x\\in \\mathbf{X}\n" }, { "math_id": 80, "text": "\n\\sum_{x\\in \\mathbf{X}} \\gamma_{xy}=\\nu_y,\\forall y\\in \\mathbf{Y}.\n" }, { "math_id": 81, "text": " \\gamma_{xy}" }, { "math_id": 82, "text": " \\operatorname{vec} " }, { "math_id": 83, "text": " \\left( 1_{1\\times \\left\\vert \\mathbf{Y}\\right\\vert }\\otimes I_{\\left\\vert \\mathbf{X}\\right\\vert }\\right) \\operatorname{vec}\\left( \\gamma \\right) =\\mu " }, { "math_id": 84, "text": " \\left( I_{\\left\\vert \\mathbf{Y}\\right\\vert }\\otimes 1_{1\\times \\left\\vert \\mathbf{X}\\right\\vert}\\right) \\operatorname{vec}\\left( \\gamma \\right) =\\nu " }, { "math_id": 85, "text": " \\otimes " }, { "math_id": 86, "text": " 1_{n\\times m}" }, { "math_id": 87, "text": " n\\times m " }, { "math_id": 88, "text": " I_{n}" }, { "math_id": 89, "text": " n" }, { "math_id": 90, "text": " z=\\operatorname{vec}\\left( \\gamma \\right) " }, { "math_id": 91, "text": "\n\\begin{align}\n& \\text{Minimize } &&\\operatorname{vec}(c)^\\top z \\\\[4pt]\n& \\text{subject to:} && z \\ge 0, \\\\[4pt]\n& && \\begin{pmatrix}\n1_{1\\times \\left\\vert \\mathbf{Y}\\right\\vert }\\otimes I_{\\left\\vert \\mathbf{X}\n\\right\\vert } \\\\\nI_{\\left\\vert \\mathbf{Y}\\right\\vert }\\otimes 1_{1\\times \\left\\vert \\mathbf{X}\n\\right\\vert }\n\\end{pmatrix} z=\\binom{\\mu }{\\nu 
}\n\\end{align}\n" }, { "math_id": 92, "text": " X=Y=\\mathbb{R}^d " }, { "math_id": 93, "text": " \\mathbb{R}^d" }, { "math_id": 94, "text": " \\nu =\\sum_{j=1}^{J}\\nu _{j}\\delta_{y_{i}}" }, { "math_id": 95, "text": " \\nu _{j} " }, { "math_id": 96, "text": " y_j \\in \\mathbb{R}^d" }, { "math_id": 97, "text": " \\inf \\left\\{ \\int_X \\sum_{j=1}^J c(x,y_j) \\, d\\gamma_j(x) ,\\gamma \\in \\Gamma(\\mu,\\nu)\\right\\} " }, { "math_id": 98, "text": " \\gamma \\in \\Gamma \\left( \\mu ,\\nu \\right) " }, { "math_id": 99, "text": " \\int_{X} d\\gamma _{j}\\left( x\\right) =\\nu _{j}" }, { "math_id": 100, "text": " \\sum_{j}d\\gamma_{j}\\left( x\\right) =d\\mu \\left( x\\right)" }, { "math_id": 101, "text": " \\sup \\left\\{ \\int_{X}\\varphi (x)d\\mu (x)+\\sum_{j=1}^{J}\\psi _{j}\\nu_{j}:\\psi _{j}+\\varphi (x)\\leq c\\left( x,y_{j}\\right) \\right\\}" }, { "math_id": 102, "text": " \\sup_{\\psi \\in \\mathbb{R}^{J}}\\left\\{ \\int_{X}\\inf_{j}\\left\\{ c\\left(x,y_{j}\\right) -\\psi _{j}\\right\\} d\\mu (x)+\\sum_{j=1}^{J}\\psi_{j}\\nu_{j}\\right\\} " }, { "math_id": 103, "text": " c\\left( x,y\\right) =\\left\\vert x-y\\right\\vert ^{2}/2 " }, { "math_id": 104, "text": " j " }, { "math_id": 105, "text": " \\mu =\\mathcal{N}\\left( 0,\\Sigma_X\\right) " }, { "math_id": 106, "text": " \\nu =\\mathcal{N} \\left( 0,\\Sigma _{Y}\\right) " }, { "math_id": 107, "text": " c(x,y) =\\left\\vert y-Ax\\right\\vert^2/2 " }, { "math_id": 108, "text": " A " }, { "math_id": 109, "text": " \\varphi(x) =-x^\\top \\Sigma_X^{-1/2}\\left( \\Sigma_X^{1/2}A^\\top \\Sigma_Y A\\Sigma_X^{1/2}\\right) ^{1/2}\\Sigma_{X}^{-1/2}x/2 " }, { "math_id": 110, "text": " \\psi(y) =-y^\\top A\\Sigma_X^{1/2}\\left( \\Sigma_X^{1/2}A^\\top \\Sigma_Y A\\Sigma_{X}^{1/2}\\right)^{-1/2} \\Sigma_X^{1/2}Ay/2 " }, { "math_id": 111, "text": " T(x) = (A^\\top)^{-1}\\Sigma_X^{-1/2} \\left(\\Sigma_X^{1/2}A^\\top \\Sigma_Y A\\Sigma_X^{1/2} \\right)^{1/2} \\Sigma_X^{-1/2}x " }, { "math_id": 112, "text": "\\mathcal{P}_p(X)" }, { "math_id": 113, "text": "\\mathcal{P}_p^r(X)" }, { "math_id": 114, "text": "\\mu \\in \\mathcal{P}_p(X)" }, { "math_id": 115, "text": "g" }, { "math_id": 116, "text": "g(N) = 0" }, { "math_id": 117, "text": "\\mu(N) = 0" }, { "math_id": 118, "text": "\\mu \\in \\mathcal{P}_p^r (X)" }, { "math_id": 119, "text": "\\nu \\in \\mathcal{P}_p(X)" }, { "math_id": 120, "text": "c (x, y) = | x - y |^p/p" }, { "math_id": 121, "text": "p\\in(1,\\infty), p^{-1} + q^{-1} = 1" }, { "math_id": 122, "text": "\\kappa" }, { "math_id": 123, "text": "r\\in L^p(X, \\mu; X)" }, { "math_id": 124, "text": "\\kappa = (\\mathrm{id}_X \\times r)_{*} (\\mu) \\in \\Gamma (\\mu, \\nu)." 
}, { "math_id": 125, "text": "r(x) = x - | \\nabla \\varphi (x) |^{q - 2} \\, \\nabla \\varphi (x)" }, { "math_id": 126, "text": "x\\in X" }, { "math_id": 127, "text": "\\varphi" }, { "math_id": 128, "text": "\\nabla \\varphi" }, { "math_id": 129, "text": "\n\\begin{align}\n& \\text{Minimize } \\sum_{x\\in \\mathbf{X}, y\\in \\mathbf{Y}}\\gamma_{xy}c_{xy}+\\varepsilon \\gamma_{xy} \\ln \\gamma_{xy} \\\\[4pt]\n& \\text{subject to: } \\\\[4pt]\n& \\gamma\\ge0 \\\\[4pt]\n& \\sum_{y\\in \\mathbf{Y}}\\gamma _{xy} =\\mu _{x},\\forall x\\in \\mathbf{X} \\\\[4pt]\n& \\sum_{x\\in \\mathbf{X}}\\gamma_{xy} = \\nu_y, \\forall y\\in \\mathbf{Y}\n\\end{align}\n" }, { "math_id": 130, "text": "\n\\max_{\\varphi ,\\psi} \\sum_{x\\in \\mathbf{X}} \\varphi_x \\mu_x + \\sum_{y\\in \\mathbf{Y}} \\psi_y v_y - \\varepsilon \\sum_{x\\in \\mathbf{X},y\\in \\mathbf{Y}} \\exp \\left( \\frac{\\varphi_x + \\psi_y - c_{xy}}{\\varepsilon }\\right)\n" }, { "math_id": 131, "text": " \\varphi_x + \\psi_y - c_{xy}\\geq 0" }, { "math_id": 132, "text": " \\varepsilon \\exp \\left( (\\varphi _x + \\psi_y - c_{xy})/\\varepsilon \\right)" }, { "math_id": 133, "text": "\n\\mu_x = \\sum_{y\\in \\mathbf{Y}} \\exp \\left( \\frac{\\varphi_x + \\psi_y - c_{xy}}{\\varepsilon} \\right) ~\\forall x\\in \\mathbf{X}\n" }, { "math_id": 134, "text": "\n\\nu_y = \\sum_{x\\in \\mathbf{X}} \\exp \\left( \\frac{\\varphi_x + \\psi_y - c_{xy}}{\\varepsilon }\\right) ~\\forall y\\in \\mathbf{Y} \n" }, { "math_id": 135, "text": " \\left\\vert \\mathbf{X}\\right\\vert \\times \\left\\vert \\mathbf{Y}\\right\\vert " }, { "math_id": 136, "text": " A_{xy}=\\exp \\left(-c_{xy} / \\varepsilon \\right)" }, { "math_id": 137, "text": " D_{1}" }, { "math_id": 138, "text": " D_{2}" }, { "math_id": 139, "text": " \\left\\vert \\mathbf{X}\\right\\vert" }, { "math_id": 140, "text": " \\left\\vert \\mathbf{Y}\\right\\vert" }, { "math_id": 141, "text": " D_{1}AD_{2}1_{\\left\\vert \\mathbf{Y}\\right\\vert }=\\mu" }, { "math_id": 142, "text": " \\left( D_{1}AD_{2}\\right) ^{\\top }1_{\\left\\vert \\mathbf{X}\\right\\vert }=\\nu " }, { "math_id": 143, "text": " \\varphi _{x}" }, { "math_id": 144, "text": " \\psi _{y}" } ]
https://en.wikipedia.org/wiki?curid=6631661
663188
American Invitational Mathematics Examination
Mathematics test used to determine qualification for the U.S. Mathematical Olympiad The American Invitational Mathematics Examination (AIME) is a selective and prestigious 15-question 3-hour test given since 1983 to those who rank in the top 5% on the AMC 12 high school mathematics examination (formerly known as the AHSME), and starting in 2010, those who rank in the top 2.5% on the AMC 10. Two different versions of the test are administered, the AIME I and AIME II. However, qualifying students can only take one of these two competitions. The AIME is the second of two tests used to determine qualification for the United States Mathematical Olympiad (USAMO), the first being the AMC. The use of calculators is not allowed on the test, with only pencils, erasers, rulers, and compasses permitted. Format and scoring. The competition consists of 15 questions of increasing difficulty, where each answer is an integer between 0 and 999 inclusive. Thus the competition effectively removes the element of chance afforded by a multiple-choice test while preserving the ease of automated grading; answers are entered onto an OMR sheet, similar to the way grid-in math questions are answered on the SAT. Leading zeros must be gridded in; for example, answers of 7 and 43 must be written and gridded in as 007 and 043, respectively. Concepts typically covered in the competition include topics in elementary algebra, geometry, trigonometry, as well as number theory, probability, and combinatorics. Many of these concepts are not directly covered in typical high school mathematics courses; thus, participants often turn to supplementary resources to prepare for the competition. One point is earned for each correct answer, and no points are deducted for incorrect answers. No partial credit is given. Thus AIME scores are integers from 0 to 15 inclusive. Some historical results are: A student's score on the AIME is used in combination with their score on the AMC to determine eligibility for the USAMO or USAJMO. A student's score on an AMC exam is added to 10 times their score on the AIME to form a USAMO or USAJMO index. Since 2017, the USAMO and USAJMO qualification cutoff has been split between the AMC A and B, as well as the AIME I and II. Hence, there will be a total of 8 published USAMO and USAJMO qualification cutoffs per year, and a student can have up to 2 USAMO/USAJMO indices (via participating in both AMC contests). The student only needs to reach one qualification cutoff to take the USAMO or USAJMO. During the 1990s, it was not uncommon for fewer than 2,000 students to qualify for the AIME, although 1994 was a notable exception where 99 students achieved perfect scores on the AHSME and the list of high scorers, which usually was distributed in small pamphlets, had to be distributed several months late in thick newspaper bundles. History. The AIME began in 1983. It was given once per year on a Tuesday or Thursday in late March or early April. Beginning in 2000, the AIME is given twice per year, the second date being an "alternate" test given to accommodate those students who are unable to sit for the first test because of spring break, illness, or any other reason. However, under no circumstances may a student officially participate both competitions. The alternate competition, commonly called the "AIME2" or "AIME-II," is usually given exactly two weeks after the first test, on a Tuesday in early April. 
However, like the AMC, the AIME recently has been given on a Tuesday in early March, and on the Wednesday 8 days later, e.g. March 13 and 20, 2019. In 2020, the rapid spread of the COVID-19 pandemic led to the cancellation of the AIME II for that year. Instead, qualifying students were able to take the American Online Invitational Mathematics Examination, which contained the problems that were originally going to be on the AIME II. 2021's AIME I and II were also moved online. 2022's AIME I and II were administered both online and in-person, and starting from 2023, all AIME contests must be administered in-person. Sample problems. Given that formula_0 where formula_1 and formula_2 are positive integers and formula_2 is as large as possible, find formula_3 ("2003 AIME I #1") "Answer: 839" Find the number of ordered pairs formula_4 of integers such that the sequence formula_5 is strictly increasing and no set of four (not necessarily consecutive) terms forms an arithmetic progression. ("2022 AIME I #6") "Answer: 228" If an integer is added to each of the numbers formula_6, formula_7, and formula_8, one obtains the squares of three consecutive terms of an arithmetic series. Find the integer. "Answer: 925" Complex numbers formula_9, formula_10, and formula_11 are the zeros of a polynomial formula_12, and formula_13. The points corresponding to formula_9, formula_10, and formula_11 in the complex plane are the vertices of a right triangle with hypotenuse formula_14. Find formula_15. "Answer: 375" Note. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
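The first sample problem above can be verified directly; the short script below (illustrative only) confirms that formula_0 holds with formula_1 = 120 and the largest possible formula_2 = 719, giving the stated answer 839.

```python
from math import factorial

value = factorial(factorial(factorial(3))) // factorial(3)   # ((3!)!)!/3! = 720!/6
n = 719
k = value // factorial(n)
# n! divides value, but (n+1)! does not, so n = 719 is as large as possible.
assert k * factorial(n) == value and value % factorial(n + 1) != 0
print(k, n, k + n)   # 120 719 839
```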
[ { "math_id": 0, "text": " \\frac{((3!)!)!}{3!} = k \\cdot n!, " }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "k + n." }, { "math_id": 4, "text": "(a, b)" }, { "math_id": 5, "text": " 3, 4, 5, a, b, 30, 40, 50 " }, { "math_id": 6, "text": "36" }, { "math_id": 7, "text": "300" }, { "math_id": 8, "text": "596" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "b" }, { "math_id": 11, "text": "c" }, { "math_id": 12, "text": "P(z) = z^3+qz+r" }, { "math_id": 13, "text": "|a|^2+|b|^2+|c|^2=250" }, { "math_id": 14, "text": "h" }, { "math_id": 15, "text": "h^2" } ]
https://en.wikipedia.org/wiki?curid=663188
66319632
Two-dimensional Yang–Mills theory
Yang–Mills theory in two dimensions with a well-defined measure In mathematical physics, two-dimensional Yang–Mills theory is the special case of Yang–Mills theory in which the dimension of spacetime is taken to be two. This special case allows for a rigorously defined Yang–Mills measure, meaning that the (Euclidean) path integral can be interpreted as a measure on the set of connections modulo gauge transformations. This situation contrasts with the four-dimensional case, where a rigorous construction of the theory as a measure is currently unknown. An aspect of the subject of particular interest is the large-N limit, in which the structure group is taken to be the unitary group formula_0 and then the formula_1 tends to infinity limit is taken. The large-N limit of two-dimensional Yang–Mills theory has connections to string theory. Background. Interest in the Yang–Mills measure comes from a statistical mechanical or constructive quantum field theoretic approach to formulating a quantum theory for the Yang–Mills field. A gauge field is described mathematically by a 1-form formula_2 on a principal formula_3-bundle over a manifold formula_4 taking values in the Lie algebra formula_5 of the Lie group formula_3. We assume that the structure group formula_3, which describes the physical symmetries of the gauge field, is a compact Lie group with a bi-invariant metric on the Lie algebra formula_5, and we also assume given a Riemannian metric on the manifold formula_4. The Yang–Mills action functional is given by formula_6 where formula_7 is the curvature of the connection form formula_8, the norm-squared in the integrand comes from the metric on the Lie algebra and the one on the base manifold, and formula_9 is the Riemannian volume measure on formula_4. The measure formula_10 is given formally by formula_11 as a normalized probability measure on the space of all connections on the bundle, with formula_12 a parameter, and formula_13 is a formal normalizing constant. More precisely, the probability measure is more likely to be meaningful on the space of orbits of connections under gauge transformations. The Yang–Mills measure for two-dimensional manifolds. Study of Yang–Mills theory in two dimensions dates back at least to work of A. A. Migdal in 1975. Some formulas appearing in Migdal's work can, in retrospect, be seen to be connected to the heat kernel on the structure group of the theory. The role of the heat kernel was made more explicit in various works in the late 1970s, culminating in the introduction of the heat kernel action in work of Menotti and Onofri in 1981. In the continuum theory, the Yang–Mills measure formula_10 was rigorously defined for the case where formula_14 by Bruce Driver and by Leonard Gross, Christopher King, and Ambar Sengupta. For compact manifolds, both oriented and non-oriented, with or without boundary, with specified bundle topology, the Yang–Mills measure was constructed by Sengupta In this approach the 2-dimensional Yang–Mills measure is constructed by using a Gaussian measure on an infinite-dimensional space conditioned to satisfy relations implied by the topologies of the surface and of the bundle. Wilson loop variables (certain important variables on the space) were defined using stochastic differential equations and their expected values computed explicitly and found to agree with the results of the heat kernel action. Dana S. Fine used the formal Yang–Mills functional integral to compute loop expectation values. 
Other approaches include that of Klimek and Kondracki and Ashtekar et al. Thierry Lévy constructed the 2-dimensional Yang–Mills measure in a very general framework, starting with the loop-expectation value formulas and constructing the measure, somewhat analogously to Brownian motion measure being constructed from transition probabilities. Unlike other works that also aimed to construct the measure from loop expectation values, Lévy's construction makes it possible to consider a very wide family of loop observables. The discrete Yang–Mills measure is a term that has been used for the lattice gauge theory version of the Yang–Mills measure, especially for compact surfaces. The lattice in this case is a triangulation of the surface. Notable facts are: (i) the discrete Yang–Mills measure can encode the topology of the bundle over the continuum surface even if only the triangulation is used to define the measure; (ii) when two surfaces are sewn along a common boundary loop, the corresponding discrete Yang–Mills measures convolve to yield the measure for the combined surface. Wilson loop expectation values in 2 dimensions. For a piecewise smooth loop formula_15 on the base manifold formula_4 and a point formula_16 on the fiber in the principal formula_3-bundle formula_17 over the base point formula_18 of the loop, there is the holonomy formula_19 of any connection formula_2 on the bundle. For regular loops formula_20, all based at formula_21 and any function formula_22 on formula_23 the function formula_24 is called a Wilson loop variable, of interest mostly when formula_22 is a product of traces of the holonomies in representations of the group formula_3. With formula_4 being a two-dimensional Riemannian manifold the loop expectation values formula_25 were computed in the above-mentioned works. If formula_4 is the plane then formula_26 where formula_27 is the heat kernel on the group formula_3, formula_28 is the area enclosed by the loop formula_15, and the integration is with respect to unit-mass Haar measure. This formula was proved by Driver and by Gross et al. using the Gaussian measure construction of the Yang–Mills measure on the plane and by defining parallel transport by interpreting the equation of parallel transport as a Stratonovich stochastic differential equation. If formula_4 is the 2-sphere then formula_29 where now formula_30 is the area of the region "outside" the loop formula_15, and formula_31 is the total area of the sphere. This formula was proved by Sengupta using the conditioned Gaussian measure construction of the Yang–Mills measure and the result agrees with what one gets by using the heat kernel action of Menotti and Onofri. As an example for higher genus surfaces, if formula_4 is a torus, then formula_32 with formula_31 being the total area of the torus, and formula_15 a contractible loop on the torus enclosing an area formula_28. This, and counterparts in higher genus as well as for surfaces with boundary and for bundles with nontrivial topology, were proved by Sengupta. There is an extensive physics literature on loop expectation values in two-dimensional Yang–Mills theory. Many of the above formulas were known in the physics literature from the 1970s, with the results initially expressed in terms of a sum over the characters of the gauge group rather than the heat kernel and with the function formula_22 being the trace in some representation of the group. 
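The link between those character sums and the heat kernel can be sketched as follows (the factor of 1/2 and the Casimir normalization depend on the chosen bi-invariant metric and heat-equation convention, so this is indicative rather than a statement taken from the sources): the heat kernel on a compact group has a character expansion, and integrating it against a normalized character isolates a single exponential.

```latex
Q_t(x) \;=\; \sum_{\lambda} (\dim \lambda)\,\chi_\lambda(x)\, e^{-c_2(\lambda)\,t/2},
\qquad
\frac{1}{\dim\lambda}\int_G \chi_\lambda(x)\, Q_{Ta}(x)\, \mathrm{d}x \;=\; e^{-c_2(\lambda)\,Ta/2},
```

where the sum runs over the irreducible representations of the group, χ_λ is the character and c_2(λ) the Casimir eigenvalue; the second identity, which follows from the orthogonality of characters, exhibits the exponential area-law decay of the plane Wilson loop expectation in each representation.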
Expressions involving the heat kernel then appeared explicitly in the form of the "heat kernel action" in work of Menotti and Onofri. The role of the convolution property of the heat kernel was used in works of Sergio Albeverio et al. in constructing stochastic cosurface processes inspired by Yang–Mills theory and, indirectly, by Makeenko and Migdal in the physics literature. The low-T limit. The Yang–Mills partition function is, formally, formula_33 In the two-dimensional case we can view this as being (proportional to) the denominator that appears in the loop expectation values. Thus, for example, the partition function for the torus would be formula_34 where formula_35 is the area of the torus. In two of the most impactful works in the field, Edward Witten showed that as formula_36 the partition function yields the volume of the moduli space of flat connections with respect to a natural volume measure on the moduli space. This volume measure is associated with a natural symplectic structure on the moduli space when the surface is orientable, and is the torsion of a certain complex in the case where the surface is not orientable. Witten's discovery has been studied in different ways by several researchers. Let formula_37 denote the moduli space of flat connections on a trivial bundle, with structure group being a compact connected semi-simple Lie group formula_3 whose Lie algebra is equipped with an Ad-invariant metric, over a compact two-dimensional orientable manifold of genus formula_38. Witten showed that the symplectic volume of this moduli space is given by formula_39 where the sum is over all irreducible representations of formula_3. This was proved rigorously by Sengupta (see also the works by Lisa Jeffrey and by Kefeng Liu). There is a large literature on the symplectic structure on the moduli space of flat connections, and more generally on the moduli space itself, the major early work being that of Michael Atiyah and Raoul Bott. Returning to the Yang–Mills measure, Sengupta proved that the measure itself converges in a weak sense to a suitably scaled multiple of the symplectic volume measure for orientable surfaces of genus formula_40. Thierry Lévy and James R. Norris established a large deviations principle for this convergence, showing that the Yang–Mills measure encodes the Yang–Mills action functional even though this functional does not explicitly appear in the rigorous formulation of the measure. The large-"N" limit. The large-"N" limit of gauge theories refers to the behavior of the theory for gauge groups of the form formula_0, formula_41, formula_42, formula_43, and other such families, as formula_1 goes to formula_44. There is a large physics literature on this subject, including major early works by Gerardus 't Hooft. A key tool in this analysis is the Makeenko–Migdal equation. In two dimensions, the Makeenko–Migdal equation takes a special form developed by Kazakov and Kostov. In the large-N limit, the 2-D form of the Makeenko–Migdal equation relates the Wilson loop functional for a complicated curve with multiple crossings to the product of Wilson loop functionals for a pair of simpler curves with at least one less crossing. In the case of the sphere or the plane, it was proposed that the Makeenko–Migdal equation could (in principle) reduce the computation of Wilson loop functionals for arbitrary curves to the Wilson loop functional for a simple closed curve. 
In dimension 2, some of the major ideas were proposed by I. M. Singer, who named this limit the master field (a general notion in some areas of physics). Xu studied the large-formula_1 limit of 2-dimensional Yang–Mills loop expectation values using ideas from random matrix theory. Sengupta computed the large-"N" limit of loop expectation values in the plane and commented on the connection with free probability. Confirming one proposal of Singer, Michael Anshelevich and Sengupta showed that the large-"N" limit of the Yang–Mills measure over the plane for the groups formula_0 is given by a free probability theoretic counterpart of the Yang–Mills measure. An extensive study of the master field in the plane was made by Thierry Lévy. Several major contributions have been made by Bruce K. Driver, Brian C. Hall, and Todd Kemp, Franck Gabriel, and Antoine Dahlqvist. Dahlqvist and Norris have constructed the master field on the two-dimensional sphere. In spacetime dimension larger than 2, there is very little in terms of rigorous mathematical results. Sourav Chatterjee has proved several results in large-"N" gauge theory for dimension larger than 2. Chatterjee established an explicit formula for the leading term of the free energy of three-dimensional formula_45 lattice gauge theory for any N, as the lattice spacing tends to zero. Let formula_46 be the partition function of formula_47-dimensional formula_48 lattice gauge theory with coupling strength formula_49 in the box with lattice spacing formula_50 and size being n spacings in each direction. Chatterjee showed that in dimensions d=2 and 3, formula_51 is formula_52 up to leading order in formula_53, where formula_54 is a limiting free-energy term. A similar result was also obtained in dimension 4, for formula_55, formula_56, and formula_57 independently. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
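For the genus-formula_38 volume formula quoted earlier, the representation-theoretic sum can be evaluated concretely for the group SU(2), whose irreducible representations have dimensions 1, 2, 3, and so on: the sum of 1/(dim)^(2g-2) over irreducibles is the Riemann zeta value ζ(2g-2). The short script below is illustrative only and omits the prefactor |Z(G)| vol(G)^(2g-2), which depends on the chosen normalization.

```python
from scipy.special import zeta

def su2_volume_sum(genus, terms=100000):
    """Truncated sum over SU(2) irreducibles of 1/(dim)^(2g-2)."""
    return sum(1.0 / dim ** (2 * genus - 2) for dim in range(1, terms + 1))

for g in (2, 3, 4):
    print(g, su2_volume_sum(g), float(zeta(2 * g - 2)))
```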
[ { "math_id": 0, "text": "U(N)" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "M" }, { "math_id": 5, "text": "L(G)" }, { "math_id": 6, "text": " S_{YM}(A)=\\frac{1}{2} \\int_M \\|F^A\\|^2\\,d\\sigma_M " }, { "math_id": 7, "text": "F^A" }, { "math_id": 8, "text": " A" }, { "math_id": 9, "text": " \\sigma_M " }, { "math_id": 10, "text": "\\mu_T" }, { "math_id": 11, "text": "d\\mu_T(A)= \\frac{1}{Z_T} e^{-S_{YM}(A)/T} DA," }, { "math_id": 12, "text": "T>0" }, { "math_id": 13, "text": "Z_T" }, { "math_id": 14, "text": "M ={\\mathbb R}^2" }, { "math_id": 15, "text": "\\gamma" }, { "math_id": 16, "text": "u" }, { "math_id": 17, "text": "P\\to M" }, { "math_id": 18, "text": "o\\in M" }, { "math_id": 19, "text": "h_{\\gamma}(A)" }, { "math_id": 20, "text": " \\gamma_1, \\ldots, \\gamma_n" }, { "math_id": 21, "text": "o" }, { "math_id": 22, "text": "\\varphi" }, { "math_id": 23, "text": "G^n" }, { "math_id": 24, "text": "A\\mapsto \\varphi\\bigl(h_{\\gamma_1}(A),\\ldots, h_{\\gamma_n}(A)\\bigr)" }, { "math_id": 25, "text": "\\int \\varphi\\bigl(h_{\\gamma_1}(A),\\ldots, h_{\\gamma_n}(A)\\bigr)\\,d\\mu_T(A)" }, { "math_id": 26, "text": "\\int \\varphi\\bigl(h_{\\gamma}(A) \\bigr)\\,d\\mu_T(A) =\\int_G \\varphi(x) Q_{Ta}(x)\\,dx," }, { "math_id": 27, "text": "Q_t(y)" }, { "math_id": 28, "text": "a" }, { "math_id": 29, "text": "\\int \\varphi\\bigl(h_{\\gamma}(A) \\bigr)\\,d\\mu_T(A) =\\frac{1}{Q_{Tc}(e)} \\int_G \\varphi(x) Q_{Ta}(x)Q_{Tb}(x^{-1})\\,dx," }, { "math_id": 30, "text": "b" }, { "math_id": 31, "text": "c" }, { "math_id": 32, "text": "\\int \\varphi\\bigl(h_{\\gamma}(A) \\bigr)\\,d\\mu_T(A) =\\frac{\\int_G \\varphi(x) Q_{Ta}(x)Q_{Tb}(x^{-1}wzw^{-1}z^{-1})\\,dx \\,dw \\,dz}{\\int_G Q_{Tc}(wzw^{-1}z^{-1})\\,dw\\,dz}, " }, { "math_id": 33, "text": "\\int e^{-\\frac{1}{T}S_{YM}(A)}\\,DA " }, { "math_id": 34, "text": "\\int_{G^2} Q_{TS}(aba^{-1}b^{-1})\\,da\\,db," }, { "math_id": 35, "text": "S" }, { "math_id": 36, "text": "T\\downarrow 0" }, { "math_id": 37, "text": " \\mathcal{M}^0_g" }, { "math_id": 38, "text": " g\\geq 2" }, { "math_id": 39, "text": " \\operatorname{vol}_{\\overline{\\Omega}} \\bigl(\\mathcal{M}^0_g\\bigr) = |Z(G)|\\operatorname{vol}(G)^{2g-2} \\sum_\\alpha \\frac{1}{(\\dim\\alpha)^{2g-2}},\n" }, { "math_id": 40, "text": "\\geq 2" }, { "math_id": 41, "text": "SU(N)" }, { "math_id": 42, "text": "O(N)" }, { "math_id": 43, "text": "SO(N)" }, { "math_id": 44, "text": "\\uparrow \\infty" }, { "math_id": 45, "text": " U(N)" }, { "math_id": 46, "text": " Z(n,\\varepsilon ,g) " }, { "math_id": 47, "text": " d" }, { "math_id": 48, "text": " U (N) " }, { "math_id": 49, "text": " g " }, { "math_id": 50, "text": " \\varepsilon" }, { "math_id": 51, "text": "\\log Z(n,\\varepsilon ,g)" }, { "math_id": 52, "text": "n^d \\left( \\frac{1}{2}(d-1)N^2\\log(g^2\\varepsilon^{4-d}) +(d-1)\\log\\left( \\frac{\\prod_{j=1}^{N-1}j!} {(2\\pi)^{N/2}} \\right) +N^2K_d\\right)" }, { "math_id": 53, "text": " n" }, { "math_id": 54, "text": "K_d" }, { "math_id": 55, "text": "n\\to\\infty" }, { "math_id": 56, "text": "\\varepsilon\\to 0" }, { "math_id": 57, "text": "g\\to 0" } ]
https://en.wikipedia.org/wiki?curid=66319632
663203
Arthur–Merlin protocol
Interactive proof system in computational complexity theory In computational complexity theory, an Arthur–Merlin protocol, introduced by , is an interactive proof system in which the verifier's coin tosses are constrained to be public (i.e. known to the prover too). proved that all (formal) languages with interactive proofs of arbitrary length with private coins also have interactive proofs with public coins. Given two participants in the protocol called Arthur and Merlin respectively, the basic assumption is that Arthur is a standard computer (or verifier) equipped with a random number generating device, while Merlin is effectively an oracle with infinite computational power (also known as a prover). However, Merlin is not necessarily honest, so Arthur must analyze the information provided by Merlin in response to Arthur's queries and decide the problem itself. A problem is considered to be solvable by this protocol if whenever the answer is "yes", Merlin has some series of responses which will cause Arthur to accept at least &lt;templatestyles src="Fraction/styles.css" /&gt;2⁄3 of the time, and if whenever the answer is "no", Arthur will never accept more than &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄3 of the time. Thus, Arthur acts as a probabilistic polynomial-time verifier, assuming it is allotted polynomial time to make its decisions and queries. MA. The simplest such protocol is the 1-message protocol where Merlin sends Arthur a message, and then Arthur decides whether to accept or not by running a probabilistic polynomial time computation. (This is similar to the verifier-based definition of NP, the only difference being that Arthur is allowed to use randomness here.) Merlin does not have access to Arthur's coin tosses in this protocol, since it is a single-message protocol and Arthur tosses his coins only after receiving Merlin's message. This protocol is called "MA". Informally, a language "L" is in MA if for all strings in the language, there is a polynomial sized proof that Merlin can send Arthur to convince him of this fact with high probability, and for all strings not in the language there is no proof that convinces Arthur with high probability. Formally, the complexity class MA is the set of decision problems that can be decided in polynomial time by an Arthur–Merlin protocol where Merlin's only move precedes any computation by Arthur. In other words, a language "L" is in MA if there exists a polynomial-time deterministic Turing machine "M" and polynomials "p", "q" such that for every input string "x" of length "n" = |"x"|, The second condition can alternatively be written as To compare this with the informal definition above, "z" is the purported proof from Merlin (whose size is bounded by a polynomial) and "y" is the random string that Arthur uses, which is also polynomially bounded. AM. The complexity class AM (or AM[2]) is the set of decision problems that can be decided in polynomial time by an Arthur–Merlin protocol with two messages. There is only one query/response pair: Arthur tosses some random coins and sends the outcome of "all" his coin tosses to Merlin, Merlin responds with a purported proof, and Arthur deterministically verifies the proof. In this protocol, Arthur is only allowed to send outcomes of coin tosses to Merlin, and in the final stage Arthur must decide whether to accept or reject using only his previously generated random coin flips and Merlin's message. 
In other words, a language "L" is in AM if there exists a polynomial-time deterministic Turing machine "M" and polynomials "p", "q" such that for every input string "x" of length "n" = |"x"|, if "x" is in "L" then formula_3 and if "x" is not in "L" then formula_4 The second condition here can be rewritten as formula_5 As above, "z" is the alleged proof from Merlin (whose size is bounded by a polynomial) and "y" is the random string that Arthur uses, which is also polynomially bounded. The complexity class AM["k"] is the set of problems that can be decided in polynomial time, with "k" queries and responses. AM as defined above is AM[2]. AM[3] would start with one message from Merlin to Arthur, then a message from Arthur to Merlin and then finally a message from Merlin to Arthur. The last message should always be from Merlin to Arthur, since it never helps for Arthur to send a message to Merlin after deciding his answer. References. 
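The remark above that MA resembles the verifier-based definition of NP can be made concrete with a small sketch. The following Python toy (not taken from the cited papers; the choice of language and certificate is purely illustrative) casts the NP language COMPOSITES into the MA format: Merlin's single message is a claimed nontrivial factor, and Arthur's verification happens to ignore his coin tosses, so the estimated acceptance probability is either 1 or 0 and the required acceptance gap is met trivially.

```python
import random

def arthur_verifier(x, y, z):
    """Toy MA verifier M(x, y, z): x is an integer, z is Merlin's claimed
    nontrivial factor, y is Arthur's (here unused) random coin string."""
    return 1 if 1 < z < x and x % z == 0 else 0

def acceptance_probability(x, z, verifier, coin_length=16, trials=200):
    """Estimate Pr_y[M(x, y, z) = 1] over uniformly random coin strings y."""
    accepts = 0
    for _ in range(trials):
        y = [random.randint(0, 1) for _ in range(coin_length)]
        accepts += verifier(x, y, z)
    return accepts / trials

# Merlin's message for a "yes" instance (91 = 7 * 13): any nontrivial factor.
print(acceptance_probability(91, 7, arthur_verifier))   # 1.0: accepted with certainty
# No message convinces Arthur for a prime such as 97:
print(max(acceptance_probability(97, z, arthur_verifier) for z in range(2, 97)))  # 0.0
```

A genuine MA or AM protocol would use the coin tosses nontrivially; the sketch only mirrors the structure of the formal definition, with the empirical estimate standing in for the probability over "y".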
[ { "math_id": 0, "text": "\\exists z\\in\\{0,1\\}^{q(n)}\\,\\Pr\\nolimits_{y\\in\\{0,1\\}^{p(n)}}(M(x,y,z)=1)\\ge2/3," }, { "math_id": 1, "text": "\\forall z\\in\\{0,1\\}^{q(n)}\\,\\Pr\\nolimits_{y\\in\\{0,1\\}^{p(n)}}(M(x,y,z)=0)\\ge2/3." }, { "math_id": 2, "text": "\\forall z\\in\\{0,1\\}^{q(n)}\\,\\Pr\\nolimits_{y\\in\\{0,1\\}^{p(n)}}(M(x,y,z)=1)\\le1/3." }, { "math_id": 3, "text": "\\Pr\\nolimits_{y\\in\\{0,1\\}^{p(n)}}(\\exists z\\in\\{0,1\\}^{q(n)}\\,M(x,y,z)=1)\\ge2/3," }, { "math_id": 4, "text": "\\Pr\\nolimits_{y\\in\\{0,1\\}^{p(n)}}(\\forall z\\in\\{0,1\\}^{q(n)}\\,M(x,y,z)=0)\\ge2/3." }, { "math_id": 5, "text": "\\Pr\\nolimits_{y\\in\\{0,1\\}^{p(n)}}(\\exists z\\in\\{0,1\\}^{q(n)}\\,M(x,y,z)=1)\\le1/3." }, { "math_id": 6, "text": " \\exists \\cdot \\mathsf{BPP}" }, { "math_id": 7, "text": "f_i" } ]
https://en.wikipedia.org/wiki?curid=663203
66323235
Network entropy
In network science, the network entropy is a disorder measure derived from information theory to describe the level of randomness and the amount of information encoded in a graph. It is a relevant metric to quantitatively characterize real complex networks and can also be used to quantify network complexity. Formulations. According to a 2018 publication by Zenil "et al." there are several formulations by which to calculate network entropy and, as a rule, they all require focusing on a particular property of the graph, such as the adjacency matrix, degree sequence, degree distribution or number of bifurcations, which might lead to values of entropy that are not invariant to the chosen network description. Degree Distribution Shannon Entropy. The Shannon entropy can be measured for the network degree probability distribution as an average measurement of the heterogeneity of the network. formula_0 This formulation has limited use with regard to complexity, information content, causation and temporal information. Be that as it may, algorithmic complexity has the ability to characterize any general or universal property of a graph or network, and it is proven that graphs with low entropy have low algorithmic complexity because the statistical regularities found in a graph are useful for computer programs to recreate it. The same cannot be said for high entropy networks though, as these might have any value for algorithmic complexity. Random Walker Shannon Entropy. Due to the limits of the previous formulation, it is possible to take a different approach while keeping the usage of the original Shannon entropy equation. Consider a random walker that travels around the graph, going from a node formula_1 to any node formula_2 adjacent to formula_1 with equal probability. The probability distribution formula_3 that describes the behavior of this random walker would thus be formula_4, where formula_5 is the graph adjacency matrix and formula_6 is the degree of node formula_1. From that, the Shannon entropy formula_7 of each node can be defined as formula_8 and, since formula_9, the normalized node entropy formula_10 is calculated as formula_11 This leads to a normalized network entropy formula_12, calculated by averaging the normalized node entropy over the whole network: formula_13 The normalized network entropy is maximal formula_14 when the network is fully connected and decreases the sparser the network becomes formula_15. Notice that isolated nodes formula_16 do not have their probability formula_3 defined and, therefore, are not considered when measuring the network entropy. This formulation of network entropy has low sensitivity to hubs due to the logarithmic factor and is more meaningful for weighted networks, which ultimately makes it hard to differentiate scale-free networks using this measure alone. Random Walker Kolmogorov–Sinai Entropy. The limitations of the random walker Shannon entropy can be overcome by adapting it to use a Kolmogorov–Sinai entropy. In this context, network entropy is the entropy of a stochastic matrix associated with the graph adjacency matrix formula_5, and the random walker Shannon entropy is called the "dynamic entropy" of the network. From that, let formula_17 be the dominant eigenvalue of formula_5. It is proven that formula_18 satisfies a variational principle that is equivalent to the "dynamic entropy" for unweighted networks, i.e., networks whose adjacency matrix consists exclusively of boolean values. 
Therefore, the topological entropy is defined as formula_19 This formulation is important to the study of network robustness, i.e., the capacity of the network to withstand random structural changes. Robustness is actually difficult to measure numerically, whereas the entropy can be easily calculated for any network, which is especially important in the context of non-stationary networks. The entropic fluctuation theorem shows that this entropy is positively correlated with robustness, and hence with a greater insensitivity of an observable to dynamic or structural perturbations of the network. Moreover, the eigenvalues are inherently related to the multiplicity of internal pathways, leading to a negative correlation between the topological entropy and the shortest average path length. Other than that, the Kolmogorov entropy is related to the Ricci curvature of the network, a metric that has been used to differentiate stages of cancer from gene co-expression networks, as well as to give hallmarks of financial crashes from stock correlation networks. Von Neumann entropy. Von Neumann entropy is the extension of the classical Gibbs entropy in a quantum context. This entropy is constructed from a density matrix formula_20: historically, the first proposed candidate for such a density matrix has been an expression of the Laplacian matrix L associated with the network. The average von Neumann entropy of an ensemble is calculated as: formula_21 For the random network ensemble formula_22, the relation between formula_23 and formula_24 is nonmonotonic when the average connectivity formula_25 is varied. For canonical power-law network ensembles, the two entropies are linearly related. formula_26 Networks with given expected degree sequences suggest that heterogeneity in the expected degree distribution implies an equivalence between a quantum and a classical description of networks, which respectively correspond to the von Neumann and the Shannon entropy. This definition of the von Neumann entropy can also be extended to multilayer networks with a tensorial approach and has been used successfully to reduce their dimensionality from a structural perspective. However, it has been shown that this definition of entropy does not satisfy the property of sub-additivity (see Von Neumann entropy's subadditivity), expected to hold theoretically. A more grounded definition, satisfying this fundamental property, has been introduced by Manlio De Domenico and Biamonte as a quantum-like Gibbs state formula_27 where formula_28 is a normalizing factor which plays the role of the partition function, and formula_29 is a tunable parameter which allows multi-resolution analysis. If formula_29 is interpreted as a temporal parameter, this density matrix is formally proportional to the propagator of a diffusive process on top of the network. This feature has been used to build a statistical field theory of complex information dynamics, where the density matrix can be interpreted in terms of the superposition of stream operators whose action is to activate information flows among nodes. The framework has been successfully applied to analyze the protein-protein interaction networks of virus-human interactomes, including that of SARS-CoV-2, to unravel the systemic features of infection of the latter at microscopic, mesoscopic and macroscopic scales, as well as to assess the importance of nodes for integrating information flows within the network and the role they play in network robustness. 
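As a concrete illustration of the density-matrix construction just described, the following sketch (assuming the numpy and scipy libraries; the example graph and the value of formula_29 are arbitrary) computes the entropy of the quantum-like Gibbs state built from the graph Laplacian.

```python
import numpy as np
from scipy.linalg import expm

def von_neumann_entropy(adjacency, beta=1.0):
    """Von Neumann entropy of a graph from the Gibbs-like density matrix
    rho = exp(-beta * L) / Tr[exp(-beta * L)], with L the combinatorial Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    rho = expm(-beta * L)
    rho /= np.trace(rho)                             # normalization by the partition function
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]   # convention: 0 log 0 = 0
    return -np.sum(eigenvalues * np.log(eigenvalues))

# Example: a 4-node path graph
path = [[0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [0, 0, 1, 0]]
print(von_neumann_entropy(path, beta=1.0))
```

For small formula_29 the state approaches the maximally mixed state and the entropy approaches log N, while for large formula_29 the state concentrates on the kernel of the Laplacian, so the entropy tends to the logarithm of the number of connected components.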
This approach has been generalized to deal with other types of dynamics, such as random walks, on top of multilayer networks, providing an effective way to reduce the dimensionality of such systems without altering their structure. Using both classical and maximum-entropy random walks, the corresponding density matrices have been used to encode the network states of the human brain and to assess, at multiple scales, the connectome's information capacity at different stages of dementia. Maximum Entropy Principle. The maximum entropy principle is a variational principle stating that the probability distribution best representing the current state of a system is the one which maximizes the Shannon entropy. This concept can be used to generate an ensemble of random graphs with given structural properties derived from the maximum entropy approach which, in turn, describes the most probable network configuration: the maximum entropy principle allows for maximally unbiased information when lacking complete knowledge (the microscopic configuration is not accessible, e.g. we do not know the adjacency matrix). On the other hand, this ensemble serves as a null model when the actual microscopic configuration of the network is known, allowing one to assess the significance of empirical patterns found in the network. Network Ensembles. It is possible to extend the network entropy formulations to instead measure the ensemble entropy. A set of networks that satisfies given structural characteristics can be treated as a network ensemble. Introduced by Ginestra Bianconi in 2007, the entropy of a network ensemble measures the level of order or uncertainty of a network ensemble. The entropy is the logarithm of the number of graphs. Entropy can also be defined for a single network. Basin entropy is the logarithm of the number of attractors in a Boolean network. Employing approaches from statistical mechanics, the complexity, uncertainty, and randomness of networks can be described by network ensembles with different types of constraints. Gibbs and Shannon entropy. By analogy to statistical mechanics, microcanonical ensembles and canonical ensembles of networks are introduced for the implementation. A partition function Z of an ensemble can be defined as: formula_30 where formula_31 is the constraint, and formula_32 (formula_33) are the elements in the adjacency matrix, with formula_34 if and only if there is a link between node i and node j. formula_35 is a step function with formula_36 if formula_37, and formula_38 if formula_39. The auxiliary fields formula_40 and formula_41 have been introduced as an analogy to the bath in classical mechanics. For simple undirected networks, the partition function can be simplified as formula_42 where formula_43, formula_44 is the index of the weight, and for a simple network formula_45. Microcanonical ensembles and canonical ensembles are demonstrated with simple undirected networks. For a microcanonical ensemble, the Gibbs entropy formula_46 is defined by: formula_47 where formula_48 indicates the cardinality of the ensemble, i.e., the total number of networks in the ensemble. The probability of having a link between nodes i and j, with weight formula_44, is given by: formula_49 For a canonical ensemble, the entropy is presented in the form of a Shannon entropy: formula_50 Relation between Gibbs and Shannon entropy. 
The network ensemble formula_51 with a given number of nodes formula_52 and links formula_53, and its conjugate canonical ensemble formula_22, are characterized as microcanonical and canonical ensembles, with Gibbs entropy formula_46 and Shannon entropy S, respectively. The Gibbs entropy of the formula_51 ensemble is given by: formula_54 For the formula_22 ensemble, formula_55 Inserting formula_3 into the Shannon entropy gives: formula_56 The relation indicates that the Gibbs entropy formula_46 and the Shannon entropy per node S/N of random graphs are equal in the thermodynamic limit formula_57. References. 
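A small numerical sketch (plain Python; the sizes chosen below are arbitrary) makes the relation above tangible: it computes the Gibbs entropy per node of formula_51 and the right-hand side built from the Shannon entropy of formula_22, and the two agree increasingly well as the number of nodes grows.

```python
import math

def gibbs_entropy(N, L):
    """Gibbs entropy per node of the microcanonical ensemble G(N, L):
    Sigma = log(binomial(N(N-1)/2, L)) / N, via lgamma to avoid overflow."""
    pairs = N * (N - 1) // 2
    log_binom = math.lgamma(pairs + 1) - math.lgamma(L + 1) - math.lgamma(pairs - L + 1)
    return log_binom / N

def shannon_entropy(N, L):
    """Shannon entropy of the conjugate canonical ensemble G(N, p), p = 2L/(N(N-1)):
    each of the N(N-1)/2 node pairs contributes one binary entropy term."""
    pairs = N * (N - 1) / 2
    p = L / pairs
    return -pairs * (p * math.log(p) + (1 - p) * math.log(1 - p))

for N, L in [(50, 200), (500, 2000), (5000, 20000)]:
    pairs = N * (N - 1) / 2
    sigma = gibbs_entropy(N, L)
    rhs = shannon_entropy(N, L) / N + (math.log(pairs / L) - math.log(pairs - L)) / (2 * N)
    print(N, round(sigma, 4), round(rhs, 4))   # the two columns converge as N grows
```

The residual gap between the two columns comes from the Stirling approximation and vanishes in the thermodynamic limit formula_57.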
[ { "math_id": 0, "text": "\\mathcal{H} = - \\sum_{k=1}^{N - 1} P(k) \\ln{P(k)}" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "j" }, { "math_id": 3, "text": "p_{ij}" }, { "math_id": 4, "text": "p_{ij} = \\begin{cases}\n \\frac{1}{k_i}, & \\text{if } A_{ij} = 1 \\\\\n 0, & \\text{if } A_{ij} = 0 \\\\\n\\end{cases}" }, { "math_id": 5, "text": "(A_{ij})" }, { "math_id": 6, "text": "k_i" }, { "math_id": 7, "text": "\\mathcal{S}_i" }, { "math_id": 8, "text": "\\mathcal{S}_i = - \\sum_{j = 1}^{N - 1} p_{ij} \\ln{p_{ij}} = \\ln{k_i}" }, { "math_id": 9, "text": "max(k_i) = N - 1" }, { "math_id": 10, "text": "\\mathcal{H}_i" }, { "math_id": 11, "text": "\\mathcal{H}_i = \\frac{\\mathcal{S}_i}{max(\\mathcal{S}_i)} = \\frac{\\ln{k_i}}{\\ln(max(k_i))} = \\frac{\\ln{k_i}}{\\ln(N - 1)}" }, { "math_id": 12, "text": "\\mathcal{H}" }, { "math_id": 13, "text": "\\mathcal{H} = \\frac{1}{N} \\sum_{i = 1}^N \\mathcal{H}_i = \\frac{1}{N \\ln(N - 1)} \\sum_{i = 1}^N \\ln{k_i}" }, { "math_id": 14, "text": "\\mathcal{H} = 1" }, { "math_id": 15, "text": "\\mathcal{H} = 0" }, { "math_id": 16, "text": "k_i = 0" }, { "math_id": 17, "text": "\\lambda" }, { "math_id": 18, "text": "\\ln \\lambda" }, { "math_id": 19, "text": "\\mathcal{H} = \\ln \\lambda" }, { "math_id": 20, "text": "\\rho" }, { "math_id": 21, "text": "{S}_{VN} = -\\langle\\mathrm{Tr}\\rho\\log(\\rho)\\rangle" }, { "math_id": 22, "text": "G(N,p)" }, { "math_id": 23, "text": "S_{VN}" }, { "math_id": 24, "text": "S" }, { "math_id": 25, "text": "p(N-1)" }, { "math_id": 26, "text": "{S}_{VN} = \\eta {S/N} + \\beta" }, { "math_id": 27, "text": "\\rho(\\beta)=\\frac{e^{-\\beta L}}{Z(\\beta)}" }, { "math_id": 28, "text": "Z(\\beta)=Tr[e^{-\\beta L}]" }, { "math_id": 29, "text": "\\beta" }, { "math_id": 30, "text": "Z = \\sum_{\\mathbf{a}} \\delta \\left[\\vec{F}(\\mathbf{a})-\\vec{C}\\right] \\exp\\left(\\sum_{ij}h_{ij}\\Theta(a_{ij}) + r_{ij}a_{ij}\\right)" }, { "math_id": 31, "text": "\\vec{F}(\\mathbf{a})=\\vec{C}" }, { "math_id": 32, "text": "a_{ij}" }, { "math_id": 33, "text": "a_{ij} \\geq {0}" }, { "math_id": 34, "text": "a_{ij} > 0" }, { "math_id": 35, "text": "\\Theta(a_{ij})" }, { "math_id": 36, "text": "\\Theta(a_{ij}) = 1" }, { "math_id": 37, "text": "x > 0" }, { "math_id": 38, "text": "\\Theta(a_{ij}) = 0" }, { "math_id": 39, "text": "x = 0" }, { "math_id": 40, "text": "h_{ij}" }, { "math_id": 41, "text": "r_{ij}" }, { "math_id": 42, "text": "Z = \\sum_{\\{a_{ij}\\}} \\prod_{k}\\delta(\\textrm{constraint}_{k}(\\{a_{ij}\\})) \\exp\\left(\\sum_{i<j}\\sum_{\\alpha}h_{ij}(\\alpha)\\delta_{a_{ij},\\alpha}\\right)" }, { "math_id": 43, "text": "a_{ij}\\in\\alpha" }, { "math_id": 44, "text": "\\alpha" }, { "math_id": 45, "text": "\\alpha=\\{0,1\\}" }, { "math_id": 46, "text": "\\Sigma" }, { "math_id": 47, "text": "\\begin{align}\n\\Sigma &= \\frac{1}{N} \\log\\mathcal{N} \\\\\n&= \\frac{1}{N} \\log Z|_{h_{ij}(\\alpha)=0\\forall(i,j,\\alpha)}\n\\end{align}" }, { "math_id": 48, "text": "\\mathcal{N}" }, { "math_id": 49, "text": "\\pi_{ij}(\\alpha) = \\frac{\\partial \\log Z}{\\partial{h_{ij}}(\\alpha)}" }, { "math_id": 50, "text": "{S}=-\\sum_{i<j}\\sum_{\\alpha} \\pi_{ij}(\\alpha) \\log \\pi_{ij}(\\alpha)" }, { "math_id": 51, "text": "G(N,L)" }, { "math_id": 52, "text": "N" }, { "math_id": 53, "text": "L" }, { "math_id": 54, "text": "{N}\\Sigma = \\log\\left(\\begin{matrix}\\cfrac{N(N-1)}{2}\\\\L\\end{matrix}\\right)" }, { "math_id": 55, "text": "{p}_{ij} = p = \\cfrac{2L}{N(N-1)}" }, { "math_id": 56, "text": "\\Sigma = 
S/N+\\cfrac{1}{2N}\\left[\\log\\left( \\cfrac{N(N-1)}{2L} \\right) - \\log\\left(\\cfrac{N(N-1)}{2}-L\\right)\\right]" }, { "math_id": 57, "text": "N\\to\\infty" } ]
https://en.wikipedia.org/wiki?curid=66323235
66326711
Adams resolution
In mathematics, specifically algebraic topology, there is a resolution of spectra analogous to free resolutions, yielding a tool for constructing the Adams spectral sequence. Essentially, the idea is to take a connective spectrum of finite type formula_0 and iteratively resolve it with other spectra that are in the homotopy kernel of a map resolving the cohomology classes in formula_1 using Eilenberg–MacLane spectra. This construction can be generalized using a spectrum formula_2, such as the Brown–Peterson spectrum formula_3, or the complex cobordism spectrum formula_4, and is used in the construction of the Adams–Novikov spectral sequence (p. 49). Construction. The mod formula_5 Adams resolution formula_6 for a spectrum formula_0 is a certain "chain-complex" of spectra induced from recursively looking at the fibers of maps into generalized Eilenberg–MacLane spectra giving generators for the cohomology of resolved spectra (p. 43). To do this, we start by considering the map formula_7 where formula_8 is an Eilenberg–MacLane spectrum representing the generators of formula_9, so it is of the form formula_10 where formula_11 indexes a basis of formula_12, and the map comes from the properties of Eilenberg–MacLane spectra. Then, we can take the homotopy fiber of this map (which acts as a homotopy kernel) to get a space formula_13. Note that we now set formula_14 and formula_15. Then, we can form a commutative diagram formula_16 where the horizontal map is the fiber map. Recursively iterating through this construction yields a commutative diagram formula_17 giving the collection formula_6. This means formula_18 is the homotopy fiber of formula_19 and formula_20 comes from the universal properties of the homotopy fiber. Resolution of cohomology of a spectrum. Now, we can use the Adams resolution to construct a free formula_21-resolution of the cohomology formula_9 of a spectrum formula_0. From the Adams resolution, there are short exact sequences formula_22 which can be strung together to form a long exact sequence formula_23 giving a free resolution of formula_9 as an formula_21-module. "E"*-Adams resolution. Because there are technical difficulties with studying the cohomology ring formula_24 in general (p. 280), we restrict to the case of considering the homology coalgebra formula_25 (of co-operations). Note that for the case formula_26, formula_27 is the dual Steenrod algebra. Since formula_28 is an formula_25-comodule, we can form the bigraded group formula_29 which contains the formula_30-page of the Adams–Novikov spectral sequence for formula_0 satisfying a list of technical conditions (p. 50). To get this page, we must construct the formula_31-Adams resolution (p. 49), which is somewhat analogous to the cohomological resolution above. We say a diagram of the form formula_32 where the vertical arrows are the maps formula_33 is an formula_31-Adams resolution if formula_34, where formula_35 is the corresponding vertical map; formula_36 is a retract of formula_37, so that formula_38 is a monomorphism, split by a map formula_39 with formula_40; formula_41 is a retract of formula_42; and formula_43 for formula_44, and is formula_45 otherwise. Although this seems like a long laundry list of properties, they are very important in the construction of the spectral sequence. In addition, the retract properties affect the structure of construction of the formula_31-Adams resolution since we no longer need to take a wedge sum of spectra for every generator. Construction for ring spectra. The construction of the formula_31-Adams resolution is rather simple to state in comparison to the previous resolution for any associative, commutative, connective ring spectrum formula_2 satisfying some additional hypotheses. 
These include formula_25 being flat over formula_46, formula_47 on formula_48 being an isomorphism, and formula_49 with formula_50 being finitely generated, for which the unique ring map formula_51 extends maximally. If we set formula_52 and let formula_33 be the canonical map, we can set formula_34 Note that formula_2 is a retract of formula_53 from its ring spectrum structure, hence formula_36 is a retract of formula_54, and similarly, formula_41 is a retract of formula_37. In addition, formula_55 which gives the desired formula_56 terms from the flatness. Relation to cobar complex. It turns out that the formula_57-term of the associated Adams–Novikov spectral sequence is then the cobar complex formula_58.
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "H^*(X;\\mathbb{Z}/p)" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "BP" }, { "math_id": 4, "text": "MU" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "(X_s,g_s)" }, { "math_id": 7, "text": "\\begin{matrix}\nX \\\\\n\\downarrow \\\\\nK\n\\end{matrix}" }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "H^*(X)" }, { "math_id": 10, "text": "K = \\bigwedge_{k=1}^\\infty \\bigwedge_{I_k} \\Sigma^kH\\mathbb{Z}/p " }, { "math_id": 11, "text": "I_k" }, { "math_id": 12, "text": "H^k(X)" }, { "math_id": 13, "text": "X_1" }, { "math_id": 14, "text": "X_0 = X" }, { "math_id": 15, "text": "K_0 = K" }, { "math_id": 16, "text": "\\begin{matrix}\nX_0 & \\leftarrow & X_1 \\\\\n\\downarrow & & \\\\\nK_0\n\\end{matrix}" }, { "math_id": 17, "text": "\\begin{matrix}\nX_0 & \\leftarrow & X_1 & \\leftarrow & X_2 & \\leftarrow \\cdots \\\\\n\\downarrow & & \\downarrow & & \\downarrow \\\\\nK_0 & & K_1 & & K_2\n\\end{matrix}" }, { "math_id": 18, "text": "X_s = \\text{Hofiber}(f_{s-1}:X_{s-1} \\to K_{s-1})" }, { "math_id": 19, "text": "f_{s-1}" }, { "math_id": 20, "text": "g_s:X_s \\to X_{s-1}" }, { "math_id": 21, "text": "\\mathcal{A}_p" }, { "math_id": 22, "text": "0 \\leftarrow H^*(X_s) \\leftarrow H^*(K_s) \\leftarrow H^*(\\Sigma X_{s+1}) \\leftarrow 0" }, { "math_id": 23, "text": "0 \\leftarrow H^*(X) \\leftarrow H^*(K_0) \\leftarrow H^*(\\Sigma K_1)\n\\leftarrow H^*(\\Sigma^2 K_2) \\leftarrow \\cdots " }, { "math_id": 24, "text": "E^*(E)" }, { "math_id": 25, "text": "E_*(E)" }, { "math_id": 26, "text": "E = H\\mathbb{F}_p" }, { "math_id": 27, "text": "H\\mathbb{F}_{p*}(H\\mathbb{F}_p) =\\mathcal{A}_*" }, { "math_id": 28, "text": "E_*(X)" }, { "math_id": 29, "text": "\\text{Ext}_{E_*(E)}(E_*(\\mathbb{S}), E_*(X))" }, { "math_id": 30, "text": "E_2" }, { "math_id": 31, "text": "E_*" }, { "math_id": 32, "text": "\\begin{matrix}\nX_0 & \\xleftarrow{g_0} & X_1 & \\xleftarrow{g_1} & X_2 & \\leftarrow \\cdots \\\\\n\\downarrow & & \\downarrow & & \\downarrow \\\\\nK_0 & & K_1 & & K_2\n\\end{matrix}" }, { "math_id": 33, "text": "f_s: X_s \\to K_s" }, { "math_id": 34, "text": "X_{s+1} = \\text{Hofiber}(f_s)" }, { "math_id": 35, "text": "f_s" }, { "math_id": 36, "text": "E \\wedge X_s" }, { "math_id": 37, "text": "E\\wedge K_s" }, { "math_id": 38, "text": "E_*(f_s)" }, { "math_id": 39, "text": "h_s:E \\wedge K_s \\to E \\wedge X_s" }, { "math_id": 40, "text": "h_s(E\\wedge f_s) = id_{E \\wedge X_s}" }, { "math_id": 41, "text": "K_s" }, { "math_id": 42, "text": "E \\wedge K_s" }, { "math_id": 43, "text": "\\text{Ext}^{t,u}(E_*(\\mathbb{S}), E_*(K_s)) = \\pi_u(K_s)" }, { "math_id": 44, "text": "t = 0" }, { "math_id": 45, "text": "0" }, { "math_id": 46, "text": "\\pi_*(E)" }, { "math_id": 47, "text": "\\mu_*" }, { "math_id": 48, "text": "\\pi_0" }, { "math_id": 49, "text": "H_r(E; A)" }, { "math_id": 50, "text": "\\mathbb{Z} \\subset A \\subset \\mathbb{Q}" }, { "math_id": 51, "text": "\\theta:\\mathbb{Z} \\to \\pi_0(E)" }, { "math_id": 52, "text": "K_s = E \\wedge F_s" }, { "math_id": 53, "text": "E \\wedge E" }, { "math_id": 54, "text": "E \\wedge K_s = E \\wedge E \\wedge X_s" }, { "math_id": 55, "text": "E_*(K_s) = E_*(E)\\otimes_{\\pi_*(E)}E_*(X_s)" }, { "math_id": 56, "text": "\\text{Ext}" }, { "math_id": 57, "text": "E_1" }, { "math_id": 58, "text": "C^*(E_*(X))" } ]
https://en.wikipedia.org/wiki?curid=66326711
663345
Function problem
Type of computational problem In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'. Formal definition. A functional problem formula_0 is defined by a relation formula_1 over strings of an arbitrary alphabet formula_2: formula_3 An algorithm solves formula_0 if for every input formula_4 such that there exists a formula_5 satisfying formula_6, the algorithm produces one such formula_5, and if there are no such formula_5, it rejects. A promise function problem is allowed to do anything (thus may not terminate) if no such formula_5 exists. Examples. A well-known function problem is given by the Functional Boolean Satisfiability Problem, FSAT for short. The problem, which is closely related to the SAT decision problem, can be formulated as follows: Given a boolean formula formula_7 with variables formula_8, find an assignment formula_9 such that formula_7 evaluates to formula_10 or decide that no such assignment exists. In this case the relation formula_1 is given by tuples of suitably encoded boolean formulas and satisfying assignments. While a SAT algorithm, fed with a formula formula_7, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case. Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors. Relationship to other complexity classes. Consider an arbitrary decision problem formula_11 in the class NP. By the definition of NP, each problem instance formula_4 that is answered 'yes' has a polynomial-size certificate formula_5 which serves as a proof for the 'yes' answer. Thus, the set of these tuples formula_12 forms a relation, representing the function problem "given formula_4 in formula_11, find a certificate formula_5 for formula_4". This function problem is called the "function variant" of formula_11; it belongs to the class FNP. FNP can be thought of as the function class analogue of NP, in that solutions of FNP problems can be efficiently (i.e., in polynomial time in terms of the length of the input) "verified", but not necessarily efficiently "found". In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time. Self-reducibility. Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: An algorithm can first ask whether the formula formula_7 is satisfiable. After that the algorithm can fix variable formula_13 to TRUE and ask again. If the resulting formula is still satisfiable the algorithm keeps formula_13 fixed to TRUE and continues to fix formula_14, otherwise it decides that formula_13 has to be FALSE and continues. Thus, FSAT is solvable in polynomial time using an oracle deciding SAT. In general, a problem in NP is called "self-reducible" if its function variant can be solved in polynomial time using an oracle deciding the original problem. Every NP-complete problem is self-reducible. 
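The variable-fixing procedure just described can be written down directly. The sketch below is plain Python; the brute-force sat_oracle is only a stand-in for the SAT decision oracle assumed in the argument, and the example formula is arbitrary. It recovers a satisfying assignment with one oracle call per variable plus one initial call.

```python
from itertools import product

def sat_oracle(formula, n_vars):
    """Stand-in for a SAT decision oracle (here: exponential brute force).
    `formula` is a callable taking a tuple of n_vars booleans."""
    return any(formula(bits) for bits in product((False, True), repeat=n_vars))

def fsat(formula, n_vars):
    """Recover a satisfying assignment with O(n) oracle calls by fixing
    one variable at a time, as described above."""
    if not sat_oracle(formula, n_vars):
        return None                                  # decide "unsatisfiable"
    fixed = ()
    for i in range(n_vars):
        candidate = fixed + (True,)
        remaining = n_vars - i - 1
        restricted = lambda rest, c=candidate: formula(c + rest)
        # Keep x_i = True only if the restricted formula stays satisfiable.
        fixed = candidate if sat_oracle(restricted, remaining) else fixed + (False,)
    return fixed

# (x1 or x2) and (not x1 or x3)
phi = lambda v: (v[0] or v[1]) and ((not v[0]) or v[2])
print(fsat(phi, 3))    # (True, True, True), a satisfying assignment
```

Replacing sat_oracle with a genuine polynomial-time decision procedure for SAT would make fsat run in polynomial time, which is exactly the self-reducibility property discussed above.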
It is conjectured that the integer factorization problem is not self-reducible, because deciding whether an integer is prime is in P (easy), while the integer factorization problem is believed to be hard for a classical computer. There are several (slightly different) notions of self-reducibility. Reductions and complete problems. Function problems can be reduced much like decision problems: Given function problems formula_15 and formula_16, we say that formula_15 reduces to formula_16 if there exist polynomial-time computable functions formula_17 and formula_18 such that for all instances formula_4 of formula_1 and possible solutions formula_5 of formula_19, it holds that if formula_4 has a solution with respect to formula_1, then formula_20 has a solution with respect to formula_19, and that formula_21 It is therefore possible to define FNP-complete problems analogous to NP-complete problems: A problem formula_15 is FNP-complete if every problem in FNP can be reduced to formula_15. The complexity class of FNP-complete problems is denoted by FNP-C or FNPC. Hence the problem FSAT is also an FNP-complete problem, and it holds that formula_22 if and only if formula_23. Total function problems. The relation formula_24 used to define function problems has the drawback of being incomplete: not every input formula_4 has a counterpart formula_5 such that formula_6. Therefore, the question of computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider the restriction of function problems to total relations, yielding the class TFNP as a subclass of FNP. This class contains problems such as the computation of pure Nash equilibria in certain strategic games where a solution is guaranteed to exist. In addition, if TFNP contains any FNP-complete problem it follows that formula_25. References. 
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "\\Sigma" }, { "math_id": 3, "text": "R \\subseteq \\Sigma^* \\times \\Sigma^*." }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "(x, y) \\in R" }, { "math_id": 7, "text": "\\varphi" }, { "math_id": 8, "text": "x_1, \\ldots, x_n" }, { "math_id": 9, "text": "x_i \\rightarrow \\{ \\text{TRUE}, \\text{FALSE} \\}" }, { "math_id": 10, "text": "\\text{TRUE}" }, { "math_id": 11, "text": "L" }, { "math_id": 12, "text": "(x,y)" }, { "math_id": 13, "text": "x_1" }, { "math_id": 14, "text": "x_2" }, { "math_id": 15, "text": "\\Pi_R" }, { "math_id": 16, "text": "\\Pi_S" }, { "math_id": 17, "text": "f" }, { "math_id": 18, "text": "g" }, { "math_id": 19, "text": "S" }, { "math_id": 20, "text": "f(x)" }, { "math_id": 21, "text": "(f(x), y) \\in S \\implies (x, g(x,y)) \\in R." }, { "math_id": 22, "text": "\\mathbf{P} = \\mathbf{NP}" }, { "math_id": 23, "text": "\\mathbf{FP} = \\mathbf{FNP}" }, { "math_id": 24, "text": "R(x, y)" }, { "math_id": 25, "text": "\\mathbf{NP} = \\textbf{co-NP}" } ]
https://en.wikipedia.org/wiki?curid=663345
663347
FP (complexity)
Complexity class In computational complexity theory, the complexity class FP is the set of function problems that can be solved by a deterministic Turing machine in polynomial time. It is the function problem version of the decision problem class P. Roughly speaking, it is the class of functions that can be efficiently computed on classical computers without randomization. The difference between FP and P is that problems in P have one-bit, yes/no answers, while problems in FP can have any output that can be computed in polynomial time. For example, adding two numbers is an FP problem, while determining if their sum is odd is in P. Polynomial-time function problems are fundamental in defining polynomial-time reductions, which are used in turn to define the class of NP-complete problems. Formal definition. FP is formally defined as follows: A binary relation formula_0 is in FP if and only if there is a deterministic polynomial-time algorithm that, given formula_1, either finds some formula_2 such that formula_0 holds, or signals that no such formula_2 exists. References. 
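As an illustration of the difference between a function problem in FP and its decision counterpart in P, the following sketch (an illustrative example, not taken from the article) computes Bézout coefficients with the extended Euclidean algorithm, which runs in time polynomial in the bit length of the input, and contrasts it with the one-bit coprimality question.

```python
def extended_gcd(a, b):
    """FP-style function problem: given (a, b), output (g, s, t) with
    g = gcd(a, b) = s*a + t*b, computed in polynomial time in the input length."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def coprime(a, b):
    """The corresponding P-style decision problem: a one-bit answer."""
    return extended_gcd(a, b)[0] == 1

print(extended_gcd(240, 46))   # (2, -9, 47): 2 == -9*240 + 47*46
print(coprime(240, 46))        # False
```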
[ { "math_id": 0, "text": "P(x,y)" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=663347
663426
Quantum logic
Theory of logic to account for observations from quantum theory In the mathematical study of logic and the physical analysis of quantum foundations, quantum logic is a set of rules for manipulation of propositions inspired by the structure of quantum theory. The formal system takes as its starting point an observation of Garrett Birkhoff and John von Neumann, that the structure of experimental tests in classical mechanics forms a Boolean algebra, but the structure of experimental tests in quantum mechanics forms a much more complicated structure. A number of other logics have also been proposed to analyze quantum-mechanical phenomena, unfortunately also under the name of "quantum logic(s)". They are not the subject of this article. For discussion of the similarities and differences between quantum logic and some of these competitors, see "". Quantum logic has been proposed as the correct logic for propositional inference generally, most notably by the philosopher Hilary Putnam, at least at one point in his career. This thesis was an important ingredient in Putnam's 1968 paper "Is Logic Empirical?" in which he analysed the epistemological status of the rules of propositional logic. Modern philosophers reject quantum logic as a basis for reasoning, because it lacks a material conditional; a common alternative is the system of linear logic, of which quantum logic is a fragment. Mathematically, quantum logic is formulated by weakening the distributive law for a Boolean algebra, resulting in an orthocomplemented lattice. Quantum-mechanical observables and states can be defined in terms of functions on or to the lattice, giving an alternate formalism for quantum computations. Introduction. The most notable difference between quantum logic and classical logic is the failure of the propositional distributive law: "p" and ("q" or "r") = ("p" and "q") or ("p" and "r"), where the symbols "p", "q" and "r" are propositional variables. To illustrate why the distributive law fails, consider a particle moving on a line and (using some system of units where the reduced Planck constant is 1) let "p" = "the particle has momentum in the interval [0, +1/6]" "q" = "the particle is in the interval [−1, 1]" "r" = "the particle is in the interval [1, 3]" We might observe that: "p" and ("q" or "r") = "true" in other words, that the state of the particle is a weighted superposition of momenta between 0 and +1/6 and positions between −1 and +3. On the other hand, the propositions ""p" and "q"" and ""p" and "r"" each assert tighter restrictions on simultaneous values of position and momentum than are allowed by the uncertainty principle (they each have uncertainty 1/3, which is less than the allowed minimum of 1/2). So there are no states that can support either proposition, and ("p" and "q") or ("p" and "r") = "false" History and modern criticism. In his classic 1932 treatise "Mathematical Foundations of Quantum Mechanics", John von Neumann noted that projections on a Hilbert space can be viewed as propositions about physical observables; that is, as potential "yes-or-no questions" an observer might ask about the state of a physical system, questions that could be settled by some measurement. Principles for manipulating these quantum propositions were then called "quantum logic" by von Neumann and Birkhoff in a 1936 paper. 
George Mackey, in his 1963 book (also called "Mathematical Foundations of Quantum Mechanics"), attempted to axiomatize quantum logic as the structure of an orthocomplemented lattice, and recognized that a physical observable could be "defined" in terms of quantum propositions. Although Mackey's presentation still assumed that the orthocomplemented lattice is the lattice of closed linear subspaces of a separable Hilbert space, Constantin Piron, Günther Ludwig and others later developed axiomatizations that do not assume an underlying Hilbert space. Inspired by Hans Reichenbach's then-recent defence of general relativity, the philosopher Hilary Putnam popularized Mackey's work in two papers in 1968 and 1975, in which he attributed the idea that anomalies associated to quantum measurements originate with a failure of logic itself to his coauthor, physicist David Finkelstein. Putnam hoped to develop a possible alternative to hidden variables or wavefunction collapse in the problem of quantum measurement, but Gleason's theorem presents severe difficulties for this goal. Later, Putnam retracted his views, albeit with much less fanfare, but the damage had been done. While Birkhoff and von Neumann's original work only attempted to organize the calculations associated with the Copenhagen interpretation of quantum mechanics, a school of researchers had now sprung up, either hoping that quantum logic would provide a viable hidden-variable theory, or obviate the need for one. Their work proved fruitless, and now lies in poor repute. Most philosophers find quantum logic an unappealing competitor to classical logic. It is far from evident (albeit true) that quantum logic is a "logic", in the sense of describing a process of reasoning, as opposed to a particularly convenient language to summarize the measurements performed by quantum apparatuses. In particular, modern philosophers of science argue that quantum logic attempts to substitute metaphysical difficulties for unsolved problems in physics, rather than properly solving the physics problems. Tim Maudlin writes that quantum "logic 'solves' the [measurement] problem by making the problem impossible to state." The horse of quantum logic has been so thrashed, whipped and pummeled, and is so thoroughly deceased that...the question is not whether the horse will rise again, it is: how in the world did this horse get here in the first place? The tale of quantum logic is not the tale of a promising idea gone bad, it is rather the tale of the unrelenting pursuit of a bad idea. ... Many, many philosophers and physicists have become convinced that a change of logic (and most dramatically, the rejection of classical logic) will somehow help in understanding quantum theory, or is somehow suggested or forced on us by quantum theory. But quantum logic, even through its many incarnations and variations, both in technical form and in interpretation, has never yielded the goods. — Maudlin, "Hilary Putnam", pp. 184–185 Quantum logic remains in limited use among logicians as an extremely pathological counterexample (Dalla Chiara and Giuntini: "Why quantum logics? Simply because 'quantum logics are there!'"). Although the central insight to quantum logic remains mathematical folklore as an intuition pump for categorification, discussions rarely mention quantum logic. 
Quantum logic's best chance at revival is through the recent development of quantum computing, which has engendered a proliferation of new logics for formal analysis of quantum protocols and algorithms (see also ). The logic may also find application in (computational) linguistics. Algebraic structure. Quantum logic can be axiomatized as the theory of propositions modulo the following identities: "a" = ¬¬"a"; ⊤ = "b"∨¬"b" for any "b"; and "a"∨¬(¬"a"∨"b") = "a". Some authors restrict to orthomodular lattices, which additionally satisfy the orthomodular law: if ⊤ = ¬(¬"a"∨¬"b")∨¬("a"∨"b") then "a" = "b". Alternative formulations include propositions derivable via a natural deduction, sequent calculus or tableaux system. Despite the relatively developed proof theory, quantum logic is not known to be decidable. Quantum logic as the logic of observables. The remainder of this article assumes the reader is familiar with the spectral theory of self-adjoint operators on a Hilbert space. However, the main ideas can be understood in the finite-dimensional case. Logic of classical mechanics. The Hamiltonian formulations of classical mechanics have three ingredients: states, observables and dynamics. In the simplest case of a single particle moving in R3, the state space is the position–momentum space R6. An observable is some real-valued function "f" on the state space. Examples of observables are position, momentum or energy of a particle. For classical systems, the value "f"("x"), that is the value of "f" for some particular system state "x", is obtained by a process of measurement of "f". The propositions concerning a classical system are generated from basic statements of the form "Measurement of "f" yields a value in the interval ["a", "b"] for some real numbers "a", "b"." through the conventional arithmetic operations and pointwise limits. It follows easily from this characterization of propositions in classical systems that the corresponding logic is identical to the Boolean algebra of Borel subsets of the state space. They thus obey the laws of classical propositional logic (such as de Morgan's laws) with the set operations of union and intersection corresponding to the Boolean conjunctives and subset inclusion corresponding to material implication. In fact, a stronger claim is true: they must obey the infinitary logic "L"ω1,ω. We summarize these remarks as follows: The proposition system of a classical system is a lattice with a distinguished "orthocomplementation" operation: The lattice operations of "meet" and "join" are respectively set intersection and set union. The orthocomplementation operation is set complement. Moreover, this lattice is "sequentially complete", in the sense that any sequence {"E""i"}"i"∈N of elements of the lattice has a least upper bound, specifically the set-theoretic union: formula_0 Propositional lattice of a quantum mechanical system. In the Hilbert space formulation of quantum mechanics as presented by von Neumann, a physical observable is represented by some (possibly unbounded) densely defined self-adjoint operator "A" on a Hilbert space "H". "A" has a spectral decomposition, which is a projection-valued measure E defined on the Borel subsets of R. In particular, for any bounded Borel function "f" on R, the following extension of "f" to operators can be made: formula_1 In case "f" is the indicator function of an interval ["a", "b"], the operator "f"("A") is a self-adjoint projection onto the subspace of generalized eigenvectors of "A" with eigenvalue in ["a","b"]. 
That subspace can be interpreted as the quantum analogue of the classical proposition "Measurement of "A" yields a value in the interval ["a", "b"]." This suggests the following quantum mechanical replacement for the orthocomplemented lattice of propositions in classical mechanics, essentially Mackey's "Axiom VII": the orthocomplemented lattice of propositions of a quantum mechanical system is the lattice of closed subspaces of a separable Hilbert space, with orthocomplementation given by the orthogonal complement of a subspace. The space "Q" of quantum propositions is also sequentially complete: any pairwise-disjoint sequence {"V""i"}"i" of elements of "Q" has a least upper bound. Here disjointness of "W"1 and "W"2 means "W"2 is a subspace of "W"1⊥. The least upper bound of {"V""i"}"i" is the closed internal direct sum. Standard semantics. The standard semantics of quantum logic is that quantum logic is the logic of projection operators in a separable Hilbert or pre-Hilbert space, where an observable "p" is associated with the set of quantum states for which "p" (when measured) has eigenvalue 1. From there, ¬"p" is the orthogonal complement of "p", "p"∧"q" is the intersection of the corresponding subspaces, and "p"∨"q" is the closure of their linear span. This semantics has the nice property that the pre-Hilbert space is complete (i.e., Hilbert) if and only if the propositions satisfy the orthomodular law, a result known as the Solèr theorem. Although much of the development of quantum logic has been motivated by the standard semantics, it is not characterized by the latter; there are additional properties satisfied by that lattice that need not hold in quantum logic. Differences with classical logic. The structure of "Q" immediately points to a difference with the partial order structure of a classical proposition system. In the classical case, given a proposition "p", the equations ⊤ = "p"∨"q" and ⊥ = "p"∧"q" have exactly one solution, namely the set-theoretic complement of "p". In the case of the lattice of projections there are infinitely many solutions to the above equations (any closed, algebraic complement of "p" solves it; it need not be the orthocomplement). More generally, propositional valuation has unusual properties in quantum logic. An orthocomplemented lattice admitting a total lattice homomorphism to {⊥,⊤} must be Boolean. A standard workaround is to study maximal partial homomorphisms "q" with a filtering property: if "a"≤"b" and "q"("a") = ⊤, then "q"("b") = ⊤. Failure of distributivity. Expressions in quantum logic describe observables using a syntax that resembles classical logic. However, unlike classical logic, the distributive law "a" ∧ ("b" ∨ "c") = ("a" ∧ "b") ∨ ("a" ∧ "c") fails when dealing with noncommuting observables, such as position and momentum. This occurs because measurement affects the system, and measurement of whether a disjunction holds does not measure which of the disjuncts is true. For example, consider a simple one-dimensional particle with position denoted by "x" and momentum by "p", and define observables: "a" = "the particle has momentum of absolute value at most 1", "b" = "the particle is in the region "x" ≤ 0", and "c" = "the particle is in the region "x" ≥ 0". Now, position and momentum are Fourier transforms of each other, and the Fourier transform of a square-integrable nonzero function with a compact support is entire and hence does not have non-isolated zeroes. Therefore, there is no wave function that is both normalizable in momentum space and vanishes on precisely "x" ≥ 0. Thus, "a" ∧ "b" and similarly "a" ∧ "c" are false, so ("a" ∧ "b") ∨ ("a" ∧ "c") is false. However, "a" ∧ ("b" ∨ "c") equals "a", which is certainly not false (there are states for which it is a viable measurement outcome). Moreover: if the relevant Hilbert space for the particle's dynamics only admits momenta no greater than 1, then "a" is true. To understand more, let "p"1 and "p"2 be the momentum functions (Fourier transforms) for the projections of the particle wave function to "x" ≤ 0 and "x" ≥ 0 respectively. 
Let |"p"i|↾≥1 be the restriction of "p"i to momenta that are (in absolute value) ≥1. ("a" ∧ "b") ∨ ("a" ∧ "c") corresponds to states with |"p"1|↾≥1 = |"p"2|↾≥1 = 0 (this holds even if we defined "p" differently so as to make such states possible; also, "a" ∧ "b" corresponds to |"p"1|↾≥1=0 and "p"2=0). Meanwhile, "a" corresponds to states with |"p"|↾≥1 = 0. As an operator, "p" = "p"1 + "p"2, and nonzero |"p"1|↾≥1 and |"p"2|↾≥1 might interfere to produce zero |"p"|↾≥1. Such interference is key to the richness of quantum logic and quantum mechanics. Relationship to quantum measurement. Mackey observables. Given an orthocomplemented lattice "Q", a Mackey observable φ is a countably additive homomorphism from the orthocomplemented lattice of Borel subsets of R to "Q". In symbols, this means that for any sequence {"S""i"}"i" of pairwise-disjoint Borel subsets of R, {φ("S""i")}"i" are pairwise-orthogonal propositions (elements of "Q") and formula_2 Equivalently, a Mackey observable is a projection-valued measure on R. Theorem (Spectral theorem). If "Q" is the lattice of closed subspaces of a Hilbert space "H", then there is a bijective correspondence between Mackey observables and densely-defined self-adjoint operators on "H". Quantum probability measures. A "quantum probability measure" is a function P defined on "Q" with values in [0,1] such that P(⊥)=0, P(⊤)=1 and if {"E""i"}"i" is a sequence of pairwise-orthogonal elements of "Q" then formula_3 Every quantum probability measure on the closed subspaces of a Hilbert space is induced by a density matrix — a nonnegative operator of trace 1. Formally, Theorem. Suppose "Q" is the lattice of closed subspaces of a separable Hilbert space of complex dimension at least 3. Then for any quantum probability measure "P" on "Q" there exists a unique trace class operator "S" such that formula_4 for any self-adjoint projection "E" in "Q". Relationship to other logics. Quantum logic embeds into linear logic and the modal logic "B". Indeed, modern logics for the analysis of quantum computation often begin with quantum logic, and attempt to graft desirable features of an extension of classical logic thereonto; the results then necessarily embed quantum logic. The orthocomplemented lattice of any set of quantum propositions can be embedded into a Boolean algebra, which is then amenable to classical logic. Limitations. Although many treatments of quantum logic assume that the underlying lattice must be orthomodular, such logics cannot handle multiple interacting quantum systems. In an example due to Foulis and Randall, there are orthomodular propositions with finite-dimensional Hilbert models whose pairing admits no orthomodular model. Likewise, quantum logic with the orthomodular law falsifies the deduction theorem. Quantum logic admits no reasonable material conditional; any connective that is monotone in a certain technical sense reduces the class of propositions to a Boolean algebra. Consequently, quantum logic struggles to represent the passage of time. One possible workaround is the theory of quantum filtrations developed in the late 1970s and 1980s by Belavkin. It is known, however, that System BV, a deep inference fragment of linear logic that is very close to quantum logic, can handle arbitrary discrete spacetimes. See also. Notes. Citations. 
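The failure of distributivity can also be checked numerically in a finite-dimensional model. The sketch below (assuming numpy and scipy; it uses spin-1/2 projections rather than the position–momentum example above, which requires an infinite-dimensional space) computes meets and joins of projection operators and exhibits "a" ∧ ("b" ∨ "c") ≠ ("a" ∧ "b") ∨ ("a" ∧ "c").

```python
import numpy as np
from scipy.linalg import null_space, orth

def join(P, Q):
    """Projection onto the closed span of ran(P) and ran(Q) (lattice join)."""
    basis = orth(np.hstack([P, Q]))
    return basis @ basis.T

def meet(P, Q):
    """Projection onto ran(P) ∩ ran(Q) (lattice meet): a vector v lies in both
    ranges iff (I-P)v = 0 and (I-Q)v = 0."""
    I = np.eye(P.shape[0])
    common = null_space(np.vstack([I - P, I - Q]))
    return common @ common.T if common.size else np.zeros_like(P)

def proj(v):
    """Orthogonal projection onto the line spanned by v."""
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    return (v @ v.T) / (v.T @ v)

# Spin-1/2 toy model: a = "spin up along x", b = "spin up along z", c = "spin down along z"
a = proj([1, 1])      # span of |+x> = (|0> + |1>)/sqrt(2)
b = proj([1, 0])      # span of |0>
c = proj([0, 1])      # span of |1>

lhs = meet(a, join(b, c))              # a ∧ (b ∨ c) = a, since b ∨ c is the identity
rhs = join(meet(a, b), meet(a, c))     # (a ∧ b) ∨ (a ∧ c) = 0
print(np.allclose(lhs, a), np.allclose(rhs, np.zeros((2, 2))))   # True True
```

Here "b" ∨ "c" is the identity because the two spin-z eigenstates span the whole space, while "a" meets each of them only in the zero subspace, in close parallel to the momentum example above.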
[ { "math_id": 0, "text": " \\operatorname{LUB}(\\{E_i\\}) = \\bigcup_{i=1}^\\infty E_i\\text{.} " }, { "math_id": 1, "text": " f(A) = \\int_{\\mathbb{R}} f(\\lambda) \\, d \\operatorname{E}(\\lambda)." }, { "math_id": 2, "text": " \\varphi\\left(\\bigcup_{i=1}^\\infty S_i\\right) = \\sum_{i=1}^\\infty \\varphi(S_i). " }, { "math_id": 3, "text": " \\operatorname{P}\\!\\left(\\bigvee_{i=1}^\\infty E_i\\right) = \\sum_{i=1}^\\infty \\operatorname{P}(E_i). " }, { "math_id": 4, "text": "\\operatorname{P}(E) = \\operatorname{Tr}(S E)" } ]
https://en.wikipedia.org/wiki?curid=663426
66342900
H-object
In mathematics, specifically homotopical algebra, an H-object is a categorical generalization of an H-space, which can be defined in any category formula_0 with a product formula_1 and an initial object formula_2. These are useful constructions because they help export some of the ideas from algebraic topology and homotopy theory into other domains, such as commutative algebra and algebraic geometry. Definition. In a category formula_0 with a product formula_1 and initial object formula_2, an H-object is an object formula_3 together with an operation called multiplication and a two-sided identity. If we denote formula_4, the structure of an H-object implies there are maps formula_5 which have the commutation relations formula_6 Examples. Magmas. All magmas with units are secretly H-objects in the category formula_7. H-spaces. Another example of H-objects is given by H-spaces in the homotopy category of topological spaces formula_8. H-objects in homotopical algebra. In homotopical algebra, one class of H-objects was considered by Quillen while constructing André–Quillen cohomology for commutative rings. For this section, let all algebras be commutative, associative, and unital. If we let formula_9 be a commutative ring, let formula_10 be the undercategory of such algebras over formula_9 (meaning formula_9-algebras), and let formula_11 be the associated overcategory of objects in formula_10, then an H-object in this category formula_11 is an algebra of the form formula_12 where formula_13 is a formula_14-module. These algebras have the addition and multiplication operations formula_15 Note that the multiplication map given above gives the H-object structure formula_16. Notice that in addition we have the other two structure maps given by formula_17 giving the full H-object structure. Interestingly, these objects have the following property: formula_18 giving an isomorphism between the formula_9-derivations from formula_19 to formula_13 and morphisms from formula_19 to the H-object formula_12. In fact, this implies formula_12 is an abelian group object in the category formula_11 since it gives a contravariant functor with values in abelian groups. References. 
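To make the algebra formula_12 concrete, the following sketch (plain Python; it models both "B" and "M" by real numbers purely for illustration, whereas the construction above works for an arbitrary module "M") implements the stated addition and multiplication and shows that the summand corresponding to "M" squares to zero.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SquareZeroExtension:
    """Element b ⊕ m of the algebra B ⊕ M described above, with elements of B
    and M modeled as numbers for illustration."""
    b: float
    m: float

    def __add__(self, other):
        return SquareZeroExtension(self.b + other.b, self.m + other.m)

    def __mul__(self, other):
        # (b ⊕ m)(b' ⊕ m') = bb' ⊕ (bm' + b'm): the M-part never multiplies itself.
        return SquareZeroExtension(self.b * other.b,
                                   self.b * other.m + other.b * self.m)

x = SquareZeroExtension(2.0, 3.0)
y = SquareZeroExtension(0.0, 5.0)   # an element of the summand 0 ⊕ M
print(x * y)            # SquareZeroExtension(b=0.0, m=10.0)
print(y * y)            # SquareZeroExtension(b=0.0, m=0.0): M · M = 0
```

This is the usual square-zero extension of "B" by "M".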
[ { "math_id": 0, "text": "\\mathcal{C}" }, { "math_id": 1, "text": "\\times" }, { "math_id": 2, "text": "*" }, { "math_id": 3, "text": "X \\in \\text{Ob}(\\mathcal{C})" }, { "math_id": 4, "text": "u_X: X \\to *" }, { "math_id": 5, "text": "\\begin{align}\n\\varepsilon&: * \\to X \\\\\n\\mu&: X\\times X \\to X\n\\end{align}" }, { "math_id": 6, "text": "\\mu(\\varepsilon\\circ u_X, id_X) = \\mu(id_X,\\varepsilon\\circ u_X) = id_X" }, { "math_id": 7, "text": "\\textbf{Set}" }, { "math_id": 8, "text": "\\text{Ho}(\\textbf{Top})" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "A\\backslash R" }, { "math_id": 11, "text": "(A\\backslash R)/B" }, { "math_id": 12, "text": "B\\oplus M" }, { "math_id": 13, "text": "M" }, { "math_id": 14, "text": "B" }, { "math_id": 15, "text": "\\begin{align}\n(b\\oplus m)+(b'\\oplus m') &= (b + b')\\oplus (m+m') \\\\\n(b\\oplus m)\\cdot(b'\\oplus m') &= (bb')\\oplus(bm' + b'm)\n\\end{align}" }, { "math_id": 16, "text": "\\mu" }, { "math_id": 17, "text": "\\begin{align}\nu_{B\\oplus M }(b\\oplus m) &= b\\\\\n\\varepsilon (b) &= b\\oplus 0\n\\end{align}" }, { "math_id": 18, "text": "\\text{Hom}_{(A\\backslash R)/B}(Y,B\\oplus M) \\cong \\text{Der}_A(Y, M)" }, { "math_id": 19, "text": "Y" } ]
https://en.wikipedia.org/wiki?curid=66342900
663496
Triangular distribution
Probability distribution In probability theory and statistics, the triangular distribution is a continuous probability distribution with lower limit "a", upper limit "b", and mode "c", where "a" < "b" and "a" ≤ "c" ≤ "b". Special cases. Mode at a bound. The distribution simplifies when "c" = "a" or "c" = "b". For example, if "a" = 0, "b" = 1 and "c" = 1, then the PDF and CDF become: formula_0 formula_1 Distribution of the absolute difference of two standard uniform variables. This distribution for "a" = 0, "b" = 1 and "c" = 0 is the distribution of "X" = |"X"1 − "X"2|, where "X"1, "X"2 are two independent random variables with standard uniform distribution. formula_2 Symmetric triangular distribution. The symmetric case arises when "c" = ("a" + "b") / 2. In this case, an alternate form of the distribution function is: formula_3 Distribution of the mean of two standard uniform variables. This distribution for "a" = 0, "b" = 1 and "c" = 0.5—the mode (i.e., the peak) is exactly in the middle of the interval—corresponds to the distribution of the mean of two standard uniform variables, that is, the distribution of "X" = ("X"1 + "X"2) / 2, where "X"1, "X"2 are two independent random variables with standard uniform distribution in [0, 1]. It is the case of the Bates distribution for two variables. formula_4 formula_5 formula_6 Generating random variates. Given a random variate "U" drawn from the uniform distribution in the interval (0, 1), the variate formula_7 where formula_8, has a triangular distribution with parameters formula_9 and formula_10. This can be obtained from the cumulative distribution function. Use of the distribution. The triangular distribution is typically used as a subjective description of a population for which there is only limited sample data, and especially in cases where the relationship between variables is known but data is scarce (possibly because of the high cost of collection). It is based on a knowledge of the minimum and maximum and an "inspired guess" as to the modal value. For these reasons, the triangle distribution has been called a "lack of knowledge" distribution. Business simulations. The triangular distribution is therefore often used in business decision making, particularly in simulations. Generally, when not much is known about the distribution of an outcome (say, only its smallest and largest values), it is possible to use the uniform distribution. But if the most likely outcome is also known, then the outcome can be simulated by a triangular distribution. See for example under corporate finance. Project management. The triangular distribution, along with the PERT distribution, is also widely used in project management (as an input into PERT and hence critical path method (CPM)) to model events which take place within an interval defined by a minimum and maximum value. Audio dithering. The symmetric triangular distribution is commonly used in audio dithering, where it is called TPDF (triangular probability density function). References. 
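The inverse-CDF recipe in the "Generating random variates" section translates directly into code. A minimal sketch (plain Python; the parameter values in the example are arbitrary):

```python
import random

def triangular_variate(a, b, c):
    """Inverse-CDF sampling of Triangular(a, b, c) from a single uniform draw,
    following the piecewise formula above, with F(c) = (c - a)/(b - a)."""
    u = random.random()
    f_c = (c - a) / (b - a)
    if u < f_c:
        return a + (u * (b - a) * (c - a)) ** 0.5
    return b - ((1 - u) * (b - a) * (b - c)) ** 0.5

samples = [triangular_variate(0.0, 4.0, 1.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to the mean (a + b + c)/3 ≈ 1.667
```

Python's standard library exposes the same method as random.triangular(low, high, mode).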
[ { "math_id": 0, "text": " \\left.\\begin{array}{rl} f(x) &= 2x \\\\[8pt]\nF(x) &= x^2 \\end{array}\\right\\} \\text{ for } 0 \\le x \\le 1 " }, { "math_id": 1, "text": " \\begin{align}\n \\operatorname E(X) & = \\frac{2}{3} \\\\[8pt]\n \\operatorname{Var}(X) &= \\frac{1}{18}\n\\end{align} " }, { "math_id": 2, "text": "\n\\begin{align}\nf(x) & = 2 -2x \\text{ for } 0 \\le x < 1 \\\\[6pt]\nF(x) & = 2x - x^2 \\text{ for } 0 \\le x < 1 \\\\[6pt]\nE(X) & = \\frac{1}{3} \\\\[6pt]\n\\operatorname{Var}(X) & = \\frac{1}{18}\n\\end{align}\n" }, { "math_id": 3, "text": " \\begin{align}\n f(x) &= \\frac{(b-c)-|c-x|}{(b-c)^2} \\\\[6pt]\n\\end{align} " }, { "math_id": 4, "text": "\n f(x) = \\begin{cases}\n 4x & \\text{for }0 \\le x < \\frac{1}{2} \\\\\n 4(1-x) & \\text{for }\\frac{1}{2} \\le x \\le 1\n \\end{cases}\n" }, { "math_id": 5, "text": "\n F(x) = \\begin{cases}\n 2x^2 & \\text{for }0 \\le x < \\frac{1}{2} \\\\\n 2x^2-(2x-1)^2 & \\text{for }\\frac{1}{2} \\le x \\le 1\n \\end{cases}\n" }, { "math_id": 6, "text": "\n\\begin{align}\nE(X) & = \\frac{1}{2} \\\\[6pt]\n\\operatorname{Var}(X) & = \\frac{1}{24}\n\\end{align}\n" }, { "math_id": 7, "text": "\nX = \\begin{cases}\na + \\sqrt{U(b-a)(c-a)} & \\text{ for } 0 < U < F(c) \\\\ & \\\\\nb - \\sqrt{(1-U)(b-a)(b-c)} & \\text{ for } F(c) \\le U < 1\n\\end{cases}\n" }, { "math_id": 8, "text": "F(c) = (c-a)/(b-a)" }, { "math_id": 9, "text": "a, b" }, { "math_id": 10, "text": "c" }, { "math_id": 11, "text": "n = 2" }, { "math_id": 12, "text": "n \\geq 3" } ]
https://en.wikipedia.org/wiki?curid=663496
66350944
Maxwell–Fricke equation
The Maxwell–Fricke equation relates the resistivity of blood to hematocrit. This relationship has been shown to hold for humans and a variety of non-human warm-blooded species, including canines. Equation. The Maxwell–Fricke equation is written as: formula_0 where "ρ" is the resistivity of blood, "ρ"1 is the resistivity of plasma, "ρ"2 is the resistivity of blood cells and "φ" is the hematocrit. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
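Because the equation is linear in the ratio "ρ"1/"ρ", it can be inverted in closed form. The Python sketch below is a hypothetical helper, not from the source, and the numerical resistivities are made-up illustrative values.

```python
def blood_resistivity(rho_plasma, rho_cells, hematocrit):
    """Solve the Maxwell-Fricke equation for the resistivity of whole blood.

    The equation (r1/r - 1)/(r1/r + 2) = phi * (r1/r2 - 1)/(r1/r2 + 2)
    is solved for the ratio r1/r, since (x - 1)/(x + 2) = k gives
    x = (1 + 2k)/(1 - k).
    """
    ratio = rho_plasma / rho_cells                   # r1 / r2
    k = hematocrit * (ratio - 1.0) / (ratio + 2.0)   # right-hand side
    r1_over_r = (1.0 + 2.0 * k) / (1.0 - k)
    return rho_plasma / r1_over_r

# Illustrative values only: plasma around 0.7 ohm*m, cells far more resistive.
print(blood_resistivity(rho_plasma=0.7, rho_cells=5.0, hematocrit=0.45))
```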
[ { "math_id": 0, "text": "\\frac{\\frac{\\rho_1}{\\rho}-1}{\\frac{\\rho_1}{\\rho}+2} = \\varphi\\frac{\\frac{\\rho_1}{\\rho_2}-1}{\\frac{\\rho_1}{\\rho_2}+2}" } ]
https://en.wikipedia.org/wiki?curid=66350944
663580
Gradient-index optics
Gradient-index (GRIN) optics is the branch of optics covering optical effects produced by a gradient of the refractive index of a material. Such gradual variation can be used to produce lenses with flat surfaces, or lenses that do not have the aberrations typical of traditional spherical lenses. Gradient-index lenses may have a refraction gradient that is spherical, axial, or radial. In nature. The lens of the eye is the most obvious example of gradient-index optics in nature. In the human eye, the refractive index of the lens varies from approximately 1.406 in the central layers down to 1.386 in less dense layers of the lens. This allows the eye to image with good resolution and low aberration at both short and long distances. Another example of gradient index optics in nature is the common mirage of a pool of water appearing on a road on a hot day. The pool is actually an image of the sky, apparently located on the road since light rays are being refracted (bent) from their normal straight path. This is due to the variation of refractive index between the hot, less dense air at the surface of the road, and the denser cool air above it. The variation in temperature (and thus density) of the air causes a gradient in its refractive index, causing it to increase with height. This index gradient causes refraction of light rays (at a shallow angle to the road) from the sky, bending them into the eye of the viewer, with their apparent location being the road's surface. The Earth's atmosphere acts as a GRIN lens, allowing observers to see the sun for a few minutes after it is actually below the horizon, and observers can also view stars that are below the horizon. This effect also allows for observation of electromagnetic signals from satellites after they have descended below the horizon, as in radio occultation measurements. Applications. The ability of GRIN lenses to have flat surfaces simplifies the mounting of the lens, which makes them useful where many very small lenses need to be mounted together, such as in photocopiers and scanners. The flat surface also allows a GRIN lens to be easily optically aligned to a fiber, to produce collimated output, making it applicable for endoscopy as well as for "in vivo" calcium imaging and optogenetic stimulation in brain. In imaging applications, GRIN lenses are mainly used to reduce aberrations. The design of such lenses involves detailed calculations of aberrations as well as efficient manufacture of the lenses. A number of different materials have been used for GRIN lenses including optical glasses, plastics, germanium, zinc selenide, and sodium chloride. Certain optical fibres (graded-index fibres) are made with a radially-varying refractive index profile; this design strongly reduces the modal dispersion of a multi-mode optical fiber. The radial variation in refractive index allows for a sinusoidal height distribution of rays within the fibre, preventing the rays from leaving the core. This differs from traditional optical fibres, which rely on total internal reflection, in that all modes of the GRIN fibres propagate at the same speed, allowing for a higher temporal bandwidth for the fibre. Antireflection coatings are typically effective for narrow ranges of frequency or angle of incidence. Graded-index materials are less constrained. An axial gradient lens has been used to concentrate sunlight onto solar cells, capturing as much as 90% of incident light when the sun is not at an optimal angle. Manufacture. 
GRIN lenses are made by several techniques, such as ion exchange, partial polymerisation, chemical vapour deposition, and direct laser writing. History. In 1854, J C Maxwell suggested a lens whose refractive index distribution would allow for every region of space to be sharply imaged. Known as the "Maxwell fisheye lens", it involves a spherical index function and would be expected to be spherical in shape as well. This lens, however, is impractical to make and has little usefulness since only points on the surface and within the lens are sharply imaged and extended objects suffer from extreme aberrations. In 1905, R. W. Wood used a dipping technique creating a gelatin cylinder with a refractive index gradient that varied symmetrically with the radial distance from the axis. Disk-shaped slices of the cylinder were later shown to have plane faces with radial index distribution. He showed that even though the faces of the lens were flat, they acted like converging or diverging lenses depending on whether the index was decreasing or increasing with the radial distance. In 1964, a posthumous book of R. K. Luneburg was published in which he described a lens that focuses incident parallel rays of light onto a point on the opposite surface of the lens. This also limited the applications of the lens because it was difficult to use it to focus visible light; however, it had some usefulness in microwave applications. Some years later, several new techniques were developed to fabricate lenses of the Wood type. Since then, at least the thinner GRIN lenses have proved to possess surprisingly good imaging properties considering their very simple mechanical construction, while thicker GRIN lenses have found application, e.g. in Selfoc rods. Theory. An inhomogeneous gradient-index lens possesses a refractive index whose change follows the function formula_0 of the coordinates of the region of interest in the medium. According to Fermat's principle, the light path integral ("L"), taken along a ray of light joining any two points of a medium, is stationary relative to its value for any nearby curve joining the two points. The light path integral is given by the equation formula_1, where "n" is the refractive index and "S" is the arc length of the curve. If Cartesian coordinates are used, this equation is modified to incorporate the change in arc length for a spherical gradient, to each physical dimension: formula_2 where prime corresponds to d/d"s." The light path integral is able to characterize the path of light through the lens in a qualitative manner, such that the lens may be easily reproduced in the future. The refractive index gradient of GRIN lenses can be mathematically modelled according to the method of production used. For example, GRIN lenses made from a radial gradient index material, such as SELFOC Microlens, have a refractive index that varies according to: formula_3, where "n""r" is the refractive index at a distance, "r", from the optical axis; "n"o is the design index on the optical axis, and "A" is a positive constant. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
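As a rough illustration of the SELFOC-type profile formula_3 and of the sinusoidal ray paths mentioned under "Applications", the sketch below evaluates the radial index and a paraxial ray height. It assumes the standard paraxial simplification, in which the ray equation for this profile reduces to x'' = -A x, and the particular values of "n"0 and "A" are illustrative only.

```python
import math

def selfoc_index(r, n0=1.60, A=0.25):
    """Radial SELFOC-type profile n(r) = n0 * (1 - A * r**2 / 2)."""
    return n0 * (1.0 - A * r * r / 2.0)

def paraxial_ray_height(x0, slope0, z, A=0.25):
    """Paraxial ray height in a radial GRIN rod.

    For n(r) = n0 * (1 - A * r**2 / 2) the paraxial ray equation reduces to
    x'' = -A * x, so rays oscillate sinusoidally with pitch 2*pi/sqrt(A).
    """
    g = math.sqrt(A)
    return x0 * math.cos(g * z) + (slope0 / g) * math.sin(g * z)

pitch = 2.0 * math.pi / math.sqrt(0.25)
print(round(selfoc_index(0.0), 3), round(selfoc_index(0.5), 3))   # on-axis vs off-axis index
print(round(paraxial_ray_height(0.0, 0.05, pitch / 2), 6))        # ~0: ray refocused on axis
```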
[ { "math_id": 0, "text": "n=f(x,y,z)" }, { "math_id": 1, "text": "L=\\int_{S_o}^{S}n\\,ds" }, { "math_id": 2, "text": "L=\\int_{S_o}^{S}n(x,y,z)\\sqrt{x'^{2}+y'^{2}+z'^{2}}\\, ds" }, { "math_id": 3, "text": "n_{r}=n_{o}\\left ( 1-\\frac{A r^2}{2} \\right )" } ]
https://en.wikipedia.org/wiki?curid=663580
66364353
1 Chronicles 4
First Book of Chronicles, chapter 4 1 Chronicles 4 is the fourth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. Together with chapters 2 and 3, this chapter focuses on the descendants of Judah: chapter 2 deals with the tribes of Judah in general, chapter 3 lists the sons of David in particular and chapter 4 concerns the remaining families in the tribe of Judah and the tribe of Simeon, geographically the southernmost west-Jordanian tribe. These chapters belong to the section focusing on the list of genealogies from Adam to the lists of the people returning from exile in Babylon ( to 9:34). Text. This chapter was originally written in the Hebrew language. It is divided into 43 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Structure. The whole chapter belongs to an arrangement comprising 1 Chronicles 2:3–8:40 with the king-producing tribes of Judah (David; 2:3–4:43) and Benjamin (Saul; 8:1–40) bracketing the series of lists as the priestly tribe of Levi (6:1–81) anchors the center, in the following order: A David’s royal tribe of Judah (2:3–4:43) B Northern tribes east of Jordan (5:1–26) X The priestly tribe of Levi (6:1–81) B' Northern tribes west of Jordan (7:1–40) A' Saul’s royal tribe of Benjamin (8:1–40) Another concentric arrangement focuses on David's royal tribe of Judah (2:3–4:23), centering on the family of Hezron, Judah's grandson, through his three sons: Jerahmeel, Ram, and Chelubai (Caleb), as follows: A Descendants of Judah: Er, Onan, and Shelah (2:3–8) B Descendants of Ram up to David (2:9–17) C Descendants of Caleb (2:18–24) D Descendants of Jerahmeel (2:25–33) D' Descendants of Jerahmeel (2:34–41) C' Descendants of Caleb (2:42–55) B' Descendants of Ram following David [David’s descendants] (3:1–24) A' Descendants of Shelah, Judah s only surviving son (4:21–23) Descendants of Judah (4:1–8). This section, continued in verses 11–23, consists of 'many small, seemingly unrelated pieces' with little textual clarity, which potentially could be a valuable historical source, although it is difficult to interpret. These lists partly refer back to chapter 2. A number of prominent women are listed here (as well as in the latter parts): "And Reaiah the son of Shobal begat Jahath; and Jahath begat Ahumai, and Lahad. These are the families of the Zorathites." Prayer of Jabez (4:9–10). These two verses form a unique passage highlighting the Chronicler's respect for wealth and the effectiveness of prayer. It shows one example of the Chronicler's frequent use of meaningful names: "Jabez" (, ya‘-bêz) was given that name because his mother bore him with sorrow (, bə-‘ō-zeḇ, meaning "in pain"; verse 9), while he himself prays that no sorrow' (, ‘ā-zə-bî; verse 10) would fall upon him. More descendants of Judah (4:11–23). Together with verses 1–8, this section partly refers back to chapter 2. 
Some prominent women are listed here (other than in the previous parts): Descendants of Simeon (4:24–43). This section focuses on the tribe of Simeon, which had constant close ties with Judah (such as in , ; ) and historically was quickly engulfed by the descendants of Judah. In contrast to the previous parts in the same chapter, it has an obvious structure: the genealogy (verses 24–27; drawn from and ) is followed by the lists of the tribe's settlement territories (verses 28–33, drawn from ), the leaders (verses 34–38) and two events in their history, when the tribe pushed out the Meunites and Amalekites to expand the territories for their flocks (verses 39–43). The tribe's warlike attitude correlates to the characterization in , , and . "Beth Marcaboth, Hazar Susim, Beth Biri, and at Shaaraim. These were their cities until the reign of David." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=66364353
663657
NEXPTIME
In computational complexity theory, the complexity class NEXPTIME (sometimes called NEXP) is the set of decision problems that can be solved by a non-deterministic Turing machine using time formula_0. In terms of NTIME, formula_1 Alternatively, NEXPTIME can be defined using deterministic Turing machines as verifiers. A language "L" is in NEXPTIME if and only if there exist polynomials "p" and "q", and a deterministic Turing machine "M", such that for every string "x", we have "x" in "L" if and only if there is a certificate "y" of length at most formula_2 with "M" accepting the pair ("x", "y"), and "M" runs in time at most formula_3 on every such pair. We know P ⊆ NP ⊆ EXPTIME ⊆ NEXPTIME and also, by the time hierarchy theorem, that NP ⊊ NEXPTIME. If P = NP, then NEXPTIME = EXPTIME (padding argument); more precisely, E ≠ NE if and only if there exist sparse languages in NP that are not in P. Alternative characterizations. In descriptive complexity, the sets of natural numbers that can be recognized in NEXPTIME are exactly those that form the spectrum of a sentence, the set of sizes of finite models of some logical sentence. NEXPTIME often arises in the context of interactive proof systems, where there are two major characterizations of it. The first is the MIP proof system, where we have two all-powerful provers which communicate with a randomized polynomial-time verifier (but not with each other). If the string is in the language, they must be able to convince the verifier of this with high probability. If the string is not in the language, they must not be able to collaboratively trick the verifier into accepting the string except with low probability. The fact that MIP proof systems can solve every problem in NEXPTIME is quite impressive when we consider that when only one prover is present, we can only recognize all of PSPACE; the verifier's ability to "cross-examine" the two provers gives it great power. See interactive proof system#MIP for more details. Another interactive proof system characterizing NEXPTIME is a certain class of probabilistically checkable proofs. Recall that NP can be seen as the class of problems where an all-powerful prover gives a purported proof that a string is in the language, and a deterministic polynomial-time machine verifies that it is a valid proof. We make two changes to this setup: first, the proof may be exponentially long, with the verifier given random access to it; and second, the verifier is randomized, using polynomially many random bits and examining only polynomially many bits of the proof. These two extensions together greatly extend the proof system's power, enabling it to recognize all languages in NEXPTIME. The class is called PCP(poly, poly). What is more, in this characterization the verifier may be limited to read only a constant number of bits, i.e. NEXPTIME = PCP(poly, 1). See probabilistically checkable proofs for more details. NEXPTIME-complete. A decision problem is NEXPTIME-complete if it is in NEXPTIME, and every problem in NEXPTIME has a polynomial-time many-one reduction to it. In other words, there is a polynomial-time algorithm that transforms instances of one to instances of the other with the same answer. Problems that are NEXPTIME-complete might be thought of as the hardest problems in NEXPTIME. We know that NEXPTIME-complete problems are not in NP; it has been proven that these problems cannot be verified in polynomial time, by the time hierarchy theorem. An important set of NEXPTIME-complete problems relates to succinct circuits. Succinct circuits are simple machines used to describe graphs in exponentially less space. They accept two vertex numbers as input and output whether there is an edge between them. 
If solving a problem on a graph in a natural representation, such as an adjacency matrix, is NP-complete, then solving the same problem on a succinct circuit representation is NEXPTIME-complete, because the input is exponentially smaller (under some mild condition that the NP-completeness reduction is achieved by a "projection"). As one simple example, finding a Hamiltonian path for a graph thus encoded is NEXPTIME-complete.
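The following toy sketch is our own example, not tied to any particular reduction; it shows what a succinct representation looks like in practice: a graph on 2^40 vertices is described by a few lines of adjacency-deciding code instead of an explicit edge list, which is the source of the exponential blow-up behind NEXPTIME-completeness.

```python
N_BITS = 40   # the encoded graph has 2**40 vertices

def succinct_edge(u: int, v: int) -> bool:
    """Adjacency oracle of a succinctly described graph: here, u and v are
    adjacent iff their binary encodings differ in exactly one bit (so the
    graph is the N_BITS-dimensional hypercube, chosen only as a toy example)."""
    diff = u ^ v
    return diff != 0 and (diff & (diff - 1)) == 0   # exactly one bit set

# The whole "input" is the short procedure above, yet deciding, say,
# Hamiltonicity of the encoded graph must cope with 2**40 vertices.
print(succinct_edge(0b1011, 0b1010), succinct_edge(0b1011, 0b1000))   # True False
```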
[ { "math_id": 0, "text": "2^{n^{O(1)}}" }, { "math_id": 1, "text": "\\mathsf{NEXPTIME} = \\bigcup_{k\\in\\mathbb{N}} \\mathsf{NTIME}(2^{n^k})" }, { "math_id": 2, "text": "2^{p(|x|)}" }, { "math_id": 3, "text": "2^{q(|x|)}" } ]
https://en.wikipedia.org/wiki?curid=663657
6637022
Primitive recursive arithmetic
Formalization of the natural numbers Primitive recursive arithmetic (PRA) is a quantifier-free formalization of the natural numbers. It was first proposed by Norwegian mathematician Thoralf Skolem, as a formalization of his finitistic conception of the foundations of arithmetic, and it is widely agreed that all reasoning of PRA is finitistic. Many also believe that all of finitism is captured by PRA, but others believe finitism can be extended to forms of recursion beyond primitive recursion, up to ε0, which is the proof-theoretic ordinal of Peano arithmetic. PRA's proof-theoretic ordinal is ω^ω, where ω is the smallest transfinite ordinal. PRA is sometimes called "Skolem arithmetic", although that has another meaning; see Skolem arithmetic. The language of PRA can express arithmetic propositions involving natural numbers and any primitive recursive function, including the operations of addition, multiplication, and exponentiation. PRA cannot explicitly quantify over the domain of natural numbers. PRA is often taken as the basic metamathematical formal system for proof theory, in particular for consistency proofs such as Gentzen's consistency proof of first-order arithmetic. Language and axioms. The language of PRA consists of a countably infinite number of variables, the propositional connectives, the equality symbol =, the constant symbol 0, the successor symbol S, and a symbol for each primitive recursive function. The logical axioms of PRA are the tautologies of the propositional calculus and the usual axiomatization of equality as an equivalence relation. The logical rules of PRA are modus ponens and variable substitution.&lt;br&gt; The non-logical axioms are, firstly, formula_0 and formula_1 where formula_2 always denotes the negation of formula_3 so that, for example, formula_4 is a negated proposition. Further, recursive defining equations for every primitive recursive function may be adopted as axioms as desired. For instance, the most common characterization of the primitive recursive functions is as the 0 constant and successor function closed under projection, composition and primitive recursion. So for an ("n"+1)-place function "f" defined by primitive recursion over an "n"-place base function "g" and ("n"+2)-place iteration function "h" there would be the defining equations: formula_5 formula_6 Especially: formula_7 formula_8 formula_9 formula_10 PRA replaces the axiom schema of induction for first-order arithmetic with the rule of (quantifier-free) induction: from formula_11 and formula_12, conclude formula_13 for any predicate formula_14 In first-order arithmetic, the only primitive recursive functions that need to be explicitly axiomatized are addition and multiplication. All other primitive recursive predicates can be defined using these two primitive recursive functions and quantification over all natural numbers. Defining primitive recursive functions in this manner is not possible in PRA, because it lacks quantifiers. Logic-free calculus. It is possible to formalise PRA in such a way that it has no logical connectives at all—a sentence of PRA is just an equation between two terms. In this setting a term is a primitive recursive function of zero or more variables. Haskell Curry gave the first such system. The rule of induction in Curry's system was unusual. A later refinement was given by Reuben Goodstein. The rule of induction in Goodstein's system is: formula_15 Here "x" is a variable, "S" is the successor operation, and "F", "G", and "H" are any primitive recursive functions which may have parameters other than the ones shown. The only other inference rules of Goodstein's system are substitution rules, as follows: formula_16 Here "A", "B", and "C" are any terms (primitive recursive functions of zero or more variables). Finally, there are symbols for any primitive recursive functions with corresponding defining equations, as in Skolem's system above. In this way the propositional calculus can be discarded entirely. 
Logical operators can be expressed entirely arithmetically, for instance, the absolute value of the difference of two numbers can be defined by primitive recursion: formula_17 Thus, the equations "x"="y" and formula_18 are equivalent. Therefore, the equations formula_19 and formula_20 express the logical conjunction and disjunction, respectively, of the equations "x"="y" and "u"="v". Negation can be expressed as formula_21.
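The defining equations quoted above translate directly into code. In the sketch below the Python function names are ours; the recursion equations for the predecessor, truncated subtraction and absolute difference, and the arithmetic encoding of the connectives, follow the article.

```python
def P(x):                      # P(0) = 0,  P(S(x)) = x
    return 0 if x == 0 else x - 1

def monus(x, y):               # x -. 0 = x,  x -. S(y) = P(x -. y)
    for _ in range(y):
        x = P(x)
    return x

def absdiff(x, y):             # |x - y| = (x -. y) + (y -. x)
    return monus(x, y) + monus(y, x)

# The equation x = y holds exactly when absdiff(x, y) == 0, so
#   "x = y and u = v"  becomes  absdiff(x, y) + absdiff(u, v) == 0
#   "x = y or  u = v"  becomes  absdiff(x, y) * absdiff(u, v) == 0
#   "x != y"           becomes  monus(1, absdiff(x, y)) == 0
print(absdiff(3, 5), absdiff(7, 7))    # 2 0
print(monus(1, absdiff(2, 3)))         # 0, i.e. the negation 2 != 3 holds
```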
[ { "math_id": 0, "text": "S(x) \\neq 0" }, { "math_id": 1, "text": "S(x)=S(y) \\to x = y," }, { "math_id": 2, "text": "x \\neq y" }, { "math_id": 3, "text": "x = y" }, { "math_id": 4, "text": "S(0) = 0" }, { "math_id": 5, "text": "f(0,y_1,\\ldots,y_n) = g(y_1,\\ldots,y_n)" }, { "math_id": 6, "text": "f(S(x),y_1,\\ldots,y_n) = h(x,f(x,y_1,\\ldots,y_n),y_1,\\ldots,y_n)" }, { "math_id": 7, "text": "x+0 = x\\ " }, { "math_id": 8, "text": "x+S(y) = S(x+y)\\ " }, { "math_id": 9, "text": "x \\cdot 0 = 0\\ " }, { "math_id": 10, "text": "x \\cdot S(y) = x \\cdot y + x\\ " }, { "math_id": 11, "text": "\\varphi(0)" }, { "math_id": 12, "text": "\\varphi(x)\\to\\varphi(S(x))" }, { "math_id": 13, "text": "\\varphi(y)" }, { "math_id": 14, "text": "\\varphi." }, { "math_id": 15, "text": "{F(0) = G(0) \\quad F(S(x)) = H(x,F(x)) \\quad G(S(x)) = H(x,G(x)) \\over F(x) = G(x)}." }, { "math_id": 16, "text": "{F(x) = G(x) \\over F(A) = G(A)} \\qquad {A = B \\over F(A) = F(B)} \\qquad {A = B \\quad A = C \\over B = C}." }, { "math_id": 17, "text": "\n\\begin{align} \nP(0) = 0 \\quad & \\quad P(S(x)) = x \\\\ \nx \\dot - 0 = x \\quad & \\quad x \\mathrel{\\dot -} S(y) = P(x \\mathrel{\\dot -} y) \\\\\n|x - y| = & (x \\mathrel{\\dot -} y) + (y \\mathrel{\\dot -} x). \\\\\n\\end{align}\n" }, { "math_id": 18, "text": "|x - y| = 0" }, { "math_id": 19, "text": "|x - y| + |u - v| = 0" }, { "math_id": 20, "text": "|x - y| \\cdot |u - v| = 0" }, { "math_id": 21, "text": "1 \\dot - |x - y| = 0" } ]
https://en.wikipedia.org/wiki?curid=6637022
66374616
Convergence space
Generalization of the notion of convergence that is found in general topology In mathematics, a convergence space, also called a generalized convergence, is a set together with a relation called a convergence that satisfies certain properties relating elements of "X" with the family of filters on "X". Convergence spaces generalize the notions of convergence that are found in point-set topology, including metric convergence and uniform convergence. Every topological space gives rise to a canonical convergence but there are convergences, known as non-topological convergences, that do not arise from any topological space. Examples of convergences that are in general non-topological include convergence in measure and almost everywhere convergence. Many topological properties have generalizations to convergence spaces. Besides its ability to describe notions of convergence that topologies are unable to, the category of convergence spaces has an important categorical property that the category of topological spaces lacks. The category of topological spaces is not an exponential category (or equivalently, it is not Cartesian closed) although it is contained in the exponential category of pseudotopological spaces, which is itself a subcategory of the (also exponential) category of convergence spaces. Definition and notation. Preliminaries and notation. Denote the power set of a set formula_0 by formula_1 The upward closure or isotonization in formula_0 of a family of subsets formula_2 is defined as formula_3 and similarly the downward closure of formula_4 is formula_5 If formula_6 (resp. formula_7) then formula_4 is said to be upward closed (resp. downward closed) in formula_8 For any families formula_9 and formula_10 declare that formula_11 if and only if for every formula_12 there exists some formula_13 such that formula_14 or equivalently, if formula_15 then formula_11 if and only if formula_16 The relation formula_17 defines a preorder on formula_18 If formula_19 which by definition means formula_20 then formula_21 is said to be subordinate to formula_9 and also finer than formula_22 and formula_9 is said to be coarser than formula_23 The relation formula_24 is called subordination. Two families formula_9 and formula_21 are called equivalent (with respect to subordination formula_24) if formula_11 and formula_25 A filter on a set formula_0 is a non-empty subset formula_26 that is upward closed in formula_27 closed under finite intersections, and does not have the empty set as an element (i.e. formula_28). A prefilter is any family of sets that is equivalent (with respect to subordination) to some filter or equivalently, it is any family of sets whose upward closure is a filter. A family formula_4 is a prefilter, also called a filter base, if and only if formula_29 and for any formula_30 there exists some formula_31 such that formula_32 A filter subbase is any non-empty family of sets with the finite intersection property; equivalently, it is any non-empty family formula_4 that is contained as a subset of some filter (or prefilter), in which case the smallest (with respect to formula_33 or formula_34) filter containing formula_4 is called the filter (on formula_0) generated by formula_4. The set of all filters (resp. prefilters, filter subbases, ultrafilters) on formula_0 will be denoted by formula_35 (resp. formula_36 formula_37 formula_38). The principal or discrete filter on formula_0 at a point formula_39 is the filter formula_40 Definition of (pre)convergence spaces. 
For any formula_41 if formula_26 then define formula_42 and if formula_39 then define formula_43 so if formula_44 then formula_45 if and only if formula_46 The set formula_0 is called the underlying set of formula_47 and is denoted by formula_48 A preconvergence on a non-empty set formula_0 is a binary relation formula_49 with the following property: and if in addition it also has the following property: then the preconvergence formula_47 is called a convergence on formula_8 A generalized convergence or a convergence space (resp. a preconvergence space) is a pair consisting of a set formula_0 together with a convergence (resp. preconvergence) on formula_8 A preconvergence formula_49 can be canonically extended to a relation on formula_52 also denoted by formula_53 by defining formula_54 for all formula_55 This extended preconvergence will be isotone on formula_36 meaning that if formula_56 then formula_50 implies formula_57 Examples. Convergence induced by a topological space. Let formula_58 be a topological space with formula_59 If formula_60 then formula_21 is said to converge to a point formula_39 in formula_61 written formula_62 in formula_61 if formula_63 where formula_64 denotes the neighborhood filter of formula_51 in formula_65 The set of all formula_39 such that formula_62 in formula_58 is denoted by formula_66 formula_67 or simply formula_68 and elements of this set are called limit points of formula_21 in formula_65 The (canonical) convergence associated with or induced by formula_58 is the convergence on formula_27 denoted by formula_69 defined for all formula_39 and all formula_60 by: formula_70 if and only if formula_62 in formula_65 Equivalently, it is defined by formula_71 for all formula_72 A (pre)convergence that is induced by some topology on formula_0 is called a topological (pre)convergence; otherwise, it is called a non-topological (pre)convergence. Power. Let formula_58 and formula_73 be topological spaces and let formula_74 denote the set of continuous maps formula_75 The power with respect to formula_76 and formula_77 is the coarsest topology formula_78 on formula_79 that makes the natural coupling formula_80 into a continuous map formula_81 The problem of finding the power has no solution unless formula_58 is locally compact. However, if searching for a convergence instead of a topology, then there always exists a convergence that solves this problem (even without local compactness). In other words, the category of topological spaces is not an exponential category (i.e. or equivalently, it is not Cartesian closed) although it is contained in the exponential category of pseudotopologies, which is itself a subcategory of the (also exponential) category of convergences. formula_86 if and only if formula_87 formula_89 if and only if formula_90 A preconvergence formula_47 on formula_0 is a convergence if and only if formula_91 Although it is a preconvergence on formula_27 it is not a convergence on formula_8 The empty preconvergence on formula_94 is a non-topological preconvergence because for every topology formula_76 on formula_27 the neighborhood filter at any given point formula_39 necessarily converges to formula_51 in formula_65 Properties. A preconvergence formula_47 on set non-empty formula_0 is called Hausdorff or "T"2 if formula_97 is a singleton set for all formula_72 It is called "T"1 if formula_98 for all formula_39 and it is called "T"0 if formula_99 for all distinct formula_100 Every "T"1 preconvergence on a finite set is Hausdorff. 
Every "T"1 convergence on a finite set is discrete. While the category of topological spaces is not exponential (i.e., not Cartesian closed), it can be extended to an exponential category through the use of a subcategory of convergence spaces. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
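The preliminaries above (upward closure and the subordination preorder) can be checked mechanically on small finite examples. The sketch below uses our own helper names; it computes an isotonization and tests subordination for families of subsets of a three-element set.

```python
from itertools import chain, combinations

def powerset(X):
    X = list(X)
    subsets = chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))
    return [frozenset(s) for s in subsets]

def upward_closure(B, X):
    """Isotonization of a family B in X: every subset of X containing some member of B."""
    return {S for S in powerset(X) if any(b <= S for b in B)}

def subordinate(C, F):
    """C <= F iff every member of C contains some member of F."""
    return all(any(f <= c for f in F) for c in C)

X = {1, 2, 3}
B = {frozenset({1})}                                # base of the principal filter at 1
print(sorted(map(sorted, upward_closure(B, X))))    # all supersets of {1} within X
print(subordinate({frozenset({1, 2})}, B))          # True: {1, 2} contains {1}
```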
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\wp(X)." }, { "math_id": 2, "text": "\\mathcal{B} \\subseteq \\wp(X)" }, { "math_id": 3, "text": "\\mathcal{B}^{\\uparrow X} := \\left\\{ S \\subseteq X ~:~ B \\subseteq S \\text{ for some } B \\in \\mathcal{B} \\, \\right\\} = \\bigcup_{B \\in \\mathcal{B}} \\left\\{ S ~:~ B \\subseteq S \\subseteq X \\right\\}" }, { "math_id": 4, "text": "\\mathcal{B}" }, { "math_id": 5, "text": "\\mathcal{B}^{\\downarrow} := \\left\\{ S \\subseteq B ~:~ B \\in \\mathcal{B} \\, \\right\\} = \\bigcup_{B \\in \\mathcal{B}} \\wp(B)." }, { "math_id": 6, "text": "\\mathcal{B}^{\\uparrow X} = \\mathcal{B}" }, { "math_id": 7, "text": "\\mathcal{B}^{\\downarrow} = \\mathcal{B}" }, { "math_id": 8, "text": "X." }, { "math_id": 9, "text": "\\mathcal{C}" }, { "math_id": 10, "text": "\\mathcal{F}," }, { "math_id": 11, "text": "\\mathcal{C} \\leq \\mathcal{F}" }, { "math_id": 12, "text": "C \\in \\mathcal{C}," }, { "math_id": 13, "text": "F \\in \\mathcal{F}" }, { "math_id": 14, "text": "F \\subseteq C" }, { "math_id": 15, "text": "\\mathcal{F} \\subseteq \\wp(X)," }, { "math_id": 16, "text": "\\mathcal{C} \\subseteq \\mathcal{F}^{\\uparrow X}." }, { "math_id": 17, "text": "\\,\\leq\\," }, { "math_id": 18, "text": "\\wp(\\wp(X))." }, { "math_id": 19, "text": "\\mathcal{F} \\geq \\mathcal{C}," }, { "math_id": 20, "text": "\\mathcal{C} \\leq \\mathcal{F}," }, { "math_id": 21, "text": "\\mathcal{F}" }, { "math_id": 22, "text": "\\mathcal{C}," }, { "math_id": 23, "text": "\\mathcal{F}." }, { "math_id": 24, "text": "\\,\\geq\\," }, { "math_id": 25, "text": "\\mathcal{F} \\leq \\mathcal{C}." }, { "math_id": 26, "text": "\\mathcal{F} \\subseteq \\wp(X)" }, { "math_id": 27, "text": "X," }, { "math_id": 28, "text": "\\varnothing \\not\\in \\mathcal{F}" }, { "math_id": 29, "text": "\\varnothing \\not\\in \\mathcal{B} \\neq \\varnothing" }, { "math_id": 30, "text": "B, C \\in \\mathcal{B}," }, { "math_id": 31, "text": "A \\in \\mathcal{B}" }, { "math_id": 32, "text": "A \\subseteq B \\cap C." }, { "math_id": 33, "text": "\\subseteq" }, { "math_id": 34, "text": "\\leq" }, { "math_id": 35, "text": "\\operatorname{Filters}(X)" }, { "math_id": 36, "text": "\\operatorname{Prefilters}(X)," }, { "math_id": 37, "text": "\\operatorname{FilterSubbases}(X)," }, { "math_id": 38, "text": "\\operatorname{UltraFilters}(X)" }, { "math_id": 39, "text": "x \\in X" }, { "math_id": 40, "text": "\\{ x \\}^{\\uparrow X}." }, { "math_id": 41, "text": "\\xi \\subseteq X \\times \\wp(\\wp(X))," }, { "math_id": 42, "text": "\\lim {}_\\xi \\mathcal{F} := \\left\\{ x \\in X ~:~ \\left( x, \\mathcal{F} \\right) \\in \\xi \\right\\}" }, { "math_id": 43, "text": "\\lim {}^{-1}_{\\xi} (x) := \\left\\{ \\mathcal{F} \\subseteq \\wp(X) ~:~ \\left( x, \\mathcal{F} \\right) \\in \\xi \\right\\}" }, { "math_id": 44, "text": "\\left( x, \\mathcal{F} \\right) \\in X \\times \\wp(\\wp(X))" }, { "math_id": 45, "text": "x \\in \\lim {}_{\\xi} \\mathcal{F}" }, { "math_id": 46, "text": "\\left( x, \\mathcal{F} \\right) \\in \\xi." }, { "math_id": 47, "text": "\\xi" }, { "math_id": 48, "text": "\\left| \\xi \\right| := X." 
}, { "math_id": 49, "text": "\\xi \\subseteq X \\times \\operatorname{Filters}(X)" }, { "math_id": 50, "text": "\\mathcal{F} \\leq \\mathcal{G}" }, { "math_id": 51, "text": "x" }, { "math_id": 52, "text": "X \\times \\operatorname{Prefilters}(X)," }, { "math_id": 53, "text": "\\xi," }, { "math_id": 54, "text": "\\lim {}_{\\xi} \\mathcal{F} := \\lim {}_{\\xi} \\left( \\mathcal{F}^{\\uparrow X} \\right)" }, { "math_id": 55, "text": "\\mathcal{F} \\in \\operatorname{Prefilters}(X)." }, { "math_id": 56, "text": "\\mathcal{F}, \\mathcal{G} \\in \\operatorname{Prefilters}(X)" }, { "math_id": 57, "text": "\\lim {}_{\\xi} \\mathcal{F} \\subseteq \\lim {}_{\\xi} \\mathcal{G}." }, { "math_id": 58, "text": "(X, \\tau)" }, { "math_id": 59, "text": "X \\neq \\varnothing." }, { "math_id": 60, "text": "\\mathcal{F} \\in \\operatorname{Filters}(X)" }, { "math_id": 61, "text": "(X, \\tau)," }, { "math_id": 62, "text": "\\mathcal{F} \\to x" }, { "math_id": 63, "text": "\\mathcal{F} \\geq \\mathcal{N}(x)," }, { "math_id": 64, "text": "\\mathcal{N}(x)" }, { "math_id": 65, "text": "(X, \\tau)." }, { "math_id": 66, "text": "\\lim {}_{(X, \\tau)} \\mathcal{F}," }, { "math_id": 67, "text": "\\lim {}_X \\mathcal{F}," }, { "math_id": 68, "text": "\\lim \\mathcal{F}," }, { "math_id": 69, "text": "\\xi_{\\tau}," }, { "math_id": 70, "text": "x \\in \\lim {}_{\\xi_{\\tau}} \\mathcal{F}" }, { "math_id": 71, "text": "\\lim {}_{\\xi_{\\tau}} \\mathcal{F} := \\lim {}_{(X, \\tau)} \\mathcal{F}" }, { "math_id": 72, "text": "\\mathcal{F} \\in \\operatorname{Filters}(X)." }, { "math_id": 73, "text": "(Z, \\sigma)" }, { "math_id": 74, "text": "C := C\\left( (X, \\tau); (Z, \\sigma) \\right)" }, { "math_id": 75, "text": "f : (X, \\tau) \\to (Z, \\sigma)." }, { "math_id": 76, "text": "\\tau" }, { "math_id": 77, "text": "\\sigma" }, { "math_id": 78, "text": "\\theta" }, { "math_id": 79, "text": "C" }, { "math_id": 80, "text": "\\left\\langle x, f \\right\\rangle = f(x)" }, { "math_id": 81, "text": "(X, \\tau) \\times \\left( C, \\theta \\right) \\to (Z, \\sigma)." }, { "math_id": 82, "text": "\\mathbb{R}" }, { "math_id": 83, "text": "X := \\mathbb{R}" }, { "math_id": 84, "text": "\\nu" }, { "math_id": 85, "text": "x \\in X = \\mathbb{R}" }, { "math_id": 86, "text": "x \\in \\lim {}_{\\nu} \\mathcal{F}" }, { "math_id": 87, "text": "\\mathcal{F} ~\\geq~ \\left\\{ \\left( x - \\frac1{n}, x + \\frac1{n} \\right) ~:~ n \\in \\mathbb{N} \\right\\}." }, { "math_id": 88, "text": "\\iota_{X}" }, { "math_id": 89, "text": "x \\in \\lim {}_{\\iota_{X}} \\mathcal{F}" }, { "math_id": 90, "text": "\\mathcal{F} ~=~ \\{ x \\}^{\\uparrow X}." }, { "math_id": 91, "text": "\\xi \\leq \\iota_{X}." }, { "math_id": 92, "text": "\\varnothing_{X}" }, { "math_id": 93, "text": "\\lim {}_{\\varnothing_{X}} \\mathcal{F} := \\emptyset." }, { "math_id": 94, "text": "X \\neq \\varnothing" }, { "math_id": 95, "text": "o_{X}" }, { "math_id": 96, "text": "\\lim {}_{o_{X}} \\mathcal{F} := X." }, { "math_id": 97, "text": "\\lim {}_{\\xi} \\mathcal{F}" }, { "math_id": 98, "text": "\\lim {}_{\\xi} \\left( \\{ x \\}^{\\uparrow X} \\right) \\subseteq \\{ x \\}" }, { "math_id": 99, "text": "\\operatorname{lim}^{-1}{}_{\\xi} (x) \\neq \\operatorname{lim}^{-1}{}_{\\xi} (y)" }, { "math_id": 100, "text": "x, y \\in X." } ]
https://en.wikipedia.org/wiki?curid=66374616
663772
Strict conditional
In logic, a strict conditional (symbol: formula_0, or ⥽) is a conditional governed by a modal operator, that is, a logical connective of modal logic. It is logically equivalent to the material conditional of classical logic, combined with the necessity operator from modal logic. For any two propositions "p" and "q", the formula "p" → "q" says that "p" materially implies "q" while formula_1 says that "p" strictly implies "q". Strict conditionals are the result of Clarence Irving Lewis's attempt to find a conditional for logic that can adequately express indicative conditionals in natural language. They have also been used in studying Molinist theology. Avoiding paradoxes. The strict conditionals may avoid paradoxes of material implication. The following statement, for example, is not correctly formalized by material implication: If Bill Gates graduated in medicine, then Elvis never died. This condition should clearly be false: the degree of Bill Gates has nothing to do with whether Elvis is still alive. However, the direct encoding of this formula in classical logic using material implication leads to: Bill Gates graduated in medicine → Elvis never died. This formula is true because whenever the antecedent "A" is false, a formula "A" → "B" is true. Hence, this formula is not an adequate translation of the original sentence. An encoding using the strict conditional is: formula_0 (Bill Gates graduated in medicine → Elvis never died). In modal logic, this formula means (roughly) that, in every possible world in which Bill Gates graduated in medicine, Elvis never died. Since one can easily imagine a world where Bill Gates is a medicine graduate and Elvis is dead, this formula is false. Hence, this formula seems to be a correct translation of the original sentence. Problems. Although the strict conditional is much closer to being able to express natural language conditionals than the material conditional, it has its own problems with consequents that are necessarily true (such as 2 + 2 = 4) or antecedents that are necessarily false. The following sentence, for example, is not correctly formalized by a strict conditional: If Bill Gates graduated in medicine, then 2 + 2 = 4. Using strict conditionals, this sentence is expressed as: formula_0 (Bill Gates graduated in medicine → 2 + 2 = 4) In modal logic, this formula means that, in every possible world where Bill Gates graduated in medicine, it holds that 2 + 2 = 4. Since 2 + 2 is equal to 4 in all possible worlds, this formula is true, although it does not seem that the original sentence should be. A similar situation arises with 2 + 2 = 5, which is necessarily false: If 2 + 2 = 5, then Bill Gates graduated in medicine. Some logicians view this situation as indicating that the strict conditional is still unsatisfactory. Others have noted that the strict conditional cannot adequately express counterfactual conditionals, and that it does not satisfy certain logical properties. In particular, the strict conditional is transitive, while the counterfactual conditional is not. Some logicians, such as Paul Grice, have used conversational implicature to argue that, despite apparent difficulties, the material conditional is just fine as a translation for the natural language 'if...then...'. Others still have turned to relevance logic to supply a connection between the antecedent and consequent of provable conditionals. Constructive logic. 
In a constructive setting, the symmetry between ⥽ and formula_0 is broken, and the two connectives can be studied independently. Constructive strict implication can be used to investigate interpretability of Heyting arithmetic and to model arrows and guarded recursion in computer science. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
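A small Kripke-style sketch makes the contrast concrete. The model below is our own toy example, assuming every world is accessible from every other; it shows that the material conditional of the Bill Gates sentence is vacuously true at the actual world while the strict conditional fails.

```python
# Worlds are assignments of truth values to atomic sentences.
worlds = [
    {"gates_md": False, "elvis_alive": False},   # the actual world
    {"gates_md": True,  "elvis_alive": False},   # a world where Gates studied medicine
]

def material(p, q, w):
    """p -> q evaluated at a single world w."""
    return (not w[p]) or w[q]

def strict(p, q):
    """Box(p -> q): the material conditional must hold at every accessible world."""
    return all(material(p, q, w) for w in worlds)

actual = worlds[0]
print(material("gates_md", "elvis_alive", actual))   # True, vacuously (antecedent false)
print(strict("gates_md", "elvis_alive"))             # False, fails at the second world
```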
[ { "math_id": 0, "text": "\\Box" }, { "math_id": 1, "text": "\\Box (p \\rightarrow q)" } ]
https://en.wikipedia.org/wiki?curid=663772
66382863
Free choice inference
Phenomenon in natural language Free choice is a phenomenon in natural language where a linguistic disjunction appears to receive a logical conjunctive interpretation when it interacts with a modal operator. For example, the following English sentences can be interpreted to mean that the addressee can watch a movie "and" that they can also play video games, depending on their preference: Free choice inferences are a major topic of research in formal semantics and philosophical logic because they are not valid in classical systems of modal logic. If they were valid, then the semantics of natural language would validate the "Free Choice Principle": formula_0 This symbolic logic formula is not valid in classical modal logic: Adding this principle as an axiom to standard modal logics would allow one to conclude formula_1 from formula_2, for any formula_3 and formula_4. This observation is known as the "Paradox of Free Choice". To resolve this paradox, some researchers have proposed analyses of free choice within nonclassical frameworks such as dynamic semantics, linear logic, alternative semantics, and inquisitive semantics. Others have proposed ways of deriving free choice inferences as scalar implicatures which arise on the basis of classical lexical entries for disjunction and modality. Free choice inferences are most widely studied for deontic modals, but also arise with other flavors of modality as well as imperatives, conditionals, and other kinds of operators. Indefinite noun phrases give rise to a similar inference which is also referred to as "free choice" though researchers disagree as to whether it forms a natural class with disjunctive free choice. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
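A toy countermodel makes the invalidity concrete. The sketch below is our own construction: it builds a set of accessible worlds in which ◇P ∨ ◇Q holds but ◇P ∧ ◇Q fails, so the Free Choice Principle cannot be a theorem of classical modal logic.

```python
# Accessible worlds assign truth values to the atomic sentences P and Q.
# Some accessible world makes P true, but none makes Q true.
accessible_worlds = [
    {"P": True, "Q": False},
    {"P": True, "Q": False},
]

def possibly(atom):
    """Diamond: true iff the atom holds at some accessible world."""
    return any(w[atom] for w in accessible_worlds)

antecedent = possibly("P") or possibly("Q")    # Diamond P or Diamond Q
consequent = possibly("P") and possibly("Q")   # Diamond P and Diamond Q
print(antecedent, consequent)                  # True False: the principle fails here
```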
[ { "math_id": 0, "text": " (\\Diamond P \\lor \\Diamond Q) \\rightarrow (\\Diamond P \\land \\Diamond Q) " }, { "math_id": 1, "text": "\\Diamond Q" }, { "math_id": 2, "text": "\\Diamond P" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "Q" } ]
https://en.wikipedia.org/wiki?curid=66382863
663864
Arthur Moritz Schoenflies
German mathematician Arthur Moritz Schoenflies (; 17 April 1853 – 27 May 1928), sometimes written as Schönflies, was a German mathematician, known for his contributions to the application of group theory to crystallography, and for work in topology. Schoenflies was born in Landsberg an der Warthe (modern Gorzów, Poland). Arthur Schoenflies married Emma Levin (1868–1939) in 1896. He studied under Ernst Kummer and Karl Weierstrass, and was influenced by Felix Klein. The Schoenflies problem is to prove that an formula_0-sphere in Euclidean "n"-space bounds a topological ball, however embedded. This question is much more subtle than it initially appears. He studied at the University of Berlin from 1870 to 1875. He obtained a doctorate in 1877, and in 1878 he was a teacher at a school in Berlin. In 1880, he went to Colmar to teach. Schoenflies was a frequent contributor to Klein's encyclopedia: In 1898 he wrote on set theory, in 1902 on kinematics, and on projective geometry in 1910. He was a great-uncle of Walter Benjamin. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(n - 1)" } ]
https://en.wikipedia.org/wiki?curid=663864
66387487
Ellipsoid packing
In geometry, ellipsoid packing is the problem of arranging identical ellipsoids throughout three-dimensional space to fill the maximum possible fraction of space. The currently densest known packing structure for ellipsoids has two candidates: a simple monoclinic crystal with two ellipsoids of different orientations and a square-triangle crystal containing 24 ellipsoids in the fundamental cell. The former monoclinic structure can reach a maximum packing fraction around formula_0 for ellipsoids with maximal aspect ratios larger than formula_1. The packing fraction of the square-triangle crystal exceeds that of the monoclinic crystal for specific biaxial ellipsoids, such as ellipsoids with ratios of the axes formula_2 and formula_3. Any ellipsoid with an aspect ratio larger than one can be packed more densely than spheres. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "0.77073" }, { "math_id": 1, "text": "\\sqrt{3}" }, { "math_id": 2, "text": "\\alpha:\\sqrt{\\alpha}:1" }, { "math_id": 3, "text": "\\alpha \\in (1.365,1.5625)" } ]
https://en.wikipedia.org/wiki?curid=66387487
66390572
Monogenic function
A monogenic function is a complex function with a single finite derivative. More precisely, a function formula_0 defined on formula_1 is called monogenic at formula_2 if formula_3 exists and is finite, with: formula_4 Alternatively, it can be defined as the above limit having the same value for all paths. Functions can either have a single derivative (monogenic) or infinitely many derivatives (polygenic), with no intermediate cases. Furthermore, a function formula_5 which is monogenic formula_6 is said to be monogenic on formula_7, and if formula_7 is a domain of formula_8, then it is analytic as well. (The notion of domains can also be generalized in a manner such that functions which are monogenic over non-connected subsets of formula_9 can show a weakened form of analyticity.) The term "monogenic" was coined by Cauchy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
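The defining limit can be probed numerically. In the illustration below, which is our own, the difference quotient of z² is direction-independent at a point, while that of the complex conjugate depends on the approach direction, matching the monogenic/polygenic contrast described above.

```python
import cmath

def difference_quotients(f, zeta, h=1e-6):
    """Difference quotients of f at zeta along several approach directions."""
    directions = [1, 1j, cmath.exp(1j * 0.7)]
    return [(f(zeta + h * d) - f(zeta)) / (h * d) for d in directions]

zeta = 1.0 + 2.0j
print(difference_quotients(lambda z: z * z, zeta))          # all close to 2*zeta = 2+4j
print(difference_quotients(lambda z: z.conjugate(), zeta))  # varies with the direction
```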
[ { "math_id": 0, "text": " f(z) " }, { "math_id": 1, "text": "A \\subseteq \\mathbb{C}" }, { "math_id": 2, "text": " \\zeta \\in A " }, { "math_id": 3, "text": " f'(\\zeta) " }, { "math_id": 4, "text": "f'(\\zeta) = \\lim_{z\\to\\zeta}\\frac{f(z) - f(\\zeta)}{z - \\zeta}" }, { "math_id": 5, "text": " f(x) " }, { "math_id": 6, "text": " \\forall \\zeta \\in B " }, { "math_id": 7, "text": " B " }, { "math_id": 8, "text": " \\mathbb{C}" }, { "math_id": 9, "text": " \\mathbb{C} " } ]
https://en.wikipedia.org/wiki?curid=66390572
66390800
Guillotine partition
Process of partitioning a rectilinear polygon Guillotine partition is the process of partitioning a rectilinear polygon, possibly containing some holes, into rectangles, using only guillotine-cuts. A guillotine-cut (also called an edge-to-edge cut) is a straight bisecting line going from one edge of an existing polygon to the opposite edge, similarly to a paper guillotine. Guillotine partition is particularly common in designing floorplans in microelectronics. An alternative term for a guillotine-partition in this context is a slicing partition or a slicing floorplan. Guillotine partitions are also the underlying structure of binary space partitions. There are various optimization problems related to guillotine partition, such as: minimizing the number of rectangles or the total length of cuts. These are variants of polygon partitioning problems, where the cuts are constrained to be guillotine cuts. A related but different problem is "guillotine cutting". In that problem, the original sheet is a plain rectangle without holes. The challenge comes from the fact that the dimensions of the small rectangles are fixed in advance. The optimization goals are usually to maximize the area of the produced rectangles or their value, or minimize the waste or the number of required sheets. Computing a guillotine partition with a smallest edge-length. In the minimum edge-length rectangular-partition problem, the goal is to partition the original rectilinear polygon into rectangles, such that the total edge length is a minimum. This problem can be solved in time formula_0 even if the raw polygon has holes. The algorithm uses dynamic programming based on the following observation: "there exists a minimum-length guillotine rectangular partition in which every maximal line segment contains a vertex of the boundary". Therefore, in each iteration, there are formula_1 possible choices for the next guillotine cut, and there are altogether formula_2 subproblems. In the special case in which all holes are degenerate (single points), the minimum-length guillotine rectangular partition is at most 2 times the minimum-length rectangular partition. By a more careful analysis, it can be proved that the approximation factor is in fact at most 1.75. It is not known if the 1.75 is tight, but there is an instance in which the approximation factor is 1.5. Therefore, the guillotine partition provides a constant-factor approximation to the general problem, which is NP-hard. These results can be extended to a "d"-dimensional box: a guillotine-partition with minimum edge-length can be found in time formula_3, and the total ("d"-1)-volume in the optimal guillotine-partition is at most formula_4 times that of an optimal "d"-box partition. Arora and Mitchell used the guillotine-partitioning technique to develop polynomial-time approximation schemes for various geometric optimization problems. Number of guillotine partitions. Besides the computational problems, guillotine partitions were also studied from a combinatorial perspective. Suppose a given rectangle should be partitioned into smaller rectangles using guillotine cuts only. Obviously, there are infinitely many ways to do this, since even a single cut can take infinitely many values. However, the number of "structurally-different" guillotine partitions is bounded. Coloring guillotine partitions. A polychromatic coloring of a planar graph is a coloring of its vertices such that, in each face of the graph, each color appears at least once. 
Several researchers have tried to find the largest "k" such that a polychromatic "k"-coloring always exists. An important special case is when the graph represents a partition of a rectangle into rectangles. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
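The recursive structure behind guillotine partitions, in which a partition is either a single rectangle or an edge-to-edge cut followed by guillotine partitions of the two parts, is conveniently encoded as a slicing tree. The sketch below uses our own representation and expands such a tree into its leaf rectangles.

```python
# A slicing tree is None (a leaf rectangle) or a tuple
# ('V', x, left, right) / ('H', y, bottom, top) describing a guillotine cut.

def rectangles(tree, x0, y0, x1, y1):
    """Expand a slicing tree over the box [x0,x1] x [y0,y1] into leaf rectangles."""
    if tree is None:
        return [(x0, y0, x1, y1)]
    axis, cut, first, second = tree
    if axis == 'V':   # vertical edge-to-edge cut at x = cut
        return rectangles(first, x0, y0, cut, y1) + rectangles(second, cut, y0, x1, y1)
    return rectangles(first, x0, y0, x1, cut) + rectangles(second, x0, cut, x1, y1)

# A 3-rectangle slicing floorplan of the unit square:
# one vertical cut, then a horizontal cut in the right part.
tree = ('V', 0.4, None, ('H', 0.5, None, None))
for rect in rectangles(tree, 0.0, 0.0, 1.0, 1.0):
    print(rect)
```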
[ { "math_id": 0, "text": "O(n^5)" }, { "math_id": 1, "text": "O(n)" }, { "math_id": 2, "text": "O(n^4)" }, { "math_id": 3, "text": "O(d n^{2 d + 1})" }, { "math_id": 4, "text": "2d-4+4/d" }, { "math_id": 5, "text": "O\\left ( \\frac{ n! 2^{5n-3}}{n^{3/2}} \\right)" }, { "math_id": 6, "text": "\\Theta\\left (\\frac{ (2 d - 1 + 2 \\sqrt{d(d-1)})^n}{n^{3/2}} \\right)" }, { "math_id": 7, "text": "\\Theta\\left (\\frac{ (3 + 2 \\sqrt{2})^n}{n^{3/2}} \\right)" } ]
https://en.wikipedia.org/wiki?curid=66390800
66392756
Squiggle operator
Linguistic formalism In formal semantics, the squiggle operator formula_0 is an operator that constrains the occurrence of focus. In one common definition, the squiggle operator takes a syntactic argument formula_1 and a discourse salient argument formula_2 and introduces a presupposition that the "ordinary semantic value" of formula_2 is either a subset or an element of the "focus semantic value" of formula_1. The squiggle was first introduced by Mats Rooth in 1992 as part of his treatment of focus within the framework of alternative semantics. It has become one of the standard tools in formal work on focus, playing a key role in accounts of contrastive focus, ellipsis, deaccenting, and question-answer congruence. Empirical motivation. The empirical motivation for the squiggle operator comes from cases in which focus marking requires a salient antecedent in discourse that stands in some particular relation with the focused expression. For instance, the following pairs shows that contrastive focus is only felicitous when there is a salient "focus antecedent", which contrasts with the focused expression (capital letters indicate the focused expression). Another instance of this phenomenon is "question-answer congruence", also known as "answer focus". Informally, a focused constituent in an answer to a question must represent the part of the utterance which resolves the issue raised by the question. For instance, the following pair of dialogues show that in response to a question of who likes stroopwafel, focus must be placed on the name of the person who likes stroopwafel. When focus is instead placed on the word "stroopwafel" itself, the answer is infelicitous, as is indicated by the # sign. If instead the question is what Helen likes, the word "stroopwafel" will be the expression that resolves the issue. Thus, focus will belong on "stroopwafel" instead of "Helen". Formal details. In the Roothian Squiggle Theory, formula_0 is what requires a focused expression to have a suitable focus antecedent. In doing so, it also allows the "focus denotation" and the "ordinary denotation" to interact. In the alternative Semantics approach to focus, each constituent formula_1 has both an ordinary denotation formula_3 and a focus denotation formula_4 which are composed by parallel computations. The ordinary denotation of formula_1 is simply whatever denotation it would have in a non-alternative-based system. The focus denotation of a constituent is typically the set of all ordinary denotations one could get by substituting a focused constituent for another expression of the same type. The squiggle operator takes two arguments, a contextually provided antecedent formula_2 and an overt argument formula_1. In the above examples, formula_2 is a variable which can be valued as formula_1's focus antecedent, while formula_1 itself could be the constituent [HELEN likes stroopwafel]. On one common definition, formula_0 introduces a presupposition that formula_2's ordinary denotation is either a subset or an element of formula_9's focus denotation, or in other words that either formula_10 or formula_11. If this presupposition is satisfied, formula_0 passes along its overt argument's ordinary denotation while "resetting" its focus denotation. In other words, when the presupposition is satisfied, formula_12 and formula_13. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
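The presupposition introduced by the squiggle can be mimicked with plain sets. The schematic sketch below is ours: denotations are crudely modelled as strings and sets of strings, and the names are invented for illustration; it merely checks whether an antecedent's ordinary value is an element, or a subset, of a focus value.

```python
people = ["Helen", "Gianni", "Navya"]

# Focus value of [HELEN likes stroopwafel]: the propositions obtained by
# substituting alternative individuals for the focused constituent.
focus_value = {f"{x} likes stroopwafel" for x in people}

def squiggle_presupposition(antecedent_ordinary, focus_alternatives):
    """True iff [[C]]_o is an element of, or a subset of, [[alpha]]_f."""
    if isinstance(antecedent_ordinary, set):
        return antecedent_ordinary <= focus_alternatives
    return antecedent_ordinary in focus_alternatives

print(squiggle_presupposition("Gianni likes stroopwafel", focus_value))   # True
print(squiggle_presupposition("Helen likes pizza", focus_value))          # False
```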
[ { "math_id": 0, "text": "\\sim" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": " [\\![\\alpha]\\!]_o" }, { "math_id": 4, "text": "[\\![\\alpha]\\!]_f" }, { "math_id": 5, "text": "[\\![ \\text{HELEN likes stroopwafel} ]\\!]_o = \\lambda w \\, . \\text{ Helen likes stroopwafel in } w" }, { "math_id": 6, "text": "[\\![ \\text{HELEN likes stroopwafel} ]\\!]_f = \\{ \\lambda w \\, . x \\text{ likes stroopwafel in } w \\, | \\, x \\in \\mathcal{D}_e \\}" }, { "math_id": 7, "text": "[\\![ \\text{Helen likes STROOPWAFEL} ]\\!]_o = \\lambda w \\, . \\text{ Helen likes stroopwafel in } w" }, { "math_id": 8, "text": "[\\![ \\text{Helen likes STROOPWAFEL} ]\\!]_f = \\{ \\lambda w \\, . \\text{ Helen likes } x \\text{ in } w \\, | \\, x \\in \\mathcal{D}_e \\}" }, { "math_id": 9, "text": " \\alpha " }, { "math_id": 10, "text": "[\\![ C ]\\!]_o \\subseteq [\\![ \\alpha ]\\!]_f" }, { "math_id": 11, "text": "[\\![ C ]\\!]_o \\in [\\![ \\alpha ]\\!]_f" }, { "math_id": 12, "text": "[\\![ \\alpha \\sim C ]\\!]_o = [\\![ \\alpha ]\\!]_o" }, { "math_id": 13, "text": "[\\![ \\alpha \\sim C ]\\!]_f = \\{ [\\![ \\alpha ]\\!]_o \\} " } ]
https://en.wikipedia.org/wiki?curid=66392756
663997
Eduard Study
German mathematician (1862 – 1930) Christian Hugo Eduard Study ( ; 23 March 1862 – 6 January 1930) was a German mathematician known for work on invariant theory of ternary forms (1889) and for the study of spherical trigonometry. He is also known for contributions to space geometry, hypercomplex numbers, and criticism of early physical chemistry. Study was born in Coburg in the Duchy of Saxe-Coburg-Gotha. Career. Eduard Study began his studies in Jena, Strasbourg, Leipzig, and Munich. He loved to study biology, especially entomology. He was awarded the doctorate in mathematics at the University of Munich in 1884. Paul Gordan, an expert in invariant theory was at Leipzig, and Study returned there as Privatdozent. In 1888 he moved to Marburg and in 1893 embarked on a speaking tour in the U.S.A. He appeared at a Congress of Mathematicians in Chicago as part of the World's Columbian Exposition and took part in mathematics at Johns Hopkins University. Back in Germany, in 1894, he was appointed extraordinary professor at Göttingen. Then he gained the rank of full professor in 1897 at Greifswald. In 1904 he was called to the University of Bonn as the position held by Rudolf Lipschitz was vacant. There he settled until retirement in 1927. Study gave a plenary address at the International Congress of Mathematicians in 1904 at Heidelberg and another in 1912 at Cambridge, UK. Euclidean space group and dual quaternions. In 1891 Eduard Study published "Of Motions and Translations, in two parts". It treats the Euclidean group E(3). The second part of his article introduces the associative algebra of dual quaternions, that is numbers formula_0 where "a", "b", "c", and "d" are dual numbers and {1, "i", "j", "k"} multiply as in the quaternion group. Actually Study uses notation such that formula_1 formula_2 The multiplication table is found on page 520 of volume 39 (1891) in Mathematische Annalen under the title "Von Bewegungen und Umlegungen, I. und II. Abhandlungen". Eduard Study cites William Kingdon Clifford as an earlier source on these biquaternions. In 1901 Study published "Geometrie der Dynamen" also using dual quaternions. In 1913 he wrote a review article treating both E(3) and elliptic geometry. This article, "Foundations and goals of analytical kinematics" develops the field of kinematics, in particular exhibiting an element of E(3) as a homography of dual quaternions. Study's use of abstract algebra was noted in "A History of Algebra" (1985) by B. L. van der Waerden. On the other hand, Joe Rooney recounts these developments in relation to kinematics. Hypercomplex numbers. Study showed an early interest in systems of complex numbers and their application to transformation groups with his article in 1890. He addressed this popular subject again in 1898 in "Klein's encyclopedia". The essay explored quaternions and other hypercomplex number systems. This 34 page article was expanded to 138 pages in 1908 by Élie Cartan, who surveyed the hypercomplex systems in "Encyclopédie des sciences mathématiques pures et appliqueés". Cartan acknowledged Eduard Study's guidance, in his title, with the words "after Eduard Study". In the 1993 biography of Cartan by Akivis and Rosenfeld, one reads: [Study] defined the algebra °H of 'semiquaternions' with the units 1, "i", "ε", "η" having the properties formula_3 Semiquaternions are often called 'Study's quaternions'. 
In 1985 Helmut Karzel and Günter Kist developed "Study's quaternions" as the kinematic algebra corresponding to the group of motions of the Euclidean plane. These quaternions arise in "Kinematic algebras and their geometries" alongside ordinary quaternions and the ring of 2×2 real matrices which Karzel and Kist cast as the kinematic algebras of the elliptic plane and hyperbolic plane respectively. See the "Motivation and Historical Review" at page 437 of "Rings and Geometry", R. Kaya editor. Some of the other hypercomplex systems that Study worked with are dual numbers, dual quaternions, and split-biquaternions, all being associative algebras over R. Ruled surfaces. Study's work with dual numbers and line coordinates was noted by Heinrich Guggenheimer in 1963 in his book "Differential Geometry" (see pages 162–5). He cites and proves the following theorem of Study: The oriented lines in R3 are in one-to-one correspondence with the points of the dual unit sphere in D3. Later he says "A differentiable curve A("u") on the dual unit sphere, depending on a "real" parameter "u", represents a differentiable family of straight lines in R3: a ruled surface. The lines A("u") are the "generators" or "rulings" of the surface." Guggenheimer also shows the representation of the Euclidean motions in R3 by orthogonal dual matrices. Hermitian form metric. In 1905 Study wrote "Kürzeste Wege im komplexen Gebiet" (Shortest paths in the complex domain) for Mathematische Annalen (60:321–378). Some of its contents were anticipated by Guido Fubini a year before. The distance Study refers to is a Hermitian form on complex projective space. Since then this metric has been called the Fubini–Study metric. Study was careful in 1905 to distinguish the hyperbolic and elliptic cases in Hermitian geometry. Valence theory. Somewhat surprisingly Eduard Study is known by practitioners of quantum chemistry. Like James Joseph Sylvester, Paul Gordan believed that invariant theory could contribute to the understanding of chemical valence. In 1900 Gordan and his student G. Alexejeff contributed an article on an analogy between the coupling problem for angular momenta and their work on invariant theory to the "Zeitschrift für Physikalische Chemie" (v. 35, p. 610). In 2006 Wormer and Paldus summarized Study's role as follows: The analogy, lacking a physical basis at the time, was criticised heavily by the mathematician E. Study and ignored completely by the chemistry community of the 1890s. After the advent of quantum mechanics it became clear, however, that chemical valences arise from electron–spin couplings ... and that electron spin functions are, in fact, binary forms of the type studied by Gordan and Clebsch. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "q = a + bi + cj + dk \\!" }, { "math_id": 1, "text": "e_0 = 1,\\ e_1 = i,\\ e_2 = j,\\ e_3 = k, \\!" }, { "math_id": 2, "text": "\\varepsilon _0 = \\varepsilon ,\\ \\varepsilon _1 = \\varepsilon i,\\ \\varepsilon _2 = \\varepsilon j,\\ \\varepsilon _3 = \\varepsilon k. \\!" }, { "math_id": 3, "text": "i^2 = -1, \\ \\varepsilon ^2 = 0, \\ i \\varepsilon = - \\varepsilon i = \\eta. \\! " } ]
https://en.wikipedia.org/wiki?curid=663997
66422858
Alternative semantics
Alternative semantics (or Hamblin semantics) is a framework in formal semantics and logic. In alternative semantics, expressions denote "alternative sets", understood as sets of objects of the same semantic type. For instance, while the word "Lena" might denote Lena herself in a classical semantics, it would denote the singleton set containing Lena in alternative semantics. The framework was introduced by Charles Leonard Hamblin in 1973 as a way of extending Montague grammar to provide an analysis for questions. In this framework, a question denotes the set of its possible answers. Thus, if formula_0 and formula_1 are propositions, then formula_2 is the denotation of the question whether formula_0 or formula_1 is true. Since the 1970s, it has been extended and adapted to analyze phenomena including focus, scope, disjunction, NPIs, presupposition, and implicature. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
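To make the set-based picture concrete, here is a toy illustration (a sketch only, not part of Hamblin's or any published formalization; the world names and helper function are invented for the example): propositions are modelled as sets of possible worlds, a declarative denotes a singleton alternative set, and a question denotes the set of its possible answers.

```python
# Toy illustration of Hamblin-style alternative sets (names invented for the example).
# A proposition is modelled as the set of possible worlds in which it is true.

P = frozenset({"w1", "w2"})   # e.g. the proposition that it is raining
Q = frozenset({"w2", "w3"})   # e.g. the proposition that it is snowing

lena = {"Lena"}               # a proper name denotes the singleton set containing its referent
declarative = {P}             # a declarative sentence denotes a singleton alternative set
question = {P, Q}             # the question "whether P or Q" denotes its set of possible answers

def true_answers(question, world):
    """The alternatives (possible answers) that hold in a given world."""
    return {p for p in question if world in p}

print(true_answers(question, "w1"))   # only the first answer (P) is true in w1
print(true_answers(question, "w2"))   # both answers (P and Q) are true in w2
```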
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "\\{P,Q\\}" } ]
https://en.wikipedia.org/wiki?curid=66422858
66425715
Grid bracing
Mathematical problem of making a square grid structure rigid In the mathematics of structural rigidity, grid bracing is a problem of adding cross bracing to a square grid to make it into a rigid structure. It can be solved optimally by translating it into a problem in graph theory on the connectivity of bipartite graphs. Problem statement. The problem considers a framework in the form of a square grid, with formula_0 rows and formula_1 columns of squares. The grid has formula_2 edges, each of which has unit length and is considered to be a rigid rod, free to move continuously within the Euclidean plane but unable to change its length. These rods are attached to each other by flexible joints at the formula_3 vertices of the grid. A valid continuous motion of this framework is a way of continuously varying the placement of its edges and joints into the plane in such a way that they keep the same lengths and the same attachments, but without requiring them to form squares. Instead, each square of the grid may be deformed to form a rhombus, and the whole grid may form an irregular structure with a different shape for each of its faces, as shown in the figure. Unlike squares, triangles made of rigid rods and flexible joints cannot change their shapes: any two triangles with sides of the same lengths must be congruent (this is the SSS postulate). If a square is cross-braced by adding one of its diagonals as another rigid bar, the diagonal divides it into two triangles which similarly cannot change shape, so the square must remain square through any continuous motion of the cross-braced framework. (The same framework could also be placed in the plane in a different way, by folding its two triangles onto each other over their shared diagonal, but this folded placement cannot be obtained by a continuous motion.) Thus, if all squares of the given grid are cross-braced, the grid cannot change shape; its only continuous motions would be to rotate it or translate it as a single rigid body. However, this method of making the grid rigid, by adding cross-braces to all its squares, uses many more cross-braces than necessary. The grid bracing problem asks for a description of the minimal sets of cross-braces that have the same effect, of making the whole framework rigid. Graph theoretic solution. As Ethan Bolker and Henry Crapo (1977) originally observed, the grid bracing problem can be translated into a problem in graph theory by considering an undirected bipartite graph that has a vertex for each row and column of the given grid, and an edge for each cross-braced square of the grid. They proved that the cross-braced grid is rigid if and only if this bipartite graph is connected. It follows that the minimal cross-bracings of the grid correspond to the trees connecting all vertices in the graph, and that they have exactly formula_4 cross-braced squares. Any overbraced but rigid cross-bracing (with more than this number of cross-braced squares) can be reduced to a minimal cross-bracing by finding a spanning tree of its graph. More generally, suppose that a cross-braced grid is not rigid. Then the number of degrees of freedom in its family of shapes equals the number of connected components of the bipartite graph, minus one. If a partially braced grid is to be made rigid by cross-bracing more squares, the minimum number of additional squares that need to be cross-braced is this number of degrees of freedom. 
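The correspondence just described is easy to mechanise. The sketch below (Python; the function and variable names are illustrative, not from Bolker and Crapo) builds the bipartite graph with one vertex per row and per column and one edge per cross-braced square, and reports the number of degrees of freedom as the number of connected components minus one; the bracing is rigid exactly when this is zero, and a minimal rigid bracing has formula_4 braced squares.

```python
# Sketch: test rigidity of a partially cross-braced r-by-c grid via the
# Bolker–Crapo correspondence (rows and columns as vertices of a bipartite
# graph, one edge per braced square).  Function names are illustrative.

def degrees_of_freedom(r, c, braced_squares):
    """braced_squares: iterable of (row, column) pairs, 0-indexed.
    Returns (#connected components of the bipartite graph) - 1."""
    # vertices 0..r-1 are rows, r..r+c-1 are columns
    parent = list(range(r + c))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry

    for row, col in braced_squares:
        union(row, r + col)

    components = len({find(v) for v in range(r + c)})
    return components - 1

# A 2-by-3 grid braced in four of its six squares (4 = rows + columns - 1):
braces = [(0, 0), (1, 1), (1, 2), (0, 1)]
dof = degrees_of_freedom(2, 3, braces)
print("rigid" if dof == 0 else f"{dof} more braced square(s) needed")
```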
A solution with this number of squares can be obtained by adding this number of edges to the bipartite graph, connecting pairs of its connected components so that after the addition there is only one remaining component. Variations. Double bracing. Another version of the problem asks for a "double bracing", a set of cross-braces that is sufficiently redundant that it will stay rigid even if one of the diagonals is removed. This version allows both diagonals of a single square to be used, but it is not required to do so. In the same bipartite graph view used to solve the bracing problem, a double bracing of a grid corresponds to an undirected bipartite multigraph that is connected and bridgeless, meaning that every edge belongs to at least one cycle. The minimum number of diagonals needed for a double bracing is formula_5. In the special case of grids with equal numbers of rows and columns, the only double bracings of this minimum size are Hamiltonian cycles. Hamiltonian cycles are easy to find in the complete bipartite graphs representing the bracing problem, but finding them in other bipartite graphs is NP-complete. Because of this, finding the smallest double braced subset of a larger bracing is NP-hard. However, it is possible to approximate this smallest double braced subset to within a constant approximation ratio. Tension bracing. An analogous theory, using directed graphs, was discovered by Jenny Baglivo and Jack Graver (1983) for "tension bracing", in which squares are braced by wires or strings (which cannot expand past their initial length, but can bend or collapse to a shorter length) instead of by rigid rods. To make a single square rigid by tension bracing, it is necessary to brace both of its diagonals, instead of just one diagonal. One can represent a tension bracing by a bipartite graph, which has an edge directed from a row vertex to a column vertex if the shared square of that row and column is braced by the positively-sloped diagonal, and an edge from a column vertex to a row vertex if the shared square is braced by the negatively-sloped diagonal. The braced structure is rigid if and only if the resulting graph is strongly connected. If a given set of braces is insufficient, additional bracing needs to be added, corresponding in the graph view to adding edges that connect together the strongly connected components of a graph. In this way problem of finding a minimal set of additional braces to add can be seen as an instance of strong connectivity augmentation, and can be solved in linear time. According to Robbins' theorem, the undirected graphs that can be made strongly connected by directing their edges are exactly the bridgeless graphs; reinterpreting this theorem in terms of grid bracing, a bracing by rigid rods forms a double bracing if and only if each of its rods can be replaced by a single wire (possibly on the other diagonal of its square) to form a rigid tension bracing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
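For the tension-bracing variant described above, rigidity amounts to strong connectivity of the directed bipartite graph, which can be tested with two reachability searches from a single vertex (a Python sketch with illustrative names; this is the textbook connectivity check, not the procedure of Baglivo and Graver).

```python
# Sketch: rigidity test for a tension-braced r-by-c grid (Baglivo–Graver view).
# A square braced by its positively sloped diagonal gives an edge row -> column,
# one braced by its negatively sloped diagonal gives an edge column -> row; the
# bracing is rigid iff the directed bipartite graph is strongly connected.

def is_rigid_tension_bracing(r, c, positive, negative):
    """positive, negative: iterables of (row, col) pairs, 0-indexed."""
    n = r + c                       # vertices: rows 0..r-1, columns r..r+c-1
    forward = {v: [] for v in range(n)}
    backward = {v: [] for v in range(n)}
    for row, col in positive:
        forward[row].append(r + col)
        backward[r + col].append(row)
    for row, col in negative:
        forward[r + col].append(row)
        backward[row].append(r + col)

    def reaches_all(adj, start):
        seen = {start}
        stack = [start]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    # Strongly connected iff every vertex is reachable from vertex 0
    # and vertex 0 is reachable from every vertex.
    return reaches_all(forward, 0) and reaches_all(backward, 0)

# One positively and one negatively sloped wire, arranged so the four
# braces form a directed cycle through both rows and both columns:
print(is_rigid_tension_bracing(2, 2,
                               positive=[(0, 0), (1, 1)],
                               negative=[(0, 1), (1, 0)]))   # True
```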
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "c" }, { "math_id": 2, "text": "r(c+1)+(r+1)c" }, { "math_id": 3, "text": "(r+1)(c+1)" }, { "math_id": 4, "text": "r+c-1" }, { "math_id": 5, "text": "\\min(2r,2c)" } ]
https://en.wikipedia.org/wiki?curid=66425715
66425729
Strong connectivity augmentation
Strong connectivity augmentation is a computational problem in the mathematical study of graph algorithms, in which the input is a directed graph and the goal of the problem is to add a small number of edges, or a set of edges with small total weight, so that the added edges make the graph into a strongly connected graph. The strong connectivity augmentation problem was formulated by Kapali Eswaran and Robert Tarjan (1976). They showed that a weighted version of the problem is NP-complete, but the unweighted problem can be solved in linear time. Subsequent research has considered the approximation ratio and parameterized complexity of the weighted problem. Unweighted version. In the unweighted strong connectivity augmentation problem, the input is a directed graph and the goal is to add as few edges as possible to it to make the result into a strongly connected graph. The algorithm for the unweighted case by Eswaran and Tarjan considers the condensation of the given directed graph, a directed acyclic graph that has one vertex per strongly connected component of the given graph. Letting formula_0 denote the number of source vertices in the condensation (strongly connected components with at least one outgoing edge but no incoming edges), formula_1 denote the number of sink vertices in the condensation (strongly connected components with incoming but no outgoing edges), and formula_2 denote the number of isolated vertices in the condensation (strongly connected components with neither incoming nor outgoing edges), they observe that the number of edges to be added is necessarily at least formula_3. This follows because formula_4 edges need to be added to provide an incoming edge for each source or isolated vertex, and symmetrically at least formula_5 edges need to be added to provide an outgoing edge for each sink or isolated vertex. Their algorithm for the problem finds a set of exactly formula_3 edges to add to the graph to make it strongly connected. Their algorithm uses a depth-first search on the condensation to find a collection of pairs of sources and sinks, with the following properties: A minor error in the part of their algorithm that finds the pairs of sources and sinks was later found and corrected. Once these pairs have been found, one can obtain a strong connectivity augmentation by adding three sets of edges: The total number of edges in these three sets is formula_3. Weighted and parameterized version. The weighted version of the problem, in which each edge that might be added has a given weight and the goal is to choose a set of added edges of minimum weight that makes the given graph strongly connected, is NP-complete. An approximation algorithm with approximation ratio 2 was provided by . A parameterized and weighted version of the problem, in which one must add at most formula_6 edges of minimum total weight to make the given graph strongly connected, is fixed-parameter tractable. Bipartite version and grid bracing application. If a square grid is made of rigid rods (the edges of the grid) connected to each other by flexible joints at the edges of the grid, then the overall structure can bend in many ways rather than remaining square. The grid bracing problem asks how to stabilize such a structure by adding additional cross bracing within some of its squares. 
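The counting part of the unweighted algorithm is straightforward to sketch (Python; an illustrative implementation, not Eswaran and Tarjan's code, and it omits their pairing procedure for actually choosing the added edges): compute the strongly connected components, form the condensation, and count its sources, sinks, and isolated vertices to obtain formula_3.

```python
def min_edges_to_strongly_connect(n, edges):
    """Number of added edges from Eswaran & Tarjan's formula: max(s+q, t+q)
    over the condensation (0 if the graph is already strongly connected).
    n: number of vertices (0..n-1); edges: list of (u, v) pairs."""
    graph = [[] for _ in range(n)]
    reverse = [[] for _ in range(n)]
    for u, v in edges:
        graph[u].append(v)
        reverse[v].append(u)

    # Kosaraju's algorithm, first pass: record vertices in order of completion.
    order, visited = [], [False] * n
    for start in range(n):
        if visited[start]:
            continue
        visited[start] = True
        stack = [(start, iter(graph[start]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(graph[w])))
                    advanced = True
                    break
            if not advanced:
                order.append(v)
                stack.pop()

    # Second pass on the reversed graph labels the strongly connected components.
    comp, c = [-1] * n, 0
    for start in reversed(order):
        if comp[start] != -1:
            continue
        comp[start] = c
        stack = [start]
        while stack:
            v = stack.pop()
            for w in reverse[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    if c == 1:
        return 0
    has_in, has_out = [False] * c, [False] * c
    for u, v in edges:
        if comp[u] != comp[v]:
            has_out[comp[u]] = True
            has_in[comp[v]] = True
    s = sum(1 for i in range(c) if has_out[i] and not has_in[i])   # sources
    t = sum(1 for i in range(c) if has_in[i] and not has_out[i])   # sinks
    q = sum(1 for i in range(c) if not has_in[i] and not has_out[i])  # isolated
    return max(s + q, t + q)

# Two disjoint 2-cycles: the condensation has two isolated vertices, so q = 2.
print(min_edges_to_strongly_connect(4, [(0, 1), (1, 0), (2, 3), (3, 2)]))  # 2
```

The grid-bracing application below reduces to the same kind of component computation, carried out on a directed bipartite graph.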
This problem can be modeled using graph theory, by making a bipartite graph with a vertex for each row or column of squares in the grid, and an edge between two of these vertices when a square in a given row and column is cross-braced. If the cross-bracing within each square makes that completely rigid, then this graph is undirected, and represents a rigid structure if and only if it is a connected graph. However, if squares are only partially braced (for instance by connecting two opposite corners by a string or wire that prevents expansive motion but does not prevent contractive motion), then the graph is directed, and represents a rigid structure if and only if it is a strongly connected graph. An associated strong connectivity augmentation problem asks how to add more partial bracing to a grid that already has partial bracing in some of its squares. The existing partial bracing can be represented as a directed graph, and the additional partial bracing to be added should form a strong connectivity augmentation of that graph. In order to be able to translate a solution to the strong connectivity augmentation problem back to a solution of the original bracing problem, an extra restriction is required: each added edge must respect the bipartition of the original graph, and only connect row vertices with column vertices rather than attempting to connect rows to rows or columns to columns. This restricted version of the strong connectivity augmentation problem can again be solved in linear time. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "q" }, { "math_id": 3, "text": "\\max(s+q,t+q)" }, { "math_id": 4, "text": "s+q" }, { "math_id": 5, "text": "t+q" }, { "math_id": 6, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=66425729
664332
Econophysics
Application of physics to the study of economics Econophysics is a non-orthodox (in economics) interdisciplinary research field, applying theories and methods originally developed by physicists in order to solve problems in economics, usually those including uncertainty or stochastic processes and nonlinear dynamics. Some of its application to the study of financial markets has also been termed statistical finance, referring to its roots in statistical physics. Econophysics is closely related to social physics. History. Physicists' interest in the social sciences is not new; Daniel Bernoulli, as an example, was the originator of utility-based preferences. One of the founders of neoclassical economic theory, former Yale University Professor of Economics Irving Fisher, was originally trained under the renowned Yale physicist Josiah Willard Gibbs. Likewise, Jan Tinbergen, who won the first Nobel Memorial Prize in Economic Sciences in 1969 for having developed and applied dynamic models for the analysis of economic processes, studied physics with Paul Ehrenfest at Leiden University. In particular, Tinbergen developed the gravity model of international trade that has become the workhorse of international economics. Econophysics was started in the mid-1990s by several physicists working in the subfield of statistical mechanics. Unsatisfied with the traditional explanations and approaches of economists – which usually prioritized simplified approaches for the sake of soluble theoretical models over agreement with empirical data – they applied tools and methods from physics, first to try to match financial data sets, and then to explain more general economic phenomena. One driving force behind econophysics arising at this time was the sudden availability of large amounts of financial data, starting in the 1980s. It became apparent that traditional methods of analysis were insufficient – standard economic methods dealt with homogeneous agents and equilibrium, while many of the more interesting phenomena in financial markets fundamentally depended on heterogeneous agents and far-from-equilibrium situations. The term "econophysics" was coined by H. Eugene Stanley, to describe the large number of papers written by physicists on the problems of (stock and other) markets, in a conference on statistical physics in Kolkata (erstwhile Calcutta) in 1995, and first appeared in its proceedings publication in Physica A in 1996. The inaugural meeting on econophysics was organised in 1998 in Budapest by János Kertész and Imre Kondor. The first book on econophysics was by R. N. Mantegna &amp; H. E. Stanley in 2000. The almost regular meeting series on the topic include ECONOPHYS-KOLKATA (held in Kolkata &amp; Delhi), the Econophysics Colloquium, and ESHIA/WEHIA. In recent years network science, heavily reliant on analogies from statistical mechanics, has been applied to the study of productive systems. That is the case with the work done at the Santa Fe Institute, in European-funded research projects such as Forecasting Financial Crises, and at the Harvard-MIT Observatory of Economic Complexity. If "econophysics" is taken to denote the principle of applying statistical mechanics to economic analysis, as opposed to a particular literature or network, priority of innovation is probably due to Emmanuel Farjoun and Moshé Machover (1983). 
Their book "Laws of Chaos: A Probabilistic Approach to Political Economy" proposes "dis"solving (their words) the transformation problem in Marx's political economy by re-conceptualising the relevant quantities as random variables. If, on the other hand, "econophysics" is taken to denote the application of physics to economics, one can consider the works of Léon Walras and Vilfredo Pareto as part of it. Indeed, as shown by Bruna Ingrao and Giorgio Israel, general equilibrium theory in economics is based on the physical concept of mechanical equilibrium. Econophysics has nothing to do with the "physical quantities approach" to economics, advocated by Ian Steedman and others associated with neo-Ricardianism. Notable econophysicists are Emmanuel Bacry, Giulio Bottazzi, Jean-Philippe Bouchaud, Bikas K Chakrabarti, J. Doyne Farmer, Diego Garlaschelli, Dirk Helbing, János Kertész, Fabrizio Lillo, Rosario N. Mantegna, Matteo Marsili, Joseph L. McCauley, Jean-Francois Muzy, Enrico Scalas, Angelo Secchi, Didier Sornette, H. Eugene Stanley, Victor Yakovenko and Yi-Cheng Zhang. Particularly noteworthy among the formal courses on econophysics is the one offered continuously for more than a decade by Diego Garlaschelli at the Physics Department of the Leiden University. Basic tools. Basic tools of econophysics are probabilistic and statistical methods often taken from statistical physics. Physics models that have been applied in economics include the kinetic theory of gas (called the kinetic exchange models of markets), percolation models, chaotic models developed to study cardiac arrest, and models with self-organizing criticality as well as other models developed for earthquake prediction. Moreover, there have been attempts to use the mathematical theory of complexity and information theory, as developed by many scientists among whom are Murray Gell-Mann and Claude E. Shannon, respectively. For potential games, it has been shown that an emergence-producing equilibrium based on information via Shannon information entropy produces the same equilibrium measure (Gibbs measure from statistical mechanics) as a stochastic dynamical equation which represents noisy decisions, both of which are based on bounded rationality models used by economists. The fluctuation-dissipation theorem connects the two to establish a concrete correspondence of "temperature", "entropy", "free potential/energy", and other physics notions to an economics system. The statistical mechanics model is not constructed a-priori - it is a result of a boundedly rational assumption and modeling on existing neoclassical models. It has been used to prove the "inevitability of collusion" result of Huw Dixon in a case for which the neoclassical version of the model does not predict collusion. Here the demand is increasing, as with Veblen goods, stock buyers with the "hot hand" fallacy preferring to buy more successful stocks and sell those that are less successful, or among short traders during a short squeeze as occurred with the WallStreetBets group's collusion to drive up GameStop stock price in 2021. Nobel laureate and founder of experimental economics Vernon L. Smith has used econophysics to model sociability via implementation of ideas in Humanomics. There, noisy decision making and interaction parameters that facilitate the social action responses of reward and punishment result in spin glass models identical to those in physics. Quantifiers derived from information theory were used in several papers by econophysicist Aurelio F. 
Bariviera and coauthors in order to assess the degree of informational efficiency of stock markets. Zunino et al. use an innovative statistical tool in the financial literature: the complexity-entropy causality plane. This Cartesian representation establishes an efficiency ranking of different markets and distinguishes different bond market dynamics. It was found that more developed countries have stock markets with higher entropy and lower complexity, while markets from emerging countries have lower entropy and higher complexity. Moreover, the authors conclude that the classification derived from the complexity-entropy causality plane is consistent with the qualifications assigned by major rating companies to the sovereign instruments. A similar study developed by Bariviera et al. explores the relationship between credit ratings and informational efficiency of a sample of corporate bonds of US oil and energy companies, also using the complexity–entropy causality plane. They find that this classification agrees with the credit ratings assigned by Moody's. Another good example is random matrix theory, which can be used to identify the noise in financial correlation matrices. One paper has argued that this technique can improve the performance of portfolios, e.g., when applied in portfolio optimization. The ideology of econophysics is embodied in a new probabilistic economic theory and, on its basis, a unified theory of stock markets. There are also analogies between finance theory and diffusion theory. For instance, the Black–Scholes equation for option pricing is a diffusion-advection equation (see, however, critiques of the Black–Scholes methodology). The Black–Scholes theory can be extended to provide an analytical theory of main factors in economic activities. Subfields. Various other tools from physics have so far been used, such as fluid dynamics, classical mechanics and quantum mechanics (including so-called classical economy, quantum economics and quantum finance), and the Feynman–Kac formula of statistical mechanics. Statistical mechanics. When mathematician Mark Kac attended a lecture by Richard Feynman, he realized their work overlapped. Together they worked out a new approach to solving stochastic differential equations. Their approach is used to efficiently calculate solutions to the Black–Scholes equation to price options on stocks. Quantum finance. Quantum statistical models have been successfully applied to finance by several groups of econophysicists using different approaches, but the origin of their success may not be due to quantum analogies. Quantum economics. The editorial in the inaugural issue of the journal "Quantum Economics and Finance" says: "Quantum economics and finance is the application of probability based on projective geometry—also known as quantum probability—to modelling in economics and finance. It draws on related areas such as quantum cognition, quantum game theory, quantum computing, and quantum physics." In his overview article in the same issue, David Orrell outlines how neoclassical economics benefited from the concepts of classical mechanics, and yet concepts of quantum mechanics "apparently left economics untouched". 
He reviews different avenues for quantum economics, some of which he notes are contradictory, settling on "quantum economics therefore needs to take a different kind of leaf from the book of quantum physics, by adopting quantum methods, not because they appear natural or elegant or come pre-approved by some higher authority or bear resemblance to something else, but because they capture in a useful way the most basic properties of what is being studied." Main results. Econophysics is having some impact on the more applied field of quantitative finance, whose scope and aims significantly differ from those of economic theory. Various econophysicists have introduced models for price fluctuations in the physics of financial markets, or original points of view on established models. Presently, one of the main results of econophysics comprises the explanation of the "fat tails" in the distribution of many kinds of financial data as a universal self-similar scaling property (i.e. scale invariant over many orders of magnitude in the data), arising from the tendency of individual market competitors, or of aggregates of them, to exploit systematically and optimally the prevailing "microtrends" (e.g., rising or falling prices). These "fat tails" are not only mathematically important: they comprise the risks, which may on the one hand be very small, so that one may be tempted to neglect them, but which on the other hand are not negligible at all, i.e. they can never be made exponentially tiny; instead they follow a measurable, algebraically decreasing power law, for example with a "failure probability" of only formula_0 where "x" is an increasingly large variable in the tail region of the distribution considered (i.e. price statistics with much more than 10^8 data points). That is, the events considered are not simply "outliers" but must really be taken into account and cannot be "insured away". It also appears to play a role that, near a change of the tendency (e.g. from falling to rising prices), there are typical "panic reactions" of the selling or buying agents, with algebraically increasing transaction speeds and volumes. As in quantum field theory, the "fat tails" can be obtained by complicated "nonperturbative" methods, mainly by numerical ones, since they contain the deviations from the usual Gaussian approximations, e.g. the Black–Scholes theory. Fat tails can, however, also be due to other phenomena, such as a random number of terms in the central-limit theorem, or any number of other, non-econophysics models. Due to the difficulty in testing such models, they have received less attention in traditional economic analysis. Criticism. In 2006 economists Mauro Gallegati, Steve Keen, Thomas Lux, and Paul Ormerod published a critique of econophysics. They cite important empirical contributions primarily in the areas of finance and industrial economics, but list four concerns with work in the field: lack of awareness of economics work, resistance to rigor, a misplaced belief in universal empirical regularity, and inappropriate models. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
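The practical force of the power-law claim can be seen with a few lines of arithmetic. The sketch below is purely illustrative (the prefactor of the algebraic tail is set to one and is not an empirical estimate); it compares a standard Gaussian survival probability with a tail decaying as the fourth inverse power of "x", as in formula_0.

```python
import math

def gaussian_tail(x):
    """P(X > x) for a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def power_law_tail(x, exponent=4.0, prefactor=1.0):
    """A schematic algebraic tail ~ x**(-exponent); the prefactor is arbitrary."""
    return prefactor * x ** (-exponent)

for x in (3, 5, 10, 20):
    print(f"x = {x:2d}   Gaussian tail: {gaussian_tail(x):.2e}   "
          f"x^-4 tail: {power_law_tail(x):.2e}")

# However small the algebraic tail becomes, it eventually dwarfs the Gaussian one:
# extreme events remain far more likely than a Gaussian model would suggest.
```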
[ { "math_id": 0, "text": "P\\propto x^{-4}\\,," } ]
https://en.wikipedia.org/wiki?curid=664332
66436172
Massless free scalar bosons in two dimensions
2D conformal field theories Massless free scalar bosons are a family of two-dimensional conformal field theories, whose symmetry is described by an abelian affine Lie algebra. Since they are free i.e. non-interacting, free bosonic CFTs are easily solved exactly. Via the Coulomb gas formalism, they lead to exact results in interacting CFTs such as minimal models. Moreover, they play an important role in the worldsheet approach to string theory. In a free bosonic CFT, the Virasoro algebra's central charge can take any complex value. However, the value formula_0 is sometimes implicitly assumed. For formula_0, there exist compactified free bosonic CFTs with arbitrary values of the compactification radius. Lagrangian formulation. The action of a free bosonic theory in two dimensions is a functional of the free boson formula_1, formula_2 where formula_3 is the metric of the two-dimensional space on which the theory is formulated, formula_4 is the Ricci scalar of that space. The parameter formula_5 is called the background charge. What is special to two dimensions is that the scaling dimension of the free boson formula_1 vanishes. This permits the presence of a non-vanishing background charge, and is at the origin of the theory's conformal symmetry. In probability theory, the free boson can be constructed as a Gaussian free field. This provides realizations of correlation functions as expected values of random variables. Symmetries. Abelian affine Lie algebra. The symmetry algebra is generated by two chiral conserved currents: a left-moving current and a right-moving current, respectively formula_6 which obey formula_7. Each current generates an abelian affine Lie algebra formula_8. The structure of the left-moving affine Lie algebra is encoded in the left-moving current's self-OPE, formula_9 Equivalently, if the current is written as a Laurent series formula_10 about the point formula_11, the abelian affine Lie algebra is characterized by the Lie bracket formula_12 The center of the algebra is generated by formula_13, and the algebra is a direct sum of mutually commuting subalgebras of dimension 1 or 2: formula_14 Conformal symmetry. For any value of formula_5, the abelian affine Lie algebra's universal enveloping algebra has a Virasoro subalgebra with the generators formula_15 The central charge of this Virasoro subalgebra is formula_16 and the commutation relations of the Virasoro generators with the affine Lie algebra generators are formula_17 If the parameter formula_18 coincides with the free boson's background charge, then the field formula_19 coincides with the free boson's energy-momentum tensor. The corresponding Virasoro algebra therefore has a geometrical interpretation as the algebra of infinitesimal conformal maps, and encodes the theory's local conformal symmetry. Extra symmetries. For special values of the central charge and/or of the radius of compactification, free bosonic theories can have not only their formula_8 symmetry, but also additional symmetries. In particular, at formula_0, for special values of the radius of compactification, there may appear non-abelian affine Lie algebras, supersymmetry, etc. Affine primary fields. In a free bosonic CFT, all fields are either affine primary fields or affine descendants thereof. Thanks to the affine symmetry, correlation functions of affine descendant fields can in principle be deduced from correlation functions of affine primary fields. Definition. 
An affine primary field formula_20 with the left and right formula_8-charges formula_21 is defined by its OPEs with the currents, formula_22 These OPEs are equivalent to the relations formula_23 The charges formula_21 are also called the left- and right-moving momentums. If they coincide, the affine primary field is called diagonal and written as formula_24. Normal-ordered exponentials of the free boson are affine primary fields. In particular, the field formula_25 is a diagonal affine primary field with momentum formula_26. This field, and affine primary fields in general, are sometimes called vertex operators. An affine primary field is also a Virasoro primary field with the conformal dimension formula_27 The two fields formula_28 and formula_29 have the same left and right conformal dimensions, although their momentums are different. OPEs and momentum conservation. Due to the affine symmetry, momentum is conserved in free bosonic CFTs. At the level of fusion rules, this means that only one affine primary field can appear in the fusion of any two affine primary fields, formula_30 Operator product expansions of affine primary fields therefore take the form formula_31 where formula_32 is the OPE coefficient, and the term formula_33 is the contribution of affine descendant fields. OPEs have no manifest dependence on the background charge. Correlation functions. According to the affine Ward identities for formula_34-point functions on the sphere, formula_35 Moreover, the affine symmetry completely determines the dependence of sphere formula_34-point functions on the positions, formula_36 Single-valuedness of correlation functions leads to constraints on momentums, formula_37 Models. Non-compact free bosons. A free bosonic CFT is called non-compact if the momentum can take continuous values. Non-compact free bosonic CFTs with formula_38 are used for describing non-critical string theory. In this context, a non-compact free bosonic CFT is called a linear dilaton theory. A free bosonic CFT with formula_39 i.e. formula_0 is a sigma model with a one-dimensional target space. Compactified free bosons. The compactified free boson with radius formula_44 is the free bosonic CFT where the left and right momentums take the values formula_45 The integers formula_46 are then called the momentum and winding number. The allowed values of the compactification radius are formula_47 if formula_39 and formula_48 otherwise. If formula_39, free bosons with radiuses formula_44 and formula_49 describe the same CFT. From a sigma model point of view, this equivalence is called T-duality. If formula_39, the compactified free boson CFT exists on any Riemann surface. Its partition function on the torus formula_50 is formula_51 where formula_52, and formula_53 is the Dedekind eta-function. This partition function is the sum of characters of the Virasoro algebra over the theory's spectrum of conformal dimensions. As in all free bosonic CFTs, correlation functions of affine primary fields have a dependence on the fields' positions that is determined by the affine symmetry. The remaining constant factors are signs that depend on the fields' momentums and winding numbers. Boundary conditions in the case c=1. Neumann and Dirichlet boundary conditions. 
Due to the formula_54 automorphism formula_55 of the abelian affine Lie algebra there are two types of boundary conditions that preserve the affine symmetry, namely formula_56 If the boundary is the line formula_57, these conditions correspond respectively to the Neumann boundary condition and Dirichlet boundary condition for the free boson formula_58. Boundary states. In the case of a compactified free boson, each type of boundary condition leads to a family of boundary states, parametrized by formula_59. The corresponding one-point functions on the upper half-plane formula_60 are formula_61 In the case of a non-compact free boson, there is only one Neumann boundary state, while Dirichlet boundary states are parametrized by a real parameter. The corresponding one-point functions are formula_62 where formula_63 and formula_64 for a Euclidean boson. Conformal boundary conditions. Neumann and Dirichlet boundaries are the only boundaries that preserve the free boson's affine symmetry. However, there exist additional boundaries that preserve only the conformal symmetry. If the radius is irrational, the additional boundary states are parametrized by a number formula_65. The one-point functions of affine primary fields with formula_66 vanish. However, the Virasoro primary fields that are affine descendants of the affine primary field with formula_67 have nontrivial one-point functions. If the radius is rational formula_68, the additional boundary states are parametrized by the manifold formula_69. Conformal boundary conditions at arbitrary formula_70 were also studied under the misnomer "boundary Liouville theory". Related theories and generalizations. Multiple bosons and orbifolds. From formula_34 massless free scalar bosons, it is possible to build a product CFT with the symmetry algebra formula_71. Some or all of the bosons can be compactified. In particular, compactifying formula_34 bosons without background charge on an formula_34-dimensional torus (with Neveu–Schwarz B-field) gives rise to a family of CFTs called Narain compactifications. These CFTs exist on any Riemann surface, and play an important role in perturbative string theory. Due to the existence of the automorphism formula_55 of the affine Lie algebra formula_8, and of more general automorphisms of formula_71, there exist orbifolds of free bosonic CFTs. For example, the formula_54 orbifold of the compactified free boson with formula_39 is the critical two-dimensional Ashkin–Teller model. Coulomb gas formalism. The Coulomb gas formalism is a technique for building interacting CFTs, or some of their correlation functions, from free bosonic CFTs. The idea is to perturb the free CFT using screening operators of the form formula_72, where formula_73 is an affine primary field of conformal dimensions formula_74. In spite of its perturbative definition, the technique leads to exact results, thanks to momentum conservation. In the case of a single free boson with background charge formula_18, there exist two diagonal screening operators formula_75, where formula_76. Correlation functions in minimal models can be computed using these screening operators, giving rise to Dotsenko–Fateev integrals. Residues of correlation functions in Liouville theory can also be computed, and this led to the original derivation of the DOZZ formula for the three-point structure constant. In the case of formula_34 free bosons, the introduction of screening charges can be used for defining nontrivial CFTs including conformal Toda theory. 
The symmetries of these nontrivial CFTs are described by subalgebras of the abelian affine Lie algebra. Depending on the screenings, these subalgebras may or may not be W-algebras. The Coulomb gas formalism can also be used in two-dimensional CFTs such as the q-state Potts model and the formula_77 model. Various generalizations. In arbitrary dimensions, there exist conformal field theories called generalized free theories. These are however not generalizations of the free bosonic CFTs in two dimensions. In the former, it is the conformal dimension which is conserved (modulo integers). In the latter, it is the momentum. In two dimensions, generalizations include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
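As a small consistency check, using only relations stated above (a sketch of the arithmetic, not an additional result): for the compactified free boson with formula_39, the conformal dimension formula_27 applied to the momenta formula_45 reproduces the exponents appearing in the torus partition function formula_51, and yields an integer conformal spin, consistent with the single-valuedness constraint formula_37.

```latex
% Conformal weights of the (n, w) sector at Q = 0, from \Delta(\alpha) = \alpha(Q-\alpha) = -\alpha^2:
\Delta = -\alpha^{2}
       = -\left(\tfrac{i}{2}\left[\tfrac{n}{R}+Rw\right]\right)^{2}
       = \tfrac{1}{4}\left[\tfrac{n}{R}+Rw\right]^{2},
\qquad
\bar\Delta = \tfrac{1}{4}\left[\tfrac{n}{R}-Rw\right]^{2},
% which are exactly the powers of q and \bar q in the torus partition function, while
\Delta - \bar\Delta
  = \tfrac{1}{4}\left[\tfrac{n}{R}+Rw\right]^{2}
  - \tfrac{1}{4}\left[\tfrac{n}{R}-Rw\right]^{2}
  = nw \in \mathbb{Z}.
```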
[ { "math_id": 0, "text": "c=1" }, { "math_id": 1, "text": " \\phi " }, { "math_id": 2, "text": "\nS[\\phi] = \\frac{1}{4\\pi } \\int d^2x \\sqrt{g} (g^{\\mu \\nu} \\partial_\\mu \\phi \\partial _{\\nu} \\phi + Q R \\phi )\\ ,\n" }, { "math_id": 3, "text": "g_{\\mu \\nu} " }, { "math_id": 4, "text": " R " }, { "math_id": 5, "text": "Q\\in\\mathbb{C}" }, { "math_id": 6, "text": "\nJ=\\partial \\phi \\quad \\text{and} \\quad \\bar{J}=\\bar\\partial\\phi\n" }, { "math_id": 7, "text": "\\partial\\bar J = \\bar \\partial J = 0" }, { "math_id": 8, "text": "\\hat{\\mathfrak{u}}_1" }, { "math_id": 9, "text": " J(y)J(z)=\\frac{-\\frac12}{(y-z)^2} + O(1) " }, { "math_id": 10, "text": "J(z)=\\sum_{n\\in\\mathbb{Z}} J_nz^{-n-1}" }, { "math_id": 11, "text": "z=0" }, { "math_id": 12, "text": " [J_m,J_n] =\\frac12 n\\delta_{m+n,0} " }, { "math_id": 13, "text": "J_0" }, { "math_id": 14, "text": "\n\\hat{\\mathfrak{u}}_1 = \\text{Span}(J_0) \\oplus \\bigoplus_{n=1}^\\infty \\text{Span}(J_n,J_{-n})\n" }, { "math_id": 15, "text": "\n\\begin{align}\n L_n &= -\\sum_{m\\in{\\mathbb{Z}}} J_{n-m}J_m + Q(n+1)J_n\\ , \\qquad (n\\neq 0)\\ ,\n\\\\\nL_0 &=-2\\sum_{m=1}^\\infty J_{-m}J_m -J_0^2+QJ_0 \\ ,\n\\end{align}\n" }, { "math_id": 16, "text": " c = 1 + 6Q^2 " }, { "math_id": 17, "text": " \n[L_m,J_n] = -nJ_{m+n} -\\frac{Q}{2}m(m+1) \\delta_{m+n,0}\n" }, { "math_id": 18, "text": "Q" }, { "math_id": 19, "text": " T(z) = \\sum_{n\\in\\mathbb{Z}} L_n z^{-n-2}" }, { "math_id": 20, "text": "V_{\\alpha, \\bar\\alpha}(z)" }, { "math_id": 21, "text": "\\alpha,\\bar\\alpha" }, { "math_id": 22, "text": "\nJ(y)V_{\\alpha, \\bar\\alpha}(z) = \\frac{\\alpha}{y-z} V_{\\alpha, \\bar\\alpha}(z) + O(1) \\quad ,\\quad \\bar J(y)V_{\\alpha, \\bar\\alpha}(z) = \\frac{\\bar\\alpha}{\\bar y-\\bar z} V_{\\alpha, \\bar\\alpha}(z) + O(1)\n" }, { "math_id": 23, "text": "\nJ_{n>0} V_{\\alpha, \\bar\\alpha}(z) = \\bar J_{n>0} V_{\\alpha, \\bar\\alpha}(z)=0 \\quad , \\quad J_0V_{\\alpha, \\bar\\alpha}(z) = \\alpha V_{\\alpha, \\bar\\alpha}(z) \\quad , \\quad \\bar J_0V_{\\alpha, \\bar\\alpha}(z) = \\bar\\alpha V_{\\alpha, \\bar\\alpha}(z)\n" }, { "math_id": 24, "text": "V_\\alpha(z)=V_{\\alpha,\\alpha}(z)" }, { "math_id": 25, "text": "\n:e^{2\\alpha\\phi(z)}:\n" }, { "math_id": 26, "text": "\\alpha" }, { "math_id": 27, "text": "\n\\Delta(\\alpha) = \\alpha(Q-\\alpha)\n" }, { "math_id": 28, "text": "V_{\\alpha}(z)" }, { "math_id": 29, "text": "V_{Q-\\alpha}(z)" }, { "math_id": 30, "text": "\nV_{\\alpha_1,\\bar\\alpha_1} \\times V_{\\alpha_2,\\bar\\alpha_2} = V_{\\alpha_1+\\alpha_2,\\bar\\alpha_1+\\bar\\alpha_2}\n" }, { "math_id": 31, "text": " \nV_{\\alpha_1,\\bar\\alpha_1}(z_1)V_{\\alpha_2,\\bar\\alpha_2}(z_2) = C(\\alpha_i,\\bar\\alpha_i) (z_1-z_2)^{-2\\alpha_1\\alpha_2} (\\bar z_1-\\bar z_2)^{-2\\bar \\alpha_1\\bar\\alpha_2}\\left( V_{\\alpha_1+\\alpha_2,\\bar\\alpha_1+\\bar\\alpha_2}(z_2) + O(z_1-z_2)\\right)\n" }, { "math_id": 32, "text": "C(\\alpha_i,\\bar \\alpha_i)" }, { "math_id": 33, "text": "O(z_1-z_2)" }, { "math_id": 34, "text": "N" }, { "math_id": 35, "text": "\n\\left\\langle\\prod_{i=1}^N V_{\\alpha_i,\\bar\\alpha_i}(z_i)\\right\\rangle \\neq 0\n\\implies \n\\sum_{i=1}^N \\alpha_i = \\sum_{i=1}^N\\bar \\alpha_i = Q\n" }, { "math_id": 36, "text": "\n\\left\\langle\\prod_{i=1}^N V_{\\alpha_i,\\bar\\alpha_i}(z_i)\\right\\rangle \n\\propto\n\\prod_{i<j} (z_i-z_j)^{-2\\alpha_i\\alpha_j} (\\bar z_i-\\bar z_j)^{-2\\bar \\alpha_i\\bar \\alpha_j}\n" }, { "math_id": 37, "text": "\n\\Delta(\\alpha_i) -\\Delta(\\bar \\alpha_i) \\in 
\\frac12\\mathbb{Z}\n" }, { "math_id": 38, "text": "Q\\neq 0" }, { "math_id": 39, "text": "Q=0" }, { "math_id": 40, "text": "\\alpha=\\bar\\alpha\\in i\\mathbb{R}" }, { "math_id": 41, "text": "\\Delta(\\alpha)\\geq 0" }, { "math_id": 42, "text": "\\alpha=\\bar\\alpha\\in \\mathbb{R}" }, { "math_id": 43, "text": "\\Delta(\\alpha)\\leq 0" }, { "math_id": 44, "text": "R" }, { "math_id": 45, "text": "\n(\\alpha,\\bar \\alpha) =\\left(\\frac{i}{2}\\left[\\frac{n}{R}+Rw\\right], \\frac{i}{2}\\left[\\frac{n}{R}-Rw\\right]\\right) \\quad \\text{with} \\quad (n,w)\\in\\mathbb{Z}^2\n" }, { "math_id": 46, "text": "n,w" }, { "math_id": 47, "text": "R\\in\\mathbb{C}^*" }, { "math_id": 48, "text": "R\\in\\frac{1}{iQ}\\mathbb{Z}" }, { "math_id": 49, "text": "\\frac{1}{R}" }, { "math_id": 50, "text": "\\frac{\\mathbb{C}}{\\mathbb{Z}+\\tau\\mathbb{Z}}" }, { "math_id": 51, "text": "\nZ_R(\\tau) = Z_{\\frac{1}{R}}(\\tau) = \\frac{1}{|\\eta(\\tau)|^2} \\sum_{n,w\\in\\mathbb{Z}} q^{\\frac14\\left[\\frac{n}{R}+Rw\\right]^2} \\bar{q}^{\\frac14\\left[\\frac{n}{R}-Rw\\right]^2}\n" }, { "math_id": 52, "text": "q=e^{2\\pi i\\tau}" }, { "math_id": 53, "text": "\\eta(\\tau)" }, { "math_id": 54, "text": "\\mathbb{Z}_2" }, { "math_id": 55, "text": "J\\to -J" }, { "math_id": 56, "text": "\nJ = \\bar{J} \\quad \\text{or} \\quad J = -\\bar{J}\n" }, { "math_id": 57, "text": "z=\\bar{z}" }, { "math_id": 58, "text": "\\phi" }, { "math_id": 59, "text": "\\theta\\in \\frac{\\mathbb{R}}{2\\pi \\mathbb{Z}}" }, { "math_id": 60, "text": "\\{\\Im z > 0\\}" }, { "math_id": 61, "text": "\n\\begin{align}\n\\left\\langle V_{(n,w)}(z)\\right\\rangle_{\\text{Dirichlet}, \\theta} &= \\frac{e^{in\\theta}\\delta_{w,0}}{|z-\\bar z|^{\\frac{n^2}{2R^2}}} \n\\\\\n\\left\\langle V_{(n,w)}(z)\\right\\rangle_{\\text{Neumann}, \\theta} &= \\frac{e^{iw\\theta}\\delta_{n,0}}{|z-\\bar z|^{\\frac{R^2w^2}{2}}} \n\\end{align}\n" }, { "math_id": 62, "text": "\n\\begin{align}\n\\left\\langle V_{\\alpha}(z)\\right\\rangle_{\\text{Dirichlet}, \\theta} &= \\frac{e^{\\alpha\\theta}}{|z-\\bar z|^{2\\Delta(\\alpha)} }\n\\\\\n\\left\\langle V_{\\alpha}(z)\\right\\rangle_{\\text{Neumann}} &= \\delta(i\\alpha)\n\\end{align}\n" }, { "math_id": 63, "text": "\\alpha\\in i\\mathbb{R}" }, { "math_id": 64, "text": "\\theta\\in\\mathbb{R}" }, { "math_id": 65, "text": "x\\in [-1,1]" }, { "math_id": 66, "text": "(n,w)\\neq (0,0)" }, { "math_id": 67, "text": "(n,w)=(0,0)" }, { "math_id": 68, "text": "R=\\frac{p}{q}" }, { "math_id": 69, "text": "\\frac{SU(2)}{\\mathbb{Z}_p\\times \\mathbb{Z}_q}" }, { "math_id": 70, "text": "c" }, { "math_id": 71, "text": "\\hat{\\mathfrak{u}}_1^N" }, { "math_id": 72, "text": "\\textstyle{\\int} d^2z\\, O(z)" }, { "math_id": 73, "text": "O(z)" }, { "math_id": 74, "text": "(\\Delta,\\bar\\Delta) = (1, 1)" }, { "math_id": 75, "text": "\\textstyle{\\int} V_b, \\textstyle{\\int} V_{b^{-1}}" }, { "math_id": 76, "text": "Q=b+b^{-1}" }, { "math_id": 77, "text": "O(n)" } ]
https://en.wikipedia.org/wiki?curid=66436172
66437566
Ellen Maycock
American mathematician (born 1950) Ellen Johnston Maycock (born September 15, 1950 in Maryland) is an American mathematician and mathematics educator. She is the former Johnson Family University Professor and professor emerita of mathematics at DePauw University in Greencastle, Indiana. Her mathematical research was in functional analysis. Education and career. In 1972, Maycock received a B.A. degree in mathematics and economics from Wellesley College in Wellesley, Massachusetts. In 1974, she received an M.S. degree in mathematics and in 1986, a Ph.D. in mathematics, both from Purdue University in West Lafayette, Indiana. Her dissertation, "The Brauer Group of Graded Continuous Trace formula_0-algebras", was supervised by Jerome Alvin Kaminker. After teaching at Wellesley for two years following her degree, in 1988, Maycock joined the faculty at DePauw as an assistant professor, was promoted to associate professor in 1993 and to professor in 2001. She developed a series of workshops that brought faculty from across the nation to DePauw to learn innovative teaching styles. Maycock is known for her development of creative approaches to teaching abstract algebra. She developed a course that used the software package "Exploring Small Groups" to assist students in their mastery of the concepts of abstract algebra. She also introduced computer technology in courses on Euclidean and non-Euclidean geometry and analysis. Maycock has served on the Editorial Boards of the Mathematical Association of America (MAA) Notes and Spectrum series and the American Mathematical Society Committee on the Profession. She served on the AMS-MAA-SIAM Committee on Employment Opportunities from 2007 to 2014. In September 2005, Maycock joined the staff of the American Mathematical Society (AMS) as an associate executive director. In that role, she was responsible for AMS meetings and professional services, programs that served AMS members and supported and improved the public image of the profession. She remained in that position until 2015, when she was replaced by T. Christine Stevens. Maycock was on the steering committee of INGenIOuS (Investing in the Next Generation through Innovative and Outstanding Strategies), a project involving the National Science Foundation and mathematics and statistics professional societies. The project culminated with a workshop in 2013 that highlighted ways to increase the number of mathematics students who enter the workforce. Recognition. Maycock was selected by DePauw to receive a University Professor Award for 2003–2007. She was honored for her sustained excellence in teaching, service, and professional accomplishments and was named Johnson Family University Professor for this period. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C^*" } ]
https://en.wikipedia.org/wiki?curid=66437566
66445023
Perpetual futures
Financial agreement In finance, a perpetual futures contract, also known as a perpetual swap, is an agreement to non-optionally buy or sell an asset at an unspecified point in the future. Perpetual futures are cash-settled, and differ from regular futures in that they lack a pre-specified delivery date, and can thus be held indefinitely without the need to roll over contracts as they approach expiration. Payments are periodically exchanged between holders of the two sides of the contracts, long and short, with the direction and magnitude of the settlement based on the difference between the contract price and that of the underlying asset, as well as, if applicable, the difference in leverage between the two sides. Perpetual futures were first proposed by economist Robert Shiller in 1992, to enable derivatives markets for illiquid assets. However, perpetual futures markets have only developed for cryptocurrencies, with the specific "inverse perpetual" type first invented by Alexey Bragin in 2011 for the ICBIT exchange, followed by wider adoption in 2016 by other derivatives exchanges such as BitMEX and Kraken. Cryptocurrency perpetuals are characterised by the availability of high leverage, sometimes over 100 times the margin, and by the use of auto-deleveraging, which compels high-leverage, profitable traders to forfeit a portion of their profits to cover the losses of the other side during periods of high market volatility, as well as by insurance funds, pools of assets intended to prevent the need for auto-deleveraging. Prior to the spread of stablecoins in crypto markets, all perpetual futures traded on unlicensed crypto exchanges were inverse (non-linear) futures contracts, with the underlying asset being the US dollar and the price quoted in US dollars per bitcoin. The contract is called a non-linear inverse bitcoin future because of the added non-linearity in the calculation. This makes the contract useful as a financial instrument while enabling all accounting to be done in Bitcoin, unlike quanto futures, and it does not require the exchange to have a financial license, since accounting is not done in any fiduciary currency. Perpetuals serve the same function as contracts for difference (CFDs), allowing indefinite, leveraged tracking of an underlying asset or flow, but differ in that a single, uniform contract is traded on an exchange for all time-horizons, quantities of leverage, and positions, as opposed to separate contracts for separate quantities of leverage typically traded directly with a broker. History. Holding a futures contract indefinitely requires periodically rolling over the contract into a new one before the contract's expiry. However, given that futures prices typically differ from spot prices, repeatedly rolling over contracts creates significant basis risk, leading to inefficiencies when used for hedging or speculation. In an attempt to remedy these ills, the Chinese Gold and Silver Exchange of Hong Kong developed an "undated futures" market, wherein one-day futures would be rolled over automatically, with the difference between future and spot prices settled between the counterparties. In 1992, Robert Shiller proposed perpetual futures, alongside a method for generating asset-price indices using hedonic regression, accounting for unmeasured qualities by adding dummy variables that represent elements of the index, indicating the unique quality of each element, a form of repeated measures design. 
This was intended to permit the creation of derivatives markets for illiquid, infrequently-priced assets, such as single-family homes, as well as untraded indices and flows of income, such as labour costs or the consumer price index. In 2011, Alexey Bragin developed a solution to simplify leverage trading of cryptocurrencies for unlicensed exchanges. The product provided several improvements specific to the crypto sphere: an inverse structure (the asset itself is used as margin for trading) and a funding mechanism (to keep the perpetual futures price close to the underlying asset price, funding is paid on a regular basis to incentivize the price to move closer to the asset price). These innovations made it possible to set lot sizes to a convenient amount in USD, to keep contango or backwardation of the price under control, and to settle all operations in cryptocurrency, which simplified the legal side of crypto trading. The drawback of this solution is a non-linear PnL, which generates a specific convexity (the second derivative of a contract's value with respect to price), so that long positions are liquidated faster when the price falls than short positions are when the price rises. Mechanism. Perpetual futures for the value of a cash flow, dividend or index, as envisioned by Shiller, require the payment of a daily settlement, intended to mirror the value of the flow, from one side of the contract to the other. At any day "t", the dividend formula_0, paid from shorts to longs, is defined as: formula_1 where formula_2 is the price of the perpetual at day "t", formula_3 is the dividend paid to owners of the underlying asset on day "t", and formula_4 is the return on an alternative asset (expected to be a short-term, low-risk rate) between time "t" and "t+1".
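A minimal sketch of the daily settlement defined by formula_1 (Python; the function and variable names are ours, and the sample numbers are purely illustrative):

```python
def shiller_settlement(f_t, f_next, d_next, r_t):
    """Daily settlement s_{t+1} paid from shorts to longs:
    (f_{t+1} - f_t) + (d_{t+1} - r_t * f_t), i.e. the price change of the
    perpetual plus the dividend in excess of the return on an alternative
    (short-term, low-risk) asset."""
    return (f_next - f_t) + (d_next - r_t * f_t)

# Illustrative numbers: perpetual priced at 100 today and 101 tomorrow,
# a dividend of 0.30 on the underlying, and a 0.1% one-day risk-free return.
s = shiller_settlement(f_t=100.0, f_next=101.0, d_next=0.30, r_t=0.001)
print(s)   # 1.20: shorts pay longs; a negative value would flow the other way
```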
[ { "math_id": 0, "text": "s_{t+1}" }, { "math_id": 1, "text": "s_{t+1}=(f_{t+1}-f_t)+(d_{t+1}-r_tf_t)" }, { "math_id": 2, "text": "f_t" }, { "math_id": 3, "text": "d_t" }, { "math_id": 4, "text": "r_t" } ]
https://en.wikipedia.org/wiki?curid=66445023
664488
Reciprocal lattice
Fourier transform of a real-space lattice, important in solid-state physics The reciprocal lattice is a term associated with solids with translational symmetry, and plays a major role in many areas such as X-ray and electron diffraction as well as the energies of electrons in a solid. It emerges from the Fourier transform of the lattice associated with the arrangement of the atoms. The "direct lattice" or "real lattice" is a periodic function in physical space, such as a crystal system (usually a Bravais lattice). The reciprocal lattice exists in the mathematical space of spatial frequencies, known as reciprocal space or k-space, which is the dual of "physical space" considered as a vector space, and the reciprocal lattice is the sublattice of that space that is dual to the "direct lattice". In quantum physics, reciprocal space is closely related to momentum space according to the proportionality formula_0, where formula_1 is the momentum vector and formula_2 is the reduced Planck constant. The reciprocal lattice of a reciprocal lattice is equivalent to the original direct lattice, because the defining equations are symmetrical with respect to the vectors in real and reciprocal space. Mathematically, direct and reciprocal lattice vectors represent covariant and contravariant vectors, respectively. The reciprocal lattice is the set of all vectors formula_3 that are wavevectors of plane waves in the Fourier series of a spatial function whose periodicity is the same as that of a direct lattice formula_4. Each plane wave in this Fourier series has the same phase, or phases that differ by multiples of formula_5, at each direct lattice point (so essentially the same phase at all the direct lattice points). The Brillouin zone is a Wigner–Seitz cell of the reciprocal lattice. Wave-based description. Reciprocal space. Reciprocal space (also called k-space) provides a way to visualize the results of the Fourier transform of a spatial function. It is similar in role to the frequency domain arising from the Fourier transform of a time-dependent function; reciprocal space is a space over which the Fourier transform of a spatial function is represented at spatial frequencies or wavevectors of plane waves of the Fourier transform. The domain of the spatial function itself is often referred to as real space. In physical applications, such as crystallography, both real and reciprocal space will often each be two or three dimensional. Whereas the number of spatial dimensions of these two associated spaces will be the same, the spaces will differ in their quantity dimension, so that when the real space has the dimension length (L), its reciprocal space will be of inverse length, so L−1 (the reciprocal of length). Reciprocal space comes into play regarding waves, both classical and quantum mechanical. Because a sinusoidal plane wave with unit amplitude can be written as an oscillatory term formula_6, with initial phase formula_7, angular wavenumber formula_8 and angular frequency formula_9, it can be regarded as a function of both formula_8 and formula_10 (and the time-varying part as a function of both formula_9 and formula_11). This complementary role of formula_8 and formula_10 leads to their visualization within complementary spaces (the real space and the reciprocal space). The spatial periodicity of this wave is defined by its wavelength formula_12, where formula_13; hence the corresponding wavenumber in reciprocal space will be formula_14. 
In three dimensions, the corresponding plane wave term becomes formula_15, which simplifies to formula_16 at a fixed time formula_11, where formula_17 is the position vector of a point in real space and now formula_18 is the wavevector in the three-dimensional reciprocal space. (The magnitude of a wavevector is called the wavenumber.) The constant formula_19 is the phase of the wavefront (a plane of constant phase) through the origin formula_20 at time formula_11, and formula_21 is a unit vector perpendicular to this wavefront. The wavefronts with phases formula_22, where formula_23 represents any integer, comprise a set of parallel planes, equally spaced by the wavelength formula_12. Reciprocal lattice. In general, a geometric lattice is an infinite, regular array of vertices (points) in space, which can be modelled vectorially as a Bravais lattice. Some lattices may be skew, which means that their primary lines may not necessarily be at right angles. In reciprocal space, a reciprocal lattice is defined as the set of wavevectors formula_24 of plane waves in the Fourier series of any function formula_25 whose periodicity is compatible with that of an initial direct lattice in real space. Equivalently, a wavevector is a vertex of the reciprocal lattice if it corresponds to a plane wave in real space whose phase at any given time is the same (or differs by formula_26 for an integer formula_23) at every direct lattice vertex. One heuristic approach to constructing the reciprocal lattice in three dimensions is to write the position vector of a vertex of the direct lattice as formula_27, where the formula_28 are integers defining the vertex and the formula_29 are linearly independent primitive translation vectors (or, for short, primitive vectors) that are characteristic of the lattice. There is then a unique plane wave (up to a factor of negative one) whose wavefront through the origin formula_30 contains the direct lattice points at formula_31 and formula_32, and whose adjacent wavefront (whose phase differs by formula_5 or formula_33 from the wavefront passing through the origin) passes through formula_34. Its angular wavevector takes the form formula_35, where formula_36 is the unit vector perpendicular to these two adjacent wavefronts and the wavelength formula_37 must satisfy formula_38, which means that formula_37 equals the distance between the two wavefronts. Hence by construction formula_39 and formula_40. Cycling through the indices in turn, the same method yields three wavevectors formula_41 with formula_42, where the Kronecker delta formula_43 equals one when formula_44 and is zero otherwise. The formula_41 comprise a set of three primitive wavevectors or three primitive translation vectors for the reciprocal lattice, whose vertices each take the form formula_45, where the formula_46 are integers. The reciprocal lattice is also a Bravais lattice, as it is formed by integer combinations of the primitive vectors, which are formula_47, formula_48, and formula_49 in this case. Simple algebra then shows that, for any plane wave with a wavevector formula_50 on the reciprocal lattice, the total phase shift formula_51 between the origin and any point formula_52 on the direct lattice is a multiple of formula_5 (possibly zero, if the multiplier is zero), so the phase of the plane wave with formula_50 is essentially the same at every direct lattice vertex, in conformity with the reciprocal lattice definition above. 
(Although any wavevector formula_50 on the reciprocal lattice does always take this form, this derivation is motivational, rather than rigorous, because it has omitted the proof that no other possibilities exist.) The Brillouin zone is a primitive cell (more specifically a Wigner–Seitz cell) of the reciprocal lattice, which plays an important role in solid state physics due to Bloch's theorem. In pure mathematics, the dual space of linear forms and the dual lattice provide more abstract generalizations of reciprocal space and the reciprocal lattice. Mathematical description. Assuming a three-dimensional Bravais lattice and labelling each lattice vector (a vector indicating a lattice point) by the subscript formula_53 as a 3-tuple of integers, formula_54 where formula_55, with formula_56 the set of integers, and formula_29 is a primitive translation vector (or primitive vector for short). Taking a function formula_25, where formula_17 is a position vector from the origin formula_57 to any position, if formula_25 follows the periodicity of this lattice, e.g. the function describing the electronic density in an atomic crystal, it is useful to write formula_25 as a multi-dimensional Fourier series formula_58 where now the subscript formula_59, so this is a triple sum. As formula_25 follows the periodicity of the lattice, translating formula_17 by any lattice vector formula_4 gives the same value, hence formula_60 Expressing the above instead in terms of their Fourier series we have formula_61 Because equality of two Fourier series implies equality of their coefficients, formula_62, which only holds when formula_63 where formula_64 Mathematically, the reciprocal lattice is the set of all vectors formula_3 that are wavevectors of plane waves in the Fourier series of a spatial function whose periodicity is the same as that of the direct lattice, given as the set of all direct lattice point position vectors formula_4; the formula_3 satisfy this equality for all formula_4. Each plane wave in the Fourier series has the same phase (or a phase differing by a multiple of formula_5) at all the lattice points formula_4. As shown in the section multi-dimensional Fourier series, formula_3 can be chosen in the form of formula_65 where formula_42. With this form, the reciprocal lattice, as the set of all wavevectors formula_3 for the Fourier series of a spatial function whose periodicity follows formula_4, is itself a Bravais lattice, as it is formed by integer combinations of its own primitive translation vectors formula_66, and the reciprocal of the reciprocal lattice is the original lattice, which reveals the Pontryagin duality of their respective vector spaces. (There may be other forms of formula_3; any valid form results in the same reciprocal lattice.) Two dimensions. For an infinite two-dimensional lattice, defined by its primitive vectors formula_67, its reciprocal lattice can be determined by generating its two reciprocal primitive vectors through the following formulae, formula_68 where formula_69 is an integer and formula_70 Here formula_71 represents a 90 degree rotation matrix, i.e. a "q"uarter turn. The anti-clockwise rotation and the clockwise rotation can both be used to determine the reciprocal lattice: if formula_71 is the anti-clockwise rotation and formula_72 is the clockwise rotation, then formula_73 for all vectors formula_74. 
Thus, using the permutation formula_75, we obtain formula_76 Notably, in a 3D space this 2D reciprocal lattice is an infinitely extended set of Bragg rods, as described by Sung et al. Three dimensions. For an infinite three-dimensional lattice formula_54, defined by its primitive vectors formula_77 and the integer subscript formula_78, its reciprocal lattice formula_65 with the integer subscript formula_59 can be determined by generating its three reciprocal primitive vectors formula_66 formula_79 where formula_80 is the scalar triple product. These formula_66 are chosen to satisfy formula_42, the condition on primitive translation vectors for the reciprocal lattice derived in the heuristic approach above and in the section on the multi-dimensional Fourier series (other conditions are possible). This choice also satisfies the requirement of the reciprocal lattice formula_62 mathematically derived above. Using the column vector representation of the (reciprocal) primitive vectors, the formulae above can be rewritten using matrix inversion: formula_81 This method appeals to the definition, and allows generalization to arbitrary dimensions. The cross product formula dominates introductory materials on crystallography. The above definition is called the "physics" definition, as the factor of formula_82 comes naturally from the study of periodic structures. An essentially equivalent definition, the "crystallographer's" definition, comes from defining the reciprocal lattice formula_83, which changes the reciprocal primitive vectors to be formula_84 and so on for the other primitive vectors. The crystallographer's definition has the advantage that formula_47 is just the reciprocal of the magnitude of formula_34 in the direction of formula_85, dropping the factor of formula_82. This can simplify certain mathematical manipulations, and expresses reciprocal lattice dimensions in units of spatial frequency. It is a matter of taste which definition of the lattice is used, as long as the two are not mixed. formula_59 is conventionally written as formula_86 or formula_87, called Miller indices; formula_88 is replaced with formula_89, formula_90 is replaced with formula_8, and formula_91 is replaced with formula_92. Each lattice point formula_87 in the reciprocal lattice corresponds to a set of lattice planes formula_87 in the real space lattice. (A lattice plane is a plane crossing lattice points.) The direction of the reciprocal lattice vector corresponds to the normal to the real space planes. The magnitude of the reciprocal lattice vector formula_93 is given in reciprocal length and is equal to the reciprocal of the interplanar spacing of the real space planes. Higher dimensions. The formula for formula_23 dimensions can be derived assuming an formula_23-dimensional real vector space formula_94 with a basis formula_95 and an inner product formula_96. The reciprocal lattice vectors are uniquely determined by the formula formula_97. Using the permutation formula_98 they can be determined with the following formula: formula_99 Here, formula_100 is the volume form, formula_101 is the inverse of the vector space isomorphism formula_102 defined by formula_103, and formula_104 denotes the inner multiplication. 
One can verify that this formula is equivalent to the known formulas for the two- and three-dimensional case by using the following facts: in three dimensions, formula_105, and in two dimensions, formula_106, where formula_107 is the rotation by 90 degrees (just like the volume form, the angle assigned to a rotation depends on the choice of orientation). Reciprocal lattices of various crystals. Reciprocal lattices for the cubic crystal system are as follows. Simple cubic lattice. The simple cubic Bravais lattice, with cubic primitive cell of side formula_108, has for its reciprocal a simple cubic lattice with a cubic primitive cell of side formula_109 (or formula_110 in the crystallographer's definition). The cubic lattice is therefore said to be self-dual, having the same symmetry in reciprocal space as in real space. Face-centered cubic (FCC) lattice. The reciprocal lattice to an FCC lattice is the body-centered cubic (BCC) lattice, with a cube side of formula_111. Consider an FCC compound unit cell. Locate a primitive unit cell of the FCC; i.e., a unit cell with one lattice point. Now take one of the vertices of the primitive unit cell as the origin. Give the basis vectors of the real lattice. Then from the known formulae, one can calculate the basis vectors of the reciprocal lattice. These reciprocal lattice vectors of the FCC represent the basis vectors of a BCC real lattice. The basis vectors of a real BCC lattice and the reciprocal lattice of an FCC resemble each other in direction but not in magnitude. Body-centered cubic (BCC) lattice. The reciprocal lattice to a BCC lattice is the FCC lattice, with a cube side of formula_112. It can be proven that only the Bravais lattices which have 90 degrees between formula_113 (cubic, tetragonal, orthorhombic) have primitive translation vectors for the reciprocal lattice, formula_114, parallel to their real-space vectors. Simple hexagonal lattice. The reciprocal to a simple hexagonal Bravais lattice with lattice constants formula_115 and formula_116 is another simple hexagonal lattice with lattice constants formula_117 and formula_118 rotated through 90° about the "c" axis with respect to the direct lattice. The simple hexagonal lattice is therefore said to be self-dual, having the same symmetry in reciprocal space as in real space. Primitive translation vectors for this simple hexagonal Bravais lattice are formula_119 Arbitrary collection of atoms. One path to the reciprocal lattice of an arbitrary collection of atoms comes from the idea of scattered waves in the Fraunhofer (long-distance or lens back-focal-plane) limit as a Huygens-style sum of amplitudes from all points of scattering (in this case from each individual atom). This sum is denoted by the complex amplitude formula_120 in the equation below, because it is also the Fourier transform (as a function of spatial frequency or reciprocal distance) of an effective scattering potential in direct space: formula_121 Here g = q/(2π) is the scattering vector q in crystallographer units, "N" is the number of atoms, "f""j"[g] is the atomic scattering factor for atom "j" and scattering vector g, while r"j" is the vector position of atom "j". The Fourier phase depends on one's choice of coordinate origin. 
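As a rough numerical sketch of the sum F[g] just defined (the atom positions, the constant scattering factor, and the use of NumPy are assumptions of this illustration), one can evaluate it directly for a small cluster of atoms; when the atoms sit on a simple cubic lattice of unit spacing, |F[g]| is large exactly when g lies on the corresponding (crystallographer's-convention) reciprocal lattice of integer vectors:
```python
import numpy as np

def scattering_amplitude(g, positions, f_atomic=1.0):
    """Huygens-style sum F[g] = sum_j f_j * exp(2*pi*i * g . r_j) over all atoms.

    g         : scattering vector in crystallographer units (length-3 sequence)
    positions : array of atomic position vectors r_j, shape (N, 3)
    f_atomic  : atomic scattering factor, taken as a constant here for simplicity
    """
    phases = 2j * np.pi * (positions @ np.asarray(g, dtype=float))
    return np.sum(f_atomic * np.exp(phases))

# Example: a 4 x 4 x 4 block of a simple cubic lattice with unit spacing.
pts = np.array([[i, j, k] for i in range(4) for j in range(4) for k in range(4)],
               dtype=float)
print(abs(scattering_amplitude([1, 0, 0], pts)))    # ~64: g on the reciprocal lattice
print(abs(scattering_amplitude([0.5, 0, 0], pts)))  # ~0:  g off the reciprocal lattice
```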
For the special case of an infinite periodic crystal, the scattered amplitude "F" = "M" "Fh,k,ℓ" from "M" unit cells (as in the cases above) turns out to be non-zero only for integer values of formula_122, where formula_123 when there are "m" atoms (indexed by "j" = 1, ..., "m") inside the unit cell whose fractional lattice indices are respectively {"u""j", "v""j", "w""j"}. To consider effects due to finite crystal size, of course, a shape convolution for each point or the equation above for a finite lattice must be used instead. Whether the array of atoms is finite or infinite, one can also imagine an "intensity reciprocal lattice" I[g], which relates to the amplitude lattice F via the usual relation "I" = "F"*"F" where "F"* is the complex conjugate of F. Although Fourier transformation is reversible, this conversion to intensity discards everything except the second-moment information; that is, the phase information is lost. For the case of an arbitrary collection of atoms, the intensity reciprocal lattice is therefore: formula_124 Here r"jk" is the vector separation between atom "j" and atom "k". One can also use this to predict the effect of nano-crystallite shape, and subtle changes in beam orientation, on detected diffraction peaks even if in some directions the cluster is only one atom thick. On the down side, scattering calculations using the reciprocal lattice basically consider an incident plane wave. Thus, after a first look at reciprocal lattice (kinematic scattering) effects, beam broadening and multiple scattering (i.e. dynamical) effects may be important to consider as well. Generalization of a dual lattice. There are actually two versions in mathematics of the abstract dual lattice concept, for a given lattice "L" in a real vector space "V" of finite dimension. The first, which generalises the reciprocal lattice construction directly, uses Fourier analysis. It may be stated simply in terms of Pontryagin duality. The dual group "V"^ to "V" is again a real vector space, and its closed subgroup "L"^ dual to "L" turns out to be a lattice in "V"^. Therefore, "L"^ is the natural candidate for the "dual lattice", in a different vector space (of the same dimension). The other aspect is seen in the presence of a quadratic form "Q" on "V"; if it is non-degenerate it allows an identification of the dual space "V"* of "V" with "V". The relation of "V"* to "V" is not intrinsic; it depends on a choice of Haar measure (volume element) on "V". But given an identification of the two, which is in any case well-defined up to a scalar, the presence of "Q" allows one to speak of the dual lattice to "L" while staying within "V". In mathematics, the dual lattice of a given lattice "L" in an abelian locally compact topological group "G" is the subgroup "L"∗ of the dual group of "G" consisting of all continuous characters that are equal to one at each point of "L". In discrete mathematics, a lattice is a locally discrete set of points described by all integral linear combinations of dim "n" linearly independent vectors in R"n". The dual lattice is then defined by all points in the linear span of the original lattice (typically all of R"n") with the property that an integer results from the inner product with all elements of the original lattice. It follows that the dual of the dual lattice is the original lattice. Furthermore, if the matrix "B" has as its columns the linearly independent vectors that describe the lattice, then the matrix formula_125 has columns of vectors that describe the dual lattice. 
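The closing statement above lends itself to a quick numerical check. The sketch below (the FCC example basis and the use of NumPy are assumptions of this illustration) forms the matrix formula_125 from a lattice basis "B" whose columns are primitive vectors, verifies that each column of the result has inner product 1 with its partner column of "B" and 0 with the others, and notes that scaling by 2π reproduces the physics-convention reciprocal primitive vectors, here turning an FCC basis into a BCC one as described in the section on crystals:
```python
import numpy as np

# Columns of B are linearly independent vectors describing a lattice.  As an
# example basis (an assumption of this sketch), take the primitive vectors of a
# face-centered cubic lattice with conventional cube side a = 1.
a = 1.0
B = (a / 2) * np.array([[0.0, 1.0, 1.0],
                        [1.0, 0.0, 1.0],
                        [1.0, 1.0, 0.0]])      # symmetric, so rows equal columns

# Columns of A describe the dual lattice: A = B (B^T B)^(-1).
A = B @ np.linalg.inv(B.T @ B)

# Dual-basis property: the i-th dual vector has inner product 1 with the i-th
# lattice vector and 0 with the others, i.e. A^T B is the identity matrix.
assert np.allclose(A.T @ B, np.eye(3))

# Scaling by 2*pi gives the physics-convention reciprocal primitive vectors;
# for this FCC basis their columns are (2*pi/a) * (-1,1,1), (1,-1,1), (1,1,-1),
# which span a body-centered cubic lattice with cube side 4*pi/a.
print(np.round(2 * np.pi * A * a / (2 * np.pi), 3))
```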
References. 
[ { "math_id": 0, "text": "\\mathbf{p} = \\hbar \\mathbf{k}" }, { "math_id": 1, "text": "\\mathbf{p}" }, { "math_id": 2, "text": "\\hbar" }, { "math_id": 3, "text": "\\mathbf{G}_m" }, { "math_id": 4, "text": "\\mathbf{R}_n" }, { "math_id": 5, "text": "2\\pi" }, { "math_id": 6, "text": "\\cos(kx - \\omega t + \\varphi_0)" }, { "math_id": 7, "text": "\\varphi_0" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "\\omega" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "\\lambda" }, { "math_id": 13, "text": "k \\lambda = 2\\pi" }, { "math_id": 14, "text": "k = 2\\pi / \\lambda" }, { "math_id": 15, "text": "\\cos(\\mathbf{k} \\cdot \\mathbf{r} - \\omega t + \\varphi_0)" }, { "math_id": 16, "text": "\\cos(\\mathbf{k} \\cdot \\mathbf{r} + \\varphi)" }, { "math_id": 17, "text": "\\mathbf{r}" }, { "math_id": 18, "text": "\\mathbf{k}=2\\pi \\mathbf{e} / \\lambda" }, { "math_id": 19, "text": "\\varphi" }, { "math_id": 20, "text": "\\mathbf{r}=0" }, { "math_id": 21, "text": "\\mathbf{e}" }, { "math_id": 22, "text": "\\varphi + (2\\pi)n" }, { "math_id": 23, "text": "n" }, { "math_id": 24, "text": "\\mathbf{k}" }, { "math_id": 25, "text": "f(\\mathbf{r})" }, { "math_id": 26, "text": "(2\\pi)n" }, { "math_id": 27, "text": "\\mathbf{R} = n_1\\mathbf{a}_1 + n_2\\mathbf{a}_2 + n_3\\mathbf{a}_3" }, { "math_id": 28, "text": "n_i" }, { "math_id": 29, "text": "\\mathbf{a}_i" }, { "math_id": 30, "text": "\\mathbf{R} = 0" }, { "math_id": 31, "text": "\\mathbf{a}_2" }, { "math_id": 32, "text": "\\mathbf{a}_3" }, { "math_id": 33, "text": "-2\\pi" }, { "math_id": 34, "text": "\\mathbf{a}_1" }, { "math_id": 35, "text": "\\mathbf{b}_1 = 2\\pi \\mathbf{e}_1 / \\lambda_{1}" }, { "math_id": 36, "text": "\\mathbf{e}_1" }, { "math_id": 37, "text": "\\lambda_1" }, { "math_id": 38, "text": "\\lambda_1 = \\mathbf{a}_1 \\cdot \\mathbf{e}_1" }, { "math_id": 39, "text": "\\mathbf{a}_1 \\cdot \\mathbf{b}_1 = 2\\pi" }, { "math_id": 40, "text": "\\mathbf{a}_2 \\cdot \\mathbf{b}_1 = \\mathbf{a}_3 \\cdot \\mathbf{b}_1 = 0" }, { "math_id": 41, "text": "\\mathbf{b}_j" }, { "math_id": 42, "text": "\\mathbf{a}_i \\cdot \\mathbf{b}_j = 2\\pi \\, \\delta_{ij}" }, { "math_id": 43, "text": "\\delta_{ij}" }, { "math_id": 44, "text": "i=j" }, { "math_id": 45, "text": "\\mathbf{G} = m_1\\mathbf{b}_1 + m_2\\mathbf{b}_2 + m_3\\mathbf{b}_3" }, { "math_id": 46, "text": "m_j" }, { "math_id": 47, "text": "\\mathbf{b}_1" }, { "math_id": 48, "text": "\\mathbf{b}_2" }, { "math_id": 49, "text": "\\mathbf{b}_3" }, { "math_id": 50, "text": "\\mathbf{G}" }, { "math_id": 51, "text": "\\mathbf{G} \\cdot \\mathbf{R}" }, { "math_id": 52, "text": "\\mathbf{R}" }, { "math_id": 53, "text": "n = (n_1, n_2, n_3)" }, { "math_id": 54, "text": "\\mathbf{R}_n = n_1 \\mathbf{a}_1 + n_2 \\mathbf{a}_2 + n_3 \\mathbf{a}_3" }, { "math_id": 55, "text": "n_1, n_2, n_3 \\in \\mathbb{Z}" }, { "math_id": 56, "text": "\\mathbb{Z}" }, { "math_id": 57, "text": "\\mathbf{R}_n = 0" }, { "math_id": 58, "text": "\\sum_m f_m e^{i \\mathbf{G}_m \\cdot \\mathbf{r}} = f\\left(\\mathbf{r}\\right) " }, { "math_id": 59, "text": "m = (m_1, m_2, m_3)" }, { "math_id": 60, "text": "f(\\mathbf{r} + \\mathbf{R}_n) = f(\\mathbf{r})." 
}, { "math_id": 61, "text": "\\sum_m f_m e^{i \\mathbf{G}_m \\cdot \\mathbf{r}} = \n \\sum_m f_m e^{i \\mathbf{G}_m \\cdot (\\mathbf{r} + \\mathbf{R}_n)} =\n \\sum_m f_m e^{i \\mathbf{G}_m \\cdot \\mathbf{R}_n} \\, e^{i \\mathbf{G}_m \\cdot \\mathbf{r}}.\n" }, { "math_id": 62, "text": " e^{i \\mathbf{G}_m \\cdot \\mathbf{R}_n} = 1" }, { "math_id": 63, "text": "\\mathbf{G}_m \\cdot \\mathbf{R}_n = 2\\pi N" }, { "math_id": 64, "text": "N \\in \\mathbb{Z}." }, { "math_id": 65, "text": "\\mathbf{G}_m = m_1 \\mathbf{b}_1 + m_2 \\mathbf{b}_2 + m_3 \\mathbf{b}_3" }, { "math_id": 66, "text": "\\left(\\mathbf{b_{1}}, \\mathbf{b}_2, \\mathbf{b}_3\\right)" }, { "math_id": 67, "text": "\\left(\\mathbf{a}_1, \\mathbf{a}_2\\right)" }, { "math_id": 68, "text": "\\mathbf{G}_m = m_1 \\mathbf{b}_1 + m_2 \\mathbf{b}_2" }, { "math_id": 69, "text": "m_i" }, { "math_id": 70, "text": "\\begin{align}\n \\mathbf{b}_1 &= 2\\pi \\frac{-\\mathbf{Q} \\, \\mathbf{a}_2}{-\\mathbf{a}_1 \\cdot \\mathbf{Q} \\, \\mathbf{a}_2} = 2\\pi \\frac{ \\mathbf{Q} \\, \\mathbf{a}_2}{ \\mathbf{a}_1 \\cdot \\mathbf{Q} \\, \\mathbf{a}_2} \\\\[8pt]\n \\mathbf{b}_2 &= 2\\pi \\frac{ \\mathbf{Q} \\, \\mathbf{a}_1}{ \\mathbf{a}_2 \\cdot \\mathbf{Q} \\, \\mathbf{a}_1}\n\\end{align}" }, { "math_id": 71, "text": "\\mathbf{Q}" }, { "math_id": 72, "text": "\\mathbf{Q'}" }, { "math_id": 73, "text": "\\mathbf{Q}\\,\\mathbf{v}=-\\mathbf{Q'}\\,\\mathbf{v}" }, { "math_id": 74, "text": "\\mathbf{v}" }, { "math_id": 75, "text": "\\sigma = \\begin{pmatrix}\n 1 & 2 \\\\\n 2 & 1\n\\end{pmatrix}" }, { "math_id": 76, "text": "\n\\mathbf{b}_n = 2\\pi \\frac{ \\mathbf{Q} \\, \\mathbf{a}_{\\sigma(n)}}{ \\mathbf{a}_n \\cdot \\mathbf{Q} \\, \\mathbf{a}_{\\sigma(n)}}=2\\pi \\frac{ \\mathbf{Q}' \\, \\mathbf{a}_{\\sigma(n)}}{ \\mathbf{a}_n \\cdot \\mathbf{Q}' \\, \\mathbf{a}_{\\sigma(n)}}.\n" }, { "math_id": 77, "text": "\\left(\\mathbf{a_{1}}, \\mathbf{a}_2, \\mathbf{a}_3\\right)" }, { "math_id": 78, "text": "n = \\left( n_1, n_2, n_3 \\right)" }, { "math_id": 79, "text": "\\begin{align}\n \\mathbf{b}_1 &= \\frac{2\\pi}{V} \\ \\mathbf{a}_2 \\times \\mathbf{a}_3 \\\\[8pt]\n \\mathbf{b}_2 &= \\frac{2\\pi}{V} \\ \\mathbf{a}_3 \\times \\mathbf{a}_1 \\\\[8pt]\n \\mathbf{b}_3 &= \\frac{2\\pi}{V} \\ \\mathbf{a}_1 \\times \\mathbf{a}_2 \n\\end{align}" }, { "math_id": 80, "text": "V = \\mathbf{a}_1 \\cdot \\left(\\mathbf{a}_2 \\times \\mathbf{a}_3\\right) = \\mathbf{a}_2 \\cdot \\left(\\mathbf{a}_3 \\times \\mathbf{a}_1\\right) = \\mathbf{a}_3 \\cdot \\left(\\mathbf{a}_1 \\times \\mathbf{a}_2\\right)" }, { "math_id": 81, "text": "\\left[\\mathbf{b}_1\\mathbf{b}_2\\mathbf{b}_3\\right]^\\mathsf{T} = 2\\pi\\left[\\mathbf{a}_1\\mathbf{a}_2\\mathbf{a}_3\\right]^{-1}." 
}, { "math_id": 82, "text": "2 \\pi" }, { "math_id": 83, "text": "\\mathbf{K}_m = \\mathbf{G}_m / 2\\pi" }, { "math_id": 84, "text": "\n \\mathbf{b}_1 = \\frac{\\mathbf{a}_2 \\times \\mathbf{a}_3}{\\mathbf{a}_1 \\cdot \\left(\\mathbf{a}_2 \\times \\mathbf{a}_3\\right)} \n" }, { "math_id": 85, "text": "\\mathbf{a}_2 \\times \\mathbf{a}_3" }, { "math_id": 86, "text": "(h, k, \\ell)" }, { "math_id": 87, "text": "(hk\\ell)" }, { "math_id": 88, "text": "m_1" }, { "math_id": 89, "text": "h" }, { "math_id": 90, "text": "m_2" }, { "math_id": 91, "text": "m_3" }, { "math_id": 92, "text": "\\ell" }, { "math_id": 93, "text": "\\mathbf{K}_m" }, { "math_id": 94, "text": "V" }, { "math_id": 95, "text": "(\\mathbf{a}_1,\\ldots,\\mathbf{a}_n)" }, { "math_id": 96, "text": "g\\colon V\\times V\\to\\mathbf{R}" }, { "math_id": 97, "text": "g(\\mathbf{a}_i,\\mathbf{b}_j)=2\\pi\\delta_{ij}" }, { "math_id": 98, "text": "\\sigma = \\begin{pmatrix}\n 1 & 2 & \\cdots &n\\\\\n 2 & 3 & \\cdots &1\n\\end{pmatrix}," }, { "math_id": 99, "text": "\n \\mathbf{b}_i = 2\\pi\\frac{\\varepsilon_{\\sigma^1i\\ldots\\sigma^ni}}{\\omega(\\mathbf{a}_1,\\ldots,\\mathbf{a}_n)}g^{-1}(\\mathbf{a}_{\\sigma^{n-1}i}\\,\\lrcorner\\ldots\\mathbf{a}_{\\sigma^1i}\\,\\lrcorner\\,\\omega)\\in V\n" }, { "math_id": 100, "text": "\\omega\\colon V^n \\to \\mathbf{R}" }, { "math_id": 101, "text": "g^{-1}" }, { "math_id": 102, "text": "\\hat{g}\\colon V \\to V^*" }, { "math_id": 103, "text": "\\hat{g}(v)(w) = g(v,w)" }, { "math_id": 104, "text": "\\lrcorner" }, { "math_id": 105, "text": "\\omega(u,v,w) = g(u \\times v, w)" }, { "math_id": 106, "text": "\\omega(v,w) = g(Rv,w)" }, { "math_id": 107, "text": "R \\in \\text{SO}(2) \\subset L(V,V)" }, { "math_id": 108, "text": "a" }, { "math_id": 109, "text": "\\frac{2\\pi}{a}" }, { "math_id": 110, "text": " \\frac{1}{a}" }, { "math_id": 111, "text": " \\frac{4\\pi}{a}" }, { "math_id": 112, "text": " 4\\pi/a" }, { "math_id": 113, "text": "\\left(\\mathbf{a}_1, \\mathbf{a}_2, \\mathbf{a}_3\\right)" }, { "math_id": 114, "text": "\\left(\\mathbf{b}_1, \\mathbf{b}_2, \\mathbf{b}_3\\right)" }, { "math_id": 115, "text": " a" }, { "math_id": 116, "text": " c" }, { "math_id": 117, "text": " 2\\pi/c" }, { "math_id": 118, "text": " 4\\pi/(a\\sqrt3)" }, { "math_id": 119, "text": "\n\\begin{align}\na_1 & = \\frac{\\sqrt{3}}{2} a \\hat{x} + \\frac{1}{2} a \\hat{y}, \\\\[8pt]\na_2 & = - \\frac{\\sqrt{3}}{2} a \\hat{x} + \\frac{1}{2}a\\hat{y}, \\\\[8pt]\na_3 & = c \\hat{z}.\n\\end{align} " }, { "math_id": 120, "text": "F" }, { "math_id": 121, "text": "F[\\vec{g}] = \\sum_{j=1}^N f_j\\!\\left[\\vec{g}\\right] e^{2 \\pi i \\vec{g} \\cdot \\vec{r}_j}." }, { "math_id": 122, "text": "(h,k,\\ell)" }, { "math_id": 123, "text": "F_{h,k,\\ell} = \\sum_{j=1}^m f_j\\left[g_{h,k,\\ell}\\right] e^{2\\pi i \\left(h u_j + k v_j + \\ell w_j\\right)}" }, { "math_id": 124, "text": "I[\\vec{g}] = \\sum_{j=1}^N \\sum_{k=1}^N f_j \\left[\\vec{g}\\right] f_k \\left[\\vec{g}\\right] e^{2\\pi i \\vec{g} \\cdot \\vec{r}_{\\!\\!\\;jk}}." }, { "math_id": 125, "text": "A = B\\left(B^\\mathsf{T} B\\right)^{-1}" } ]
https://en.wikipedia.org/wiki?curid=664488
664497
Parallel (geometry)
Relation used in geometry In geometry, parallel lines are coplanar infinite straight lines that do not intersect at any point. Parallel planes are planes in the same three-dimensional space that never meet. "Parallel curves" are curves that do not touch each other or intersect and keep a fixed minimum distance. In three-dimensional Euclidean space, a line and a plane that do not share a point are also said to be parallel. However, two noncoplanar lines are called "skew lines". Line segments and Euclidean vectors are parallel if they have the same direction (not necessarily the same length). Parallel lines are the subject of Euclid's parallel postulate. Parallelism is primarily a property of affine geometries, and Euclidean geometry is a special instance of this type of geometry. In some other geometries, such as hyperbolic geometry, lines can have analogous properties that are referred to as parallelism. Symbol. The parallel symbol is formula_0. For example, formula_1 indicates that line "AB" is parallel to line "CD". In the Unicode character set, the "parallel" and "not parallel" signs have codepoints U+2225 (∥) and U+2226 (∦), respectively. In addition, U+22D5 (⋕) represents the relation "equal and parallel to". The same symbol is used for a binary function in electrical engineering (the parallel operator). It is distinct from the double-vertical-line brackets, U+2016 (‖), that indicate a norm (e.g. formula_2), as well as from the logical or operator (codice_0) in several programming languages. Euclidean parallelism. Two lines in a plane. Conditions for parallelism. Given parallel straight lines "l" and "m" in Euclidean space, the following properties are equivalent: (1) every point on line "m" is located at exactly the same (minimum) distance from line "l" (equidistant lines); (2) line "m" is in the same plane as line "l" but does not intersect "l" (recall that lines extend to infinity in either direction); (3) when lines "m" and "l" are both intersected by a third straight line (a transversal) in the same plane, the corresponding angles of intersection with the transversal are congruent. Since these are equivalent properties, any one of them could be taken as the definition of parallel lines in Euclidean space, but the first and third properties involve measurement, and so are "more complicated" than the second. Thus, the second property is the one usually chosen as the defining property of parallel lines in Euclidean geometry. The other properties are then consequences of Euclid's Parallel Postulate. History. The definition of parallel lines as a pair of straight lines in a plane which do not meet appears as Definition 23 in Book I of Euclid's Elements. Alternative definitions were discussed by other Greeks, often as part of an attempt to prove the parallel postulate. Proclus attributes a definition of parallel lines as equidistant lines to Posidonius and quotes Geminus in a similar vein. Simplicius also mentions Posidonius' definition as well as its modification by the philosopher Aganis. At the end of the nineteenth century, in England, Euclid's Elements was still the standard textbook in secondary schools. The traditional treatment of geometry was being pressured to change by the new developments in projective geometry and non-Euclidean geometry, so several new textbooks for the teaching of geometry were written at this time. A major difference between these reform texts, both between themselves and between them and Euclid, is the treatment of parallel lines. These reform texts were not without their critics, and one of them, Charles Dodgson (a.k.a. Lewis Carroll), wrote a play, "Euclid and His Modern Rivals", in which these texts are lambasted. One of the early reform textbooks was James Maurice Wilson's "Elementary Geometry" of 1868. Wilson based his definition of parallel lines on the primitive notion of "direction". According to Wilhelm Killing, the idea may be traced back to Leibniz. 
Wilson, without defining direction since it is a primitive, uses the term in other definitions such as his sixth definition, "Two straight lines that meet one another have different directions, and the difference of their directions is the "angle" between them." In definition 15 he introduces parallel lines in this way: "Straight lines which have the "same direction", but are not parts of the same straight line, are called "parallel lines"." Augustus De Morgan reviewed this text and declared it a failure, primarily on the basis of this definition and the way Wilson used it to prove things about parallel lines. Dodgson also devotes a large section of his play (Act II, Scene VI § 1) to denouncing Wilson's treatment of parallels. Wilson edited this concept out of the third and higher editions of his text. Other properties, proposed by other reformers, used as replacements for the definition of parallel lines, did not fare much better. The main difficulty, as pointed out by Dodgson, was that to use them in this way required additional axioms to be added to the system. The equidistant line definition of Posidonius, expounded by Francis Cuthbertson in his 1874 text "Euclidean Geometry", suffers from the problem that the points that are found at a fixed given distance on one side of a straight line must be shown to form a straight line. This cannot be proved and must be assumed to be true. The corresponding-angles property of a transversal, used by W. D. Cooley in his 1860 text "The Elements of Geometry, simplified and explained", requires a proof of the fact that if one transversal meets a pair of lines in congruent corresponding angles then all transversals must do so. Again, a new axiom is needed to justify this statement. Construction. The three properties above lead to three different methods of construction of parallel lines. Distance between two parallel lines. Because parallel lines in a Euclidean plane are equidistant, there is a unique distance between the two parallel lines. Given the equations of two non-vertical, non-horizontal parallel lines, formula_3 formula_4 the distance between the two lines can be found by locating two points (one on each line) that lie on a common perpendicular to the parallel lines and calculating the distance between them. Since the lines have slope "m", a common perpendicular would have slope −1/"m" and we can take the line with equation "y" = −"x"/"m" as a common perpendicular. Solve the linear systems formula_5 and formula_6 to get the coordinates of the points. The solutions to the linear systems are the points formula_7 and formula_8 These formulas still give the correct point coordinates even if the parallel lines are horizontal (i.e., "m" = 0). The distance between the points is formula_9 which reduces to formula_10 When the lines are given by the general form of the equation of a line (horizontal and vertical lines are included): formula_11 formula_12 their distance can be expressed as formula_13 (a short computational sketch of this formula is given after the next subsection). Two lines in three-dimensional space. Two lines in the same three-dimensional space that do not intersect need not be parallel. Only if they are in a common plane are they called parallel; otherwise they are called skew lines. Two distinct lines "l" and "m" in three-dimensional space are parallel if and only if the distance from a point "P" on line "m" to the nearest point on line "l" is independent of the location of "P" on line "m". This never holds for skew lines. 
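The distance formula above and the parallel-versus-skew distinction just described can both be illustrated with a short computation. The sketch below is only an illustration (the example coefficients, the helper-function names, and the use of NumPy are assumptions, not taken from the text); the three-dimensional test uses the standard equivalent criterion that two distinct lines are parallel when their direction vectors are proportional, and skew when the two lines are not coplanar:
```python
import numpy as np

def distance_between_parallel_lines(a, b, c1, c2):
    """Distance between the parallel lines ax + by + c1 = 0 and ax + by + c2 = 0."""
    return abs(c2 - c1) / np.hypot(a, b)

def classify_distinct_lines_3d(p1, d1, p2, d2, tol=1e-9):
    """Classify two distinct lines r = p + t*d in 3-space as 'parallel',
    'intersecting', or 'skew'.  Parallel: proportional direction vectors;
    otherwise the scalar triple product decides whether the lines are
    coplanar (intersecting) or not (skew)."""
    p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
    if np.linalg.norm(np.cross(d1, d2)) < tol:
        return "parallel"
    triple = np.dot(p2 - p1, np.cross(d1, d2))
    return "intersecting" if abs(triple) < tol else "skew"

print(distance_between_parallel_lines(3, 4, 2, -8))                            # 2.0
print(classify_distinct_lines_3d([0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 0, 0]))  # parallel
print(classify_distinct_lines_3d([0, 0, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0]))  # skew
```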
A line "m" and a plane "q" in three-dimensional space, the line not lying in that plane, are parallel if and only if they do not intersect. Equivalently, they are parallel if and only if the distance from a point "P" on line "m" to the nearest point in plane "q" is independent of the location of "P" on line "m". Two planes. Similar to the fact that parallel lines must be located in the same plane, parallel planes must be situated in the same three-dimensional space and contain no point in common. Two distinct planes "q" and "r" are parallel if and only if the distance from a point "P" in plane "q" to the nearest point in plane "r" is independent of the location of "P" in plane "q". This will never hold if the two planes are not in the same three-dimensional space. In non-Euclidean geometry. In non-Euclidean geometry, the concept of a straight line is replaced by the more general concept of a geodesic, a curve which is locally straight with respect to the metric (definition of distance) on a Riemannian manifold, a surface (or higher-dimensional space) which may itself be curved. In general relativity, particles not under the influence of external forces follow geodesics in spacetime, a four-dimensional manifold with 3 spatial dimensions and 1 time dimension. In non-Euclidean geometry (elliptic or hyperbolic geometry) the three Euclidean properties mentioned above are not equivalent and only the second one (Line m is in the same plane as line l but does not intersect l) is useful in non-Euclidean geometries, since it involves no measurements. In general geometry the three properties above give three different types of curves, equidistant curves, parallel geodesics and geodesics sharing a common perpendicular, respectively. Hyperbolic geometry. While in Euclidean geometry two geodesics can either intersect or be parallel, in hyperbolic geometry, there are three possibilities. Two geodesics belonging to the same plane can either be: In the literature "ultra parallel" geodesics are often called "non-intersecting". "Geodesics intersecting at infinity" are called "limiting parallel". As in the illustration through a point "a" not on line "l" there are two limiting parallel lines, one for each direction ideal point of line l. They separate the lines intersecting line l and those that are ultra parallel to line "l". Ultra parallel lines have single common perpendicular (ultraparallel theorem), and diverge on both sides of this common perpendicular. Spherical or elliptic geometry. In spherical geometry, all geodesics are great circles. Great circles divide the sphere in two equal hemispheres and all great circles intersect each other. Thus, there are no parallel geodesics to a given geodesic, as all geodesics intersect. Equidistant curves on the sphere are called parallels of latitude analogous to the latitude lines on a globe. Parallels of latitude can be generated by the intersection of the sphere with a plane parallel to a plane through the center of the sphere. Reflexive variant. If "l, m, n" are three distinct lines, then formula_14 In this case, parallelism is a transitive relation. However, in case "l" = "n", the superimposed lines are "not" considered parallel in Euclidean geometry. The binary relation between parallel lines is evidently a symmetric relation. According to Euclid's tenets, parallelism is "not" a reflexive relation and thus "fails" to be an equivalence relation. 
Nevertheless, in affine geometry a pencil of parallel lines is taken as an equivalence class in the set of lines, where parallelism is an equivalence relation. To this end, Emil Artin (1957) adopted a definition of parallelism where two lines are parallel if they have all or none of their points in common. Then a line "is" parallel to itself, so that the reflexive and transitive properties belong to this type of parallelism, creating an equivalence relation on the set of lines. In the study of incidence geometry, this variant of parallelism is used in the affine plane. Notes. Heath's authoritative translation of Euclid's "Elements" (3 vols.), with extensive historical research and detailed commentary throughout the text.
[ { "math_id": 0, "text": "\\parallel" }, { "math_id": 1, "text": "AB \\parallel CD" }, { "math_id": 2, "text": "\\|x\\|" }, { "math_id": 3, "text": "y = mx+b_1\\," }, { "math_id": 4, "text": "y = mx+b_2\\,," }, { "math_id": 5, "text": "\\begin{cases}\ny = mx+b_1 \\\\\ny = -x/m\n\\end{cases}" }, { "math_id": 6, "text": "\\begin{cases}\ny = mx+b_2 \\\\\ny = -x/m\n\\end{cases}" }, { "math_id": 7, "text": "\\left( x_1,y_1 \\right)\\ = \\left( \\frac{-b_1m}{m^2+1},\\frac{b_1}{m^2+1} \\right)\\," }, { "math_id": 8, "text": "\\left( x_2,y_2 \\right)\\ = \\left( \\frac{-b_2m}{m^2+1},\\frac{b_2}{m^2+1} \\right)." }, { "math_id": 9, "text": "d = \\sqrt{\\left(x_2-x_1\\right)^2 + \\left(y_2-y_1\\right)^2} = \\sqrt{\\left(\\frac{b_1m-b_2m}{m^2+1}\\right)^2 + \\left(\\frac{b_2-b_1}{m^2+1}\\right)^2}\\,," }, { "math_id": 10, "text": "d = \\frac{|b_2-b_1|}{\\sqrt{m^2+1}}\\,." }, { "math_id": 11, "text": "ax+by+c_1=0\\," }, { "math_id": 12, "text": "ax+by+c_2=0,\\," }, { "math_id": 13, "text": "d = \\frac{|c_2-c_1|}{\\sqrt {a^2+b^2}}." }, { "math_id": 14, "text": "l \\parallel m \\ \\land \\ m \\parallel n \\ \\implies \\ l \\parallel n ." } ]
https://en.wikipedia.org/wiki?curid=664497