57194579
Geometric mean filter
The geometric mean filter is an image filtering process meant to smooth and reduce noise in an image. It is based on the mathematical geometric mean. The output image G(x,y) of a geometric mean filter is given by formula_0 where S(x,y) is the original image, and the filter mask is m by n pixels. Each pixel of the output image at point (x,y) is given by the product of the pixels within the geometric mean mask raised to the power of 1/mn. For example, using a mask size of 3 by 3, pixel (x,y) in the output image will be the product of S(x,y) and all 8 of its surrounding pixels raised to the 1/9th power. Using a 3-by-3 window containing the values 5, 16, 22, 6, 3, 18, 12, 3 and 15, with pixel (x,y) at the center, gives the result: (5*16*22*6*3*18*12*3*15)^(1/9) = 8.77. Application. The geometric mean filter is most widely used to filter out Gaussian noise. In general it will help smooth the image with less data loss than an arithmetic mean filter. Code example. The following code shows the application of a geometric mean filter to an image using MATLAB.

% Applies a geometric mean filter to the image input_noise, which has added Gaussian noise
[m, n] = size(input_noise);
output = zeros(m, n);    % output image, initialised with placeholder zeros
for i = 2 : m-1          % loop over the interior pixels of the image
    for j = 2 : n-1
        % compute the geometric mean of the 3x3 window around pixel (i, j)
        p = input_noise(i - 1, j - 1);
        q = input_noise(i - 1, j);
        r = input_noise(i - 1, j + 1);
        s = input_noise(i, j - 1);
        t = input_noise(i, j);
        u = input_noise(i, j + 1);
        v = input_noise(i + 1, j - 1);
        w = input_noise(i + 1, j);
        x = input_noise(i + 1, j + 1);
        output(i, j) = (p * q * r * s * t * u * v * w * x) ^ (1 / 9);
    end
end

References.
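A quick cross-check of the worked 3-by-3 example above, as a minimal Python/NumPy sketch (the array name is mine and the pixel layout is immaterial here, since only the product matters): the geometric mean can be evaluated either directly as the 9th root of the product or, more robustly for large windows, as the exponential of the mean of the logarithms.

import numpy as np

# The nine pixel values from the worked 3x3 example above.
window = np.array([5, 16, 22, 6, 3, 18, 12, 3, 15], dtype=float)

# Direct definition: product of the window raised to the power 1/(m*n).
print(np.prod(window) ** (1 / window.size))      # ~8.77

# Equivalent log-domain form, which avoids overflow for large windows.
print(np.exp(np.log(window).mean()))             # ~8.77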
[ { "math_id": 0, "text": "G(x,y) = \\left[\\prod_{i,j\\varepsilon S}S(i,j)\\right]^{{1\\over mn}}" } ]
https://en.wikipedia.org/wiki?curid=57194579
57195675
Emmons problem
In combustion, the Emmons problem describes the flame structure which develops inside the boundary layer created by a flowing oxidizer stream over flat fuel (solid or liquid) surfaces. The problem was first studied by Howard Wilson Emmons in 1956. The flame is of the diffusion flame type because it separates fuel and oxygen by a flame sheet. The corresponding problem in a quiescent oxidizer environment is known as the Clarke–Riley diffusion flame. Burning rate. Consider a semi-infinite fuel surface with its leading edge located at formula_0 and let the free stream oxidizer velocity be formula_1. Through the solution formula_2 of the Blasius equation formula_3 (formula_4 is the self-similar Howarth–Dorodnitsyn coordinate), the mass flux formula_5 (formula_6 is the density and formula_7 is the vertical velocity) in the vertical direction can be obtained: formula_8 where formula_9 In deriving this, it is assumed that the density formula_10 and the viscosity formula_11, where formula_12 is the temperature. The subscript formula_13 describes the values far away from the fuel surface. The main interest in the combustion process is the fuel burning rate, which is obtained by evaluating formula_5 at formula_14, as given below: formula_15 References.
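The burning-rate result above rests on the Blasius-type similarity equation formula_3. As a rough illustration of how such an equation is solved numerically, the Python sketch below uses a shooting method. Note the assumptions: it applies the classical no-blowing boundary conditions f(0) = f'(0) = 0 and f'(infinity) = 1, whereas the Emmons problem itself has a nonzero wall value f(0) set by the burning rate, so this only demonstrates the numerical technique, not the full burning-rate solution; all function names and tolerances are mine.

import numpy as np
from scipy.integrate import solve_ivp

def blasius_rhs(eta, y):
    # y = [f, f', f''] for the similarity equation f''' + f f'' = 0
    f, fp, fpp = y
    return [fp, fpp, -f * fpp]

def far_field_residual(fpp0, eta_max=10.0):
    # Integrate outward from the wall and measure how far f'(eta_max) is from 1.
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Bisection on the unknown wall curvature f''(0) (the shooting parameter).
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if far_field_residual(mid) > 0.0:
        hi = mid
    else:
        lo = mid

print(f"f''(0) = {0.5 * (lo + hi):.5f}")   # about 0.4696 for this normalisation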
[ { "math_id": 0, "text": "x=0" }, { "math_id": 1, "text": "U_\\infty" }, { "math_id": 2, "text": "f(\\eta)" }, { "math_id": 3, "text": "f'''+ff''=0" }, { "math_id": 4, "text": "\\eta" }, { "math_id": 5, "text": "\\rho v" }, { "math_id": 6, "text": "\\rho" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "\\rho v = \\rho_\\infty \\mu_\\infty \\sqrt{\\frac{2\\xi}{U_\\infty}} \\left(f'\\rho \\int_0^\\eta \\rho^{-1} \\ d\\eta - f\\right)," }, { "math_id": 9, "text": "\\xi = \\int_0^x \\rho_\\infty \\mu_\\infty \\ dx." }, { "math_id": 10, "text": "\\rho \\sim 1/T" }, { "math_id": 11, "text": "\\mu \\sim T" }, { "math_id": 12, "text": "T" }, { "math_id": 13, "text": "\\infty" }, { "math_id": 14, "text": "\\eta=0" }, { "math_id": 15, "text": "\\rho_o v_o = \\rho_\\infty \\mu_\\infty \\left[\\frac{2U_\\infty}{\\mu_\\infty^2}\\int_0^x \\rho_\\infty \\mu_\\infty \\ dx\\right]^{-1/2} [-f(0)]." } ]
https://en.wikipedia.org/wiki?curid=57195675
57196745
Burke–Schumann limit
In combustion, the Burke–Schumann limit, or large Damköhler number limit, is the limit of infinitely fast chemistry (in other words, infinite Damköhler number), named after S.P. Burke and T.E.W. Schumann, due to their pioneering work on the Burke–Schumann flame. One important conclusion of infinitely fast chemistry is the non-coexistence of fuel and oxidizer except in a thin reaction sheet. The inner structure of the reaction sheet is described by Liñán's equation. Limit description. In typical non-premixed combustion (fuel and oxidizer are separated initially), mixing of fuel and oxidizer takes place on the mechanical time scale formula_0 dictated by the convection/diffusion terms (the relative importance between convection and diffusion depends on the Reynolds number). Similarly, the chemical reaction takes a certain amount of time formula_1 to consume the reactants. For one-step irreversible chemistry with an Arrhenius rate, this chemical time is given by formula_2 where B is the pre-exponential factor, E is the activation energy, R is the universal gas constant and T is the temperature. Similarly, one can define formula_0 appropriate for a particular flow configuration. The Damköhler number is then formula_3 Due to the large activation energy, the Damköhler number at the unburnt gas temperature formula_4 is formula_5, because formula_6. On the other hand, the shortest chemical time is found at the flame (with burnt gas temperature formula_7), leading to formula_8. Regardless of the Reynolds number, the limit formula_9 guarantees that the chemical reaction dominates over the other terms. A typical conservation equation for a scalar formula_10 (species concentration or energy) takes the following form, formula_11 where formula_12 is the convective-diffusive operator and formula_13 are the mass fractions of fuel and oxidizer, respectively. Taking the limit formula_14 in the above equation, we find that formula_15 i.e., fuel and oxidizer cannot coexist, since far away from the reaction sheet, only one of the reactants is available (non-premixed). On the fuel side of the reaction sheet, formula_16 and on the oxidizer side, formula_17. Fuel and oxygen can coexist (with very small concentrations) only in a thin reaction sheet, where formula_18 (diffusive transport is comparable to reaction in this zone). In this thin reaction sheet, both fuel and oxygen are consumed and nothing leaks to the other side of the sheet. Due to the instantaneous consumption of fuel and oxidizer, the normal gradients of the scalars exhibit discontinuities at the reaction sheet. References.
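As a small numerical illustration of why formula_5 and formula_8 hold simultaneously, the sketch below evaluates the Damköhler number formula_3 at the unburnt and burnt temperatures. All parameter values (B, E/R, t_m, T_u, T_b) are assumed, order-of-magnitude numbers chosen only so that formula_6 is satisfied; they do not come from the article.

import numpy as np

# Assumed, order-of-magnitude parameters (not from the article):
B = 1.0e12         # pre-exponential factor, 1/s
E_over_R = 3.0e4   # activation temperature E/R, K  (so E/(R*T_u) = 100 at T_u = 300 K)
t_m = 1.0e-3       # mechanical (mixing) time scale, s
T_u, T_b = 300.0, 2000.0   # unburnt and burnt gas temperatures, K

def damkohler(T):
    # Da = t_m / t_c = t_m * B * exp(-E/(R*T))
    return t_m * B * np.exp(-E_over_R / T)

print(f"Da at T_u = {damkohler(T_u):.2e}  (<< 1: chemistry is effectively frozen in the fresh mixture)")
print(f"Da at T_b = {damkohler(T_b):.2e}  (>> 1: chemistry is fast at the flame)")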
[ { "math_id": 0, "text": "t_m" }, { "math_id": 1, "text": "t_c" }, { "math_id": 2, "text": "t_c = \\left(B e^{\\frac E {RT}}\\right)^{-1}" }, { "math_id": 3, "text": "\\mathrm{Da} = \\frac{t_m}{t_c} = t_m B e^{-\\frac E {RT}}." }, { "math_id": 4, "text": "T_u" }, { "math_id": 5, "text": "\\mathrm{Da}_u \\ll 1" }, { "math_id": 6, "text": "\\frac E {RT_u} \\sim 100" }, { "math_id": 7, "text": "T_b" }, { "math_id": 8, "text": "\\mathrm{Da}_b \\gg 1" }, { "math_id": 9, "text": "\\mathrm{Da}_b\\rightarrow \\infty" }, { "math_id": 10, "text": "\\psi" }, { "math_id": 11, "text": "\\mathcal{L}(\\psi) = \\mathrm{Da}_b Y_F Y_O e^{-\\frac E {RT}+\\frac E {RT_b}}" }, { "math_id": 12, "text": "\\mathcal{L}" }, { "math_id": 13, "text": "Y_F\\ \\&\\ Y_O" }, { "math_id": 14, "text": "\\mathrm{Da}_b\\rightarrow\\infty" }, { "math_id": 15, "text": "Y_F Y_O =0," }, { "math_id": 16, "text": "Y_O=0" }, { "math_id": 17, "text": "Y_F=0" }, { "math_id": 18, "text": "\\mathrm{Da}\\sim O(1)" } ]
https://en.wikipedia.org/wiki?curid=57196745
57197043
Prime omega function
Number of prime factors of a natural number n In number theory, the prime omega functions formula_0 and formula_1 count the number of prime factors of a natural number formula_2 Thereby formula_0 (little omega) counts each "distinct" prime factor, whereas the related function formula_1 (big omega) counts the "total" number of prime factors of formula_3 honoring their multiplicity (see arithmetic function). That is, if we have a prime factorization of formula_4 of the form formula_5 for distinct primes formula_6 (formula_7), then the respective prime omega functions are given by formula_8 and formula_9. These prime factor counting functions have many important number theoretic relations. Properties and relations. The function formula_0 is additive and formula_1 is completely additive. formula_10 If formula_11 divides formula_4 at least once we count it only once, e.g. formula_12. formula_13 If formula_11 divides formula_4 formula_14 times then we count the exponents, e.g. formula_15. As usual, formula_16 means formula_17 is the exact power of formula_11 dividing formula_18. formula_19 If formula_20 then formula_4 is squarefree and related to the Möbius function by formula_21 If formula_22 then formula_23 is a prime power, and if formula_24 then formula_4 is a prime number. It is known that the average order of the divisor function satisfies formula_25. Like many arithmetic functions there is no explicit formula for formula_1 or formula_26 but there are approximations. An asymptotic series for the average order of formula_0 is given by formula_27 where formula_28 is the Mertens constant and formula_29 are the Stieltjes constants. The function formula_0 is related to divisor sums over the Möbius function and the divisor function including the next sums. formula_30 formula_31 formula_32 formula_33 formula_34 formula_35 formula_36 The characteristic function of the primes can be expressed by a convolution with the Möbius function: formula_39 A partition-related exact identity for formula_0 is given by formula_40 where formula_41 is the partition function, formula_42 is the Möbius function, and the triangular sequence formula_43 is expanded by formula_44 in terms of the infinite q-Pochhammer symbol and the restricted partition functions formula_45 which respectively denote the number of formula_46's in all partitions of formula_4 into an "odd" ("even") number of distinct parts. Continuation to the complex plane. A continuation of formula_0 has been found, though it is not analytic everywhere. Note that the normalized formula_47 function formula_48 is used. formula_49 This is closely related to the following partition identity. Consider partitions of the form formula_50 where formula_51, formula_52, and formula_53 are positive integers, and formula_54. The number of partitions is then given by formula_55. Average order and summatory functions. An average order of both formula_0 and formula_1 is formula_56. When formula_4 is prime a lower bound on the value of the function is formula_57. Similarly, if formula_4 is primorial then the function is as large as formula_58 on average order. 
When formula_4 is a power of 2, then formula_59 Asymptotics for the summatory functions over formula_0, formula_1, and formula_60 are respectively computed in Hardy and Wright as formula_61 where formula_62 is the Mertens constant and the constant formula_63 is defined by formula_64 Other sums relating the two variants of the prime omega functions include formula_65 and formula_66 Example I: A modified summatory function. In this example we suggest a variant of the summatory functions formula_67 estimated in the above results for sufficiently large formula_68. We then prove an asymptotic formula for the growth of this modified summatory function derived from the asymptotic estimate of formula_69 provided in the formulas in the main subsection of this article above. To be completely precise, let the odd-indexed summatory function be defined as formula_70 where formula_71 denotes Iverson bracket. Then we have that formula_72 The proof of this result follows by first observing that formula_73 and then applying the asymptotic result from Hardy and Wright for the summatory function over formula_0, denoted by formula_67, in the following form: formula_74 Example II: Summatory functions for so-termed factorial moments of ω(n). The computations expanded in Chapter 22.11 of Hardy and Wright provide asymptotic estimates for the summatory function formula_75 by estimating the product of these two component omega functions as formula_76 We can similarly calculate asymptotic formulas more generally for the related summatory functions over so-termed factorial moments of the function formula_0. Dirichlet series. A known Dirichlet series involving formula_0 and the Riemann zeta function is given by formula_77 We can also see that formula_78 formula_79 The function formula_1 is completely additive, where formula_0 is strongly additive (additive). Now we can prove a short lemma in the following form which implies exact formulas for the expansions of the Dirichlet series over both formula_0 and formula_1: Lemma. Suppose that formula_37 is a strongly additive arithmetic function defined such that its values at prime powers is given by formula_80, i.e., formula_81 for distinct primes formula_6 and exponents formula_82. The Dirichlet series of formula_37 is expanded by formula_83 "Proof." We can see that formula_84 This implies that formula_85 wherever the corresponding series and products are convergent. In the last equation, we have used the Euler product representation of the Riemann zeta function. The lemma implies that for formula_86, formula_87 where formula_38 is the prime zeta function, formula_88 where formula_89 is the formula_46-th harmonic number and formula_90 is the identity for the Dirichlet convolution, formula_91. The distribution of the difference of prime omega functions. The distribution of the distinct integer values of the differences formula_92 is regular in comparison with the semi-random properties of the component functions. For formula_93, define formula_94 These cardinalities have a corresponding sequence of limiting densities formula_95 such that for formula_96 formula_97 These densities are generated by the prime products formula_98 With the absolute constant formula_99, the densities formula_95 satisfy formula_100 Compare to the definition of the prime products defined in the last section of in relation to the Erdős–Kac theorem. Notes. <templatestyles src="Reflist/styles.css" />
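As a concrete illustration of the definitions at the top of the article, here is a small Python sketch (all function names are mine) that computes formula_0 and formula_1 by trial-division factorisation and checks the divisor-sum identity formula_30 for a few values of formula_4.

def prime_factorization(n):
    """Return {prime: exponent} for n >= 1 by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def little_omega(n):   # number of distinct prime factors
    return len(prime_factorization(n))

def big_omega(n):      # number of prime factors counted with multiplicity
    return sum(prime_factorization(n).values())

def mobius(n):
    f = prime_factorization(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

print(little_omega(12), big_omega(12))   # 2 3, matching omega(12)=2 and Omega(12)=3

# Check the identity  sum over d | n of |mu(d)|  =  2^omega(n)  for a few n.
for n in (12, 30, 360, 1001):
    lhs = sum(abs(mobius(d)) for d in range(1, n + 1) if n % d == 0)
    assert lhs == 2 ** little_omega(n), n
print("identity verified")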
[ { "math_id": 0, "text": "\\omega(n)" }, { "math_id": 1, "text": "\\Omega(n)" }, { "math_id": 2, "text": "n." }, { "math_id": 3, "text": "n," }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "n = p_1^{\\alpha_1} p_2^{\\alpha_2} \\cdots p_k^{\\alpha_k}" }, { "math_id": 6, "text": "p_i" }, { "math_id": 7, "text": "1 \\leq i \\leq k" }, { "math_id": 8, "text": "\\omega(n) = k" }, { "math_id": 9, "text": "\\Omega(n) = \\alpha_1 + \\alpha_2 + \\cdots + \\alpha_k" }, { "math_id": 10, "text": "\\omega(n)=\\sum_{p\\mid n} 1" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "\\omega(12)=\\omega(2^2 3)=2" }, { "math_id": 13, "text": "\\Omega(n) =\\sum_{p^\\alpha\\mid n} 1 =\\sum_{p^\\alpha\\parallel n}\\alpha" }, { "math_id": 14, "text": "\\alpha \\geq 1" }, { "math_id": 15, "text": "\\Omega(12)=\\Omega(2^2 3^1)=3" }, { "math_id": 16, "text": " p^\\alpha\\parallel n " }, { "math_id": 17, "text": "\\alpha " }, { "math_id": 18, "text": "n " }, { "math_id": 19, "text": "\\Omega(n) \\ge \\omega(n)" }, { "math_id": 20, "text": "\\Omega(n)=\\omega(n) " }, { "math_id": 21, "text": "\\mu(n) = (-1)^{\\omega(n)} = (-1)^{\\Omega(n)}" }, { "math_id": 22, "text": " \\omega(n) = 1 " }, { "math_id": 23, "text": " n " }, { "math_id": 24, "text": "\\Omega(n)=1 " }, { "math_id": 25, "text": "2^{\\omega(n)} \\leq d(n) \\leq 2^{\\Omega(n)}" }, { "math_id": 26, "text": "\\omega(n) " }, { "math_id": 27, "text": "\\frac{1}{n} \\sum\\limits_{k = 1}^n \\omega(k) \\sim \\log\\log n + B_1 + \\sum_{k \\geq 1} \\left(\\sum_{j=0}^{k-1} \\frac{\\gamma_j}{j!} - 1\\right) \\frac{(k-1)!}{(\\log n)^k}, " }, { "math_id": 28, "text": "B_1 \\approx 0.26149721" }, { "math_id": 29, "text": "\\gamma_j" }, { "math_id": 30, "text": "\\sum_{d\\mid n} |\\mu(d)| = 2^{\\omega(n)} " }, { "math_id": 31, "text": "\\sum_{d\\mid n} |\\mu(d)| k^{\\omega(d)} = (k+1)^{\\omega(n)} " }, { "math_id": 32, "text": "\\sum_{r\\mid n} 2^{\\omega(r)} = d(n^2) " }, { "math_id": 33, "text": "\\sum_{r\\mid n} 2^{\\omega(r)} d\\left(\\frac{n}{r}\\right) = d^2(n) " }, { "math_id": 34, "text": "\\sum_{d\\mid n} (-1)^{\\omega(d)} = \\prod\\limits_{p^{\\alpha}||n} (1-\\alpha)" }, { "math_id": 35, "text": "\n\\sum_{\\stackrel{1\\le k\\le m}{(k,m)=1}} \\gcd(k^2-1,m_1)\\gcd(k^2-1,m_2)\n=\\varphi(n)\\sum_{\\stackrel{d_1\\mid m_1} {d_2\\mid m_2}} \\varphi(\\gcd(d_1, d_2)) 2^{\\omega(\\operatorname{lcm}(d_1, d_2))},\\ m_1, m_2 \\text{ odd}, m = \\operatorname{lcm}(m_1, m_2)\n" }, { "math_id": 36, "text": "\\sum_\\stackrel{1\\le k\\le n}{\\operatorname{gcd}(k,m)=1} \\!\\!\\!\\! 1 = n \\frac {\\varphi(m)}{m} + O \\left ( 2^{\\omega(m)} \\right )" }, { "math_id": 37, "text": "f" }, { "math_id": 38, "text": "P(s)" }, { "math_id": 39, "text": " \\chi_{\\mathbb{P}}(n)\n = (\\mu \\ast \\omega)(n) = \\sum_{d|n} \\omega(d) \\mu(n/d). 
\n " }, { "math_id": 40, "text": "\\omega(n) = \\log_2\\left[\\sum_{k=1}^n \\sum_{j=1}^k \\left(\\sum_{d\\mid k} \\sum_{i=1}^d p(d-ji) \\right) s_{n,k} \\cdot |\\mu(j)|\\right], " }, { "math_id": 41, "text": "p(n)" }, { "math_id": 42, "text": "\\mu(n)" }, { "math_id": 43, "text": "s_{n,k}" }, { "math_id": 44, "text": "s_{n,k} = [q^n] (q; q)_\\infty \\frac{q^k}{1-q^k} = s_o(n, k) - s_e(n, k), " }, { "math_id": 45, "text": "s_{o/e}(n, k)" }, { "math_id": 46, "text": "k" }, { "math_id": 47, "text": "\\operatorname{sinc}" }, { "math_id": 48, "text": " \\operatorname{sinc}(x) = \\frac{\\sin(\\pi x)}{\\pi x} " }, { "math_id": 49, "text": "\\omega(z) = \\log_2\\left(\\sum_{n=1}^{\\lceil Re(z) \\rceil} \\operatorname{sinc} \\left(\\prod_{m=1}^{\\lceil Re(z) \\rceil+1} \\left( n^2+n-mz \\right) \\right) \\right) " }, { "math_id": 50, "text": "a= \\frac{2}{c} + \\frac{4}{c} + \\ldots + \\frac{2(b-1)}{c} + \\frac{2b}{c} " }, { "math_id": 51, "text": " a " }, { "math_id": 52, "text": " b " }, { "math_id": 53, "text": " c " }, { "math_id": 54, "text": " a > b > c " }, { "math_id": 55, "text": " 2^{\\omega(a)} - 2 " }, { "math_id": 56, "text": "\\log\\log n" }, { "math_id": 57, "text": "\\omega(n) = 1" }, { "math_id": 58, "text": "\\omega(n) \\sim \\frac{\\log n}{\\log\\log n}" }, { "math_id": 59, "text": "\\Omega(n) \\sim \\frac{\\log n}{\\log 2}" }, { "math_id": 60, "text": "\\omega(n)^2" }, { "math_id": 61, "text": "\\begin{align} \n \\sum_{n \\leq x} \\omega(n) & = x \\log\\log x + B_1 x + o(x) \\\\ \n \\sum_{n \\leq x} \\Omega(n) & = x \\log\\log x + B_2 x + o(x) \\\\ \n \\sum_{n \\leq x} \\omega(n)^2 & = x (\\log\\log x)^2 + O(x \\log\\log x) \\\\ \n \\sum_{n \\leq x} \\omega(n)^k & = x (\\log\\log x)^k + O(x (\\log\\log x)^{k-1}), k \\in \\mathbb{Z}^{+}, \n \\end{align}\n " }, { "math_id": 62, "text": "B_1 \\approx 0.2614972128" }, { "math_id": 63, "text": "B_2" }, { "math_id": 64, "text": "B_2 = B_1 + \\sum_{p\\text{ prime}} \\frac{1}{p(p-1)} \\approx 1.0345061758." }, { "math_id": 65, "text": "\\sum_{n \\leq x} \\left\\{\\Omega(n) - \\omega(n)\\right\\} = O(x), " }, { "math_id": 66, "text": "\\#\\left\\{n \\leq x : \\Omega(n) - \\omega(n) > \\sqrt{\\log\\log x}\\right\\} = O\\left(\\frac{x}{(\\log\\log x)^{1/2}}\\right). " }, { "math_id": 67, "text": "S_{\\omega}(x) := \\sum_{n \\leq x} \\omega(n)" }, { "math_id": 68, "text": "x" }, { "math_id": 69, "text": "S_{\\omega}(x)" }, { "math_id": 70, "text": "S_{\\operatorname{odd}}(x) := \\sum_{n \\leq x} \\omega(n) [n\\text{ odd}], " }, { "math_id": 71, "text": "[\\cdot]" }, { "math_id": 72, "text": "S_{\\operatorname{odd}}(x) = \\frac{x}{2} \\log\\log x + \\frac{(2B_1-1)x}{4} + \\left\\{\\frac{x}{4}\\right\\} - \\left[x \\equiv 2,3 \\bmod{4}\\right] + O\\left(\\frac{x}{\\log x}\\right)." 
}, { "math_id": 73, "text": "\n \\omega(2n) = \\begin{cases}\n \\omega(n) + 1, & \\text{if } n \\text{ is odd; } \\\\ \n \\omega(n), & \\text{if } n \\text{ is even,}\n \\end{cases}\n" }, { "math_id": 74, "text": " \\begin{align} \nS_\\omega(x) & = S_{\\operatorname{odd}}(x) + \\sum_{n \\leq \\left\\lfloor\\frac{x}{2}\\right\\rfloor} \\omega(2n) \\\\ \n & = S_{\\operatorname{odd}}(x) + \\sum_{n \\leq \\left\\lfloor\\frac{x}{4}\\right\\rfloor} \\left(\\omega(4n) + \\omega(4n+2)\\right) \\\\ \n & = S_{\\operatorname{odd}}(x) + \\sum_{n \\leq \\left\\lfloor\\frac{x}{4}\\right\\rfloor} \\left(\\omega(2n) + \\omega(2n+1) + 1\\right) \\\\ \n & = S_{\\operatorname{odd}}(x) + S_{\\omega}\\left(\\left\\lfloor\\frac{x}{2}\\right\\rfloor\\right) + \\left\\lfloor\\frac{x}{4}\\right\\rfloor. \n\\end{align} \n" }, { "math_id": 75, "text": "\\omega(n) \\left\\{\\omega(n)-1\\right\\}," }, { "math_id": 76, "text": "\\omega(n) \\left\\{\\omega(n)-1\\right\\} = \\sum_{\\stackrel{pq\\mid n} {\\stackrel{p \\neq q}{p,q\\text{ prime}}}} 1 = \n \\sum_{\\stackrel{pq\\mid n}{p,q\\text{ prime}}} 1 - \\sum_{\\stackrel{p^2\\mid n}{p\\text{ prime}}} 1." }, { "math_id": 77, "text": "\\sum_{n \\geq 1} \\frac{2^{\\omega(n)}}{n^s} = \\frac{\\zeta^2(s)}{\\zeta(2s)},\\ \\Re(s) > 1. " }, { "math_id": 78, "text": " \\sum_{n \\geq 1} \\frac{z^{\\omega(n)}}{n^s} = \\prod_p \\left(1 + \\frac{z}{p^s-1}\\right), |z| < 2, \\Re(s) > 1," }, { "math_id": 79, "text": " \\sum_{n \\geq 1} \\frac{z^{\\Omega(n)}}{n^s} = \\prod_p \\left(1 - \\frac{z}{p^s}\\right)^{-1}, |z| < 2, \\Re(s) > 1," }, { "math_id": 80, "text": "f(p^{\\alpha}) := f_0(p, \\alpha)" }, { "math_id": 81, "text": "f(p_1^{\\alpha_1} \\cdots p_k^{\\alpha_k}) = f_0(p_1, \\alpha_1) + \\cdots + f_0(p_k, \\alpha_k)" }, { "math_id": 82, "text": "\\alpha_i \\geq 1" }, { "math_id": 83, "text": "\\sum_{n \\geq 1} \\frac{f(n)}{n^s} = \\zeta(s) \\times \\sum_{p\\mathrm{\\ prime}} (1-p^{-s}) \\cdot \\sum_{n \\geq 1} f_0(p, n) p^{-ns}, \n\\Re(s) > \\min(1, \\sigma_f). " }, { "math_id": 84, "text": " \\sum_{n \\geq 1} \\frac{u^{f(n)}}{n^s} = \\prod_{p\\mathrm{\\ prime}} \\left(1+\\sum_{n \\geq 1} u^{f_0(p, n)} p^{-ns}\\right). 
" }, { "math_id": 85, "text": " \\begin{align}\n\\sum_{n \\geq 1} \\frac{f(n)}{n^s} & = \n \\frac{d}{du}\\left[\\prod_{p\\mathrm{\\ prime}} \\left(1+\\sum_{n \\geq 1} u^{f_0(p, n)} p^{-ns}\\right)\\right] \\Biggr|_{u=1} \n = \n \\prod_{p} \\left(1 + \\sum_{n \\geq 1} p^{-ns}\\right) \\times \\sum_{p} \\frac{\\sum_{n \\geq 1} f_0(p, n) p^{-ns}}{ \n 1 + \\sum_{n \\geq 1} p^{-ns}} \\\\ \n & = \\zeta(s) \\times \\sum_{p\\mathrm{\\ prime}} (1-p^{-s}) \\cdot \\sum_{n \\geq 1} f_0(p, n) p^{-ns}, \n\\end{align} \n" }, { "math_id": 86, "text": "\\Re(s) > 1" }, { "math_id": 87, "text": " \\begin{align} \nD_{\\omega}(s) & := \\sum_{n \\geq 1} \\frac{\\omega(n)}{n^s} = \\zeta(s) P(s) \\\\ \n & \\ = \\zeta(s) \\times \\sum_{n \\geq 1} \\frac{\\mu(n)}{n} \\log \\zeta(ns) \\\\ \nD_{\\Omega}(s) & := \\sum_{n \\geq 1} \\frac{\\Omega(n)}{n^s} = \\zeta(s) \\times \\sum_{n \\geq 1} P(ns) \\\\ \n & \\ = \\zeta(s) \\times \\sum_{n \\geq 1} \\frac{\\phi(n)}{n} \\log\\zeta(ns) \\\\\nD_h(s) & := \\sum_{n \\geq 1} \\frac{h(n)}{n^s} = \\zeta(s) \\log \\zeta(s) \\\\\n & \\ = \\zeta(s) \\times \\sum_{n \\geq 1} \\frac{\\varepsilon(n)}{n} \\log \\zeta(ns),\n\\end{align} \n" }, { "math_id": 88, "text": "h(n) = \\sum_{p^k|n}{\\frac{1}{k}} = \\sum_{p^k||n}{H_{k}}" }, { "math_id": 89, "text": "H_{k}" }, { "math_id": 90, "text": "\\varepsilon" }, { "math_id": 91, "text": "\\varepsilon (n) = \\lfloor\\frac{1}{n}\\rfloor" }, { "math_id": 92, "text": "\\Omega(n) - \\omega(n)" }, { "math_id": 93, "text": "k \\geq 0" }, { "math_id": 94, "text": "N_k(x) := \\#(\\{n \\in \\mathbb{Z}^{+}: \\Omega(n) - \\omega(n) = k\\} \\cap [1, x])." }, { "math_id": 95, "text": "d_k" }, { "math_id": 96, "text": "x \\geq 2" }, { "math_id": 97, "text": "N_k(x) = d_k \\cdot x + O\\left(\\left(\\frac{3}{4}\\right)^k \\sqrt{x} (\\log x)^{\\frac{4}{3}}\\right)." }, { "math_id": 98, "text": "\\sum_{k \\geq 0} d_k \\cdot z^k = \\prod_p \\left(1 - \\frac{1}{p}\\right) \\left(1 + \\frac{1}{p-z}\\right)." }, { "math_id": 99, "text": "\\hat{c} := \\frac{1}{4} \\times \\prod_{p > 2} \\left(1 - \\frac{1}{(p-1)^2}\\right)^{-1}" }, { "math_id": 100, "text": "d_k = \\hat{c} \\cdot 2^{-k} + O(5^{-k})." } ]
https://en.wikipedia.org/wiki?curid=57197043
57205820
Peters four-step chemistry
Series of reactions for methane combustion Peters four-step chemistry is a systematically reduced mechanism for methane combustion, named after Norbert Peters, who derived it in 1985. The mechanism reads as formula_0 The mechanism predicted four different regimes where each reaction takes place. The third reaction, known as radical consumption layer, where most of the heat is released, and the first reaction, also known as fuel consumption layer, occur in a narrow region at the flame. The fourth reaction is the hydrogen oxidation layer, whose thickness is much larger than the former two layers. Finally, the carbon monoxide oxidation layer is the largest of them all, corresponding to the second reaction, and oxidizes very slowly. Peters-Williams three-step chemistry. A three-step mechanism was derived in 1987 by Peters and Forman A. Williams by assuming steady-state approximation for the hydrogen radical. Then, formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
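Since the global steps above must conserve elements, a quick sanity check on a reduced mechanism is to verify the atom balance of each step. The short Python sketch below does this for the four-step mechanism quoted above; the species dictionaries and variable names are mine, and this is only a bookkeeping check, not a kinetics calculation (the third body M is omitted because it appears unchanged on both sides).

# Atom counts for each species appearing in the reduced mechanism.
SPECIES = {
    "CH4": {"C": 1, "H": 4}, "H": {"H": 1}, "H2": {"H": 2},
    "H2O": {"H": 2, "O": 1}, "CO": {"C": 1, "O": 1},
    "CO2": {"C": 1, "O": 2}, "O2": {"O": 2},
}

# Peters' four-step mechanism as (reactants, products) with stoichiometric coefficients.
REACTIONS = [
    ({"CH4": 1, "H": 2, "H2O": 1}, {"CO": 1, "H2": 4}),   # I
    ({"CO": 1, "H2O": 1},          {"CO2": 1, "H2": 1}),  # II
    ({"H": 2},                     {"H2": 1}),            # III
    ({"O2": 1, "H2": 3},           {"H": 2, "H2O": 2}),   # IV
]

def atom_totals(side):
    totals = {}
    for species, coeff in side.items():
        for atom, count in SPECIES[species].items():
            totals[atom] = totals.get(atom, 0) + coeff * count
    return totals

for i, (reactants, products) in enumerate(REACTIONS, start=1):
    assert atom_totals(reactants) == atom_totals(products), f"reaction {i} unbalanced"
print("all four steps are element-balanced")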
[ { "math_id": 0, "text": "\\begin{align}\n& \\text{I)} && \\ce{CH4 + 2H + H2O -> CO + 4H2} \\\\\n& \\text{II)} && \\ce{CO + H2O <-> CO2 + H2} \\\\ \n& \\text{III)} && \\ce{H + H + M -> H2 + M} \\\\\n& \\text{IV)} && \\ce{O2 + 3H2 <-> 2H + 2H2O}\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\n& \\text{I)} && \\ce{CH4 + O2 -> CO + H2 + H2O} \\\\\n& \\text{II)} && \\ce{CO + H2O <-> CO2 + H2} \\\\\n& \\text{III)} && \\ce{O2 + 2H2 <-> 2H2O} \n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=57205820
57210774
Spinor condensate
Spinor condensates are degenerate Bose gases that have degrees of freedom arising from the internal spin of the constituent particles. They are described by a multi-component (spinor) order parameter. Since their initial experimental realisation, a wealth of studies have appeared, both experimental and theoretical, focusing on the physical properties of spinor condensates, including their ground states, non-equilibrium dynamics, and vortices. Early work. The study of spinor condensates was initiated in 1998 by experimental groups at JILA and MIT. These experiments utilised 23Na and 87Rb atoms, respectively. In contrast to most prior experiments on ultracold gases, these experiments utilised a purely optical trap, which is spin-insensitive. Shortly thereafter, theoretical work appeared which described the possible mean-field phases of spin-one spinor condensates. Underlying Hamiltonian. The Hamiltonian describing a spinor condensate is most frequently written using the language of second quantization. Here the field operator formula_0 creates a boson in Zeeman level formula_1 at position formula_2. These operators satisfy bosonic commutation relations: formula_3 The free (non-interacting) part of the Hamiltonian is formula_4 where formula_1 denotes the mass of the constituent particles and formula_5 is an external potential. For a spin-one spinor condensate, the interaction Hamiltonian is formula_6 In this expression, formula_7 is the operator corresponding to the density, formula_8 is the local spin operator (formula_9 is a vector composed of the spin-one matrices), and :: denotes normal ordering. The parameters formula_10 can be expressed in terms of the s-wave scattering lengths of the constituent particles. Higher spin versions of the interaction Hamiltonian are slightly more involved, but can generally be expressed by using Clebsch–Gordan coefficients. The full Hamiltonian then is formula_11. Mean-field phases. In Gross-Pitaevskii mean field theory, one replaces the field operators with c-number functions: formula_12. To find the mean-field ground states, one then minimises the resulting energy with respect to these c-number functions. For a spatially uniform spin-one system, there are two possible mean-field ground states. When formula_13, the ground state is formula_14 while for formula_15 the ground state is formula_16 The former expression is referred to as the polar state while the latter is the ferromagnetic state. Both states are unique up to overall spin rotations. Importantly, formula_17 cannot be rotated into formula_18. The Majorana stellar representation provides a particularly insightful description of the mean-field phases of spinor condensates with larger spin. Vortices. Due to being described by a multi-component order parameter, numerous types of topological defects (vortices) can appear in spinor condensates. Homotopy theory provides a natural description of topological defects, and is regularly employed to understand vortices in spinor condensates. References.
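The two mean-field states above can be compared directly from the interaction energy implied by formula_6: for a spatially uniform system it reduces to an energy density of c0*rho^2/2 + c1*|F|^2/2 in terms of the parameters formula_10. The Python sketch below (all variable names and the sample values of c0 and c1 are assumptions for illustration) builds the spin-one matrices and confirms that the polar state is favoured for formula_13 and the ferromagnetic state for formula_15.

import numpy as np

# Spin-1 matrices (hbar = 1).
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def energy_density(psi, c0, c1):
    """Uniform-system mean-field interaction energy density for a spin-1
    order parameter psi = (psi_+1, psi_0, psi_-1)."""
    rho = np.vdot(psi, psi).real                                      # total density
    F = np.array([np.vdot(psi, S @ psi).real for S in (Sx, Sy, Sz)])  # spin density
    return 0.5 * c0 * rho ** 2 + 0.5 * c1 * F @ F

rho_bar = 1.0
polar = np.sqrt(rho_bar) * np.array([0, 1, 0], dtype=complex)
ferro = np.sqrt(rho_bar) * np.array([1, 0, 0], dtype=complex)

for c1 in (+0.1, -0.1):   # illustrative couplings; c0 fixed at 1.0
    print(f"c1 = {c1:+.1f}: E_polar = {energy_density(polar, 1.0, c1):.3f}, "
          f"E_ferro = {energy_density(ferro, 1.0, c1):.3f}")
# c1 > 0 favours the polar state; c1 < 0 favours the ferromagnetic state.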
[ { "math_id": 0, "text": " \n\\hat{\\psi}_m^\\dagger({\\bf r})\n" }, { "math_id": 1, "text": " m " }, { "math_id": 2, "text": " {\\bf r} " }, { "math_id": 3, "text": " \n[\\hat{\\psi}_m({\\bf r}), \\hat{\\psi}_{m'}^\\dagger({\\bf r}')] = \\delta({\\bf r} - {\\bf r}') \\delta_{mm'}.\n" }, { "math_id": 4, "text": " \nH_0 = \\sum_m \\int d^3 r \\hat{\\psi}_{m}^\\dagger({\\bf r}) \\left(\n-\\frac{\\hbar^2}{2m} \\nabla^2 + V_{\\rm ext}({\\bf r})\n \\right) \\hat{\\psi}_{m}^\\dagger({\\bf r}).\n" }, { "math_id": 5, "text": " V_{\\rm ext}({\\bf r}) " }, { "math_id": 6, "text": " \nH_{\\rm int} = \\frac{1}{2} \\int d^3 r : \\left( \nc_0 \\hat{\\rho}({\\bf r})^2 + c_1 (\\hat{{\\bf F}}({\\bf r}))^2\n\\right):.\n" }, { "math_id": 7, "text": "\n\\hat{\\rho}({\\bf r}) = \\sum_m \\hat{\\psi}_m^\\dagger({\\bf r}) \\hat{\\psi}_m ({\\bf r})\n" }, { "math_id": 8, "text": "\n\\hat{{\\bf F}}({\\bf r}) = \\sum_{mm'} \\hat{\\psi}_m^\\dagger({\\bf r}) {\\bf S}_{mm'} \\hat{\\psi}_{m'}({\\bf r})\n" }, { "math_id": 9, "text": " \n{\\bf S}_{mm'}\n" }, { "math_id": 10, "text": "c_0, c_1 " }, { "math_id": 11, "text": " H = H_0 + H_{\\rm int} " }, { "math_id": 12, "text": " \\hat{\\psi}_{m}({\\bf r}) \\rightarrow {\\psi}_{m}({\\bf r})" }, { "math_id": 13, "text": " c_1 > 0 " }, { "math_id": 14, "text": "\n\\psi_{\\rm polar} = \\sqrt{\\bar{\\rho}}(0,1,0)\n" }, { "math_id": 15, "text": " c_1< 0 " }, { "math_id": 16, "text": "\n\\psi_{\\rm ferro} = \\sqrt{\\bar{\\rho}}(1,0,0).\n" }, { "math_id": 17, "text": " \\psi_{\\rm ferro} " }, { "math_id": 18, "text": " \\psi_{\\rm polar} " } ]
https://en.wikipedia.org/wiki?curid=57210774
5721320
Fleiss' kappa
Statistical measure Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. This contrasts with other kappas such as Cohen's kappa, which only work when assessing the agreement between not more than two raters or the intra-rater reliability (for one appraiser versus themself). The measure calculates the degree of agreement in classification over that which would be expected by chance. Fleiss' kappa can be used with binary or nominal-scale. It can also be applied to ordinal data (ranked data): the MiniTab online documentation gives an example. However, this document notes: "When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone." Keep in mind however, that Kendall rank coefficients are only appropriate for rank data. Introduction. Fleiss' kappa is a generalisation of Scott's pi statistic, a statistical measure of inter-rater reliability. It is also related to Cohen's kappa statistic and Youden's J statistic which may be more appropriate in certain instances. Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings, to a fixed number of items, at the condition that for each item raters are randomly sampled. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. It is important to note that whereas Cohen's kappa assumes the same two raters have rated a set of items, Fleiss' kappa specifically allows that although there are a fixed number of raters (e.g., three), different items may be rated by different individuals. That is, Item 1 is rated by Raters A, B, and C; but Item 2 could be rated by Raters D, E, and F. The condition of random sampling among raters makes Fleiss' kappa not suited for cases where all raters rate all patients. Agreement can be thought of as follows, if a fixed number of people assign numerical ratings to a number of items then the kappa will give a measure for how consistent the ratings are. The kappa, formula_0, can be defined as, formula_1 The factor formula_2 gives the degree of agreement that is attainable above chance, and, formula_3 gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then formula_4. If there is no agreement among the raters (other than what would be expected by chance) then formula_5. An example of using Fleiss' kappa may be the following: consider several psychiatrists who are asked to look at ten patients. For each patient, 14 psychiatrists give one of possibly five diagnoses. These are compiled into a matrix, and Fleiss' kappa can be computed from this matrix (see example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance. Definition. Let N be the total number of subjects, let n be the number of ratings per subject, and let k be the number of categories into which assignments are made. The subjects are indexed by "i" = 1, ..., "N" and the categories are indexed by "j" = 1, ..., "k". Let "n""ij" represent the number of raters who assigned the i-th subject to the j-th category. 
First calculate "p""j", the proportion of all assignments which were to the j-th category: formula_6 Now calculate formula_7, the extent to which raters agree for the i-th subject (i.e., compute how many rater-rater pairs are in agreement, relative to the number of all possible rater-rater pairs): formula_8 Note that formula_9 is bound between 0, when ratings are assigned equally over all categories, and 1, when all ratings are assigned to a single category. Now compute formula_10, the mean of the formula_9's, and formula_11, which go into the formula for formula_12: formula_13 formula_14 Worked example. In the following example, for each of ten "subjects" (formula_15) fourteen raters (formula_16), sampled from a larger group, assign a total of five categories (formula_17). The categories are presented in the columns, while the subjects are presented in the rows. Each cell lists the number of raters who assigned the indicated (row) subject to the indicated (column) category. In the example data (given explicitly in the sketch below), formula_18, formula_19, and formula_20. The value formula_21 is the proportion of all assignments that were made to the formula_22th category. For example, taking the first column, formula_23 and taking the second row, formula_24 In order to calculate formula_10, we need to know the sum of formula_9, formula_25 Over the whole sheet, formula_26 Interpretation. Landis and Koch gave the following benchmarks for interpreting formula_12 values for a 2-annotator 2-class example: values below 0 indicate poor agreement, 0.01–0.20 slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement. This scale is however "by no means" universally accepted. They supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful, as the number of categories and subjects will affect the magnitude of the value. For example, the kappa is higher when there are fewer categories. Tests of significance. Statistical packages can calculate a standard score (Z-score) for Cohen's kappa or Fleiss's kappa, which can be converted into a P-value. However, even when the P-value reaches the threshold of statistical significance (typically less than 0.05), it only indicates that the agreement between raters is significantly better than would be expected by chance. The P-value does not tell you, by itself, whether the agreement is good enough to have high predictive value. References.
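The computation above can be reproduced in a few lines of Python. The function name below is mine, and the rating matrix is the standard Fleiss (1971) example data, consistent with every figure quoted in the text (the column proportion formula_23, the row agreement formula_24, the total formula_25, and the final formula_12 of 0.210).

import numpy as np

def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters assigning subject i to category j.
    Every row must sum to the same number of raters n."""
    M = np.asarray(ratings, dtype=float)
    N, k = M.shape
    n = M[0].sum()                                        # raters per subject
    p_j = M.sum(axis=0) / (N * n)                         # proportion of assignments per category
    P_i = (np.sum(M ** 2, axis=1) - n) / (n * (n - 1))    # per-subject agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1.0 - P_e)

# Fleiss' (1971) example: 10 subjects, 14 raters, 5 categories.
ratings = [
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
]
print(round(fleiss_kappa(ratings), 3))   # 0.21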
[ { "math_id": 0, "text": "\\kappa\\," }, { "math_id": 1, "text": " \\kappa = \\frac{\\bar{P} - \\bar{P_e}}{1 - \\bar{P_e}} " }, { "math_id": 2, "text": "1 - \\bar{P_e}" }, { "math_id": 3, "text": "\\bar{P} - \\bar{P_e}" }, { "math_id": 4, "text": "\\kappa = 1~" }, { "math_id": 5, "text": "\\kappa \\le 0" }, { "math_id": 6, "text": " p_{j} = \\frac{1}{N n} \\sum_{i=1}^N n_{i j},\\quad\\quad 1 = \\sum_{j=1}^k p_{j} " }, { "math_id": 7, "text": "P_{i}\\," }, { "math_id": 8, "text": " \\begin{align}\n P_i &= \\frac{1}{n(n - 1)} \\sum_{j=1}^k n_{i j} (n_{i j} - 1) \\\\\n &= \\frac{1}{n(n - 1)} \\sum_{j=1}^k (n_{i j}^2 - n_{i j}) \\\\\n &= \\frac{1}{n(n - 1)} \\biggl[ \\sum_{j=1}^k \\bigl(n_{i j}^2 \\bigr) - n\\biggr]\n\\end{align} " }, { "math_id": 9, "text": "P_i" }, { "math_id": 10, "text": "\\bar{P}" }, { "math_id": 11, "text": "\\bar{P_e}" }, { "math_id": 12, "text": "\\kappa" }, { "math_id": 13, "text": " \\begin{align}\n \\bar{P} &= \\frac{1}{N} \\sum_{i=1}^N P_{i} \\\\ \n &= \\frac{1}{N n (n - 1)} \\biggl[\\sum_{i=1}^N \\sum_{j=1}^k \\bigl(n_{i j}^2\\bigr) - N n\\biggr]\n\\end{align} " }, { "math_id": 14, "text": " \\bar{P_e} = \\sum_{j=1}^k p_j^2" }, { "math_id": 15, "text": "N" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "k" }, { "math_id": 18, "text": " N = 10 " }, { "math_id": 19, "text": " n = 14 " }, { "math_id": 20, "text": " k = 5 " }, { "math_id": 21, "text": " p_j " }, { "math_id": 22, "text": "j" }, { "math_id": 23, "text": " p_1 = \\frac{ 0+0+0+0+2+7+3+2+6+0 }{140} = 0.143," }, { "math_id": 24, "text": " P_2 = \\frac{1}{14(14 - 1)} \\left(0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14\\right) = 0.253." }, { "math_id": 25, "text": "\\sum_{i=1}^N P_{i}= 1.000 + 0.253 + \\cdots + 0.286 + 0.286 = 3.780." }, { "math_id": 26, "text": " \\begin{align}\n \\bar{P} &= \\frac{1}{(10)} (3.780) = 0.378 \\\\\n \\bar{P}_{e} &= 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213 \\\\\n \\kappa &= \\frac{0.378 - 0.213}{1 - 0.213} = 0.210\n\\end{align} " } ]
https://en.wikipedia.org/wiki?curid=5721320
57214562
Split and merge segmentation
Split and merge segmentation is an image processing technique used to segment an image. The image is successively split into quadrants based on a homogeneity criterion, and similar regions are merged to create the segmented result. The technique incorporates a quadtree data structure, meaning that there is a parent-child node relationship. The total region is a parent, and each of the four splits is a child. Homogeneity. After each split, a test is necessary to determine whether each new region needs further splitting. The criterion for the test is the homogeneity of the region. There are several ways to define homogeneity; one example is the variance formula_0 where r and c are the row and column indices, N is the number of pixels in the region, and the region mean is given by formula_1 An example criterion would be that the variance of a region be less than a specified value in order for the region to be considered homogeneous. Data structure. The splitting results in a partitioned image (here split to 3 levels). Each level of partitioning can be represented in a tree-like structure. Example. The following example shows the segmentation of a grayscale image using MATLAB. The homogeneity criterion is thresholding: max(region) - min(region) < 10 for a region to be homogeneous. The blocks created during splitting and the resulting segmented image are shown in the accompanying figures.
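A minimal Python sketch of the idea follows. It uses the same max - min < threshold homogeneity test as the example above; the quadtree split follows the description, while the merge step is deliberately simplified (leaf blocks are grouped by quantised mean intensity rather than by testing adjacent regions), so this illustrates the structure rather than being a full implementation. All names and the demo image are mine.

import numpy as np

def split(img, x, y, h, w, thresh, leaves):
    """Recursively split the block at (x, y) of size h-by-w until the
    homogeneity test max - min < thresh passes, collecting leaf blocks."""
    block = img[x:x + h, y:y + w]
    if block.max() - block.min() < thresh or h <= 1 or w <= 1:
        leaves.append((x, y, h, w, block.mean()))
        return
    h2, w2 = h // 2, w // 2
    split(img, x,      y,      h2,     w2,     thresh, leaves)
    split(img, x,      y + w2, h2,     w - w2, thresh, leaves)
    split(img, x + h2, y,      h - h2, w2,     thresh, leaves)
    split(img, x + h2, y + w2, h - h2, w - w2, thresh, leaves)

def split_and_merge(img, thresh=10):
    a = np.asarray(img, dtype=float)
    leaves = []
    split(a, 0, 0, a.shape[0], a.shape[1], thresh, leaves)
    # Simplified merge: assign every leaf block a label based on its quantised mean,
    # so blocks with similar intensities end up in the same output region.
    out = np.zeros_like(a)
    for x, y, h, w, mean in leaves:
        out[x:x + h, y:y + w] = round(mean / thresh) * thresh
    return out

demo = np.block([[np.full((4, 4), 20), np.full((4, 4), 200)],
                 [np.full((4, 4), 22), np.full((4, 4), 60)]])
print(split_and_merge(demo))   # the 20- and 22-valued quadrants end up merged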
[ { "math_id": 0, "text": "\\sigma^2 = (1/(N-1))\\sum_{(r,c)\\epsilon R}[I(r,c)-\\bar{I}] ^2" }, { "math_id": 1, "text": "\\bar{I}=(1/N)\\sum_{(r,c)\\epsilon Region}I(r,c)" } ]
https://en.wikipedia.org/wiki?curid=57214562
5721535
Godunov's scheme
In numerical analysis and computational fluid dynamics, Godunov's scheme is a conservative numerical scheme, suggested by Sergei Godunov in 1959, for solving partial differential equations. One can think of this method as a conservative finite volume method which solves exact or approximate Riemann problems at each inter-cell boundary. In its basic form, Godunov's method is first order accurate in both space and time, yet can be used as a base scheme for developing higher-order methods. Basic scheme. Following the classical finite volume method framework, we seek to track a finite set of discrete unknowns, formula_0 where the formula_1 and formula_2 form a discrete set of points for the hyperbolic problem: formula_3 where the indices formula_4 and formula_5 indicate the derivatives in time and space, respectively. If we integrate the hyperbolic problem over a control volume formula_6 we obtain a method of lines (MOL) formulation for the spatial cell averages: formula_7 which is a classical description of the first order, upwinded finite volume method. Exact time integration of the above formula from time formula_8 to time formula_9 yields the exact update formula: formula_10 Godunov's method replaces the time integral of each formula_11 with a forward Euler method which yields a fully discrete update formula for each of the unknowns formula_12. That is, we approximate the integrals with formula_13 where formula_14 is an approximation to the exact solution of the Riemann problem. For consistency, one assumes that formula_15 and that formula_16 is increasing in the first argument, and decreasing in the second argument. For scalar problems where formula_17, one can use the simple upwind scheme, which defines formula_18. The full Godunov scheme requires the definition of an approximate, or an exact Riemann solver, but in its most basic form, is given by: formula_19 Linear problem. In the case of a linear problem, where formula_20, and without loss of generality assuming that formula_21, the upwinded Godunov method yields: formula_22 which is the classical first-order, upwinded finite volume scheme whose stability requires formula_23. Three-step algorithm. Following Hirsch, the scheme involves three distinct steps to obtain the solution at formula_24 from the known solution at formula_25, as follows: Step 1: Define a piecewise constant approximation of the solution at formula_26. Since the piecewise constant approximation is an average of the solution over the cell of size formula_27, the spatial error is of order formula_28, and hence the resulting scheme will be first-order accurate in space. Step 2: Obtain the solution for the local Riemann problem at the cell interfaces. This is the only physical step of the whole procedure. The discontinuities at the interfaces are resolved in a superposition of waves satisfying locally the conservation equations. The original Godunov method is based upon the exact solution of the Riemann problems, although approximate solutions can be applied as an alternative. Step 3: Average the state variables after a time interval formula_29. The state variables obtained after Step 2 are averaged over each cell, defining a new piecewise constant approximation resulting from the wave propagation during the time interval formula_29. To be consistent, the time interval should be limited such that the waves emanating from an interface do not interact with waves created at the adjacent interfaces; this leads to the condition formula_30, where formula_31 is the maximum wave speed obtained from the cell eigenvalue(s) of the local Jacobian matrix. The first and third steps are solely of a numerical nature and can be considered as a "projection stage", independent of the second, physical step, the "evolution stage". Therefore, they can be modified without influencing the physical input, for instance by replacing the piecewise constant approximation by a piecewise linear variation inside each cell, leading to the definition of second-order space-accurate schemes, such as the MUSCL scheme. References.
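To make the linear case concrete, here is a short Python sketch of the update formula_22 for the advection equation with formula_21 on a periodic grid; the grid size, Courant number and initial profile are arbitrary choices for illustration, and the stability condition formula_23 is respected by construction.

import numpy as np

# First-order Godunov (upwind) scheme for q_t + a q_x = 0 with a > 0,
# periodic boundary conditions.
a, L, nx = 1.0, 1.0, 100
dx = L / nx
nu = 0.8                      # Courant number; stability requires nu <= 1
dt = nu * dx / a

x = (np.arange(nx) + 0.5) * dx
Q = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # initial square pulse

t_end = 0.25
for _ in range(int(round(t_end / dt))):
    # For a > 0 the Godunov flux at i-1/2 is a*Q_{i-1}, giving the upwind update.
    Q = Q - nu * (Q - np.roll(Q, 1))

print(f"max after advection: {Q.max():.3f}")   # the pulse is advected and smeared by numerical diffusion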
[ { "math_id": 0, "text": " Q^{n}_i = \\frac{1}{\\Delta x} \\int_{x_{i-1/2}} ^ { x_{i+1/2} } q(t^n, x)\\, dx " }, { "math_id": 1, "text": " x_{i-1/2} = x_{\\text{low}} + \\left( i - 1/2 \\right) \\Delta x " }, { "math_id": 2, "text": " t^n = n \\Delta t " }, { "math_id": 3, "text": " q_t + ( f( q ) )_x = 0, " }, { "math_id": 4, "text": " t " }, { "math_id": 5, "text": " x " }, { "math_id": 6, "text": " [x_{i-1/2}, x_{i+1/2}], " }, { "math_id": 7, "text": " \\frac{\\partial}{\\partial t} Q_i( t ) = -\\frac{1}{\\Delta x} \\left( f( q( t, x_{i+1/2} ) ) - f( q( t, x_{i-1/2} ) ) \\right), " }, { "math_id": 8, "text": " t = t^n " }, { "math_id": 9, "text": " t = t^{n+1} " }, { "math_id": 10, "text": " Q^{n+1}_i = Q^n_i - \\frac{1}{\\Delta x } \\int_{ t^n }^{t^{n+1} } \\left( f( q( t, x_{i+1/2} ) ) - f( q( t, x_{i-1/2} ) ) \\right)\\, dt. " }, { "math_id": 11, "text": " \\int_{t^n}^{t^{n+1} } f( q( t, x_{i-1/2} ) )\\, dt " }, { "math_id": 12, "text": " Q^n_i " }, { "math_id": 13, "text": " \\int_{t^n}^{t^{n+1} } f( q( t, x_{i-1/2} ) )\\, dt \\approx \\Delta t f^\\downarrow\\left( Q^n_{i-1}, Q^n_i \\right), " }, { "math_id": 14, "text": " f^\\downarrow\\left( q_l, q_r \\right) " }, { "math_id": 15, "text": " f^\\downarrow( q_l , q_r ) = f( q_l ) \\quad \\text{ if } \\quad q_l = q_r, " }, { "math_id": 16, "text": " f^\\downarrow " }, { "math_id": 17, "text": " f'( q ) > 0 " }, { "math_id": 18, "text": " f^\\downarrow( q_l, q_r ) = f( q_l ) " }, { "math_id": 19, "text": " Q^{n+1}_i = Q^n_i - \\lambda \\left( \\hat{f}^n_{i+1/2} - \\hat{f}^n_{i-1/2} \\right), \\quad \\lambda = \\frac{\\Delta t}{\\Delta x}, \\quad \\hat{f}^n_{i-1/2} = f^\\downarrow\\left( Q^n_{i-1}, Q^n_i \\right) " }, { "math_id": 20, "text": " f(q) = a q " }, { "math_id": 21, "text": " a > 0 " }, { "math_id": 22, "text": " Q^{n+1}_i = Q^n_i - \\nu \\left( Q^{n}_i - Q^n_{i-1} \\right), \\quad \\nu = a \\frac{\\Delta t } {\\Delta x }, " }, { "math_id": 23, "text": " \\nu = \\left| a \\frac{\\Delta t}{\\Delta x} \\right| \\leq 1 " }, { "math_id": 24, "text": " t = (n+1) \\Delta t \\," }, { "math_id": 25, "text": " {t = n \\Delta t} \\," }, { "math_id": 26, "text": " {t = (n+1) \\Delta t} \\," }, { "math_id": 27, "text": " {\\Delta x} \\," }, { "math_id": 28, "text": " {\\Delta x} \\, " }, { "math_id": 29, "text": " {\\Delta t} \\," }, { "math_id": 30, "text": "| a_\\max | \\Delta t < \\Delta x/2 \\, " }, { "math_id": 31, "text": " | a_\\max | \\, " } ]
https://en.wikipedia.org/wiki?curid=5721535
57216564
Welding of advanced thermoplastic composites
Advanced thermoplastic composites (ACM) have high-strength fibres held together by a thermoplastic matrix. Advanced thermoplastic composites are becoming more widely used in the aerospace, marine, automotive and energy industries. This is due to their decreasing cost and superior strength-to-weight ratios compared with metallic parts. Advanced thermoplastic composites have excellent damage tolerance, corrosion resistance, high fracture toughness, high impact resistance, good fatigue resistance, low storage cost, and infinite shelf life. Thermoplastic composites also have the ability to be formed and reformed, repaired and fusion welded. Fusion bonding fundamentals. Fusion bonding is a category of techniques for welding thermoplastic composites. It requires the melting of the joint interface, which decreases the viscosity of the polymer and allows for intermolecular diffusion. These polymer chains then diffuse across the joint interface and become entangled, giving the joint its strength. Welding techniques. There are many welding techniques that can be used to fusion bond thermoplastic composites. These techniques can be broken down into three classifications by their way of generating heat: frictional heating, external heating and electromagnetic heating. Some of these techniques can be very limited and only used for specific joints and geometries. Friction welding. Friction welding is best used for parts that are small and flat. The welding equipment is often expensive, but produces high-quality welds. Linear vibration welding. Two flat parts are brought together under pressure with one fixed in place and the other vibrating back-and-forth parallel to the joint. Frictional heat is generated until the polymers are softened or melted. Once the desired temperature is reached, the vibration stops, the polymer solidifies and a weld joint is made. The two most important welding parameters that affect the mechanical performance are welding pressure and time. Developing parameters for different advanced thermoplastic composites can be challenging because a material with a higher elastic modulus generates heat faster, requiring less weld time. The pressure can affect the fiber orientation, which also greatly impacts the mechanical performance. Lap shear joints tend to have the best mechanical performance, owing to the higher volume fraction of fibers at the weld interface. Overall, linear vibration welding can achieve high production rates with excellent strength, but is limited to joint geometries that are flat. Spin welding. Spin welding is not a very common welding technique for advanced thermoplastic composites because it can only be done with parts that have a circular geometry. One part remains stationary while the other is continuously rotated with pressure applied to the weld interface. The rotational velocity varies across the different radii of the interface. This results in a temperature gradient as a function of the radius, and hence in different shrinkage of the fibers, causing high residual stresses. The orientation of the fibers will also contribute to high residual stress and a reduction in strength. Ultrasonic welding. Ultrasonic welding is one of the most commonly used techniques for welding advanced thermoplastic composites. This is due to its ability to maintain high weld strength, hermetic sealing, and high production rates. This welding technique operates at high vibrational frequencies (10–70 kHz) and low amplitude.
The direction of vibration is perpendicular to the joint surface, but can also be parallel to the joint for hermetic applications. Heat is generated from the surface and intermolecular friction due to the vibrational motion. On the surface of the joint there are small asperities called energy directors, where the vibrational energy concentrates and induces melting. Design of the energy director and optimized parameters can be critical to improving the quality of the weld and reducing any fiber disruption during welding. Energy directors that are triangular or semi-circular often achieve the highest strength. With optimized welding parameters and joint design, a weld strength of up to 80% of that of the base material can be retained for advanced thermoplastic composites. However, welding can cause damage to the fibers, which will result in premature failure. Ultrasonic welding of advanced thermoplastic composites is used for making automotive parts, medical devices and battery housings. Thermal welding. Thermal welding can produce good weld quality, although extra precautions need to be taken to prevent high residual stress, warping, and decohesion. Other thermal welding techniques are not commonly used due to their high heat input, which can damage the composite. Laser welding. Laser welding of advanced thermoplastic composites is a process by which a laser (Light Amplification by Stimulated Emission of Radiation), a highly focused coherent beam of light, melts the composite in various ways. Taking advantage of joint design and material properties, lasers can be applied either directly or indirectly to create the welded joint. There are processing methods that take advantage of material structure/properties to create the weld joint. Welding variables affect weld quality in both positive and negative ways depending on how they are manipulated. Laser heating mechanism in matter. When a laser beam impinges on a material, it excites electrons in the outermost shell of the atom. The return of those electrons to the relaxed state induces thermal heating through conversion to vibrational states, which propagate to the surrounding material. Joining methods for laser welding. Surface heating. This method involves using infrared radiation to heat the surfaces of the composites to be welded and then clamping and holding the parts together. IR/Laser stacking. This method involves laser melting a polymer post and pressing a die into the molten post to create a rivet-like button that joins materials like metals. This process can be used to join metallic joints to composite structures. Through Transmission IR welding (TTIr). This method utilizes one laser transparent (LT) and one laser absorbing (LA) material. Typically, the components are layered as a sandwich with the laser beam passing through the LT layer and irradiating the surface of the LA. This creates a melt layer at the interface of the two components, leading to a weld. Effect of Constituent Properties on Weldability. To understand how the properties of a composite affect its weldability, the effects of the individual constituents (fiber, matrix, additives, etc.) need to be understood. The effect of each will be noted separately and then the combined effects will be discussed. Matrix. Electromagnetic radiation interaction. A laser beam can interact in one of three ways when it contacts the polymer matrix: it can be absorbed, transmitted, or reflected. The amount of absorption determines the amount of energy available for welding.
The reflectivity is affected by the index of refraction according to the relation formula_0, where n is the index of refraction of the polymer and m is the index of refraction of air. Absorption can be affected by the following structural characteristics of the polymer, discussed below: crystallinity, chemical bonding, and concentration of additives. Crystallinity. Increased crystallinity tends to cause lower laser beam transmission because of scattering caused by changes in the index of refraction encountered when going from one phase to the next, or because of changing crystallographic orientation. Increased crystallinity can cause the transmission to decrease monotonically as a function of polymer thickness. The relationship follows the Lambert–Bouguer law formula_1, where formula_2 is the intensity of the laser beam at a given depth or thickness t, formula_3 is the intensity of the laser beam at its source, and formula_4 is the absorption constant of the polymer. By the same token, amorphous polymers lack this trend with thickness. Chemical bonding. Polymers absorb EMR (electromagnetic radiation) at specific wavelengths of light depending on what functional groups are present on the polymer. For instance, bending of the C-H bond on the formula_5 group absorbs at 6800 nm. Many polymers have vibrational modes at wavelengths greater than 1100 nm, so to apply methods such as TTIr, laser sources must produce photons at wavelengths shorter than that. Therefore, lasers (1064 nm) and diode lasers (800-950 nm) can pass through the LT until they impinge on the intended modified polymer or additive that results in absorption, whereas CO2 lasers (10,640 nm) will be absorbed too easily as they pass through the LT. Reinforcements. Reinforcements include fibers or short particles. Reinforcing fibers can be added to increase the strength of a composite. Some reinforcements like carbon fibers have high thermal conductivity and can dissipate the heat of welding, thus requiring more energy input than with other reinforcement materials such as glass. Glass reinforcements can cause scattering of the beam. The orientation of continuous fibers can affect the width of the welds being made. When the welding direction is parallel to the orientation of the fibers, the weld width is usually narrower due to heat being channeled through the fibers to the front and the rear of the weld. An increased volume fraction of reinforcements such as glass can scatter the laser beam, thus allowing less to be transmitted to the weld joint. When this happens, the amount of energy necessary to fuse the joint may increase. If not applied carefully, this increase can cause damage to the transparent part of a TTIr weld joint. Additives and Fillers. Some additives can be intentionally added to absorb laser energy. This technique is especially useful in concentrating the weld joint at the mated surfaces of two materials that are relatively transparent to the laser beam. For example, carbon black increases absorption of the laser beam. There can be some unintended consequences of using these absorbing additives. Increasing the concentration of carbon black in a polymer can decrease the depth of heating and increase the peak temperature at the weld joint. Surface damage can occur if the concentration of carbon black becomes excessive. Some additives, such as the highly selective materials used in the Clearweld process, are applied only to the mating surfaces between the plastics to be joined.
Some of the chemicals, such as cyanines, only absorb in a narrow wavelength band centered around 785 nm. This methodology initially was applied only to plastics, but has recently been applied to composites such as carbon fiber reinforced PEEK. Other additives, called clarifiers, can do the opposite of carbon black, increasing laser beam transmission by reducing crystallinity in polymers. Although pigments and dyes can both add color to a polymer, they behave differently: a dye is soluble in a polymer, whereas a pigment is not. Welding technique comparisons. Contour Welding (CW) vs quasi-simultaneous (QS). During TTIr, although it takes more energy per unit length to achieve fusion with QS than with CW, QS offers the advantage of achieving higher weld strength and weldability of low-transmission materials such as continuous glass fiber thermoplastics. Greater strength is imparted because full fusion is achieved without damaging the surface of the transparent material. Electromagnetic welding. Electromagnetic welding is capable of welding complex parts and also offers the possibility of reopening welds for replacement or repair. To achieve good welds, the design of the coil and implant is important for uniform heating. Implant resistance welding. Implant resistance welding can be a low-cost solution for welding parts that are flat or have curved surfaces. The heating element used is often a metal mesh or carbon strips, which provides uniform heating. However, advanced thermoplastic composites that contain conductive fibers can't be used due to unwanted power leakages. Implant induction welding. Induction welding uses an implant or susceptor that is placed at the weld interface and embedded with conductive material such as metal or carbon fibers. An induction coil is then placed near the weld joint, which induces a current in the embedded material, generating heat. When welding carbon fiber, carbon and graphite fiber mats with higher electrical resistance are used to concentrate the heat at the weld interface. This technique has the ability to weld complex geometry structures with great weld strength. Challenges of welding advanced thermoplastic composites. The heat generated during welding thermoplastic composites induces residual stresses in the joint. These stresses can greatly reduce the strength and performance of the part. Upon cooling from welding, the matrix and fibers have different coefficients of thermal expansion, which introduces the residual stress. Factors such as heat input, cooling rates, volume fraction of the fibers, and matrix material will influence the residual stress. Another important factor to consider is the orientation of the fibers. During the molten state of welding, fibers can reorient themselves in a manner that reduces weld strength. References.
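The two optical relations given in the Matrix subsection above, the reflectivity formula_0 and the Lambert–Bouguer law formula_1, can be combined to estimate roughly how much of the incident laser intensity reaches a TTIr joint. The numbers in the Python sketch below (refractive indices, absorption constant, thicknesses) are assumed values chosen only for illustration, not material data from the article.

import numpy as np

# Assumed, illustrative values (not from the article):
n_polymer, n_air = 1.5, 1.0    # refractive indices of the LT polymer and air
alpha = 0.5                    # absorption constant of the LT layer, 1/mm

# Fraction reflected at the air/polymer interface: R = (n - m)^2 / (n + m)^2
R = (n_polymer - n_air) ** 2 / (n_polymer + n_air) ** 2

for t in (0.5, 1.0, 2.0, 4.0):                       # LT layer thickness, mm
    reaching_joint = (1.0 - R) * np.exp(-alpha * t)  # Lambert-Bouguer attenuation
    print(f"t = {t:.1f} mm: about {reaching_joint:.0%} of the incident intensity reaches the joint")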
[ { "math_id": 0, "text": "R=\\frac{(n-m)^2}{(n+m)^2}" }, { "math_id": 1, "text": "I_t=I_0e^{-(\\alpha t)}" }, { "math_id": 2, "text": "I_t" }, { "math_id": 3, "text": "I_0" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "-CH_2-" } ]
https://en.wikipedia.org/wiki?curid=57216564
57218264
Keynes's theory of wages and prices
Economic theory Keynes's theory of wages and prices is contained in the three chapters 19-21 comprising Book V of "The General Theory of Employment, Interest and Money". Keynes, contrary to the mainstream economists of his time, argued that capitalist economies were not inherently self-correcting. Wages and prices were "sticky", in that they were not flexible enough to respond efficiently to market demand. An economic depression, for instance, would not necessarily set off a chain of events leading back to full employment and higher wages. Keynes believed that government action was necessary for the economy to recover. In Book V of Keynes's theory, Chapter 19 discusses whether wage rates contribute to unemployment and introduces the Keynes effect. Chapter 20 covers mathematical groundwork for Chapter 21, which examines how changes in income from increased money supply affect wages, prices, employment, and profits. Keynes disagrees with the classical view that flexible wages can cure unemployment, arguing that interest rates have a more significant impact on employment. In Chapter 20, Keynes examines the law of supply and its relation to employment. Chapter 21 analyzes the effect of changes in money supply on the economy, rejecting the quantity theory of money and exploring the impact of various assumptions on his theories. The role of Book V in Keynes's theory. Chapter 19 discusses the question of whether wage rates contribute to unemployment. Keynes's views and intentions on this matter have been vigorously debated, and he does not offer a clear answer in this chapter. The concept of the Keynes effect arises from his attempts to resolve the issue. Chapter 20 covers some mathematical ground needed for Chapter 21. Chapter 21 considers the question of how a change in income resulting from an increase in money supply will be apportioned between wages, prices, employment and profits. (The results also depend on the exogenous behaviour of the workforce and on the shapes of various functions.) Similar considerations arise within the body of Keynes's theory since an increase in income due to a change in the schedule of the marginal efficiency of capital will have an equally complicated effect. When the topic arose in Chapter 18 Keynes did not mention that a full analysis needed to be supported by a theory of prices; instead he asserted that "the amount of employment" was "almost the same thing" as the national income. They are different things but under suitable assumptions they move together. Schumpeter and Hicks appear to have taken Keynes's comment at face value, concluding from it that the "General Theory" analysed a time period too short for prices to adapt, which deprives it of any interest. Brady and Gorga view Chapters 20 and 21 as providing belated elucidation of the aggregate demand presented earlier in the book, particularly in Chapter 3. Chapter 19: Changes in money wages. Contrast with classical view. Keynes summarizes the view of classical economists that the economy should be self-adjusting if wages are fluid, and that they blame rigidity in wages for problems like unemployment. He disagrees with what he says is the orthodox view, based on the quantity theory of money, that wage reductions have a small effect on aggregate demand, but that this is made up for by demand for other factors of production. 
Keynes postulates that the classical position has reached a mistaken conclusion by analysing the demand curve for a given industry and transferring this conception "without substantial modification to industry as a whole". Keynes specifically disagrees with the theory of Arthur Cecil Pigou "that in the long run unemployment can be cured by wage adjustments" which Keynes did not see as important compared to other influences on wages. Keynesian analysis. Keynes considers seven different effects of lower wages (including the marginal efficiency of capital and interest rates) and whether or not they have an impact on employment. He concludes that the only one that does is interest rates. This indirect effect of wages on employment through the interest rate was termed the "Keynes effect" by Don Patinkin. Modigliani later performed a formal analysis (based on Keynes's theory, but with Hicksian units) and concluded that unemployment was indeed attributable to excessive wages. Keynes argued that interest rates can also be reduced by increasing the supply of money and that this is more practical and safer than a widespread reduction in wages, which might need to be severe enough to harm consumer confidence which would itself increase unemployment because of reduced demand. He summarises: There is, therefore, no ground for the belief that a flexible wage policy is capable of maintaining a state of continuous full employment;– any more than for the belief that an open-market monetary policy is capable, unaided, of achieving this result. The economic system cannot be made self-adjusting along these lines. And having come to the view that "a flexible wage policy and a flexible money policy come, analytically, to the same thing", he presents four considerations suggesting that "it can only be an unjust person who would prefer a flexible wage policy to a flexible money policy". Axel Leijonhufvud attached particular significance to this chapter, adopting the view in his 1968 book "Keynesian economics and the economics of Keynes" that its omission from the IS-LM model had pointed Keynesian economics in the wrong direction. He argued that: His [Keynes's] followers understandably decided to skip the problematical dynamic analysis of Chapter 19 and focus on the relatively tractable static IS-LM model. Chapter 20: The employment function. Chapter 20 is an examination of the law of supply. Keynes makes use for the first time of the "first postulate of classical economics", and also for the first time assumes the existence of a unit of value allowing outputs to be compared in real terms. He depends heavily on an assumption of perfect competition, which indeed is implicit in the "first postulate". An important difference is that when competition is not perfect, "it is marginal revenue, not price, which determines the output of the individual producer". Keynes interprets the relation between "output" and employment as a causative relation between "effective demand" and employment. He discusses what happens at full employment concluding that wages and prices will rise in proportion to any additional expenditure leaving the real economy unchanged. The money supply remains constant in wage units and the rate of interest is unaffected. Chapter 21: The theory of prices. The purpose of this chapter is to examine the effect of a change in the quantity of money on the rest of the economy. 
Keynes does not provide a conclusive statement of his views, but rather presents an initial simplification followed by a number of corrections. Keynes's initial simple model. Keynes's simplified starting point is this: assuming that an increase in the "money supply" leads to a proportional increase in "income in money terms" (which is the quantity theory of money), it follows that for as long as there is unemployment wages will remain constant, the economy will move to the right along the marginal cost curve (which is flat) leaving prices and profits unchanged, and the entire extra income will be absorbed by increased employment; but once full employment has been reached, wages, prices (and also profits) will increase in proportion to the money supply. This is the "modified quantity theory of money". Quantity theory of money. Keynes does not accept the quantity theory. He writes effective demand [meaning money income] will not change in exact proportion to the quantity of money. The correction is based on the mechanism we have already described under Keynesian economic intervention. Money supply influences the economy through liquidity preference, whose dependence on the interest rate leads to direct effects on the level of investment and to indirect effects on the level of income through the multiplier. This account has the fault we have mentioned earlier: it treats the influence of "r" on liquidity preference as primary and that of "Y" as secondary and therefore ends up with the wrong formula for the multiplier. However once we correct Keynes's correction we see that he makes a valid point since the effect of money supply on income is no longer one of proportionality, and cannot be one of proportionality so long as part of the demand for money (the speculative part) is independent of the level of income. Movement along the supply curve. Keynes writes that the marginal cost curve is not in fact flat, although his reasons are unclear. Premature motion of wages. Wages are exogenous in Keynes's system. In order to obtain a determinate result for the response of prices or employment to a change in money supply he needs to make an assumption about how wages will react. His initial assumption was that so long as there is unemployment workers will be content with a constant money wage, and that when there is full employment they will demand a wage which moves in parallel with prices and money supply. His corrected explanation is that as the economy approaches full employment, wages will begin to respond to increases in the money supply. Wage inflation remains a function of the level of employment, but is now a progressive response rather than a sharp corner. Keynes's assumptions in this matter had a significant influence on the subsequent fate of his theories. He also remarks as point (3) that some classes of worker may be fully employed while there is unemployment amongst others. Components of marginal cost. Although we have treated an employer's marginal cost as being his or her wage bill, this is not entirely accurate. Keynes isolates "user cost" as a separate component, identifying it as "the marginal disinvestment in equipment due to the production of marginal output". His point (5), which may be considered a technical detail, is that user cost is unlikely to move in exact parallel with wages. Asymmetry of Keynes's assumptions. 
Keynes mentions in §V that there is an asymmetry in his system deriving from the stickiness he postulates in wages which makes it easier for them to move upwards than downwards. Without resistance to downward motion, he tells us, money wages would fall without limit "whenever there was a tendency for less than full employment" and: ... there would be no resting-place below full employment until either the rate of interest was incapable of falling further or wages were zero. In fact we must have "some" factor, the value of which in terms of money is, if not fixed, at least sticky, to give us any stability of values in a monetary system. Symbolic statement of Keynes's theory of prices. In §VI Keynes draws on the mathematical results of his previous chapter. Money supply is the independent variable, with total real output "y" as varying in accordance with it, and prices, wages and employment as being related to output in the same way as in Chapter 20. Constant velocity of circulation. Keynes begins with the equation "MV=D", where "M" is the quantity of money, "V" is its velocity of circulation, and "D" is effective demand (output in money terms). This equation is useful to Keynes only under the assumption that "V" is constant, from which it follows that output in money terms "D" moves in proportion to "M" and that prices will do the same only if they move in proportion to output in money terms, i.e. only if Keynes's "ep" is unity. If this condition holds then it follows from the formulae for "ep" and formula_0 above that formula_1 is infinite and therefore that the price elasticity of supply is zero. Keynes gets an equivalent result by a different path using one of his relations between elasticities. So his conclusion is that if the velocity of circulation is constant, then prices move in proportion to money supply only in conditions in which real output is also constant. Variable velocity of circulation. Keynes begins by defining a new elasticity: formula_2. "ed" differs from the other elasticities in not being a property of the supply curve. The elasticity of "Dw" – i.e. of "Y" – with respect to "M" is determined by the gradients of the preference functions in Keynes's theory of employment, "L"(), "S"(), and "Is"(). "ed" is determined jointly by these things and by the elasticity of "D" with respect to "Dw" but is not analysed here. Keynes proceeds to consider the response of prices to a change in money supply asserting that: formula_3. "ep" had been defined earlier and is now incorrectly equated to formula_4 when its true value has already been given as formula_5. This is presumably the "inadequate derivation of the equations on page 305" mentioned by the editors of the RES edition on page 385. The likeliest explanation is that Keynes wrote this part while working with a definition of "eo" as the elasticity of output in real terms with respect to "employment" rather than with respect to "output in wage units". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\epsilon" }, { "math_id": 1, "text": "\\epsilon_\\nu+\\epsilon_W" }, { "math_id": 2, "text": "e_d = \\frac{M}{D} \\frac{\\textrm d D}{\\textrm d M}" }, { "math_id": 3, "text": "\\frac{M}{p} \\frac{\\textrm d p}{\\textrm d M} = e_p \\cdot e_d\\qquad\\textrm{where}\\qquad e_p = 1 - e_e e_o(1-e_w)" }, { "math_id": 4, "text": "1 - e_e e_o(1-e_w)" }, { "math_id": 5, "text": "1 - e_o (1-e_w)" } ]
https://en.wikipedia.org/wiki?curid=57218264
57218361
Divisor sum identities
The purpose of this page is to catalog new, interesting, and useful identities related to number-theoretic divisor sums, i.e., sums of an arithmetic function over the divisors of a natural number formula_0, or equivalently the Dirichlet convolution of an arithmetic function formula_1 with one: formula_2 These identities include applications to sums of an arithmetic function over just the proper prime divisors of formula_0. We also define periodic variants of these divisor sums with respect to the greatest common divisor function in the form of formula_3 Well-known inversion relations that allow the function formula_1 to be expressed in terms of formula_4 are provided by the Möbius inversion formula. Naturally, some of the most interesting examples of such identities result when considering the average order summatory functions over an arithmetic function formula_1 defined as a divisor sum of another arithmetic function formula_4. Particular examples of divisor sums involving special arithmetic functions and special Dirichlet convolutions of arithmetic functions can be found on the following pages: here, here, here, here, and here. Average order sum identities. Interchange of summation identities. The following identities are the primary motivation for creating this topics page. These identities do not appear to be well-known, or at least well-documented, and are extremely useful tools to have at hand in some applications. In what follows, we consider that formula_5 are any prescribed arithmetic functions and that formula_6 denotes the summatory function of formula_4. A more common special case of the first summation below is referenced here. In general, these identities are collected from the so-called "rarities and b-sides" of both well established and semi-obscure analytic number theory notes and techniques and the papers and work of the contributors. The identities themselves are not difficult to prove and are an exercise in standard manipulations of series inversion and divisor sums. Therefore, we omit their proofs here. The convolution method. The convolution method is a general technique for estimating average order sums of the form formula_12 where the multiplicative function "f" can be written as a convolution of the form formula_13 for suitable, application-defined arithmetic functions "g" and "h". A short survey of this method can be found here. A related technique is the use of the formula formula_14 this is known as the Dirichlet hyperbola method. Periodic divisor sums. An arithmetic function is "periodic (mod k)", or "k"-periodic, if formula_15 for all formula_16. Particular examples of "k"-periodic number theoretic functions are the Dirichlet characters formula_17 modulo "k" and the greatest common divisor function formula_18. It is known that every "k"-periodic arithmetic function has a representation as a "finite" discrete Fourier series of the form formula_19 where the Fourier coefficients formula_20 defined by the following equation are also "k"-periodic: formula_21 We are interested in the following "k"-periodic divisor sums: formula_22 It is a fact that the Fourier coefficients of these divisor sum variants are given by the formula formula_23 Fourier transforms of the GCD. We can also express the Fourier coefficients in the equation immediately above in terms of the Fourier transform of any function "h" at the input of formula_24 using the following result where formula_25 is a Ramanujan sum (cf. 
Fourier transform of the totient function): formula_26 Thus by combining the results above we obtain that formula_27 Sums over prime divisors. Let the function formula_28 denote the characteristic function of the primes, i.e., formula_29 "if and only if" formula_0 is prime and is zero-valued otherwise. Then as a special case of the first identity in equation (1) in section interchange of summation identities above, we can express the average order sums formula_30 We also have an integral formula based on Abel summation for sums of the form formula_31 where formula_32 denotes the prime-counting function. Here we typically make the assumption that the function "f" is continuous and differentiable. Some lesser appreciated divisor sum identities. We have the following divisor sum formulas for "f" any arithmetic function and "g" completely multiplicative where formula_33 is Euler's totient function and formula_34 is the Möbius function: The Dirichlet inverse of an arithmetic function. We adopt the notation that formula_43 denotes the multiplicative identity of Dirichlet convolution so that formula_44 for any arithmetic function "f" and formula_45. The Dirichlet inverse of a function "f" satisfies formula_46 for all formula_45. There is a well-known recursive convolution formula for computing the Dirichlet inverse formula_47 of a function "f" by induction given in the form of formula_48 For a fixed function "f", let the function formula_49 Next, define the following two multiple, or nested, convolution variants for any fixed arithmetic function "f": formula_50 The function formula_51 by the equivalent pair of summation formulas in the next equation is closely related to the Dirichlet inverse for an arbitrary function "f". formula_52 In particular, we can prove that formula_53 A table of the values of formula_51 for formula_54 appears below. This table makes precise the intended meaning and interpretation of this function as the signed sum of all possible multiple "k"-convolutions of the function "f" with itself. Let formula_55 where "p" is the Partition function (number theory). Then there is another expression for the Dirichlet inverse given in terms of the functions above and the coefficients of the q-Pochhammer symbol for formula_56 given by formula_57 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
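As a concrete illustration of the recursive convolution formula for formula_47 given above, the following Python sketch computes the Dirichlet inverse of a sample arithmetic function and checks that convolving it with the original function recovers the identity formula_43. The helper names and the choice of test function are illustrative only, not part of the article's notation.
def divisors(n):
    # all positive divisors of n (simple trial division, adequate for small n)
    return [d for d in range(1, n + 1) if n % d == 0]
def dirichlet_inverse(f, N):
    # compute f^{-1}(n) for n = 1..N by the recursive convolution formula,
    # assuming f(1) != 0; returned as a dict {n: value}
    inv = {1: 1.0 / f(1)}
    for n in range(2, N + 1):
        s = sum(f(d) * inv[n // d] for d in divisors(n) if d > 1)
        inv[n] = -s / f(1)
    return inv
# example: f(n) = n, whose Dirichlet inverse is mu(n)*n
f = lambda n: n
inv = dirichlet_inverse(f, 24)
# check that (f * f^{-1})(n) equals 1 at n = 1 and 0 otherwise
for n in range(1, 25):
    conv = sum(f(d) * inv[n // d] for d in divisors(n))
    assert abs(conv - (1 if n == 1 else 0)) < 1e-9
print([inv[n] for n in range(1, 11)])  # matches mu(n)*n for n = 1..10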
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "f(n)" }, { "math_id": 2, "text": "g(n) := \\sum_{d\\mid n} f(d)." }, { "math_id": 3, "text": "g_m(n) := \\sum_{d\\mid (m,n)} f(d),\\ 1 \\leq m \\leq n" }, { "math_id": 4, "text": "g(n)" }, { "math_id": 5, "text": "f,g,h,u,v: \\mathbb{N} \\rightarrow \\mathbb{C}" }, { "math_id": 6, "text": "G(x) := \\sum_{n \\leq x} g(n)" }, { "math_id": 7, "text": "\\sum_{n=1}^x v(n) \\sum_{d\\mid n} h(d) u\\left(\\frac{n}{d}\\right) = \\sum_{n=1}^x h(n) \\sum_{k=1}^{\\left\\lfloor \\frac{x}{n} \\right\\rfloor} u(k) v(nk) " }, { "math_id": 8, "text": "\n \\begin{align} \n \\sum_{n=1}^x \\sum_{d\\mid n} f(d) g\\left(\\frac{n}{d}\\right) & = \\sum_{n=1}^x f(n) G\\left(\\left\\lfloor \\frac{x}{n} \\right\\rfloor\\right) \n = \\sum_{i=1}^x \\left(\\sum_{\\left\\lceil \\frac{x+1}{i+1} \\right\\rceil \\leq n \\leq \\left\\lfloor \\frac{x-1}{i} \\right\\rfloor} f(n)\\right) G(i) + \n \\sum_{d\\mid x} G(d) f\\left(\\frac{x}{d}\\right) \n \\end{align} \n " }, { "math_id": 9, "text": "\\sum_{d=1}^x f(d) \\left(\\sum_{r\\mid (d,x)} g(r) h\\left(\\frac{d}{r}\\right)\\right) = \\sum_{r\\mid x} g(r) \\left(\\sum_{1 \\leq d \\leq x/r} h(d) f(rd)\\right) " }, { "math_id": 10, "text": "\\sum_{m=1}^x \\left(\\sum_{d\\mid(m,x)} f(d) g\\left(\\frac{x}{d}\\right)\\right) = \\sum_{d\\mid x} f(d) g\\left(\\frac{x}{d}\\right) \\cdot \\frac{x}{d} " }, { "math_id": 11, "text": "\\sum_{m=1}^x \\left(\\sum_{d\\mid (m,x)} f(d) g\\left(\\frac{x}{d}\\right)\\right) t^m = (t^x-1) \\cdot \\sum_{d\\mid x} \\frac{t^d f(d)}{t^d-1} g\\left(\\frac{x}{d}\\right) " }, { "math_id": 12, "text": "\\sum_{n \\leq x} f(n) \\qquad\\text{ or } \\qquad \\sum_{\\stackrel{q \\leq x}{q\\text{ squarefree}}} f(q), " }, { "math_id": 13, "text": "f(n) = (g \\ast h)(n)" }, { "math_id": 14, "text": "\\sum_{k=1}^n f(k) = \\sum_{k=1}^n \\sum_{xy=k}^{} g(x)h(y)\n= \\sum_{x=1}^a \\sum_{y=1}^{n/x} g(x)h(y) + \\sum_{y=1}^b \\sum_{x=1}^{n/y} g(x)h(y) - \\sum_{x=1}^a \\sum_{y=1}^b g(x)h(y);" }, { "math_id": 15, "text": "f(n+k) = f(n)" }, { "math_id": 16, "text": "n \\in \\mathbb{N}" }, { "math_id": 17, "text": "f(n) = \\chi(n)" }, { "math_id": 18, "text": "f(n) = (n, k)" }, { "math_id": 19, "text": "f(n) = \\sum_{m=1}^k a_k(m) e\\left(\\frac{mn}{k}\\right), " }, { "math_id": 20, "text": "a_k(m)" }, { "math_id": 21, "text": "a_k(m) = \\frac{1}{k} \\sum_{n=1}^k f(n) e\\left(-\\frac{mn}{k}\\right). " }, { "math_id": 22, "text": "s_k(n) := \\sum_{d\\mid (n,k)} f(d) g\\left(\\frac{k}{d}\\right) = \\sum_{m=1}^k a_k(m) e\\left(\\frac{mn}{k}\\right). " }, { "math_id": 23, "text": "a_k(m) = \\sum_{d\\mid (m,k)} g(d) f\\left(\\frac{k}{d}\\right) \\frac{d}{k}. " }, { "math_id": 24, "text": "\\operatorname{gcd}(n, k)" }, { "math_id": 25, "text": "c_q(n)" }, { "math_id": 26, "text": "F_h(m, n) = \\sum_{k=1}^{n} h((k,n)) e\\left(-\\frac{km}{n}\\right) = (h \\ast c_{\\bullet}(m))(n). " }, { "math_id": 27, "text": "a_k(m) = \\sum_{d\\mid(m,k)} g(d) f\\left(\\frac{k}{d}\\right) \\frac{d}{k} = \\sum_{d\\mid k} \\sum_{r\\mid d} f(r) g(d) c_{\\frac{d}{r}}(m). " }, { "math_id": 28, "text": "a(n)" }, { "math_id": 29, "text": "a(n) = 1" }, { "math_id": 30, "text": "\\sum_{n=1}^x \\sum_{\\stackrel{p\\mid n}{p\\text{ prime}}} f(p) = \\sum_{p=1}^x a(p) f(p) \\left\\lfloor \\frac{x}{p} \\right\\rfloor = \n \\sum_{\\stackrel{p=1}{p\\text{ prime}}}^x f(p) \\left\\lfloor \\frac{x}{p} \\right\\rfloor. 
" }, { "math_id": 31, "text": "\\sum_{\\stackrel{p=1}{p\\text{ prime}}}^x f(p) = \\pi(x) f(x) - \\int_2^x \\pi(t) f^{\\prime}(t) dt \\approx \n \\frac{x f(x)}{\\log x} - \\int_2^x \\frac{t}{\\log t} f^{\\prime}(t) dt, " }, { "math_id": 32, "text": "\\pi(x) \\sim \\frac{x}{\\log x}" }, { "math_id": 33, "text": "\\varphi(n)" }, { "math_id": 34, "text": "\\mu(n)" }, { "math_id": 35, "text": "\\sum_{d\\mid n} f(d) \\varphi\\left(\\frac{n}{d}\\right) = \\sum_{k=1}^n f(\\operatorname{gcd}(n, k)) " }, { "math_id": 36, "text": "\\sum_{d\\mid n} \\mu(d) f(d) = \\prod_{\\stackrel{p\\mid n}{p\\text{ prime}}} (1-f(p)) " }, { "math_id": 37, "text": "f(m)f(n) = \\sum_{d\\mid (m,n)} g(d) f\\left(\\frac{mn}{d^2}\\right). " }, { "math_id": 38, "text": "\\cdot" }, { "math_id": 39, "text": "f \\cdot (g \\ast h) = (f \\cdot g) \\ast (f \\cdot h)" }, { "math_id": 40, "text": "\\sum_{d^k\\mid n} \\mu(d) = \\Biggl\\{\\begin{array}{ll} 0, & \\text{ if } m^k\\mid n \\text{ for some } m>1; \\\\ 1, & \\text{otherwise.}\\end{array}" }, { "math_id": 41, "text": "m \\geq 1" }, { "math_id": 42, "text": "\\sum_{d\\mid n} \\mu(d) \\log^m(d) = 0." }, { "math_id": 43, "text": "\\varepsilon(n) = \\delta_{n,1}" }, { "math_id": 44, "text": " (\\varepsilon \\ast f)(n) = (f \\ast \\varepsilon)(n) = f(n)" }, { "math_id": 45, "text": "n \\geq 1" }, { "math_id": 46, "text": "(f \\ast f^{-1})(n) = (f^{-1} \\ast f)(n) = \\varepsilon(n)" }, { "math_id": 47, "text": "f^{-1}(n)" }, { "math_id": 48, "text": "f^{-1}(n) = \\Biggl\\{\\begin{array}{ll} \\frac{1}{f(1)}, & \\text{ if } n = 1; \\\\ -\\frac{1}{f(1)} \\sum_{\\stackrel{d\\mid n}{d>1}} f(d) f^{-1}\\left(\\frac{n}{d}\\right), & \\text{ if } n>1. \\end{array}" }, { "math_id": 49, "text": "f_{\\pm}(n) := (-1)^{\\delta_{n,1}} f(n) = \\Biggl\\{\\begin{matrix} -f(1), & \\text{ if } n=1; \\\\ f(n), & \\text{ if } n>1 \\end{matrix}" }, { "math_id": 50, "text": "\n \\begin{align} \n \\widetilde{\\operatorname{ds}}_{j,f}(n) & := \\underbrace{\\left(f_{\\pm} \\ast f \\ast \\cdots \\ast f\\right)}_{j\\text{ times}}(n) \\\\ \n \\operatorname{ds}_{j,f}(n) & := \\Biggl\\{\\begin{array}{ll} f_{\\pm}(n), & \\text{ if } j=1; \\\\ \\sum\\limits_{\\stackrel{d\\mid n}{d>1}} f(d) \\operatorname{ds}_{j-1,f}(n/d), & \\text{ if } j > 1. \\end{array} \n \\end{align} \n" }, { "math_id": 51, "text": "D_f(n)" }, { "math_id": 52, "text": "D_f(n) := \\sum_{j=1}^n \\operatorname{ds}_{2j,f}(n) = \\sum_{m=1}^{\\left\\lfloor \\frac{n}{2} \\right\\rfloor} \\sum_{i=0}^{2m-1} \\binom{2m-1}{i} (-1)^{i+1} \\widetilde{\\operatorname{ds}}_{i+1,f}(n)" }, { "math_id": 53, "text": "f^{-1}(n) = \\left(D + \\frac{\\varepsilon}{f(1)}\\right)(n)." }, { "math_id": 54, "text": "2 \\leq n \\leq 16" }, { "math_id": 55, "text": "p_k(n) := p(n-k)" }, { "math_id": 56, "text": " n > 1" }, { "math_id": 57, "text": "f^{-1}(n) = \\sum_{k=1}^{n} \\left[(p_k \\ast \\mu)(n) + (p_k \\ast D_f \\ast \\mu)(n)\\right] \\times [q^{k-1}] \\frac{(q; q)_{\\infty}}{1-q}." } ]
https://en.wikipedia.org/wiki?curid=57218361
57219341
Monetary/fiscal debate
The monetary/fiscal policy debate, otherwise known as the Ando–Modigliani/Friedman–Meiselman debate (or AM/FM debate from the main instigators' initials, and for this reason sometimes jokingly called the "radio stations debate"), was the exchange of viewpoints about the comparative efficiency of monetary policies and fiscal policies that originated with a work co-authored by Milton Friedman and David I. Meiselman and first published in 1963, as part of studies submitted to the Commission on Money and Credit. In 2000, a survey of 298 members of the American Economic Association (AEA) found that while 84 percent generally agreed with the statement "Fiscal policy has a significant stimulative impact on a less than fully employed economy", 71 percent also generally agreed with the statement "Management of the business cycle should be left to the Federal Reserve; activist fiscal policy should be avoided." In 2011, a follow-up survey of 568 AEA members found that the previous consensus about the latter proposition had dissolved and was by then roughly evenly disputed. Origin. In the early 1960s, contributing to the studies invited by the Commission on Money and Credit, Milton Friedman and David Meiselman published a study whereby, they found that "[e]xcept for the early years of the Great Depression, money is more closely related to consumption than is autonomous expenditures," claiming moreover that "[t]he results [of the tests] are strikingly one-sided". They used the following reduced form, least squares regression equation to compare the effectiveness of monetary and fiscal policies; in effect, to compare Keynesian and monetarist theories: formula_0 (1) where C is induced private consumption, α is a constant, V represents money velocity, M is approximately M2, K represents an expenditure-multiplier, A is autonomous expenditures, and "t" represents time. Friedman and Meiselman found that, whether using annual data from 1897 to 1958 or quarterly data from 1946 to 1958, and whether using only real, contemporaneous data, or experimenting with various time lags, private consumption was not statistically significantly affected by discretionary fiscal policy, but was by monetary policy. They stated that their monetary variables were "highly correlated" with consumption, whereas fiscal policy variables were not. Debate. The Friedman/Meiselman 1963 paper was addressed with numerous articles, where counter-arguments were made: The model was erroneously specified because important and statistically relevant variables were omitted; the data used were not actually coincident with the theory behind them; there was no correction for the "thermostat effect" so that even if fiscal policy is effective it will seem to have a neutral or even negative relationship with spending rather than the positive effect it is theorized to have; and that the results were time-specific. Hester claims bias. In 1964, Donald D. Hester criticized the F/M paper for "bias" against a "Keynesian" outcome. For that purpose, Hester argued that government deficits are endogenously determined, and not exogenously, and thus no single-equation approach could properly capture government spending and deficits, while the same principle applies for short-run private investment. Also, Hester emphasized that the actual data should have been empirically tested in first-differential form so as to extricate the trends of both explanatory variables, and thus demonstrate only the endogenously generated economic growth. 
Hester stated that, when he tried "improved" data and empirical methods, “the autonomous expenditure theory outperformed the quantity theory [of money],” i.e. Keynesian economics win over monetarist economics. Friedman/Meiselman respond. In a paper published in 1964, Friedman and Meiselman conceded that Hester’s suggestion of using first differences was correct and that it is a better method for their single-equation approach. But they insisted that their interpretations of income and autonomous expenditures are relevant, rejecting Hester’s misgivings. They claimed that Hester’s use of correlation coefficients with his newly defined autonomous expenditures constituted an "unsound argument,"and summarized as follows: We remain of the opinion that there is a striking division among students of economic affairs about the role of money in determining the course of economic events. One view is that the quantity of money matters little; the other, that it is a key factor in understanding, and even more, controlling economic change. Our paper tried to present some evidence relevant to deciding between these views. The kind of evidence we gave is not the only kind that is relevant and may not be the most important or significant. And, of course, much other evidence is available from other work by us and by many others. This other evidence needs to be added to and brought to bear on the main issue that divides economists into two groups. Hester does not quarrel with the relevance of our evidence but with the particular form of the income-expenditure theory we use. [Hester's] criticism of our procedure rests primarily on a misunderstanding of the theoretical basis of our approach. He offers neither theoretical argument nor empirical evidence in support of his alternative formulation. Hence his criticism is largely beside the point. That is unfortunate. We badly need work on these problems that will clarify the issues involved. We can ill afford to waste the energy, interest, and ability that Hester displays in his paper on frivolous quibbling. Ando and Modigliani: both policies affect outcome. Albert Ando and Franco Modigliani, in a paper published in 1965, disputed the findings presented in the 1963 Friedman/Meiselman work. Ando and Modigliani claimed that [The Friedman/Meiselman 1963 work] has shortcomings in procedures that if repaired change the result, but, moreover, the single-equation approach coupled with the equally single, independent variable approach and the corresponding correlations cannot shed light on macro-policy. They argued that the consumption function was not correctly specified within the F/M use of autonomous expenditures and claimed that the variable that Friedman and Meiselman had derived was actually saving and not autonomous expenditures. They also observed that the data used in the 1963 paper would need to be modified by including corporate retained earnings, transfer payments made by the government to foreigners, and “wage accruals over disbursement.” Ando and Modigliani objected to the use of an ordinary, least squares equation because of the induced influence on the independent variable by the dependent variable and offered their own model, which ostensibly removed the independent part from the induced part. Ando and Modigliani criticized the Friedman/Meiselman paper for omitting to determine exogenous and endogenous components to monetary policy in the same manner as economists do with fiscal policy. 
Instead, Ando and Modigliani, rather than using a standard money-supply variable, introduced M*, which is meant to represent what the money stock would be if high powered money were "fully utilized", thus introducing a "high-usage variable." The purpose was to show that money is not exogenously determined: people can choose to hold money in different amounts and levels of liquidity as situations warrant, while lenders don't need to lend out all of their excess reserves if they so desire, which constitutes a standard Keynesian concept. Moreover, Ando and Modigliani found that the error variance in predicting output was "much higher" when using money than any of the fiscal variables and labeled the respective F & M results "spurious." They concluded that Friedman and Meiselman’s results were biased in favor of monetary policy, and that, if both policy variables were to be given a balanced approach, the end result would be that both policies would have real and statistically significant effects on the economy. Indeed, in the opening statement of their paper, they state that the "number of basic shortcomings in [the Friedman/Meiselman] procedure...make the results of their elaborate battery of tests essentially worthless." Economists Michael DePrano and Thomas Mayer published a critique of the F & M paper that was generally in line with the criticisms leveled by A & M. FM respond to AM. In 1965, Friedman and Meiselman responded to the criticism leveled at their 1963 paper, particularly by Ando and Modigliani. They claimed that although one could indeed object to their autonomous-expenditure variable, any of the alternatives that had been put forward by others were equally objectionable. Additionally, Friedman and Meiselman defended their use of their consumption function and explained why it is, in their view, the right method to use. And they pointed out that, unlike them, Ando and Modigliani used nominal data rather than real data and, therefore, the empirical results put forward by A & M were not directly comparable to their own. Friedman and Meiselman agreed in theory with A & M that the term M* is a valid means to determine the exogenous versus the endogenous nature of the policy variable but still disagreed with the actual A & M methodology to determine M*. As to the consumption variance, they maintained that "of the total variance of consumption for the 25 years, 88 per cent is accounted for by the differences between the means for the two subperiods." Finally, they claimed complete lack of bias in their research and the empirical processes and claimed that even if they had built a model that seemed to favor monetary policy over fiscal policy, that was because the theory comes out that way. They concluded as follows: None of the calculations made by our critics for supposedly the same purpose is correct because they omit some components of income for the income-expenditure calculations, set the two theories different tasks, or use lengthy periods combining two different sub-periods. We have made some of the correct calculations for one of the alternative concepts of autonomous expenditures (Ando and Modigliani’s). Though less clear-cut, the results are in the same direction as those from our original calculations. 
Hence, we are left with no reason to change our earlier conclusion that “so far as these data go [and, we may now add, those adduced by Ando and Modigliani, DePrano and Mayer, and Hester] the widespread belief that the investment multiplier is stabler than the monetary velocity is an invalid generalization from the experience of three or four years. It holds for neither later nor earlier years”. The St. Louis equation. In 1968, Federal Reserve Bank of St. Louis economists Leonall C. Andersen and Jerry L. Jordan published a study that fully supported the Friedman and Meiselman single-equation approach but expanded it in response to the criticisms of the 1963 paper. They offered their own economic output model, in which all variables are in first-differential form as denoted by ∆, as follows: formula_1 (2) where α is a constant; Y is nominal domestic spending; M represents monetary policy defined by the monetary base; E represents variously high-employment expenditures, high-employment receipts, or high-employment surplus; and Z represents a catch-all variable defined as “a variable summarizing all other forces that influence total spending.” Those forces include weather, international trade, preferences, technology, resources, infrastructure, war, etc. In their math, they used an Almon lag technique with 4th-degree polynomials and a 4-period time lag. They concluded that, just as Friedman and Meiselman had found, monetary policy seemed to affect whatever measure was used for spending; but fiscal policy did not. De Leeuw and Kalchbrenner: endogenous vs. exogenous. In 1969, Frank DeLeeuw and J. Kalchbrenner, also St. Louis Fed economists, published an article that criticized "severely" the Andersen/Jordan study and modeling. They argued that exogenous fiscal policy cannot be properly measured by using any of the fiscal policy definitions presented by their colleagues, nor can any single-equation approach bring into relief the particular influences of such a policy variable. There exist no ways, they claimed, to separate the endogenous- from the exogenous-policy effects because those effects are effectively lost in the complex workings of an entire economy. They pointed out, in particular, that the tax and monetary-base variables are impossibly entangled with the endogeneity-exogeneity problem and claimed that the Andersen/Jordan method leaves out the influences introduced by inflation. The main argument by De Leeuw and Kalchbrenner was that causality cannot be demonstrated by the single-equation approach, and the direction of causation is impossible to establish, i.e. GNP could be driving fiscal spending rather than the other way around. After making a "clear improvement" on the Andersen/Jordan model (using high employment receipts adjusted for inflation as the fiscal variable and two different versions of the monetary base), and re-running the "St. Louis equation" on the basis of data from 1952 to 1968, they proclaimed that, according to their findings, fiscal expenditures were statistically significant, positively correlated to changes in GNP in the long run - as was also true for changes in monetary policy. Other arguments. In 1971, William L. Silber posited that the researchers were altering their equations to fit into whatever their ideological worldviews were theorizing, which was the reason he gave his paper a "highly political" title. He questioned the validity of the overall methodology behind the "St. 
Louis equation" approach, after running it in various time periods that were deemed to have the same underlying structural form, and finding that some periods appeared to show fiscal policy as quite significant while others did not. In 1971, Edward Gramlich reviewed the “radio debate" up to that point and compared the multiplier and elasticity estimates for monetary and fiscal policies among the several different models and the non-single equation models. As he stated, all the models, except the Ando/Modigliani ones, showed monetary policy to have a multiplier above 1, and in every case to be larger than the fiscal multiplier. His study, which included a model "improving" the St Louis equation, supported the view that monetary policy is strongly correlated with spending but also found that fiscal policy is correlated as well. In 1972, Stephen M. Goldfeld, Alan S. Blinder, John Kareken and William Poole, in their study, criticized the Andersen/Jordan approach as econometrically unsound. They argued that, without a reaction function, one cannot determine the nature of the “exogenous” from the "endogenous”, and that, if the rules or automatic stabilizers are done to counter-cyclical perfection, then the correlations do not show up with the statistical significance we would expect. Their paper concludes that the single-equation approach used to empirically determine the comparative efficiency of monetary and fiscal policies is "without merit." In 1973, William Poole and Elinda B. F. Kornblith found that all the models tended to "underpredict," and attempted to provide hypotheses for that result. Their conclusion was that the “decision [about which models were correct or supported monetary or fiscal policies] must still be rated a draw.” In 1975, J. W. Elliot conducted an empirical analysis and pointed out the difficulty of comparing the regression coefficients as “multipliers” since their corresponding variables are money, which is a stock, and fiscal spending, which is a flow. He concluded that, irrespective of technique, his results supported the Andersen/Jordan results. Ando and Modigliani return. In 1974, at a conference held at Brown University, Ando and Modigliani presented a paper where they recreated an analysis of a simulated economy using the Andersen/Jordan method, which, they concluded, was biased in favor of monetary policy. Their work was published in 1976. Keith Carlson vs. Benjamin Friedman. In 1977, Benjamin Friedman found that, using the Andersen/Jordan model but extending the data set out to the 2nd quarter of 1976, fiscal policy was now statistically significant in the determination of expenditures -although serious heteroscedasticity problems had appeared. He also found that, if he used data starting at the 1st quarter of 1960, the results were even more favorable to discretionary fiscal policy, reiterating the existence in the Andersen/Jordan model of an inherent coefficient bias. Ultimately, he concluded that the St Louis equation methodology was "not salvageable." In response, Keith Carlson, in 1978, made an empirical modification to the original Andersen/Jordan model, whereby instead of using a first-difference approach, he posited that a rate-of-change approach eliminated the heteroscedasticity problems discovered by B. Friedman. Carlson’s model is as follows: formula_2 (3) where the variables are the same as in the Andersen/Jordan model but the dots over the terms denote "growth rates" for the respective variables. 
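Mechanically, the specifications in equations (2) and (3) are distributed-lag regressions estimated by ordinary least squares. The sketch below fits an Andersen–Jordan-style equation to purely synthetic data in Python; the generated series, the assumed "true" coefficients, and the use of unrestricted lags (rather than the Almon polynomial lags actually employed in the St. Louis studies) are illustrative choices only.
import numpy as np
rng = np.random.default_rng(0)
T = 120                                    # quarters of synthetic data
dM = rng.normal(0.8, 0.5, T)               # synthetic monetary growth-rate series
dE = rng.normal(0.5, 0.7, T)               # synthetic fiscal growth-rate series
dY = 1.0 + 1.2 * dM + 0.1 * dE + rng.normal(0.0, 0.6, T)   # spending growth with assumed effects
lags = 4                                   # current value plus four lags, as in the St. Louis equation
rows, targets = [], []
for t in range(lags, T):
    rows.append(np.concatenate(([1.0], dM[t - lags:t + 1][::-1], dE[t - lags:t + 1][::-1])))
    targets.append(dY[t])
X, y = np.array(rows), np.array(targets)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("sum of monetary coefficients:", beta[1:lags + 2].sum())   # rough "monetary multiplier"
print("sum of fiscal coefficients:  ", beta[lags + 2:].sum())    # rough "fiscal multiplier"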
Carlson determined that his model once again supported the original conclusion of significant monetary-policy effects and insignificant fiscal-policy effects. Outcome. Numerous papers have appeared in the literature, dating from the 1963 original work until the 2010s. In 2011, Stefan Belliveau attempted to sum up the debate down to three “interpretations”: Real business-cycle theory says that neither fiscal nor monetary policy is very effective, essentially rejecting state activism; Keynesian theory suggests that government expenditures can influence economic output while monetary policy is not as effective; and monetarist theory says that monetary policy is effective while fiscal policy is not. To settle the matter, Belliveau attempted to salvage the Andersen/Jordan equation by including Gross Value Added by Sector as his output-dependent variable, considering it necessary to look at these data if policymakers are attempting to stabilize economic fluctuations. Using annual data from 1956 to 2007, Belliveau found empirical support, as claimed, that both monetary and fiscal policy seem to help stabilize an economy, and considers the use of both policies in the United States as being "reasonable" during and after the Great Recession. Milton Friedman, in a 2000s interview, maintained that "the debate was over" and that "everyone agrees fundamentally" with the notion of monetary-policy supremacy. He stated that he still had "far more extreme views about the unimportance of fiscal policy for the aggregate economy than the [economist] profession does." In 2000, a survey of 298 members of the American Economic Association found that while 84 percent generally agreed with the statement "Fiscal policy has a significant stimulative impact on a less than fully employed economy", 71 percent also generally agreed with the statement "Management of the business cycle should be left to the Federal Reserve; activist fiscal policy should be avoided." In 2011, a follow-up survey of 568 AEA members found that the previous consensus about the latter proposition had dissolved and was by then roughly evenly disputed. Some heterodox economists (most notably Post-Keynesians) reject in their entirety old and new arguments in favor of monetary policy. As observed by Peter Bias in a 2014 retrospective of the debate, it all "points to the importance of clearly defining precise, objective functions or theories, and using the appropriate variables and methodologies to empirically test those theories." Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C_t = \\alpha + VM_t + KA_t" }, { "math_id": 1, "text": "\\Delta Y_t=\\alpha + \\sum_{i=0}^{4}m_i\\Delta M_{{t-1}} + \\sum_{i=0}^{4}e_i\\Delta E_{{t-1}} + \\sum_{i=0}^{4}z_i\\Delta Z_{{t-1}}" }, { "math_id": 2, "text": "\\dot{Y}_t=\\alpha + \\sum_{i=0}^{4}m_i \\dot{M}_{{t-1}} + \\sum_{i=0}^{4}e_i\\ \\dot{E}_{{t-1}} " } ]
https://en.wikipedia.org/wiki?curid=57219341
57222123
Batch normalization
Method used to make artificial neural networks faster and more stable by re-centering and re-scaling Batch normalization (also known as batch norm) is a method used to make training of artificial neural networks faster and more stable through normalization of the layers' inputs by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015. While the effect of batch normalization is evident, the reasons behind its effectiveness remain under discussion. It was believed that it can mitigate the problem of "internal covariate shift", where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network. Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. Others maintain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks. Internal covariate shift. Each layer of a neural network has inputs with a corresponding distribution, which is affected during the training process by the randomness in the parameter initialization and the randomness in the input data. The effect of these sources of randomness on the distribution of the inputs to internal layers during training is described as internal covariate shift. Although a clear-cut precise definition seems to be missing, the phenomenon observed in experiments is the change in means and variances of the inputs to internal layers during training. Batch normalization was initially proposed to mitigate internal covariate shift. During the training stage of networks, as the parameters of the preceding layers change, the distribution of inputs to the current layer changes accordingly, such that the current layer needs to constantly readjust to new distributions. This problem is especially severe for deep networks, because small changes in shallower hidden layers will be amplified as they propagate within the network, resulting in significant shift in deeper hidden layers. Therefore, the method of batch normalization is proposed to reduce these unwanted shifts to speed up training and to produce more reliable models. Besides reducing internal covariate shift, batch normalization is believed to introduce many other benefits. With this additional operation, the network can use a higher learning rate without vanishing or exploding gradients. Furthermore, batch normalization seems to have a regularizing effect such that the network improves its generalization properties, and it is thus unnecessary to use dropout to mitigate overfitting. It has also been observed that the network becomes more robust to different initialization schemes and learning rates while using batch normalization. Procedures. Transformation. In a neural network, batch normalization is achieved through a normalization step that fixes the means and variances of each layer's inputs. Ideally, the normalization would be conducted over the entire training set, but to use this step jointly with stochastic optimization methods, it is impractical to use the global information. Thus, normalization is restricted to each mini-batch in the training process. 
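The procedure defined formally in the remainder of this section — per-dimension standardization over the mini-batch, a learned scale and shift, and population statistics substituted at inference time — can be sketched in a few lines of NumPy. The class name, the use of an exponential moving average for the population statistics, and the momentum value are implementation choices of this sketch, not part of the original formulation.
import numpy as np
class BatchNorm1D:
    # minimal sketch of batch normalization for inputs of shape (batch, features)
    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        self.gamma = np.ones(num_features)          # learned scale
        self.beta = np.zeros(num_features)          # learned shift
        self.eps, self.momentum = eps, momentum
        self.running_mean = np.zeros(num_features)  # population estimates used at inference
        self.running_var = np.ones(num_features)
    def forward(self, x, training=True):
        if training:
            mu, var = x.mean(axis=0), x.var(axis=0)              # mini-batch statistics
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mu
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mu, var = self.running_mean, self.running_var        # fixed population statistics
        x_hat = (x - mu) / np.sqrt(var + self.eps)               # re-center and re-scale
        return self.gamma * x_hat + self.beta                    # restore representational freedom
bn = BatchNorm1D(4)
out = bn.forward(np.random.randn(32, 4))                    # training mode: uses batch statistics
test = bn.forward(np.random.randn(8, 4), training=False)    # inference: deterministic linear transform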
Let us use "B" to denote a mini-batch of size "m" of the entire training set. The empirical mean and variance of "B" could thus be denoted as formula_0 and formula_1. For a layer of the network with "d-"dimensional input, formula_2, each dimension of its input is then normalized (i.e. re-centered and re-scaled) separately, formula_3, where formula_4 and formula_5; formula_6 and formula_7 are the per-dimension mean and standard deviation, respectively. formula_8 is added in the denominator for numerical stability and is an arbitrarily small constant. The resulting normalized activation formula_9have zero mean and unit variance, if formula_8 is not taken into account. To restore the representation power of the network, a transformation step then follows as formula_10, where the parameters formula_11 and formula_12 are subsequently learned in the optimization process. Formally, the operation that implements batch normalization is a transform formula_13 called the Batch Normalizing transform. The output of the BN transform formula_14 is then passed to other network layers, while the normalized output formula_15 remains internal to the current layer. Backpropagation. The described BN transform is a differentiable operation, and the gradient of the loss "l" with respect to the different parameters can be computed directly with the chain rule. Specifically, formula_16 depends on the choice of activation function, and the gradient against other parameters could be expressed as a function of formula_16: formula_17, formula_18, formula_19,formula_20, formula_21, and formula_22. Inference. During the training stage, the normalization steps depend on the mini-batches to ensure efficient and reliable training. However, in the inference stage, this dependence is not useful any more. Instead, the normalization step in this stage is computed with the population statistics such that the output could depend on the input in a deterministic manner. The population mean, formula_23, and variance, formula_24, are computed as: formula_25, and formula_26. The population statistics thus is a complete representation of the mini-batches. The BN transform in the inference step thus becomes formula_27, where formula_28 is passed on to future layers instead of formula_29. Since the parameters are fixed in this transformation, the batch normalization procedure is essentially applying a linear transform to the activation. Theory. Although batch normalization has become popular due to its strong empirical performance, the working mechanism of the method is not yet well-understood. The explanation made in the original paper was that batch norm works by reducing internal covariate shift, but this has been challenged by more recent work. One experiment trained a VGG-16 network under 3 different training regimes: standard (no batch norm), batch norm, and batch norm with noise added to each layer during training. In the third model, the noise has non-zero mean and non-unit variance, i.e. it explicitly introduces covariate shift. Despite this, it showed similar accuracy to the second model, and both performed better than the first, suggesting that covariate shift is not the reason that batch norm improves performance. Using batch normalization causes the items in a batch to no longer be iid, which can lead to difficulties in training due to lower quality gradient estimation. Smoothness. 
One alternative explanation is that the improvement with batch normalization is instead due to its producing a smoother parameter space and smoother gradients, as formalized by a smaller Lipschitz constant. Consider two otherwise identical networks, one containing batch normalization layers and the other not, and compare their behaviors. Denote the loss functions as formula_30 and formula_31, respectively. Let the input to both networks be formula_32, and the output be formula_33, for which formula_34, where formula_35 denotes the layer weights. For the second network, formula_33 additionally goes through a batch normalization layer. Denote the normalized activation as formula_36, which has zero mean and unit variance. Let the transformed activation be formula_37, and suppose formula_38 and formula_39 are constants. Finally, denote the standard deviation over a mini-batch formula_40 as formula_41. First, it can be shown that the gradient magnitude of a batch normalized network, formula_42, is bounded, with the bound expressed as formula_43. Since the gradient magnitude represents the Lipschitzness of the loss, this relationship indicates that a batch normalized network can achieve comparatively greater Lipschitz smoothness. Notice that the bound gets tighter when the gradient formula_44 correlates with the activation formula_45, which is a common phenomenon. The scaling of formula_46 is also significant, since the variance is often large. Secondly, the quadratic form of the loss Hessian with respect to activation in the gradient direction can be bounded as formula_47. The scaling of formula_46 indicates that the loss Hessian is resilient to the mini-batch variance, whereas the second term on the right hand side suggests that it becomes smoother when the Hessian and the inner product are non-negative. If the loss is locally convex, then the Hessian is positive semi-definite, while the inner product is positive if formula_48 is in the direction towards the minimum of the loss. It could thus be concluded from this inequality that the gradient generally becomes more predictive with the batch normalization layer. These bounds on the loss with respect to the normalized activation can then be translated into bounds on the loss with respect to the network weights: formula_49, where formula_50 and formula_51. In addition to the smoother landscape, it is further shown that batch normalization could result in a better initialization with the following inequality: formula_52, where formula_53 and formula_54 are the local optimal weights for the two networks, respectively. Some scholars argue that the above analysis cannot fully capture the performance of batch normalization, because the proof only concerns the largest eigenvalue, or equivalently, one direction in the landscape at all points. It is suggested that the complete eigenspectrum needs to be taken into account to make a conclusive analysis. Measure. Since it is hypothesized that batch normalization layers could reduce internal covariate shift, an experiment is set up to measure quantitatively how much covariate shift is reduced. First, the notion of internal covariate shift needs to be defined mathematically. Specifically, to quantify the adjustment that a layer's parameters make in response to updates in previous layers, the correlation between the gradients of the loss before and after all previous layers are updated is measured, since gradients could capture the shifts from the first-order training method. 
If the shift introduced by the changes in previous layers is small, then the correlation between the gradients would be close to 1. The correlation between the gradients are computed for four models: a standard VGG network, a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift. Vanishing/exploding gradients. Even though batchnorm was originally introduced to alleviate gradient vanishing or explosion problems, a deep batchnorm network in fact "suffers from gradient explosion" at initialization time, no matter what it uses for nonlinearity. Thus the optimization landscape is very far from smooth for a randomly initialized, deep batchnorm network. More precisely, if the network has formula_55 layers, then the gradient of the first layer weights has norm formula_56 for some formula_57 depending only on the nonlinearity. For any fixed nonlinearity, formula_58 decreases as the batch size increases. For example, for ReLU, formula_58 decreases to formula_59 as the batch size tends to infinity. Practically, this means deep batchnorm networks are untrainable. This is only relieved by skip connections in the fashion of residual networks. This gradient explosion on the surface contradicts the "smoothness" property explained in the previous section, but in fact they are consistent. The previous section studies the effect of inserting a single batchnorm in a network, while the gradient explosion depends on stacking batchnorms typical of modern deep neural networks. Decoupling. Another possible reason for the success of batch normalization is that it decouples the length and direction of the weight vectors and thus facilitates better training. By interpreting batch norm as a reparametrization of weight space, it can be shown that the length and the direction of the weights are separated and can thus be trained separately. For a particular neural network unit with input formula_32 and weight vector formula_60, denote its output as formula_61, where formula_62 is the activation function, and denote formula_63. Assume that formula_64, and that the spectrum of the matrix formula_65 is bounded as formula_66, formula_67, such that formula_65 is symmetric positive definite. Adding batch normalization to this unit thus results in formula_68, by definition. The variance term can be simplified such that formula_69. Assume that formula_32 has zero mean and formula_39 can be omitted, then it follows that formula_70, where formula_71 is the induced norm of formula_65, formula_72. Hence, it could be concluded that formula_73, where formula_74, and formula_38 and formula_60 accounts for its length and direction separately. This property could then be used to prove the faster convergence of problems with batch normalization. Linear convergence. Least-square problem. With the reparametrization interpretation, it could then be proved that applying batch normalization to the ordinary least squares problem achieves a linear convergence rate in gradient descent, which is faster than the regular gradient descent with only sub-linear convergence. Denote the objective of minimizing an ordinary least squares problem as formula_75, where formula_76 and formula_77. 
Since formula_78, the objective thus becomes formula_79, where 0 is excluded to avoid 0 in the denominator. Since the objective is convex with respect to formula_38, its optimal value could be calculated by setting the partial derivative of the objective against formula_38 to 0. The objective could be further simplified to be formula_80. Note that this objective is a form of the generalized Rayleigh quotient formula_81, where formula_82 is a symmetric matrix and formula_83 is a symmetric positive definite matrix. It is proven that the gradient descent convergence rate of the generalized Rayleigh quotient is formula_84, where formula_85 is the largest eigenvalue of formula_86, formula_87 is the second largest eigenvalue of formula_86, and formula_88 is the smallest eigenvalue of formula_86. In our case, formula_89is a rank one matrix, and the convergence result can be simplified accordingly. Specifically, consider gradient descent steps of the form formula_90 with step size formula_91, and starting from formula_92, then formula_93. Learning halfspace problem. The problem of learning halfspaces refers to the training of the Perceptron, which is the simplest form of neural network. The optimization problem in this case is formula_94, where formula_95 and formula_96 is an arbitrary loss function. Suppose that formula_96 is infinitely differentiable and has a bounded derivative. Assume that the objective function formula_97 is formula_98-smooth, and that a solution formula_99 exists and is bounded such that formula_100. Also assume formula_101 is a multivariate normal random variable. With the Gaussian assumption, it can be shown that all critical points lie on the same line, for any choice of loss function formula_96. Specifically, the gradient of formula_97 could be represented as formula_102, where formula_103, formula_104, and formula_105 is the formula_106-th derivative of formula_96. By setting the gradient to 0, it thus follows that the bounded critical points formula_107 can be expressed as formula_108, where formula_109 depends on formula_107 and formula_96. Combining this global property with length-direction decoupling, it could thus be proved that this optimization problem converges linearly. First, a variation of gradient descent with batch normalization, Gradient Descent in Normalized Parameterization (GDNP), is designed for the objective function formula_110, such that the direction and length of the weights are updated separately. Denote the stopping criterion of GDNP as formula_111. Let the step size be formula_112. For each step, if formula_113, then update the direction as formula_114. Then update the length according to formula_115, where formula_116 is the classical bisection algorithm, and formula_117 is the total iterations ran in the bisection step. Denote the total number of iterations as formula_118, then the final output of GDNP is formula_119. The GDNP algorithm thus slightly modifies the batch normalization step for the ease of mathematical analysis. It can be shown that in GDNP, the partial derivative of formula_97against the length component converges to zero at a linear rate, such that formula_120, where formula_121 and formula_122 are the two starting points of the bisection algorithm on the left and on the right, correspondingly. Further, for each iteration, the norm of the gradient of formula_97 with respect to formula_123 converges linearly, such that formula_124. 
Combining these two inequalities, a bound could thus be obtained for the gradient with respect to formula_125: formula_126, such that the algorithm is guaranteed to converge linearly. Although the proof stands on the assumption of Gaussian input, it is also shown in experiments that GDNP could accelerate optimization without this constraint. Neural networks. Consider a multilayer perceptron (MLP) with one hidden layer and formula_127 hidden units with mapping from input formula_128 to a scalar output described as formula_129, where formula_130 and formula_131 are the input and output weights of unit formula_106 correspondingly, and formula_96 is the activation function and is assumed to be a tanh function. The input and output weights could then be optimized with formula_132, where formula_133 is a loss function, formula_134, and formula_135. Consider fixed formula_136 and optimizing only formula_137, it can be shown that the critical points of formula_138 of a particular hidden unit formula_106, formula_139, all align along one line depending on incoming information into the hidden layer, such that formula_140, where formula_141 is a scalar, formula_142. This result could be proved by setting the gradient of formula_143 to zero and solving the system of equations. Apply the GDNP algorithm to this optimization problem by alternating optimization over the different hidden units. Specifically, for each hidden unit, run GDNP to find the optimal formula_144 and formula_145. With the same choice of stopping criterion and stepsize, it follows that formula_146. Since the parameters of each hidden unit converge linearly, the whole optimization problem has a linear rate of convergence. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
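As a concrete illustration of the batch normalizing transform and its inference-time counterpart described in this article, the following Python/NumPy sketch implements the per-dimension normalization with mini-batch statistics, the learnable scale and shift, and the deterministic version that uses population statistics. It is a minimal sketch, not a reference implementation; the function names, the toy data, and the epsilon value are illustrative assumptions, while the m/(m-1) correction on the population variance follows the text above.

import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    # x: mini-batch of shape (m, d); gamma, beta: per-dimension parameters of shape (d,)
    mu = x.mean(axis=0)                      # per-dimension mini-batch mean
    var = x.var(axis=0)                      # per-dimension mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # re-center and re-scale each dimension
    y = gamma * x_hat + beta                 # restore representation power
    return y, mu, var

def batch_norm_infer(x, gamma, beta, pop_mean, pop_var, eps=1e-5):
    # Deterministic transform using population statistics instead of batch statistics.
    x_hat = (x - pop_mean) / np.sqrt(pop_var + eps)
    return gamma * x_hat + beta

# Toy usage: accumulate population statistics as averages of the mini-batch
# statistics, with the m/(m-1) correction on the variance.
rng = np.random.default_rng(0)
d, m, n_batches = 4, 32, 100
gamma, beta = np.ones(d), np.zeros(d)
batch_means, batch_vars = [], []
for _ in range(n_batches):
    xb = rng.normal(loc=3.0, scale=2.0, size=(m, d))
    _, mu, var = batch_norm_train(xb, gamma, beta)
    batch_means.append(mu)
    batch_vars.append(var)
pop_mean = np.mean(batch_means, axis=0)
pop_var = (m / (m - 1)) * np.mean(batch_vars, axis=0)
x_test = rng.normal(loc=3.0, scale=2.0, size=(8, d))
y = batch_norm_infer(x_test, gamma, beta, pop_mean, pop_var)
print(y.mean(axis=0), y.std(axis=0))   # approximately zero mean and unit standard deviation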
[ { "math_id": 0, "text": "\\mu_B = \\frac 1 m \\sum_{i=1}^m x_i" }, { "math_id": 1, "text": "\\sigma_B^2 = \\frac 1 m \\sum_{i=1}^m (x_i-\\mu_B)^2" }, { "math_id": 2, "text": "x = (x^{(1)},...,x^{(d)})" }, { "math_id": 3, "text": "\\hat{x}_{i}^{(k)} = \\frac {x_i^{(k)}-\\mu_B^{(k)}} \\sqrt{\\left(\\sigma_B^{(k)}\\right)^2+\\epsilon}" }, { "math_id": 4, "text": "k \\in [1,d]" }, { "math_id": 5, "text": "i \\in [1,m]" }, { "math_id": 6, "text": "\\mu_B^{(k)}" }, { "math_id": 7, "text": "\\sigma_B^{(k)}" }, { "math_id": 8, "text": "\\epsilon" }, { "math_id": 9, "text": "\\hat{x}^{(k)}" }, { "math_id": 10, "text": "y_i^{(k)} = \\gamma^{(k)} \\hat{x}_{i}^{(k)} +\\beta^{(k)}" }, { "math_id": 11, "text": "\\gamma^{(k)}" }, { "math_id": 12, "text": "\\beta^{(k)}" }, { "math_id": 13, "text": "BN_{\\gamma^{(k)},\\beta^{(k)}}: x^{(k)}_{1...m} \\rightarrow y^{(k)}_{1...m}" }, { "math_id": 14, "text": "y^{(k)} = BN_{\\gamma^{(k)},\\beta^{(k)}}(x^{(k)})" }, { "math_id": 15, "text": "\\hat{x}_{i}^{(k)}" }, { "math_id": 16, "text": "\\frac{\\partial l}{\\partial y_i^{(k)}} " }, { "math_id": 17, "text": "\\frac{\\partial l}{\\partial \\hat{x}_i^{(k)}} = \\frac{\\partial l}{\\partial y_i^{(k)}}\\gamma^{(k)} " }, { "math_id": 18, "text": "\\frac{\\partial l}{\\partial \\gamma^{(k)}} = \\sum_{i=1}^m \\frac{\\partial l}{\\partial y_i^{(k)}}\\hat{x}_i^{(k)} " }, { "math_id": 19, "text": "\\frac{\\partial l}{\\partial \\beta^{(k)}} = \\sum_{i=1}^m \\frac{\\partial l}{\\partial y_i^{(k)}} " }, { "math_id": 20, "text": "\\frac{\\partial l}{\\partial \\sigma_B^{(k)^2}} = \\sum_{i=1}^m \\frac{\\partial l}{\\partial y_i^{(k)}} (x_i^{(k)}-\\mu_B^{(k)})\\left(-\\frac {\\gamma^{(k)}} 2 (\\sigma_B^{(k)^2}+\\epsilon)^{-3/2}\\right) " }, { "math_id": 21, "text": "\\frac{\\partial l}{\\partial \\mu_B^{(k)}} = \\sum_{i=1}^m \\frac{\\partial l}{\\partial y_i^{(k)}}\\frac{-\\gamma^{(k)}}{\\sqrt{\\sigma_B^{(k)^2}+\\epsilon}}+\\frac{\\partial l}{\\partial \\sigma_B^{(k)^2}}\\frac{1}{m}\\sum_{i=1}^m (-2)\\cdot (x_i^{(k)}-\\mu^{(k)}_B) " }, { "math_id": 22, "text": "\\frac{\\partial l}{\\partial x_i^{(k)}} = \\frac{\\partial l}{\\partial \\hat{x}^{(k)}_i}\\frac{1}{\\sqrt{\\sigma_B^{(k)^2}+\\epsilon}}+\\frac{\\partial l}{\\partial \\sigma_B^{(k)^2}}\\frac{2(x_i^{(k)}-\\mu_B^{(k)})}{m}+\\frac{\\partial l}{\\partial \\mu_B^{(k)}}\\frac{1}{m} " }, { "math_id": 23, "text": "E[x^{(k)}]" }, { "math_id": 24, "text": "\\operatorname{Var}[x^{(k)}]" }, { "math_id": 25, "text": "E[x^{(k)}] = E_{B}[\\mu^{(k)}_B] " }, { "math_id": 26, "text": "\\operatorname{Var}[x^{(k)}] = \\frac{m}{m-1}E_{B}[\\left(\\sigma^{(k)}_B \\right)^2] " }, { "math_id": 27, "text": "y^{(k)} = BN^{\\text{inf}}_{\\gamma^{(k)},\\beta^{(k)}}(x^{(k)})=\\gamma^{(k)}\\frac{x^{(k)} - E[x^{(k)}]}{\\sqrt{\\operatorname{Var}[x^{(k)}]+\\epsilon}} + \\beta^{(k)}" }, { "math_id": 28, "text": "y^{(k)}" }, { "math_id": 29, "text": "x^{(k)}" }, { "math_id": 30, "text": " \\hat{L} " }, { "math_id": 31, "text": " L " }, { "math_id": 32, "text": "x " }, { "math_id": 33, "text": "y " }, { "math_id": 34, "text": "y = Wx " }, { "math_id": 35, "text": "W " }, { "math_id": 36, "text": "\\hat{y} " }, { "math_id": 37, "text": "z = \\gamma\\hat{y}+\\beta " }, { "math_id": 38, "text": "\\gamma " }, { "math_id": 39, "text": "\\beta " }, { "math_id": 40, "text": "\\hat{y_j} \\in \\R^m " }, { "math_id": 41, "text": "\\sigma_j " }, { "math_id": 42, "text": "||\\triangledown_{y_i}\\hat{L}|| " }, { "math_id": 43, "text": "||\\triangledown_{y_i}\\hat{L}||^2 \\le 
\\frac{\\gamma^2}{\\sigma_j^2}\\Bigg(||\\triangledown_{y_i}L||^2-\\frac{1}{m}\\langle 1,\\triangledown_{y_i}L\\rangle^2-\\frac{1}{m}\\langle\\triangledown_{y_i}L,\\hat{y}_j\\rangle^2\\bigg) " }, { "math_id": 44, "text": "\\triangledown_{y_i}\\hat{L} " }, { "math_id": 45, "text": "\\hat{y_i} " }, { "math_id": 46, "text": "\\frac{\\gamma^2}{\\sigma^2_j} " }, { "math_id": 47, "text": "(\\triangledown_{y_j}\\hat{L})^T \\frac{\\partial \\hat{L}}{\\partial y_j \\partial y_j}(\\triangledown_{y_j}\\hat{L}) \\le \\frac{\\gamma^2}{\\sigma^2}\n\\bigg(\\frac{\\partial \\hat{L}}{\\partial y_j}\\bigg)^T \\bigg(\\frac{\\partial L}{\\partial y_j \\partial y_j}\\bigg)\\bigg(\\frac{\\partial \\hat{L}}{\\partial y_j}\\bigg)\n-\\frac{\\gamma}{m\\sigma^2}\\langle\\triangledown_{y_j}L,\\hat{y_j}\\rangle \\bigg|\\bigg|\\frac{\\partial \\hat{L}}{\\partial y_j}\\bigg|\\bigg|^2 " }, { "math_id": 48, "text": "\\hat{g_j} " }, { "math_id": 49, "text": "\\hat{g_j} \\le \\frac{\\gamma^2}{\\sigma_j^2}(g^2_j-m\\mu^2_{g_j}-\\lambda^2\\langle \\triangledown_{y_j}L,\\hat{y}_j\\rangle^2) " }, { "math_id": 50, "text": "g_j = max_{||X||\\le\\lambda} ||\\triangledown_W L||^2 " }, { "math_id": 51, "text": "\\hat{g}_j = max_{||X||\\le\\lambda} ||\\triangledown_W \\hat{L}||^2 " }, { "math_id": 52, "text": "||W_0-\\hat{W}^*||^2 \\le ||W_0-W^*||^2-\\frac{1}{||W^*||^2}(||W^*||^2-\\langle W^*,W_0\\rangle)^2 " }, { "math_id": 53, "text": "W^* " }, { "math_id": 54, "text": "\\hat{W}^* " }, { "math_id": 55, "text": "L" }, { "math_id": 56, "text": " > c\\lambda^L" }, { "math_id": 57, "text": "\\lambda>1, c>0" }, { "math_id": 58, "text": "\\lambda" }, { "math_id": 59, "text": "\\pi/(\\pi-1)\\approx 1.467" }, { "math_id": 60, "text": "w " }, { "math_id": 61, "text": "f(w) = E_x[\\phi(x^Tw)] " }, { "math_id": 62, "text": "\\phi " }, { "math_id": 63, "text": "S = E[xx^T] " }, { "math_id": 64, "text": "E[x]=0 " }, { "math_id": 65, "text": "S " }, { "math_id": 66, "text": "0<\\mu = \\lambda_{min}(S) " }, { "math_id": 67, "text": "L=\\lambda_{max}(S)<\\infty " }, { "math_id": 68, "text": "f_{BN}(w,\\gamma,\\beta) = E_x[\\phi(BN(x^Tw))] = E_x\\bigg[\\phi\\bigg(\\gamma(\\frac{x^Tw-E_x[x^Tw]}{var_x[x^Tw]^{1/2}})+\\beta\\bigg)\\bigg] " }, { "math_id": 69, "text": "var_x[x^Tw]=w^TSw " }, { "math_id": 70, "text": "f_{BN}(w,\\gamma) = E_x\\bigg[\\phi\\bigg(\\gamma\\frac{x^Tw}{(w^TSw)^{1/2}}\\bigg)\\bigg] " }, { "math_id": 71, "text": "(w^TSw)^{\\frac{1}{2}} " }, { "math_id": 72, "text": "||w||_s " }, { "math_id": 73, "text": "f_{BN}(w,\\gamma) = E_x[\\phi(x^T\\tilde{w})] " }, { "math_id": 74, "text": "\\tilde{w}=\\gamma \\frac{w}{||w||_s} " }, { "math_id": 75, "text": "min_{\\tilde{w}\\in R^d}f_{OLS}(\\tilde{w})=min_{\\tilde{w}\\in R^d}(E_{x,y}[(y-x^T\\tilde{w})^2])=min_{\\tilde{w}\\in R^d}(2u^T\\tilde{w}+\\tilde{w}^TS\\tilde{w})\n " }, { "math_id": 76, "text": "u=E[-yx] " }, { "math_id": 77, "text": "S = E[x x^T] " }, { "math_id": 78, "text": "\\tilde{w}=\\gamma\\frac{w}{||w||_s} " }, { "math_id": 79, "text": "min_{w\\in R^d\\backslash\\{0\\},\\gamma\\in R}f_{OLS}(w,\\gamma)=min_{w\\in R^d\\backslash\\{0\\},\\gamma\\in R}\\bigg(2\\gamma\\frac{u^Tw}{||w||_S+\\gamma^2}\\bigg) " }, { "math_id": 80, "text": "min_{w\\in R^d\\backslash\\{0\\}}\\rho(w)=min_{w\\in R^d\\backslash\\{0\\}}\\bigg(-\\frac{w^Tuu^Tw}{w^TSw}\\bigg) " }, { "math_id": 81, "text": "\\tilde{\\rho}(w)=\\frac{w^TBw}{w^TAw} " }, { "math_id": 82, "text": "B\\in R^{d \\times d} " }, { "math_id": 83, "text": "A\\in R^{d\\times d} " }, { "math_id": 84, "text": 
"\\frac{\\lambda_1-\\rho(w_{t+1})}{\\rho(w_{t+1}-\\lambda_2)}\\le \\bigg(1-\\frac{\\lambda_1-\\lambda_2}{\\lambda_1-\\lambda_{min}}\\bigg)^{2t}\\frac{\\lambda_1-\\rho(w_t)}{\\rho(w_t)-\\lambda_2} " }, { "math_id": 85, "text": "\\lambda_1 " }, { "math_id": 86, "text": "B " }, { "math_id": 87, "text": "\\lambda_2 " }, { "math_id": 88, "text": "\\lambda_{min} " }, { "math_id": 89, "text": "B=uu^T " }, { "math_id": 90, "text": "w_{t+1}=w_t-\\eta_t\\triangledown\\rho(w_t) " }, { "math_id": 91, "text": "\\eta_t=\\frac{w_t^TSw_t}{2L|\\rho(w_t)|} " }, { "math_id": 92, "text": "\\rho(w_0)\\ne 0 " }, { "math_id": 93, "text": "\\rho(w_t)-\\rho(w^*)\\le \\bigg(1-\\frac{\\mu}{L}\\bigg)^{2t}(\\rho(w_0)-\\rho(w^*)) " }, { "math_id": 94, "text": "min_{\\tilde{w}\\in R^d}f_{LH}(\\tilde{w})=E_{y,x}[\\phi(z^T\\tilde{w})]\n " }, { "math_id": 95, "text": "z=-yx\n " }, { "math_id": 96, "text": "\\phi\n " }, { "math_id": 97, "text": "f_{LH}\n " }, { "math_id": 98, "text": "\\zeta\n " }, { "math_id": 99, "text": "\\alpha^* = argmin_{\\alpha}||\\triangledown f(\\alpha w)||^2\n " }, { "math_id": 100, "text": "-\\infty < \\alpha^* < \\infty\n " }, { "math_id": 101, "text": "z\n " }, { "math_id": 102, "text": "\\triangledown_{\\tilde{w}}f_{LH}(\\tilde{w})=c_1(\\tilde{w})u+c_2(\\tilde{w})S\\tilde{w}\n " }, { "math_id": 103, "text": "c_1(\\tilde{w})=E_z[\\phi^{(1)}(z^T\\tilde{w})]-E_z[\\phi^{(2)}(z^T\\tilde{w})](u^T\\tilde{w})\n " }, { "math_id": 104, "text": "c_2(\\tilde{w})=E_z[\\phi^{(2)}(z^T\\tilde{w})]\n " }, { "math_id": 105, "text": "\\phi^{(i)}\n " }, { "math_id": 106, "text": "i\n " }, { "math_id": 107, "text": "\\tilde{w}_*\n " }, { "math_id": 108, "text": "\\tilde{w}_* = g_*S^{-1}u\n " }, { "math_id": 109, "text": "g_*\n " }, { "math_id": 110, "text": "min_{w\\in R^d\\backslash\\{0\\},\\gamma\\in R}f_{LH}(w,\\gamma) " }, { "math_id": 111, "text": "h(w_t,\\gamma_t)=E_z[\\phi'(z^T\\tilde{w}_t)](u^Tw_t)-E_z[\\phi''(z^T\\tilde{w}_t)](u^Tw_t)^2\n " }, { "math_id": 112, "text": "s_t=s(w_t,\\gamma_t)=-\\frac{||w_t||_S^3}{Lg_th(w_t,\\gamma_t)} " }, { "math_id": 113, "text": "h(w_t,\\gamma_t)\\ne 0\n " }, { "math_id": 114, "text": "w_{t+1}=w_t-s_t\\triangledown_w f(w_t,\\gamma_t)\n " }, { "math_id": 115, "text": "\\gamma_t = Bisection(T_s,f,w_t)\n " }, { "math_id": 116, "text": "Bisection()\n " }, { "math_id": 117, "text": "T_s\n " }, { "math_id": 118, "text": "T_d\n " }, { "math_id": 119, "text": "\\tilde{w}_{T_d} = \\gamma_{T_d}\\frac{w_{T_d}}{||w_{T_d}||_S}\n " }, { "math_id": 120, "text": "(\\partial_\\gamma f_{LH}(w_t,a_t^{(T_s)})^2\\le \\frac{2^{-T_s}\\zeta|b_t^{(0)}-a_t^{(0)}|}{\\mu^2}\n " }, { "math_id": 121, "text": "a_t^{(0)}\n " }, { "math_id": 122, "text": "b_t^{0}\n " }, { "math_id": 123, "text": "w\n " }, { "math_id": 124, "text": "||w_t||_S^2||\\triangledown f_{LH}(w_t,g_t)||_{S^{-1}}^2\\le \\bigg(1-\\frac{\\mu}{L}\\bigg)^{2t}\\Phi^2\\gamma_t^2(\\rho(w_0)-\\rho^*)\n " }, { "math_id": 125, "text": "\\tilde{w}_{T_d}\n " }, { "math_id": 126, "text": "||\\triangledown_\\tilde{w}f(\\tilde{w}_{T_d})||^2\\le \\bigg(1-\\frac{\\mu}{L}\\bigg)^{2T_d}\\Phi^2(\\rho(w_0)-\\rho^*)+\\frac{2^{-T_s}\\zeta|b_t^{(0)}-a_t^{(0)}|}{\\mu^2}\n " }, { "math_id": 127, "text": "m\n " }, { "math_id": 128, "text": "x\\in R^d\n " }, { "math_id": 129, "text": "F_x(\\tilde{W},\\Theta)=\\sum_{i=1}^{m}\\theta_i\\phi(x^T\\tilde{w}^{(i)})\n " }, { "math_id": 130, "text": "\\tilde{w}^{(i)}\n " }, { "math_id": 131, "text": "\\theta_i\n " }, { "math_id": 132, "text": 
"min_{\\tilde{W},\\Theta}(f_{NN}(\\tilde{W},\\Theta)=E_{y,x}[l(-yF_x(\\tilde{W},\\Theta))])\n " }, { "math_id": 133, "text": "l\n " }, { "math_id": 134, "text": "\\tilde{W}=\\{\\tilde{w}^{(1)},...,\\tilde{w}^{(m)}\\}\n " }, { "math_id": 135, "text": "\\Theta=\\{\\theta^{(1)},...,\\theta^{(m)}\\}\n " }, { "math_id": 136, "text": "\\Theta\n " }, { "math_id": 137, "text": "\\tilde{W}\n " }, { "math_id": 138, "text": "f_{NN}(\\tilde{W})\n " }, { "math_id": 139, "text": "\\hat{w}^{(i)}\n " }, { "math_id": 140, "text": "\\hat{w}^{(i)} = \\hat{c}^{(i)}S^{-1}u\n " }, { "math_id": 141, "text": "\\hat{c}^{(i)}\\in R\n " }, { "math_id": 142, "text": "i =1,...,m\n " }, { "math_id": 143, "text": "f_{NN}\n " }, { "math_id": 144, "text": "W\n " }, { "math_id": 145, "text": "\\gamma\n " }, { "math_id": 146, "text": "||\\triangledown_{\\tilde{w}^{(i)}}f(\\tilde{w}_t^{(i)})||^2_{S^{-1}}\\le\\bigg(1-\\frac{\\mu}{L}\\bigg)^{2t}C(\\rho(w_0)-\\rho^*)+\\frac{2^{-T_s^{(i)}}\\zeta|b_t^{(0)}-a_t^{(0)}|}{\\mu^2}\n " } ]
https://en.wikipedia.org/wiki?curid=57222123
57223837
Jones model
The Jones model (also known as the semi-endogenous growth model) is a growth model developed in 1995 by the economist Charles I. Jones. The model builds on the Romer model (1990); in particular, it generalizes the description of how new technologies, ideas, or design instructions arise, in response to the criticism that in the Romer model the long-run growth rate depends positively on the size of the population (a scale effect). This prediction is problematic because, empirically, larger countries have not necessarily grown faster than smaller ones, and growth did not accelerate as the total human population increased during the 20th century. The model also generalizes the extent to which the current state of knowledge influences new inventions (the standing-on-shoulders effect). Model Structure. For a single firm i, the emergence of new ideas or design instructions is modeled as formula_0 with formula_1: number of employees in the research sector, formula_2: technology level. formula_3 denotes the derivative of formula_2 with respect to time, i.e. formula_4. The parameters satisfy formula_5. For the parameter values formula_6 the Romer model results (formula_7). Aggregating across all firms gives formula_8. Here the parameters have the following meaning: formula_9 governs the degree to which additional researchers contribute to the production of new ideas (values below one reflect duplication of research effort), while the knowledge parameter distinguishes three cases: formula_10 means that new ideas become harder to find as the stock of knowledge grows, formula_11 means that the existing stock of knowledge has no effect on research productivity, and formula_12 means that past knowledge raises current research productivity (the standing-on-shoulders effect). Growth rate. In the Jones model, growth in steady state is given by formula_13 where n is the growth rate of the number of persons working in the research sector. References. <templatestyles src="Reflist/styles.css" />
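To make the steady-state result concrete, the following Python sketch evaluates the steady-state growth rate λn/(1-φ) and simulates the aggregate ideas production function with a research workforce growing at rate n. The parameter values (δ, λ, φ, n) and the discretization are illustrative assumptions, not values taken from the source.

import numpy as np

def steady_state_growth(lam, phi, n):
    # Steady-state growth rate of A in the Jones model: lambda * n / (1 - phi)
    return lam * n / (1.0 - phi)

def simulate(delta=0.1, lam=0.5, phi=0.5, n=0.01, A0=1.0, L0=1.0, T=2000, dt=1.0):
    # Discretized aggregate ideas production dA/dt = delta * A^phi * L_A^lambda,
    # with the research workforce L_A growing exponentially at rate n.
    A, L = A0, L0
    growth = []
    for _ in range(T):
        dA = delta * A**phi * L**lam * dt
        growth.append(dA / (A * dt))        # instantaneous growth rate of A
        A += dA
        L *= np.exp(n * dt)
    return growth

lam, phi, n = 0.5, 0.5, 0.01
g = simulate(lam=lam, phi=phi, n=n)
print("simulated long-run growth rate:", g[-1])
print("analytical steady state:       ", steady_state_growth(lam, phi, n))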
[ { "math_id": 0, "text": "\\dot A_i = \\delta \\cdot A(t)^\\phi \\cdot L_A(t)^{\\lambda-1} \\cdot L_{A_i}(t)" }, { "math_id": 1, "text": "L_{A}" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "\\dot A" }, { "math_id": 4, "text": "\\dot A= \\frac{\\partial A(t)}{\\partial t}" }, { "math_id": 5, "text": " 0 < \\lambda < 1; \\phi < 1" }, { "math_id": 6, "text": "\\lambda = \\phi = 1" }, { "math_id": 7, "text": "\\dot A = \\delta \\cdot A \\cdot L_A" }, { "math_id": 8, "text": "\\dot A = \\delta \\cdot A(t)^\\phi \\cdot L_A(t)^{\\lambda}" }, { "math_id": 9, "text": "\\lambda" }, { "math_id": 10, "text": "\\phi < 0 " }, { "math_id": 11, "text": "\\phi = 0 " }, { "math_id": 12, "text": "\\phi > 0 " }, { "math_id": 13, "text": " \\frac{\\dot{\\hat{A(t)}}}{\\hat{A(t)}} = 0 \\quad \\Leftrightarrow \\quad \\hat{A(t)} = \\frac{\\lambda \\cdot n}{1 - \\phi}" } ]
https://en.wikipedia.org/wiki?curid=57223837
5722488
Paxos (computer science)
Family of protocols for solving consensus Paxos is a family of protocols for solving consensus in a network of unreliable or fallible processors. Consensus is the process of agreeing on one result among a group of participants. This problem becomes difficult when the participants or their communications may experience failures. Consensus protocols are the basis for the state machine replication approach to distributed computing, as suggested by Leslie Lamport and surveyed by Fred Schneider. State machine replication is a technique for converting an algorithm into a fault-tolerant, distributed implementation. Ad-hoc techniques may leave important cases of failures unresolved. The principled approach proposed by Lamport et al. ensures all cases are handled safely. The Paxos protocol was first submitted in 1989 and named after a fictional legislative consensus system used on the Paxos island in Greece, where Lamport wrote that the parliament had to function "even though legislators continually wandered in and out of the parliamentary Chamber". It was later published as a journal article in 1998. The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, number of message delays before learning the agreed value, the activity level of individual participants, number of messages sent, and types of failures. Although no deterministic fault-tolerant consensus protocol can guarantee progress in an asynchronous network (a result proved in a paper by Fischer, Lynch and Paterson), Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke. Paxos is usually used where durability is required (for example, to replicate a file or a database), in which the amount of durable state could be large. The protocol attempts to make progress even during periods when some bounded number of replicas are unresponsive. There is also a mechanism to drop a permanently failed replica or to add a new replica. History. The topic predates the protocol. In 1988, Lynch, Dwork and Stockmeyer had demonstrated the solvability of consensus in a broad family of "partially synchronous" systems. Paxos has strong similarities to a protocol used for agreement in "viewstamped replication", first published by Oki and Liskov in 1988, in the context of distributed transactions. Notwithstanding this prior work, Paxos offered a particularly elegant formalism, and included one of the earliest proofs of safety for a fault-tolerant distributed consensus protocol. Reconfigurable state machines have strong ties to prior work on reliable group multicast protocols that support dynamic group membership, for example Birman's work in 1985 and 1987 on the virtually synchronous gbcast protocol. However, gbcast is unusual in supporting durability and addressing partitioning failures. Most reliable multicast protocols lack these properties, which are required for implementations of the state machine replication model. This point is elaborated in a paper by Lamport, Malkhi and Zhou. Paxos protocols are members of a theoretical class of solutions to a problem formalized as uniform agreement with crash failures. Lower bounds for this problem have been proved by Keidar and Shraer. Derecho, a C++ software library for cloud-scale state machine replication, offers a Paxos protocol that has been integrated with self-managed virtually synchronous membership. 
This protocol matches the Keidar and Shraer optimality bounds, and maps efficiently to modern remote DMA (RDMA) datacenter hardware (but uses TCP if RDMA is not available). Assumptions. In order to simplify the presentation of Paxos, the following assumptions and definitions are made explicit. Techniques to broaden the applicability are known in the literature, and are not covered in this article. Number of processors. In general, a consensus algorithm can make progress using formula_0 processors, despite the simultaneous failure of any formula_1 processors: in other words, the number of non-faulty processes must be strictly greater than the number of faulty processes. However, using reconfiguration, a protocol may be employed which survives any number of total failures as long as no more than F fail simultaneously. For Paxos protocols, these reconfigurations can be handled as separate "configurations". Safety and liveness properties. In order to guarantee "safety" (also called "consistency"), Paxos defines three properties and ensures the first two are always held, regardless of the pattern of failures: Note that Paxos is "not" guaranteed to terminate, and thus does not have the liveness property. This is supported by the Fischer Lynch Paterson impossibility result (FLP) which states that a consistency protocol can only have two of "safety", "liveness", and "fault tolerance". As Paxos's point is to ensure fault tolerance and it guarantees safety, it cannot also guarantee liveness. Typical deployment. In most deployments of Paxos, each participating process acts in three roles; Proposer, Acceptor and Learner. This reduces the message complexity significantly, without sacrificing correctness: By merging roles, the protocol "collapses" into an efficient client-master-replica style deployment, typical of the database community. The benefit of the Paxos protocols (including implementations with merged roles) is the guarantee of its safety properties. A typical implementation's message flow is covered in the section Multi-Paxos. Basic Paxos. This protocol is the most basic of the Paxos family. Each "instance" (or "execution") of the basic Paxos protocol decides on a single output value. The protocol proceeds over several rounds. A successful round has 2 phases: phase 1 (which is divided into parts "a" and "b") and phase 2 (which is divided into parts "a" and "b"). See below the description of the phases. Remember that we assume an asynchronous model, so e.g. a processor may be in one phase while another processor may be in another. A Proposer creates a message, which we call a Prepare. The message is identified with unique a number, "n", which must be greater than any number previously used in a Prepare message by this Proposer. Note that "n" is not the value to be proposed; it is simply a unique identifier of this initial message by the Proposer. In fact, the Prepare message needn't contain the proposed value (often denoted by "v"). The Proposer chooses at least a Quorum of Acceptors and sends the Prepare message containing "n" to them. A Proposer should not initiate Paxos if it cannot communicate with enough Acceptors to constitute a Quorum. The Acceptors wait for a Prepare message from any of the Proposers. When an Acceptor receives a Prepare message, the Acceptor must examine the identifier number, "n", of that message. There are two cases: If a Proposer receives Promises from a Quorum of Acceptors, it needs to set a value "v" to its proposal. 
If any Acceptors had previously accepted any proposal, then they'll have sent their values to the Proposer, who now must set the value of its proposal, "v", to the value associated with the highest proposal number reported by the Acceptors, let's call it "z". If none of the Acceptors had accepted a proposal up to this point, then the Proposer may choose the value it originally wanted to propose, say "x". The Proposer sends an "Accept" message, "(n, v)", to a Quorum of Acceptors with the chosen value for its proposal, v, and the proposal number "n" (which is the same as the number contained in the "Prepare" message previously sent to the Acceptors). So, the "Accept" message is either "(n, v=z)" or, in case none of the Acceptors previously accepted a value, "(n, v=x)". Phase 2. Phase 2a: "Accept". This "Accept" message should be interpreted as a "request", as in "Accept this proposal, please!". If an Acceptor receives an Accept message, "(n, v)", from a Proposer, it must accept it if and only if it has "not" already promised (in Phase 1b of the Paxos protocol) to only consider proposals having an identifier greater than "n". If the Acceptor has not already promised (in Phase 1b) to only consider proposals having an identifier greater than "n", it should register the value "v" (of the just received "Accept" message) as the accepted value (of the Protocol), and send an "Accepted" message to the Proposer and every Learner (which can typically be the Proposers themselves. Learners will learn the decided value ONLY AFTER receiving Accepted messages from a majority of acceptors, which means, NOT after receiving just the FIRST Accept message). Else, it can ignore the Accept message or request. Phase 2b: "Accepted". Note that consensus is achieved when a majority of Acceptors accept the same "identifier number" (rather than the same "value"). Because each identifier is unique to a Proposer and only one value may be proposed per identifier, all Acceptors that accept the same identifier thereby accept the same value. These facts result in a few counter-intuitive scenarios that do not impact correctness: Acceptors can accept multiple values, a value may achieve a majority across Acceptors (with different identifiers) only to later be changed, and Acceptors may continue to accept proposals after an identifier has achieved a majority. However, the Paxos protocol guarantees that consensus is permanent and the chosen value is immutable. Rounds fail when multiple Proposers send conflicting "Prepare" messages, or when the Proposer does not receive a Quorum of responses ("Promise" or "Accepted"). In these cases, another round must be started with a higher proposal number. Paxos can be used to select a leader. Notice that a Proposer in Paxos could propose "I am the leader," (or, for example, "Proposer X is the leader"). Because of the agreement and validity guarantees of Paxos, if accepted by a Quorum, then the Proposer is now known to be the leader to all other nodes. This satisfies the needs of leader election because there is a single node believing it is the leader and a single node known to be the leader at all times. Graphic representation of the flow of messages in the basic Paxos. The following diagrams represent several cases/situations of the application of the Basic Paxos protocol. Some cases show how the Basic Paxos protocol copes with the failure of certain (redundant) components of the distributed system. 
Note that the values returned in the "Promise" message are "null" the first time a proposal is made (since no Acceptor has accepted a value before in this round). Basic Paxos without failures. In the diagram below, there is 1 Client, 1 Proposer, 3 Acceptors (i.e. the Quorum size is 3) and 2 Learners (represented by the 2 vertical lines). This diagram represents the case of a first round, which is successful (i.e. no process in the network fails). Here, V is the last of (Va, Vb, Vc). Error cases in basic Paxos. The simplest error cases are the failure of an Acceptor (when a Quorum of Acceptors remains alive) and failure of a redundant Learner. In these cases, the protocol requires no "recovery" (i.e. it still succeeds): no additional rounds or messages are required, as shown below (in the next two diagrams/cases). Basic Paxos when an Acceptor fails. In the following diagram, one of the Acceptors in the Quorum fails, so the Quorum size becomes 2. In this case, the Basic Paxos protocol still succeeds. Basic Paxos when a redundant learner fails. In the following case, one of the (redundant) Learners fails, but the Basic Paxos protocol still succeeds. Basic Paxos when a Proposer fails. In this case, a Proposer fails after proposing a value, but before the agreement is reached. Specifically, it fails in the middle of the Accept message, so only one Acceptor of the Quorum receives the value. Meanwhile, a new Leader (a Proposer) is elected (but this is not shown in detail). Note that there are 2 rounds in this case (rounds proceed vertically, from the top to the bottom). Basic Paxos when multiple Proposers conflict. The most complex case is when multiple Proposers believe themselves to be Leaders. For instance, the current leader may fail and later recover, but the other Proposers have already re-selected a new leader. The recovered leader has not learned this yet and attempts to begin one round in conflict with the current leader. In the diagram below, 4 unsuccessful rounds are shown, but there could be more (as suggested at the bottom of the diagram). Basic Paxos where an Acceptor accepts Two Different Values. In the following case, one Proposer achieves acceptance of value V1 by one Acceptor before failing. A new Proposer prepares the Acceptors that never accepted V1, allowing it to propose V2. Then V2 is accepted by all Acceptors, including the one that initially accepted V1. Basic Paxos where a multi-identifier majority is insufficient. In the following case, one Proposer achieves acceptance of value V1 of one Acceptor before failing. A new Proposer prepares the Acceptors that never accepted V1, allowing it to propose V2. This Proposer is able to get one Acceptor to accept V2 before failing. A new Proposer finds a majority that includes the Acceptor that has accepted V1, and must propose it. The Proposer manages to get two Acceptors to accept it before failing. At this point, three Acceptors have accepted V1, but not for the same identifier. Finally, a new Proposer prepares the majority that has not seen the largest accepted identifier. The value associated with the largest identifier in that majority is V2, so it must propose it. This Proposer then gets all Acceptors to accept V2, achieving consensus. Basic Paxos where new Proposers cannot change an existing consensus. In the following case, one Proposer achieves acceptance of value V1 of two Acceptors before failing. 
A new Proposer may start another round, but it is now impossible for that proposer to prepare a majority that doesn't include at least one Acceptor that has accepted V1. As such, even though the Proposer doesn't see the existing consensus, the Proposer's only option is to propose the value already agreed upon. New Proposers can continually increase the identifier to restart the process, but the consensus can never be changed. Multi-Paxos. A typical deployment of Paxos requires a continuous stream of agreed values acting as commands to a distributed state machine. If each command is the result of a single instance of the Basic Paxos protocol, a significant amount of overhead would result. If the leader is relatively stable, phase 1 becomes unnecessary. Thus, it is possible to skip phase 1 for future instances of the protocol with the same leader. To achieve this, the round number I is included along with each value which is incremented in each round by the same Leader. Multi-Paxos reduces the failure-free message delay (proposal to learning) from 4 delays to 2 delays. Graphic representation of the flow of messages in the Multi-Paxos. Multi-Paxos without failures. In the following diagram, only one instance (or "execution") of the basic Paxos protocol, with an initial Leader (a Proposer), is shown. Note that a Multi-Paxos consists of several instances of the basic Paxos protocol. where V = last of (Va, Vb, Vc). Multi-Paxos when phase 1 can be skipped. In this case, subsequent instances of the basic Paxos protocol (represented by "I+1") use the same leader, so the phase 1 (of these subsequent instances of the basic Paxos protocol), which consist of the Prepare and Promise sub-phases, is skipped. Note that the Leader should be stable, i.e. it should not crash or change. Multi-Paxos when roles are collapsed. A common deployment of the Multi-Paxos consists in collapsing the role of the Proposers, Acceptors and Learners to "Servers". So, in the end, there are only "Clients" and "Servers". The following diagram represents the first "instance" of a basic Paxos protocol, when the roles of the Proposer, Acceptor and Learner are collapsed to a single role, called the "Server". Multi-Paxos when roles are collapsed and the leader is steady. In the subsequent instances of the basic Paxos protocol, with the same leader as in the previous instances of the basic Paxos protocol, the phase 1 can be skipped. Optimisations. A number of optimisations can be performed to reduce the number of exchanged messages, to improve the performance of the protocol, etc. A few of these optimisations are reported below. "We can save messages at the cost of an extra message delay by having a single distinguished learner that informs the other learners when it finds out that a value has been chosen. Acceptors then send "Accepted" messages only to the distinguished learner. In most applications, the roles of leader and distinguished learner are performed by the same processor. "A leader can send its "Prepare" and "Accept!" messages just to a quorum of acceptors. As long as all acceptors in that quorum are working and can communicate with the leader and the learners, there is no need for acceptors not in the quorum to do anything. "Acceptors do not care what value is chosen. They simply respond to "Prepare" and "Accept!" messages to ensure that, despite failures, only a single value can be chosen. 
However, if an acceptor does learn what value has been chosen, it can store the value in stable storage and erase any other information it has saved there. If the acceptor later receives a "Prepare" or "Accept!" message, instead of performing its Phase1b or Phase2b action, it can simply inform the leader of the chosen value. "Instead of sending the value v, the leader can send a hash of v to some acceptors in its "Accept!" messages. A learner will learn that v is chosen if it receives "Accepted" messages for either v or its hash from a quorum of acceptors, and at least one of those messages contains v rather than its hash. However, a leader could receive "Promise" messages that tell it the hash of a value v that it must use in its Phase2a action without telling it the actual value of v. If that happens, the leader cannot execute its Phase2a action until it communicates with some process that knows v." "A proposer can send its proposal only to the leader rather than to all coordinators. However, this requires that the result of the leader-selection algorithm be broadcast to the proposers, which might be expensive. So, it might be better to let the proposer send its proposal to all coordinators. (In that case, only the coordinators themselves need to know who the leader is.) "Instead of each acceptor sending "Accepted" messages to each learner, acceptors can send their "Accepted" messages to the leader and the leader can inform the learners when a value has been chosen. However, this adds an extra message delay. "Finally, observe that phase 1 is unnecessary for round 1 .. The leader of round 1 can begin the round by sending an "Accept!" message with any proposed value." Cheap Paxos. Cheap Paxos extends Basic Paxos to tolerate F failures with F+1 main processors and F auxiliary processors by dynamically reconfiguring after each failure. This reduction in processor requirements comes at the expense of liveness; if too many main processors fail in a short time, the system must halt until the auxiliary processors can reconfigure the system. During stable periods, the auxiliary processors take no part in the protocol. "With only two processors p and q, one processor cannot distinguish failure of the other processor from failure of the communication medium. A third processor is needed. However, that third processor does not have to participate in choosing the sequence of commands. It must take action only in case p or q fails, after which it does nothing while either p or q continues to operate the system by itself. The third processor can therefore be a small/slow/cheap one, or a processor primarily devoted to other tasks." Message flow: Cheap Multi-Paxos. An example involving three main acceptors, one auxiliary acceptor and quorum size of three, showing failure of one main processor and subsequent reconfiguration: Fast Paxos. Fast Paxos generalizes Basic Paxos to reduce end-to-end message delays. In Basic Paxos, the message delay from client request to learning is 3 message delays. Fast Paxos allows 2 message delays, but requires that (1) the system be composed of "3f+ 1" acceptors to tolerate up to "f" faults (instead of the classic 2f+1), and (2) the Client to send its request to multiple destinations. Intuitively, if the leader has no value to propose, then a client could send an "Accept!" message to the Acceptors directly. The Acceptors would respond as in Basic Paxos, sending "Accepted" messages to the leader and every Learner achieving two message delays from Client to Learner. 
If the leader detects a collision, it resolves the collision by sending "Accept!" messages for a new round which are "Accepted" as usual. This coordinated recovery technique requires four message delays from Client to Learner. The final optimization occurs when the leader specifies a recovery technique in advance, allowing the Acceptors to perform the collision recovery themselves. Thus, uncoordinated collision recovery can occur in three message delays (and only two message delays if all Learners are also Acceptors). Generalized Paxos. Generalized consensus explores the relationship between the operations of the replicated state machine and the consensus protocol that implements it. The main discovery involves optimizations of Paxos when conflicting proposals could be applied in any order. i.e., when the proposed operations are commutative operations for the state machine. In such cases, the conflicting operations can both be accepted, avoiding the delays required for resolving conflicts and re-proposing the rejected operations. This concept is further generalized into ever-growing sequences of commutative operations, some of which are known to be stable (and thus may be executed). The protocol tracks these sequences ensuring that all proposed operations of one sequence are stabilized before allowing any operation non-commuting with them to become stable. Example. In order to illustrate Generalized Paxos, the example below shows a message flow between two concurrently executing clients and a replicated state machine implementing read/write operations over two distinct registers A and B. Note that in this table indicates operations which are non-commutative. A possible sequence of operations : Since codice_0 commutes with both codice_1 and codice_2, one possible permutation equivalent to the previous order is the following: In practice, a commute occurs only when operations are proposed concurrently. Performance. The above message flow shows us that Generalized Paxos can leverage operation semantics to avoid collisions when the spontaneous ordering of the network fails. This allows the protocol to be in practice quicker than Fast Paxos. However, when a collision occurs, Generalized Paxos needs two additional round trips to recover. This situation is illustrated with operations WriteB and ReadB in the above schema. In the general case, such round trips are unavoidable and come from the fact that multiple commands can be accepted during a round. This makes the protocol more expensive than Paxos when conflicts are frequent. Hopefully two possible refinements of Generalized Paxos are possible to improve recovery time. Byzantine Paxos. Paxos may also be extended to support arbitrary failures of the participants, including lying, fabrication of messages, collusion with other participants, selective non-participation, etc. These types of failures are called Byzantine failures, after the solution popularized by Lamport. Byzantine Paxos introduced by Castro and Liskov adds an extra message (Verify) which acts to distribute knowledge and verify the actions of the other processors: Message flow: Byzantine Multi-Paxos, steady state. Fast Byzantine Paxos introduced by Martin and Alvisi removes this extra delay, since the client sends commands directly to the Acceptors. Note the "Accepted" message in Fast Byzantine Paxos is sent to all Acceptors and all Learners, while Fast Paxos sends "Accepted" messages only to Learners): Message flow: Fast Byzantine Multi-Paxos, steady state. 
The failure scenario is the same for both protocols; Each Learner waits to receive F+1 identical messages from different Acceptors. If this does not occur, the Acceptors themselves will also be aware of it (since they exchanged each other's messages in the broadcast round), and correct Acceptors will re-broadcast the agreed value: Adapting Paxos for RDMA networks. With the emergence of very high speed reliable datacenter networks that support remote DMA (RDMA), there has been substantial interest in optimizing Paxos to leverage hardware offloading, in which the network interface card and network routers provide reliability and network-layer congestion control, freeing the host CPU for other tasks. The Derecho C++ Paxos library is an open-source Paxos implementation that explores this option. Derecho offers both a classic Paxos, with data durability across full shutdown/restart sequences, and vertical Paxos (atomic multicast), for in-memory replication and state-machine synchronization. The Paxos protocols employed by Derecho needed to be adapted to maximize asynchronous data streaming and remove other sources of delay on the leader's critical path. So doing enables Derecho to sustain the full bidirectional RDMA data rate. In contrast, although traditional Paxos protocols can be migrated to an RDMA network by simply mapping the message send operations to native RDMA operations, doing so leaves round-trip delays on the critical path. In high-speed RDMA networks, even small delays can be large enough to prevent utilization of the full potential bandwidth. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
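The message flow of Basic Paxos described in this article can be illustrated with a small, single-machine simulation. The Python sketch below models one instance (a single decided value) with in-memory Acceptor objects and synchronous message delivery; it is only a sketch under simplifying assumptions, so real deployments must add networking, durable storage of acceptor state, distinct proposal-number ranges per Proposer, and failure handling. The class and function names are illustrative, not taken from any particular implementation.

class Acceptor:
    def __init__(self):
        self.promised_n = None   # highest proposal number promised (Phase 1b)
        self.accepted_n = None   # proposal number of the accepted value, if any
        self.accepted_v = None   # accepted value, if any

    def prepare(self, n):
        # Phase 1b: promise to ignore proposals numbered lower than n,
        # and report any previously accepted (n, v) pair.
        if self.promised_n is None or n > self.promised_n:
            self.promised_n = n
            return ("promise", self.accepted_n, self.accepted_v)
        return ("nack",)

    def accept(self, n, v):
        # Phase 2b: accept unless a promise was made to a higher-numbered proposal.
        if self.promised_n is None or n >= self.promised_n:
            self.promised_n = n
            self.accepted_n = n
            self.accepted_v = v
            return ("accepted", n, v)
        return ("nack",)


def propose(acceptors, n, value):
    # Phase 1a: send Prepare(n) to the acceptors and collect promises.
    promises = [a.prepare(n) for a in acceptors]
    promises = [p for p in promises if p[0] == "promise"]
    quorum = len(acceptors) // 2 + 1
    if len(promises) < quorum:
        return None                       # no quorum; a later round must retry
    # Phase 2a: if any acceptor already accepted a value, adopt the value
    # associated with the highest reported proposal number; otherwise use ours.
    accepted = [(pn, pv) for _, pn, pv in promises if pn is not None]
    v = max(accepted)[1] if accepted else value
    acks = [a.accept(n, v) for a in acceptors]
    if sum(1 for r in acks if r[0] == "accepted") >= quorum:
        return v                          # learners with a quorum of Accepted messages learn v
    return None


acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, n=1, value="A"))   # -> 'A' is chosen
print(propose(acceptors, n=2, value="B"))   # -> still 'A': the chosen value is immutable

The second call demonstrates the safety argument made above: a later Proposer that prepares a quorum must adopt the already-accepted value, so the consensus cannot be changed.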
[ { "math_id": 0, "text": "n=2F+1" }, { "math_id": 1, "text": "F" } ]
https://en.wikipedia.org/wiki?curid=5722488
57225038
Robinson compass mask
In image processing, a Robinson compass mask is a type of compass mask used for edge detection. It has eight major compass orientations, and each mask extracts the edges with respect to its direction. Using compass masks of different directions in combination detects edges at different angles. Technical explanation. The Robinson compass mask is defined by taking a single mask and rotating it to form eight orientations: formula_0 formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 The direction axis is the line of zeros in the matrix. The Robinson compass mask is similar to the Kirsch compass mask, but is simpler to implement. Since the mask coefficients contain only 0, 1, and 2, and masks of opposite directions are negations of each other, only the results of four masks need to be calculated; the other four results are the negations of the first four. An edge, or contour, is a small region where neighboring pixel values differ sharply. Convolving each mask with the image produces a high output value wherever there is a rapid change of pixel value, and thus an edge point is found. The detected edge points line up to form edges. Example. An example of Robinson compass masks applied to the original image shows that the edges in the direction of each mask are enhanced. References. <templatestyles src="Reflist/styles.css" />
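The following Python/NumPy sketch illustrates the procedure described above: it generates the eight masks by circularly shifting the border coefficients of the north mask in 45-degree steps, convolves each mask with the image, and keeps the maximum absolute response at every pixel as the edge strength. The use of scipy.ndimage.convolve and the synthetic test image are illustrative choices, not part of the original description.

import numpy as np
from scipy.ndimage import convolve

def robinson_masks():
    # Border coefficients of the north mask, listed clockwise from the
    # top-left corner; each 45-degree rotation is a circular shift by one.
    ring = np.array([-1, 0, 1, 2, 1, 0, -1, -2])
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (r, c), v in zip(border, np.roll(ring, -k)):
            m[r, c] = v
        masks.append(m)                   # the centre stays zero (the direction axis)
    return masks

def robinson_edges(image):
    # Convolve with all eight masks and keep the strongest absolute response.
    responses = [np.abs(convolve(image.astype(float), m)) for m in robinson_masks()]
    return np.max(responses, axis=0)

# Toy image: a bright square on a dark background gives strong responses
# along the square's borders and zero response in flat regions.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = robinson_edges(img)
print(edges.max(), edges[0, 0])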
[ { "math_id": 0, "text": "\\text{North:}\\begin{bmatrix}\n-1 & 0 & 1 \\\\\n-2 & 0 & 2 \\\\\n-1 & 0 & 1\n\\end{bmatrix}" }, { "math_id": 1, "text": "\\text{North West:}\\begin{bmatrix}\n0 & 1 & 2 \\\\\n-1 & 0 & 1 \\\\\n-2 & -1 & 0\n\\end{bmatrix}" }, { "math_id": 2, "text": "\\text{West:}\\begin{bmatrix}\n1 & 2 & 1 \\\\\n0 & 0 & 0 \\\\\n-1 & -2 & -1\n\\end{bmatrix}" }, { "math_id": 3, "text": "\\text{South West:}\\begin{bmatrix}\n2 & 1 & 0 \\\\\n1 & 0 & -1 \\\\\n0 & -1 & -2\n\\end{bmatrix}" }, { "math_id": 4, "text": "\\text{South:}\\begin{bmatrix}\n1 & 0 & -1 \\\\\n2 & 0 & -2 \\\\\n1 & 0 & -1\n\\end{bmatrix}" }, { "math_id": 5, "text": "\\text{South East:}\\begin{bmatrix}\n0 & -1 & -2 \\\\\n1 & 0 & -1 \\\\\n2 & 1 & 0\n\\end{bmatrix}" }, { "math_id": 6, "text": "\\text{East:}\\begin{bmatrix}\n-1 & -2 & -1 \\\\\n0 & 0 & 0 \\\\\n1 & 2 & 1\n\\end{bmatrix}" }, { "math_id": 7, "text": "\\text{North East:}\\begin{bmatrix}\n-2 & -1 & 0 \\\\\n-1 & 0 & 1 \\\\\n0 & 1 & 2\n\\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=57225038
57226096
Operator monotone function
In linear algebra, the operator monotone function is an important type of real-valued function, fully classified by Charles Löwner in 1934. It is closely allied to the operator concave and operator convex functions, is encountered in operator theory and in matrix theory, and led to the Löwner–Heinz inequality. Definition. A function formula_0 defined on an interval formula_1 is said to be operator monotone if whenever formula_2 and formula_3 are Hermitian matrices (of any size/dimensions) whose eigenvalues all belong to the domain of formula_4 and whose difference formula_5 is a positive semi-definite matrix, then necessarily formula_6 where formula_7 and formula_8 are the values of the matrix function induced by formula_4 (which are matrices of the same size as formula_2 and formula_3). Notation. This definition is frequently expressed with the notation that is now defined. Write formula_9 to indicate that a matrix formula_2 is positive semi-definite and write formula_10 to indicate that the difference formula_5 of two matrices formula_2 and formula_3 satisfies formula_11 (that is, formula_5 is positive semi-definite). With formula_0 and formula_2 as in the theorem's statement, the value of the matrix function formula_7 is the matrix (of the same size as formula_2) defined in terms of formula_2's spectral decomposition formula_12 by formula_13 where the formula_14 are the eigenvalues of formula_2 with corresponding projectors formula_15 The definition of an operator monotone function may now be restated as: A function formula_0 defined on an interval formula_1 is said to be "operator monotone" if (and only if) for all positive integers formula_16 and all formula_17 Hermitian matrices formula_2 and formula_3 with eigenvalues in formula_18 if formula_10 then formula_19 References. <templatestyles src="Reflist/styles.css" />
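The definition can be explored numerically. The Python/NumPy sketch below builds the matrix function from the spectral decomposition exactly as defined above and checks, on random positive semi-definite matrices, that the square-root function preserves the Löwner order, in line with the Löwner–Heinz inequality. The random test, the small ridge added for numerical safety, and the tolerance are illustrative assumptions, and a finite random check is of course not a proof.

import numpy as np

def matrix_function(f, A):
    # f(A) = sum_j f(lambda_j) P_j, via the spectral decomposition of Hermitian A.
    eigvals, eigvecs = np.linalg.eigh(A)
    return eigvecs @ np.diag(f(eigvals)) @ eigvecs.T.conj()

def is_psd(M, tol=1e-9):
    # A Hermitian matrix is positive semi-definite iff its smallest eigenvalue is >= 0.
    return np.linalg.eigvalsh((M + M.T.conj()) / 2).min() >= -tol

rng = np.random.default_rng(1)
f = np.sqrt          # t -> sqrt(t) is operator monotone on [0, infinity)
for _ in range(200):
    n = rng.integers(2, 6)
    X = rng.normal(size=(n, n))
    Y = rng.normal(size=(n, n))
    B = X @ X.T + 1e-6 * np.eye(n)   # random PSD matrix; the ridge keeps eigenvalues positive
    A = B + Y @ Y.T                  # A - B is positive semi-definite, so A >= B
    assert is_psd(A - B)
    assert is_psd(matrix_function(f, A) - matrix_function(f, B))
print("the square root preserved the Loewner order in all sampled cases")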
[ { "math_id": 0, "text": "f : I \\to \\Reals" }, { "math_id": 1, "text": "I \\subseteq \\Reals" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "B" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "A - B" }, { "math_id": 6, "text": "f(A) - f(B) \\geq 0" }, { "math_id": 7, "text": "f(A)" }, { "math_id": 8, "text": "f(B)" }, { "math_id": 9, "text": "A \\geq 0" }, { "math_id": 10, "text": "A \\geq B" }, { "math_id": 11, "text": "A - B \\geq 0" }, { "math_id": 12, "text": "A = \\sum_j \\lambda_j P_j" }, { "math_id": 13, "text": "f(A) = \\sum_j f(\\lambda_j)P_j ~," }, { "math_id": 14, "text": "\\lambda_j" }, { "math_id": 15, "text": "P_j." }, { "math_id": 16, "text": "n," }, { "math_id": 17, "text": "n \\times n" }, { "math_id": 18, "text": "I," }, { "math_id": 19, "text": "f(A) \\geq f(B)." } ]
https://en.wikipedia.org/wiki?curid=57226096
57231802
Point-normal triangle
The curved point-normal triangle, PN triangle for short, is an interpolation algorithm that constructs a cubic Bézier triangle from the vertex coordinates and normal vectors of a flat triangle. The PN triangle retains the vertices of the flat triangle as well as the corresponding normals. For computer graphics applications, a linear or quadratic interpolant of the normals is additionally created; when rendering, it supplies an incorrect but plausible normal, giving the impression of smooth transitions between adjacent PN triangles. Using PN triangles allows triangle-based surfaces to be visualized with a smoother shape at low cost in terms of rendering complexity and time. Mathematical formulation. From the given vertex positions formula_0 of a flat triangle and the corresponding normal vectors formula_1 at the vertices, a cubic Bézier triangle is constructed. In contrast to the notation of the Bézier triangle article, the nomenclature follows G. Farin (2002); we therefore denote the 10 control points as formula_2, with the non-negative indices satisfying formula_3. The first three control points are equal to the given vertices: formula_4 The six control points related to the triangle edges, i.e. those with formula_5, are computed as formula_6 This definition ensures that the original vertex normals are reproduced in the interpolated triangle. Finally, the internal control point formula_7 is derived from the previously calculated control points as formula_8 An alternative interior control point formula_9 was suggested in.
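The control-point construction above translates directly into code. The following Python/NumPy sketch computes the ten control points of a PN triangle from three vertices and their normals according to the formulas given in this article; the helper function name and the example triangle and normals at the end are arbitrary illustrative choices.

import numpy as np

def pn_control_points(P1, P2, P3, N1, N2, N3):
    # Vertex control points coincide with the flat triangle's vertices.
    b300, b030, b003 = P1, P2, P3

    def edge_point(Pi, Pj, Ni):
        # (2*Pi + Pj - w*Ni)/3 with w = (Pj - Pi) . Ni; for a unit normal Ni this is
        # the point one third of the way from Pi to Pj projected into the tangent plane at Pi.
        w = np.dot(Pj - Pi, Ni)
        return (2.0 * Pi + Pj - w * Ni) / 3.0

    b210 = edge_point(P1, P2, N1)
    b120 = edge_point(P2, P1, N2)
    b021 = edge_point(P2, P3, N2)
    b012 = edge_point(P3, P2, N3)
    b102 = edge_point(P3, P1, N3)
    b201 = edge_point(P1, P3, N1)

    # Interior control point: E is the mean of the six edge control points,
    # V the centroid of the vertices, and b111 = E + (E - V)/2.
    E = (b012 + b021 + b102 + b201 + b120 + b210) / 6.0
    V = (P1 + P2 + P3) / 3.0
    b111 = E + (E - V) / 2.0
    return {"300": b300, "030": b030, "003": b003,
            "210": b210, "120": b120, "021": b021,
            "012": b012, "102": b102, "201": b201, "111": b111}

P1, P2, P3 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
N1 = np.array([0.0, 0.0, 1.0])
N2 = np.array([0.3, 0.0, 1.0]); N2 /= np.linalg.norm(N2)
N3 = np.array([0.0, 0.3, 1.0]); N3 /= np.linalg.norm(N3)
print(pn_control_points(P1, P2, P3, N1, N2, N3)["111"])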
[ { "math_id": 0, "text": "\\mathbf{P}_{1},\\mathbf{P}_{2},\\mathbf{P}_{3} \\in \\mathbb{R}^{3}\n" }, { "math_id": 1, "text": "\\mathbf{N}_{1},\n\\mathbf{N}_{2},\n\\mathbf{N}_{3}\n" }, { "math_id": 2, "text": "\\mathbf{b}_{ijk}\n" }, { "math_id": 3, "text": "i+j+k = 3\n" }, { "math_id": 4, "text": "\\begin{align}\\mathbf{b}_{300} &= \\mathbf{P}_{1}, & \\mathbf{b}_{030} &= \\mathbf{P}_{2}, & \\mathbf{b}_{003} &= \\mathbf{P}_{3}\\end{align}" }, { "math_id": 5, "text": "i,j,k = \\left\\{0,1,2\\right\\}\n" }, { "math_id": 6, "text": "\\begin{align}\n\\mathbf{b}_{012} &= \\frac{1}{3} \\left( 2 \\mathbf{P}_{3} + \\mathbf{P}_{2} - \\omega_{32}\\mathbf{N}_{3}\\right), &\n\\mathbf{b}_{021} &= \\frac{1}{3} \\left( 2 \\mathbf{P}_{2} + \\mathbf{P}_{3} - \\omega_{23}\\mathbf{N}_{2}\\right), &&\\\\\n\\mathbf{b}_{102} &= \\frac{1}{3} \\left( 2 \\mathbf{P}_{3} + \\mathbf{P}_{1} - \\omega_{31}\\mathbf{N}_{3}\\right), &\n\\mathbf{b}_{201} &= \\frac{1}{3} \\left( 2 \\mathbf{P}_{1} + \\mathbf{P}_{3} - \\omega_{13}\\mathbf{N}_{1}\\right), &&\\\\\n\\mathbf{b}_{120} &= \\frac{1}{3} \\left( 2 \\mathbf{P}_{2} + \\mathbf{P}_{1} - \\omega_{21}\\mathbf{N}_{2}\\right), &\n\\mathbf{b}_{210} &= \\frac{1}{3} \\left( 2 \\mathbf{P}_{1} + \\mathbf{P}_{2} - \\omega_{12}\\mathbf{N}_{1}\\right) &\n\\qquad \\text{with} \\quad \\omega_{ij} &= \\left( \\mathbf{P}_{j} - \\mathbf{P}_{i} \\right) \\cdot \\mathbf{N}_{i}.\\\\\n\\end{align}\n" }, { "math_id": 7, "text": "(i=j=k=1)\n" }, { "math_id": 8, "text": "\\begin{align}\n\\mathbf{b}_{111} &= \\mathbf{E} + \\frac{1}{2} \\left(\\mathbf{E} - \\mathbf{V}\\right)\\\\\n\\text{with}&\\quad \\mathbf{E} = \\frac{1}{6} \\left(\\mathbf{b}_{012} + \\mathbf{b}_{021} + \\mathbf{b}_{102} + \\mathbf{b}_{201} + \\mathbf{b}_{120} + \\mathbf{b}_{210}\\right)\\\\\n\\text{and}&\\quad \\mathbf{V} = \\frac{1}{3} \\left(\\mathbf{P}_{1} + \\mathbf{P}_{2}+ \\mathbf{P}_{3}\\right).\n\\end{align}" }, { "math_id": 9, "text": "\\begin{align}\n\\mathbf{b}_{111} &= \\mathbf{E} + 5 \\left(\\mathbf{E} - \\mathbf{V}\\right)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=57231802
5723244
Treatise on Natural Philosophy
Book by William Thomson
Treatise on Natural Philosophy was an 1867 textbook by William Thomson (later Lord Kelvin) and Peter Guthrie Tait, published by Oxford University Press. The "Treatise" was often referred to as formula_0 and "formula_1", as explained by Alexander Macfarlane: Maxwell had facetiously referred to Thomson as formula_0 and Tait as formula_1. Hence the "Treatise on Natural Philosophy" came to be commonly referred to as formula_0 "and formula_1" in conversation with mathematicians. Reception. The first volume was received by an enthusiastic review in Saturday Review: The grand result of all concurrent research in modern times has been to confirm what was but perhaps a dream of genius, or an instinct of the keen Greek intellect, that all the operations of nature are rooted and grounded in number and figure. The Treatise was also reviewed as "Elements of Natural Philosophy" (1873). Thomson & Tait's "Treatise on Natural Philosophy" was reviewed by J. C. Maxwell in Nature of 3 July 1879, indicating the importance given to kinematics: "The guiding idea … is that geometry itself is part of the science of motion." In 1892 Karl Pearson noted that formula_0 and "formula_1" perpetuated a "subjectivity of force" that originated with Newton. In 1902 Alexander Macfarlane ascribed much of the inspiration of the book to William Rankine's 1865 paper "Outlines of the Science of Energetics": The main object of Thomson and Tait's "Treatise on Natural Philosophy" was to fill up Rankine's outlines, — expound all branches of physics from the standpoint of the doctrine of energy. The plan contemplated four volumes; the printing of the first volume began in 1862 and was completed in 1867. The other three volumes never appeared. When a second edition was called for, the matter of the first volume was increased by a number of appendices and appeared as two separately bound parts. The volume which did appear, although judged rather difficult reading even by accomplished mathematicians, has achieved great success. It has been translated into French and German; it has educated the new generation of mathematical physicists; and it has been styled the "Principia" of the nineteenth century. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "T^1" } ]
https://en.wikipedia.org/wiki?curid=5723244
57235111
Crinkled arc
In mathematics, and in particular the study of Hilbert spaces, a crinkled arc is a type of continuous curve. The concept is usually credited to Paul Halmos. Specifically, consider formula_0 where formula_1 is a Hilbert space with inner product formula_2 We say that formula_3 is a crinkled arc if it is continuous and possesses the "crinkly" property: if formula_4 then formula_5 that is, the chords formula_6 and formula_7 are orthogonal whenever the intervals formula_8 and formula_9 are non-overlapping. Halmos points out that if two nonoverlapping chords are orthogonal, then "the curve makes a right-angle turn during the passage between the chords' farthest end-points" and observes that such a curve would "seem to be making a sudden right angle turn at each point" which would justify the choice of terminology. Halmos deduces that such a curve could not have a tangent at any point, and uses the concept to justify his statement that an infinite-dimensional Hilbert space is "even roomier than it looks". Writing in 1975, Richard Vitale considers Halmos's empirical observation that every attempt to construct a crinkled arc results in essentially the same solution and proves that formula_10 is a crinkled arc if and only if, after appropriate normalizations, formula_11 where formula_12 is an orthonormal set. The normalizations that need to be allowed are the following: a) Replace the Hilbert space "H" by its smallest closed subspace containing all the values of the crinkled arc; b) uniform scalings; c) translations; d) reparametrizations. Now use these normalizations to define an equivalence relation on crinkled arcs: two crinkled arcs are equivalent if they become identical after some sequence of such normalizations. Then there is just one equivalence class, and Vitale's formula describes a canonical example.
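Code example. As a numerical illustration added here (not part of the sources above), the following Python/NumPy sketch represents a truncation of Vitale's canonical crinkled arc in the coordinates of the orthonormal set and checks the crinkly property: chords over non-overlapping intervals have inner product close to zero (exactly zero in the untruncated series), while overlapping chords generally do not. The number of retained terms is an arbitrary choice.
import numpy as np

n_terms = 5000                                   # truncation of the series (arbitrary choice)
k = (np.arange(1, n_terms + 1) - 0.5) * np.pi

def coords(t):
    # coordinates of f(t) with respect to the orthonormal set (x_n)
    return np.sqrt(2.0) * np.sin(k * t) / k

a, b, c, d = 0.1, 0.3, 0.5, 0.9                  # the intervals [a,b] and [c,d] do not overlap
print(np.dot(coords(b) - coords(a), coords(d) - coords(c)))   # approximately 0
print(np.dot(coords(b) - coords(a), coords(d) - coords(a)))   # overlapping intervals: approximately b - a = 0.2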
[ { "math_id": 0, "text": "f\\colon [0,1] \\to X," }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\langle \\cdot, \\cdot \\rangle." }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "0 \\leq a < b\\leq c < d \\leq 1" }, { "math_id": 5, "text": "\\langle f(b)-f(a),f(d)-f(c)\\rangle=0," }, { "math_id": 6, "text": "f(b)-f(a)" }, { "math_id": 7, "text": "f(d)-f(c)" }, { "math_id": 8, "text": "[a,b]" }, { "math_id": 9, "text": "[c,d]" }, { "math_id": 10, "text": "f(t)" }, { "math_id": 11, "text": "\nf(t) = \\sqrt{2}\\, \\sum_{n=1}^{\\infty} x_n\n\\frac{\\sin(n-1/2)\\pi t}{(n - 1/2)\\pi}\n" }, { "math_id": 12, "text": "\\left(x_n\\right)_n" } ]
https://en.wikipedia.org/wiki?curid=57235111
572352
Complete partial order
In mathematics, the phrase complete partial order is variously used to refer to at least three similar, but distinct, classes of partially ordered sets, characterized by particular completeness properties. Complete partial orders play a central role in theoretical computer science: in denotational semantics and domain theory. Definitions. The term complete partial order, abbreviated cpo, has several possible meanings depending on context. A partially ordered set is a directed-complete partial order (dcpo) if each of its directed subsets has a supremum. (A subset of a partial order is directed if it is non-empty and every pair of elements has an upper bound in the subset.) In the literature, dcpos sometimes also appear under the label up-complete poset. A pointed directed-complete partial order (pointed dcpo, sometimes abbreviated cppo) is a dcpo with a least element (usually denoted formula_0). Formulated differently, a pointed dcpo has a supremum for every directed "or empty" subset. The term chain-complete partial order is also used, because of the characterization of pointed dcpos as posets in which every chain has a supremum. A related notion is that of ω-complete partial order (ω-cpo). These are posets in which every ω-chain (formula_1) has a supremum that belongs to the poset. The same notion can be extended to other cardinalities of chains. Every dcpo is an ω-cpo, since every ω-chain is a directed set, but the converse is not true. However, every ω-cpo with a basis is also a dcpo (with the same basis). An ω-cpo (dcpo) with a basis is also called a continuous ω-cpo (or continuous dcpo). Note that "complete partial order" is never used to mean a poset in which "all" subsets have suprema; the terminology complete lattice is used for this concept. Requiring the existence of directed suprema can be motivated by viewing directed sets as generalized approximation sequences and suprema as "limits" of the respective (approximative) computations. This intuition, in the context of denotational semantics, was the motivation behind the development of domain theory. The dual notion of a directed-complete partial order is called a filtered-complete partial order. However, this concept occurs far less frequently in practice, since one usually can work on the dual order explicitly. By analogy with the Dedekind–MacNeille completion of a partially ordered set, every partially ordered set can be extended uniquely to a minimal dcpo. Characterizations. An ordered set is a dcpo if and only if every non-empty chain has a supremum. As a corollary, an ordered set is a pointed dcpo if and only if every (possibly empty) chain has a supremum, i.e., if and only if it is chain-complete. Proofs rely on the axiom of choice. Alternatively, an ordered set formula_2 is a pointed dcpo if and only if every order-preserving self-map of formula_2 has a least fixpoint. Continuous functions and fixed-points. A function "f" between two dcpos "P" and "Q" is called (Scott) continuous if it maps directed sets to directed sets while preserving their suprema: the image formula_3 is directed for every directed formula_4, and formula_5 for every such formula_4. Note that every continuous function between dcpos is a monotone function. This notion of continuity is equivalent to the topological continuity induced by the Scott topology. The set of all continuous functions between two dcpos "P" and "Q" is denoted ["P" → "Q"]. Equipped with the pointwise order, this is again a dcpo, and a cpo whenever "Q" is a cpo. Thus the complete partial orders with Scott-continuous maps form a cartesian closed category.
Every order-preserving self-map "f" of a cpo ("P", ⊥) has a least fixed-point. If "f" is continuous then this fixed-point is equal to the supremum of the iterates (⊥, "f"(⊥), "f"("f"(⊥)), … "f" "n"(⊥), …) of ⊥ (see also the Kleene fixed-point theorem). Another fixed point theorem is the Bourbaki–Witt theorem, stating that if formula_6 is a function from a dcpo to itself with the property that formula_7 for all formula_8, then formula_6 has a fixed point. This theorem, in turn, can be used to prove that Zorn's lemma is a consequence of the axiom of choice. See also. Directed completeness alone is quite a basic property that occurs often in other order-theoretic investigations, using for instance algebraic posets and the Scott topology. Directed completeness relates in various ways to other completeness notions such as chain completeness.
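Code example. As an elementary illustration added here (the example graph and the map f are arbitrary choices), the powerset of a finite set ordered by inclusion is a pointed dcpo with the empty set as least element, and the least fixed point of a continuous self-map can be computed as the supremum of the Kleene iterates, as in the following Python sketch.
edges = {"a": {"b"}, "b": {"c"}, "c": set(), "d": {"a"}}     # an arbitrary example graph

def f(S, start="a"):
    # monotone (indeed Scott-continuous) self-map of the powerset of the node set:
    # add the start node and every node reachable in one step from S
    return frozenset({start} | {v for u in S for v in edges[u]})

S = frozenset()              # the least element (bottom) of the powerset lattice
while f(S) != S:             # climb the Kleene chain bottom <= f(bottom) <= f(f(bottom)) <= ...
    S = f(S)
print(sorted(S))             # ['a', 'b', 'c']: the least fixed point, here the nodes reachable from 'a'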
[ { "math_id": 0, "text": "\\bot" }, { "math_id": 1, "text": "x_1 \\leq x_2 \\leq x_3 \\leq ..." }, { "math_id": 2, "text": "P" }, { "math_id": 3, "text": "f(D) \\subseteq Q" }, { "math_id": 4, "text": "D \\subseteq P" }, { "math_id": 5, "text": "f(\\sup D) = \\sup f(D)" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "f(x) \\geq x" }, { "math_id": 8, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=572352
57237765
Ladyzhenskaya–Babuška–Brezzi condition
In numerical partial differential equations, the Ladyzhenskaya–Babuška–Brezzi (LBB) condition is a sufficient condition for a saddle point problem to have a unique solution that depends continuously on the input data. Saddle point problems arise in the discretization of Stokes flow and in the mixed finite element discretization of Poisson's equation. For positive-definite problems, like the unmixed formulation of the Poisson equation, most discretization schemes will converge to the true solution in the limit as the mesh is refined. For saddle point problems, however, many discretizations are unstable, giving rise to artifacts such as spurious oscillations. The LBB condition gives criteria for when a discretization of a saddle point problem is stable. The condition is variously referred to as the LBB condition, the Babuška–Brezzi condition, or the "inf-sup" condition. Saddle point problems. The abstract form of a saddle point problem can be expressed in terms of Hilbert spaces and bilinear forms. Let formula_0 and formula_1 be Hilbert spaces, and let formula_2, formula_3 be bilinear forms. Let formula_4, formula_5 where formula_6, formula_7 are the dual spaces. The saddle-point problem for the pair formula_8, formula_9 is to find a pair of fields formula_10 in formula_0, formula_11 in formula_1 such that, for all formula_12 in formula_0 and formula_13 in formula_1, formula_14 For example, for the Stokes equations on a formula_15-dimensional domain formula_16, the fields are the velocity formula_10 and pressure formula_11, which live in respectively the Sobolev space formula_17 and the Lebesgue space formula_18. The bilinear forms for this problem are formula_19 where formula_20 is the viscosity. Another example is the mixed Laplace equation (in this context also sometimes called the Darcy equations) where the fields are again the velocity formula_10 and pressure formula_11, which live in the spaces formula_21 and formula_18, respectively. Here, the bilinear forms for the problem are formula_22 where formula_23 is the inverse of the permeability tensor. Statement of the theorem. Suppose that formula_8 and formula_9 are both continuous bilinear forms, and moreover that formula_8 is coercive on the kernel of formula_9: formula_24 for all formula_12 such that formula_25 for all formula_26. If formula_9 satisfies the inf–sup or Ladyzhenskaya–Babuška–Brezzi condition formula_27 for all formula_13 and for some formula_28, then there exists a unique solution formula_29 of the saddle-point problem. Moreover, there exists a constant formula_30 such that formula_31 The alternative name of the condition, the "inf-sup" condition, comes from the fact that by dividing by formula_32, one arrives at the statement formula_33 Since this has to hold for all formula_26 and since the right hand side does not depend on formula_13, we can take the infimum over all formula_13 on the left side and can rewrite the condition equivalently as formula_34 Connection to infinite-dimensional optimization problems. Saddle point problems such as those shown above are frequently associated with infinite-dimensional optimization problems with constraints. 
For example, the Stokes equations result from minimizing the dissipation formula_35 subject to the incompressibility constraint formula_36 Using the usual approach to constrained optimization problems, one can form a Lagrangian formula_37 The optimality conditions (Karush–Kuhn–Tucker conditions), that is, the first-order necessary conditions, that correspond to this problem are then obtained by variation of formula_38 with regard to formula_10, formula_39 and by variation of formula_40 with regard to formula_41: formula_42 This is exactly the variational form of the Stokes equations shown above, with formula_43 formula_44 The inf-sup conditions can in this context then be understood as the infinite-dimensional equivalent of the constraint qualification (specifically, the LICQ) conditions necessary to guarantee that a minimizer of the constrained optimization problem also satisfies the first-order necessary conditions represented by the saddle point problem shown previously. In this context, the inf-sup conditions can be interpreted as saying that relative to the size of the space formula_0 of state variables formula_10, the number of constraints (as represented by the size of the space formula_1 of Lagrange multipliers formula_41) must be sufficiently small. Alternatively, they can be seen as requiring that the size of the space formula_0 of state variables formula_10 must be sufficiently large compared to the size of the space formula_1 of Lagrange multipliers formula_41.
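Code example. For a concrete finite-dimensional illustration (an addition to this article, with notation chosen here), the discrete inf-sup constant of a given discretization can be computed from the matrix of the bilinear form b and the Gram matrices of the norms on V and Q: it is the square root of the smallest eigenvalue of the generalized eigenvalue problem B X^{-1} B^T q = λ M q. The Python/SciPy sketch below uses random placeholder matrices standing in for an actual finite element assembly.
import numpy as np
from scipy.linalg import eigh, solve

def inf_sup_constant(B, X, M):
    # B: matrix of b(., .) with respect to bases of V_h (columns) and Q_h (rows)
    # X: Gram matrix of the V-norm (symmetric positive definite)
    # M: Gram matrix of the Q-norm (symmetric positive definite)
    S = B @ solve(X, B.T)                        # B X^{-1} B^T
    lam = eigh(S, M, eigvals_only=True)          # generalized eigenvalues of S q = lam M q
    return float(np.sqrt(max(lam.min(), 0.0)))   # discrete inf-sup constant beta_h

rng = np.random.default_rng(0)                   # placeholder data (not a real discretization)
n_v, n_q = 12, 5
A = rng.standard_normal((n_v, n_v)); X = A @ A.T + n_v * np.eye(n_v)
C = rng.standard_normal((n_q, n_q)); M = C @ C.T + n_q * np.eye(n_q)
B = rng.standard_normal((n_q, n_v))
print(inf_sup_constant(B, X, M))                 # a stable family of discretizations keeps this bounded away from 0 under mesh refinement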
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "a : V \\times V \\to \\mathbb{R}" }, { "math_id": 3, "text": "b : V \\times Q \\to \\mathbb{R}" }, { "math_id": 4, "text": "f \\in V^*" }, { "math_id": 5, "text": "g \\in Q^*" }, { "math_id": 6, "text": "V^*" }, { "math_id": 7, "text": "Q^*" }, { "math_id": 8, "text": "a" }, { "math_id": 9, "text": "b" }, { "math_id": 10, "text": "u" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "v" }, { "math_id": 13, "text": "q" }, { "math_id": 14, "text": "\\begin{align}a(u, v) + b(v, p) & = \\langle f, v\\rangle \\\\\nb(u, q) & = \\langle g, q\\rangle.\\end{align}" }, { "math_id": 15, "text": "d" }, { "math_id": 16, "text": "\\Omega" }, { "math_id": 17, "text": "H^1(\\Omega)^d" }, { "math_id": 18, "text": "L^2(\\Omega)" }, { "math_id": 19, "text": "\\begin{align}a(u, v) & = \\int_\\Omega \\mu\\nabla u : \\nabla v\\,dx \\\\\nb(u, q) & = \\int_\\Omega (\\nabla\\cdot u)q\\,dx,\\end{align}" }, { "math_id": 20, "text": "\\mu" }, { "math_id": 21, "text": "H_\\text{div}(\\Omega)^d" }, { "math_id": 22, "text": "\\begin{align}a(u, v) & = \\int_\\Omega u \\cdot K^{-1} v\\,dx \\\\\nb(u, q) & = \\int_\\Omega (\\nabla\\cdot u)q\\,dx,\\end{align}" }, { "math_id": 23, "text": "K^{-1}" }, { "math_id": 24, "text": "a(v, v) \\ge \\alpha\\|v\\|_V^2" }, { "math_id": 25, "text": "b(v, q) = 0" }, { "math_id": 26, "text": "q \\in Q" }, { "math_id": 27, "text": "\\sup_{v \\in V, v\\neq 0}\\frac{b(v, q)}{\\|v\\|_V} \\ge \\beta\\|q\\|_Q" }, { "math_id": 28, "text": "\\beta>0" }, { "math_id": 29, "text": "u, p" }, { "math_id": 30, "text": "C" }, { "math_id": 31, "text": "\\|u\\|_V + \\|p\\|_Q \\le C(\\|f\\|_{V^*} + \\|g\\|_{Q^*})." }, { "math_id": 32, "text": "\\|q\\|_Q" }, { "math_id": 33, "text": "\\sup_{v \\in V, v\\neq 0}\\frac{b(v, q)}{\\|v\\|_V \\|q\\|_Q} \\ge \\beta." }, { "math_id": 34, "text": "\\inf_{q\\in Q, q\\neq 0} \\sup_{v \\in V, v\\neq 0}\\frac{b(v, q)}{\\|v\\|_V \\|q\\|_Q} \\ge \\beta." }, { "math_id": 35, "text": "I(u) = \\int_\\Omega \\left( \\frac 12 \\mu |\\nabla u|^2 - f \\cdot u \\right)" }, { "math_id": 36, "text": " \\nabla \\cdot u = 0." }, { "math_id": 37, "text": " L(u,\\lambda) = I(u) - \\left( \\lambda, \\nabla \\cdot u\\right) = \\int_\\Omega \\left( \\frac 12 \\mu |\\nabla u|^2 - f \\cdot u - \\lambda (\\nabla \\cdot u) \\right)." }, { "math_id": 38, "text": "L(u,\n\\lambda)" }, { "math_id": 39, "text": " \\int_\\Omega \\left( \\mu \\nabla u : \\nabla v - f \\cdot v - \\lambda (\\nabla \\cdot v) \\right) = 0 \\qquad \\forall v \\in H^1(\\Omega)^d, " }, { "math_id": 40, "text": "L(u, \\lambda)" }, { "math_id": 41, "text": "\\lambda" }, { "math_id": 42, "text": " - \\int_\\Omega \\left( q (\\nabla \\cdot u) \\right) = 0 \\qquad \\forall q \\in L_2(\\Omega)^d, " }, { "math_id": 43, "text": " a(u,v) := \\int_\\Omega \\left( \\mu \\nabla u : \\nabla v \\right), " }, { "math_id": 44, "text": " b(\\lambda,v) := \\int_\\Omega \\lambda (\\nabla \\cdot v). " } ]
https://en.wikipedia.org/wiki?curid=57237765
572382
Continuity correction
In mathematics, a continuity correction is an adjustment made when a discrete object is approximated using a continuous object. Examples. Binomial. If a random variable "X" has a binomial distribution with parameters "n" and "p", i.e., "X" is distributed as the number of "successes" in "n" independent Bernoulli trials with probability "p" of success on each trial, then formula_0 for any "x" ∈ {0, 1, 2, ... "n"}. If "np" and "np"(1 − "p") are large (sometimes taken as both ≥ 5), then the probability above is fairly well approximated by formula_1 where "Y" is a normally distributed random variable with the same expected value and the same variance as "X", i.e., E("Y") = "np" and var("Y") = "np"(1 − "p"). This addition of 1/2 to "x" is a continuity correction. Poisson. A continuity correction can also be applied when other discrete distributions supported on the integers are approximated by the normal distribution. For example, if "X" has a Poisson distribution with expected value λ then the variance of "X" is also λ, and formula_2 if "Y" is normally distributed with expectation and variance both λ. Applications. Before the ready availability of statistical software that can evaluate probability distribution functions accurately, continuity corrections played an important role in the practical application of statistical tests in which the test statistic has a discrete distribution: they had a special importance for manual calculations. A particular example of this is the binomial test, involving the binomial distribution, as in checking whether a coin is fair. Where extreme accuracy is not necessary, computer calculations for some ranges of parameters may still rely on using continuity corrections to improve accuracy while retaining simplicity.
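Code example. As an illustration added here (with arbitrarily chosen values of n, p and x), the following Python/SciPy snippet compares the exact binomial probability with the normal approximation evaluated with and without the continuity correction.
import numpy as np
from scipy.stats import binom, norm

n, p, x = 40, 0.3, 14
mu, sigma = n * p, np.sqrt(n * p * (1 - p))

exact = binom.cdf(x, n, p)                 # P(X <= x), exact
plain = norm.cdf(x, mu, sigma)             # P(Y <= x), no correction
corrected = norm.cdf(x + 0.5, mu, sigma)   # P(Y <= x + 1/2), with continuity correction
print(exact, plain, corrected)             # the corrected value is usually the closer approximation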
[ { "math_id": 0, "text": "P(X\\leq x) = P(X<x+1)" }, { "math_id": 1, "text": "P(Y\\leq x+1/2)" }, { "math_id": 2, "text": "P(X\\leq x)=P(X<x+1)\\approx P(Y\\leq x+1/2)" } ]
https://en.wikipedia.org/wiki?curid=572382
57243717
Nonlinear frictiophoresis
Nonlinear frictiophoresis is the unidirectional drift of a particle in a medium caused by a periodic driving force with zero mean. The effect is possible due to a nonlinear dependence of the friction/drag force on the particle's velocity. It was discovered theoretically, and is mainly known as nonlinear electrofrictiophoresis. At first glance, a periodic driving force with zero mean can only entrain a particle into an oscillating movement without unidirectional drift, because the integral momentum provided to the particle by the force is zero. The possibility of unidirectional drift can be recognized once one takes into account that the particle itself loses momentum by transferring it further to the medium it moves in. If the friction is nonlinear, then it may happen that the momentum loss during movement in one direction does not equal that in the opposite direction, and this causes unidirectional drift. For this to happen, the time dependence of the driving force must be more complicated than a single sinusoidal harmonic. A simple example - Bingham plastic. Nonlinear friction. The simplest friction-velocity law is Stokes's one: formula_0 where formula_1 is the friction/drag force applied to a particle moving with velocity formula_2 in a medium. The friction-velocity law (1) is observed for a slowly moving spherical particle in a Newtonian fluid. It is linear, see Fig. 1, and is not suitable for nonlinear frictiophoresis to take place. The characteristic property of the law (1) is that any driving force, even a very small one, gets the particle moving. This is not the case for such media as a Bingham plastic. For those media, it is necessary to apply some threshold force, formula_3, to get the particle moving. This kind of friction-velocity (dry friction) law has a jump discontinuity at formula_4: formula_5 It is nonlinear, see Fig. 2, and is used in this example. Periodic driving force. Let formula_6 denote the period of the driving force. Choose a time value formula_7 such that formula_8 and two force values, formula_9, formula_10 such that the following relations are satisfied: formula_11 formula_12 The periodic driving force formula_13 used in this example is as follows: formula_14 It is clear that, due to (3), formula_13 has zero mean: formula_15 See also Fig. 3. Unidirectional drift. For the sake of simplicity, we consider here the physical situation in which inertia may be neglected. This can be achieved if the particle's mass is small, the velocity is low and the friction is high. These conditions have to ensure that formula_16, where formula_17 is the relaxation time. In this situation, a particle driven with force (4) immediately starts moving with constant velocity formula_18 during the interval formula_19 and immediately stops moving during the interval formula_20, see Fig. 4. This results in a positive mean velocity of unidirectional drift: formula_21 Mathematical analysis. An analysis of the possibility of obtaining a nonzero drift from a periodic force with zero integral has been carried out as follows. The dimensionless equation of motion for a particle driven by a periodic force formula_13, formula_22, formula_23 is as follows: formula_24 where the friction/drag force formula_25 satisfies the following: formula_26 It is proven that any solution to (5) settles down onto a periodic regime formula_27, formula_28, which has nonzero mean: formula_29 almost certainly, provided formula_13 is not antiperiodic. For formula_30, two cases of formula_13 have been considered explicitly: 1.
Saw-shaped driving force, see Fig. 5: formula_31 In this case, the first-order in formula_32 approximation to formula_27, denoted formula_33, has the following mean value: formula_34 This estimate is made assuming formula_35. 2. Two-harmonic driving force, formula_36 In this case, the first-order in formula_32 approximation has the following mean value: formula_37 This value is maximized over formula_38 and formula_39, keeping formula_40 constant. Interestingly, the drift value depends on formula_38 and changes its direction twice as formula_38 spans the interval formula_41. Another type of analysis, based on symmetry breaking, also suggests that a zero-mean driving force is able to generate a directed drift. Applications. In applications, the nature of the force formula_13 in (5) is usually electric, similar to the forces acting during standard electrophoresis. The only differences are that the force is periodic and has no constant component. For the effect to show up, the dependence of the friction/drag force on velocity must be nonlinear. This is the case for numerous substances known as non-Newtonian fluids. Among these are gels, dilatant fluids, pseudoplastic fluids, and liquid crystals. Dedicated experiments have determined formula_42 for a standard DNA ladder up to 1500 bp long in 1.5% agarose gel. The dependence found, see Fig. 6, supports the possibility of nonlinear frictiophoresis in such a system. Based on the data in Fig. 6, an optimal time course for the driving electric field with zero mean, formula_43, has been found, which ensures maximal drift for a 1500 bp long fragment, see Fig. 7. The effect of unidirectional drift caused by a periodic force with zero integral value has a peculiar dependence on the time course of the force applied. See the previous section for examples. This offers a new dimension to a set of separation problems. DNA separation with respect to length. In DNA fragment separation, a zero-mean periodic electric field is used in zero-integrated-field electrophoresis (ZIFE), where a field time dependence similar to that shown in Fig. 3 is used. This makes it possible to separate long fragments in agarose gel that are nonseparable by standard constant-field electrophoresis. The long DNA geometry and its manner of movement in a gel, known as reptation, do not allow the consideration based on Eq. (5) above to be applied directly. Separation with respect to specific mass. It has been observed that under certain physical conditions the mechanism described in the Mathematical analysis section above can be used for separation with respect to specific mass, such as for particles made of different isotopes of the same material. Extensions. The idea of organizing directed drift with a zero-mean periodic drive has been developed further for other configurations and other physical mechanisms of nonlinearity. Rotation by means of circular wave. An electric dipole rotating freely around the formula_44-axis in a medium with nonlinear friction can be manipulated by applying an electromagnetic wave polarized circularly along formula_45 and composed of two harmonics. The equation of motion for this system is as follows: formula_46 where formula_47 is the torque acting on the dipole due to the circular wave: formula_48 where formula_49 is the dipole moment component orthogonal to the formula_44-axis and formula_50 defines the dipole direction in the formula_51 plane. By choosing a proper phase shift formula_38 in (6) it is possible to orient the dipole in any desired direction, formula_52.
The direction formula_52 is attained due to an angular directed drift, which becomes zero when formula_53. A small detuning between the first and second harmonic in (6) results in a continuous rotational drift. Modification of potential function. If a particle undergoes a directed drift while moving freely in accordance with Eq. (5), then it drifts similarly if a shallow enough potential field formula_54 is imposed. The equation of motion in that case is: formula_55 where formula_56 is the force due to the potential field. The drift continues until a steep enough region in the course of formula_54 is met, which is able to stop the drift. This kind of behavior, as rigorous mathematical analysis shows, results in a modification of formula_54 by adding a term linear in formula_57. This may change formula_54 qualitatively, e.g. by changing the number of equilibrium points, see Fig. 8. The effect may be essential when a high-frequency electric field acts on biopolymers. Another nonlinearity. For electrophoresis of colloid particles under an electric field of small strength, the force formula_13 on the right-hand side of Eq. (5) is linearly proportional to the strength formula_43 of the electric field applied. For a high strength, the linearity is broken due to nonlinear polarization. As a result, the force may depend nonlinearly on the applied field: formula_58 In the last expression, even if the applied field formula_43 has zero mean, the applied force formula_13 may have a constant component that can cause a directed drift. As above, for this to happen, formula_43 must have more than a single sinusoidal harmonic. The same effect for a liquid in a tube may serve in an electroosmotic pump driven with a zero-mean electric field. References. <templatestyles src="Reflist/styles.css" />
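Code example. The following Python/SciPy sketch is an illustration of Eq. (5) added here, not taken from the references; the parameter values and the phase are arbitrary choices. It integrates the equation of motion with a cubic nonlinearity g(v) = v^3 and a zero-mean two-harmonic drive, then prints the time-averaged velocity over the settled periodic regime, which comes out small but nonzero.
import numpy as np
from scipy.integrate import solve_ivp

lam, eps = 10.0, 0.2                 # linear friction coefficient and cubic correction (illustrative values)
a, b, psi = 20.0, 20.0, 1.0          # zero-mean drive f(t) = a cos(2*pi*t) + b cos(4*pi*t + psi)

def rhs(t, v):
    f = a * np.cos(2 * np.pi * t) + b * np.cos(4 * np.pi * t + psi)
    return f - lam * v + eps * v**3  # Eq. (5) with g(v) = v^3

t_eval = np.linspace(200.0, 300.0, 100001)       # sample after transients, over 100 full periods
sol = solve_ivp(rhs, (0.0, 300.0), [0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12, max_step=0.01)
print(sol.y[0].mean())               # mean drift velocity: small but nonzero, although the drive has zero mean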
[ { "math_id": 0, "text": "(1) \\qquad F_{dr}(v)=\\lambda v," }, { "math_id": 1, "text": " F_{dr}(v) " }, { "math_id": 2, "text": " v " }, { "math_id": 3, "text": " d " }, { "math_id": 4, "text": "v=0" }, { "math_id": 5, "text": "(2) \\qquad F_{dr}(v)=\\lambda v + d\\cdot \\mathrm{sign}(v)." }, { "math_id": 6, "text": "T>0" }, { "math_id": 7, "text": "t_1" }, { "math_id": 8, "text": "0<t_1<T" }, { "math_id": 9, "text": "F^+>0" }, { "math_id": 10, "text": "F^-<0" }, { "math_id": 11, "text": "\n \\qquad \\qquad F^+>d,\\quad |F^-|<d,\n" }, { "math_id": 12, "text": "\n (3) \\qquad F^+ t_1 + F^-(T-t_1)=0.\n" }, { "math_id": 13, "text": "f(t)" }, { "math_id": 14, "text": "\n (4) \\qquad f(t)=\\begin{cases}\nF^+, \\text{ if }0<t\\le t_1,\n\\\\\nF^-, \\text{ if } t_1<t\\le T,\\quad f(t+T)=f(t).\n\\end{cases}\n" }, { "math_id": 15, "text": "\n \\int\\limits_0^T f(t)dt=F^+ t_1 + F^-(T-t_1)=0.\n" }, { "math_id": 16, "text": "\\tau\\ll t_1" }, { "math_id": 17, "text": "\\tau" }, { "math_id": 18, "text": "v^+=\\frac{1}{\\lambda}(F^+-d)" }, { "math_id": 19, "text": "0<t\\le t_1" }, { "math_id": 20, "text": "t_1<t\\le T" }, { "math_id": 21, "text": "\n \\qquad \\qquad \n\\overline{v(t)}=\\frac{1}{T}\\int\\limits_0^T v(t)dt =\n\\frac{t_1}{\\lambda T}(F^+-d)>0.\n" }, { "math_id": 22, "text": "f(t+1)=f(t)" }, { "math_id": 23, "text": "\\int_0^1f(t)dt=0" }, { "math_id": 24, "text": "\n(5)\\qquad \\dot{v}+\\lambda v -\\epsilon g(v) = f(t),\n" }, { "math_id": 25, "text": "F_{dr}(v)=\\lambda v -\\epsilon g(v)" }, { "math_id": 26, "text": "\n\\qquad\\qquad F_{dr}(-v)=-F_{dr}(v),\\quad \\frac{d}{dv}F_{dr}(v)\\ge 0.\n" }, { "math_id": 27, "text": "v^*(t)" }, { "math_id": 28, "text": "v^*(t+1)=v^*(t)" }, { "math_id": 29, "text": "\n\\qquad\\qquad \\overline{v^*(t)}=\\int\\limits_0^1v^*(t)dt\\ne 0,\n" }, { "math_id": 30, "text": "g(v)=v^3" }, { "math_id": 31, "text": "\n\\qquad\\qquad f(t)=at,\\quad t\\in[-1/2;1/2],\\quad f(t+1)=f(t),\\quad t\\in ]-\\infty;\\infty[.\n" }, { "math_id": 32, "text": "\\epsilon" }, { "math_id": 33, "text": "v_1^*(t)" }, { "math_id": 34, "text": "\n\\qquad\\qquad \\Big|\\overline{v^*_1(t)}\\Big|\n \\equiv\\Big|\\int\\limits_0^1v_1^*(t)dt\\Big|\\ge\\frac{2}{3}\\epsilon a^3/\\lambda^5.\n" }, { "math_id": 35, "text": "\\lambda\\gg 1" }, { "math_id": 36, "text": "\n\\qquad\\qquad f(t)=a\\cos(2\\pi t) + b\\cos(4\\pi t+\\psi).\n" }, { "math_id": 37, "text": "\n\\qquad\\qquad \\Big|\\overline{v^*_1(t)}\\Big|=\\Big|\\int\\limits_0^1v_1^*(t)dt\\Big|\n\\ge\\frac{2}{27}\\frac{\\epsilon}{\\lambda}\\left(\\frac{M}{\\lambda}\\right)^3\n\\frac{1}\n {(1+\\frac{4\\pi^2}{\\lambda^2})(1+\\frac{16\\pi^2}{\\lambda^2})^{1/2}}.\n" }, { "math_id": 38, "text": "\\psi" }, { "math_id": 39, "text": "a,b" }, { "math_id": 40, "text": "a+b=M" }, { "math_id": 41, "text": "[0;2\\pi]" }, { "math_id": 42, "text": "F_{dr}(v)" }, { "math_id": 43, "text": "E(t)" }, { "math_id": 44, "text": "z" }, { "math_id": 45, "text": "\\pm z" }, { "math_id": 46, "text": "\n\\qquad \\qquad \\dot{\\omega}+\\lambda \\omega -\\epsilon g(\\omega) = f(t,\\theta(t)),\\quad \\dot{\\theta}=\\omega,\n" }, { "math_id": 47, "text": "f(t,\\theta(t))" }, { "math_id": 48, "text": "\n (6)\\qquad f(t,\\theta(t))\\sim\n |p|(A_1\\cos(\\omega t\\pm\\theta)+A_2\\cos(2\\omega t\\pm\\theta+\\psi)),\n " }, { "math_id": 49, "text": "p" }, { "math_id": 50, "text": "\\theta" }, { "math_id": 51, "text": "XY" }, { "math_id": 52, "text": "\\theta_0" }, { "math_id": 53, "text": "\\theta=\\theta_0" }, { "math_id": 54, "text": "U(x)" }, { "math_id": 55, "text": 
"\n\\qquad\\qquad \\dot{v}+\\lambda v -\\epsilon g(v) = f(t) - \\phi(x),\\quad \\dot{x}=v,\n" }, { "math_id": 56, "text": "\\phi(x)" }, { "math_id": 57, "text": "x" }, { "math_id": 58, "text": "\n\\qquad\\qquad f(t)\\sim E(t)+\\alpha (E(t))^3.\n" } ]
https://en.wikipedia.org/wiki?curid=57243717
57247386
Jacques Riguet
French mathematician, developed calculus of relations (1921 to 2013)
Jacques Riguet (1921 to October 20, 2013) was a French mathematician known for his contributions to algebraic logic and category theory. According to Gunther Schmidt and Thomas Ströhlein, "Alfred Tarski and Jacques Riguet founded the modern calculus of relations". Career. Already at his lycée, Riguet was impressed by the power of logical reasoning in geometry. He studied Louis Couturat and Bourbaki, who made contributions to logic and set theory. Riguet studied higher mathematics with Albert Châtelet and was introduced to lattices. In 1948 he published "Relations binaires, fermetures, correspondances de Galois", which revived the calculus of binary relations. He published his thesis "Fondements de la Theorie de Relations Binaires" in October 1951. In 1954 Riguet gave a plenary address at the International Congress of Mathematicians in Amsterdam, speaking on the applications of binary relations to algebra and machine theory. For a time, Riguet attended the seminar of Jacques Lacan. Riguet was employed at the Centre national de la recherche scientifique until 1957. Relations. In Riguet's work the composition of relations is the basis for characterizing relations, replacing the element-wise descriptions that use logical formulations. For example, he described the Schröder rules. His work was reviewed in the Journal of Symbolic Logic by Øystein Ore. Some of Riguet's contributions can be described using the structure of the logical matrix associated with a relation. If "u" and "v" are logical vectors, then their logical outer product produces the associated logical matrix formula_0 Riguet calls the associated relation a rectangular relation, and if it happens to be symmetric it is a square relation. In 1950 he submitted "Sur les ensembles reguliers de relations binaires", and an article on difunctional relations, those with a logical matrix in block diagonal form. The following year he provided an algebraic characterization of heterogeneous relations with a logical matrix comparable to a Ferrers diagram. Since Ferrers diagrams order the partitions of an integer, Riguet extended order theory beyond relations restricted to one set. In 1954 Riguet described the extension of the calculus of binary relations to a calculus of Boolean matrices. Category theory. In 1958 Riguet went to Zurich, working with IBM and studying category theory. He published several papers on that topic. Riguet participated in the Séminaire Itinérant des Catégories. References. <templatestyles src="Reflist/styles.css" />
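Code example. As an illustration of the rectangular relations mentioned above (an example added here, not taken from Riguet's papers), the logical outer product of two Boolean vectors can be formed in Python/NumPy as follows; every nonzero row of the result is a copy of "v" and every nonzero column a copy of "u", which is the rectangular shape in question.
import numpy as np

u = np.array([1, 0, 1, 1], dtype=bool)   # logical vectors
v = np.array([0, 1, 1], dtype=bool)
R = np.outer(u, v)                       # logical outer product: R[i, j] = u[i] AND v[j]
print(R.astype(int))                     # the logical matrix of a rectangular relation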
[ { "math_id": 0, "text": "u_i \\land v_j ." } ]
https://en.wikipedia.org/wiki?curid=57247386
572498
Bell polynomials
Polynomials in combinatorial mathematics
In combinatorial mathematics, the Bell polynomials, named in honor of Eric Temple Bell, are used in the study of set partitions. They are related to Stirling and Bell numbers. They also occur in many applications, such as in Faà di Bruno's formula. Definitions. Exponential Bell polynomials. The "partial" or "incomplete" exponential Bell polynomials are a triangular array of polynomials given by formula_0 where the sum is taken over all sequences "j"1, "j"2, "j"3, ..., "j""n"−"k"+1 of non-negative integers such that these two conditions are satisfied: formula_1 formula_2 The sum formula_3 is called the "n"th "complete exponential Bell polynomial". Ordinary Bell polynomials. Likewise, the partial "ordinary" Bell polynomial is defined by formula_4 where the sum runs over all sequences "j"1, "j"2, "j"3, ..., "j""n"−"k"+1 of non-negative integers such that formula_5 formula_6 Thanks to the first condition on indices, we can rewrite the formula as formula_7 where we have used the multinomial coefficient. The ordinary Bell polynomials can be expressed in terms of the exponential Bell polynomials: formula_8 In general, Bell polynomial refers to the exponential Bell polynomial, unless otherwise explicitly stated. Combinatorial meaning. The exponential Bell polynomial encodes the information related to the ways a set can be partitioned. For example, if we consider a set {A, B, C}, it can be partitioned into two non-empty, non-overlapping subsets, which are also referred to as parts or blocks, in 3 different ways: {A} and {B, C}; {B} and {A, C}; and {C} and {A, B}. Thus, we can encode the information regarding these partitions as formula_9 Here, the subscripts of "B"3,2 tell us that we are considering the partitioning of a set with 3 elements into 2 blocks. The subscript of each "x"i indicates the presence of a block with "i" elements (or block of size "i") in a given partition. So here, "x"2 indicates the presence of a block with two elements. Similarly, "x"1 indicates the presence of a block with a single element. An exponent "j" on "x"i indicates that there are "j" such blocks of size "i" in a single partition. Here, the fact that both "x"1 and "x"2 have exponent 1 indicates that there is only one such block in a given partition. The coefficient of the monomial indicates how many such partitions there are. Here, there are 3 partitions of a set with 3 elements into 2 blocks, where in each partition the elements are divided into two blocks of sizes 1 and 2. Since any set can be divided into a single block in only one way, the above interpretation would mean that "B""n",1 = "x""n". Similarly, since there is only one way that a set with "n" elements can be divided into "n" singletons, "B""n","n" = "x"1"n". As a more complicated example, consider formula_10 This tells us that if a set with 6 elements is divided into 2 blocks, then we can have 6 partitions with blocks of size 1 and 5, 15 partitions with blocks of size 4 and 2, and 10 partitions with 2 blocks of size 3. The sum of the subscripts in a monomial is equal to the total number of elements. Thus, the number of monomials that appear in the partial Bell polynomial is equal to the number of ways the integer "n" can be expressed as a summation of "k" positive integers. This is the same as the integer partition of "n" into "k" parts. For instance, in the above examples, the integer 3 can be partitioned into two parts as 2+1 only. Thus, there is only one monomial in "B"3,2. However, the integer 6 can be partitioned into two parts as 5+1, 4+2, and 3+3.
Thus, there are three monomials in "B"6,2. Indeed, the subscripts of the variables in a monomial are the same as those given by the integer partition, indicating the sizes of the different blocks. The total number of monomials appearing in a complete Bell polynomial "Bn" is thus equal to the total number of integer partitions of "n". Also the degree of each monomial, which is the sum of the exponents of each variable in the monomial, is equal to the number of blocks the set is divided into. That is, "j"1 + "j"2 + ... = "k" . Thus, given a complete Bell polynomial "Bn", we can separate the partial Bell polynomial "Bn,k" by collecting all those monomials with degree "k". Finally, if we disregard the sizes of the blocks and put all "x""i" = "x", then the summation of the coefficients of the partial Bell polynomial "B""n","k" will give the total number of ways that a set with "n" elements can be partitioned into "k" blocks, which is the same as the Stirling numbers of the second kind. Also, the summation of all the coefficients of the complete Bell polynomial "Bn" will give us the total number of ways a set with "n" elements can be partitioned into non-overlapping subsets, which is the same as the Bell number. In general, if the integer "n" is partitioned into a sum in which "1" appears "j"1 times, "2" appears "j"2 times, and so on, then the number of partitions of a set of size "n" that collapse to that partition of the integer "n" when the members of the set become indistinguishable is the corresponding coefficient in the polynomial. Examples. For example, we have formula_11 because the ways to partition a set of 6 elements as 2 blocks are 6 ways to partition a set of 6 as 5 + 1, 15 ways to partition a set of 6 as 4 + 2, and 10 ways to partition a set of 6 as 3 + 3. Similarly, formula_12 because the ways to partition a set of 6 elements as 3 blocks are 15 ways to partition a set of 6 as 4 + 1 + 1, 60 ways to partition a set of 6 as 3 + 2 + 1, and 15 ways to partition a set of 6 as 2 + 2 + 2. Table of values. Below is a triangular array of the incomplete Bell polynomials formula_13: Properties. Generating function. The exponential partial Bell polynomials can be defined by the double series expansion of its generating function: formula_14 In other words, by what amounts to the same, by the series expansion of the "k"-th power: formula_15 The complete exponential Bell polynomial is defined by formula_16, or in other words: formula_17 Thus, the "n"-th complete Bell polynomial is given by formula_18 Likewise, the "ordinary" partial Bell polynomial can be defined by the generating function formula_19 Or, equivalently, by series expansion of the "k"-th power: formula_20 See also generating function transformations for Bell polynomial generating function expansions of compositions of sequence generating functions and powers, logarithms, and exponentials of a sequence generating function. Each of these formulas is cited in the respective sections of Comtet. Recurrence relations. The complete Bell polynomials can be recurrently defined as formula_21 with the initial value formula_22. The partial Bell polynomials can also be computed efficiently by a recurrence relation: formula_23 where formula_24 formula_25 formula_26 In addition: formula_27 When formula_28, formula_29 The complete Bell polynomials also satisfy the following recurrence differential formula: formula_30 Derivatives. 
The partial derivatives of the complete Bell polynomials are given by formula_31 Similarly, the partial derivatives of the partial Bell polynomials are given by formula_32 If the arguments of the Bell polynomials are one-dimensional functions, the chain rule can be used to obtain formula_33 Stirling numbers and Bell numbers. The value of the Bell polynomial "B""n","k"("x"1,"x"2...) on the sequence of factorials equals an unsigned Stirling number of the first kind: formula_34 The sum of these values gives the value of the complete Bell polynomial on the sequence of factorials: formula_35 The value of the Bell polynomial "B""n","k"("x"1,"x"2...) on the sequence of ones equals a Stirling number of the second kind: formula_36 The sum of these values gives the value of the complete Bell polynomial on the sequence of ones: formula_37 which is the "n"th Bell number. Evaluating the partial Bell polynomial on the sequence of factorials starting from 1! gives the Lah numbers: formula_38 Touchard polynomials. The Touchard polynomial formula_39 can be expressed as the value of the complete Bell polynomial on all arguments being "x": formula_40 Inverse relations. If we define formula_41 then we have the inverse relationship formula_42 More generally, given some function formula_43 admitting an inverse formula_44, formula_45 Determinant forms. The complete Bell polynomial can be expressed as determinants: formula_46 and formula_47 Convolution identity. For sequences "x""n", "y""n", "n" = 1, 2, ..., define a convolution by: formula_48 The bounds of summation are 1 and "n" − 1, not 0 and "n". Let formula_49 be the "n"th term of the sequence formula_50 Then formula_51 For example, let us compute formula_52. We have formula_53 formula_54 formula_55 and thus, formula_56 Other identities. The partial and complete Bell polynomials also satisfy the identities formula_57 formula_58 formula_59 as well as formula_60 This corrects the omission of the factor formula_61 in Comtet's book. Special cases of partial Bell polynomials: formula_62 Examples. The first few complete Bell polynomials are: formula_63 Applications. Faà di Bruno's formula. Faà di Bruno's formula may be stated in terms of Bell polynomials as follows: formula_64 Similarly, a power-series version of Faà di Bruno's formula may be stated using Bell polynomials as follows. Suppose formula_65 Then formula_66 In particular, the complete Bell polynomials appear in the exponential of a formal power series: formula_67 which also represents the exponential generating function of the complete Bell polynomials on a fixed sequence of arguments formula_68. Reversion of series. Let two functions "f" and "g" be expressed in formal power series as formula_69 such that "g" is the compositional inverse of "f" defined by "g"("f"("w")) = "w" or "f"("g"("z")) = "z". If "f"0 = 0 and "f"1 ≠ 0, then an explicit form of the coefficients of the inverse can be given in terms of Bell polynomials as formula_70 with formula_71 where formula_72 is the rising factorial, and formula_73 Asymptotic expansion of Laplace-type integrals. Consider the integral of the form formula_74 where ("a","b") is a real (finite or infinite) interval, λ is a large positive parameter and the functions "f" and "g" are continuous. Let "f" have a single minimum in ["a","b"] which occurs at "x" = "a". Assume that as "x" → "a"+, formula_75 formula_76 with "α" > 0, Re("β") > 0; and that the expansion of "f" can be differentiated termwise. Then, the Laplace–Erdelyi theorem states that the asymptotic expansion of the integral "I"("λ") is given by formula_77 where the coefficients "cn" are expressible in terms of "an" and "bn" using partial "ordinary" Bell polynomials, as given by the Campbell–Froman–Walles–Wojdylo formula: formula_78 Symmetric polynomials.
The elementary symmetric polynomial formula_79 and the power sum symmetric polynomial formula_80 can be related to each other using Bell polynomials as: formula_81 formula_82 These formulae allow one to express the coefficients of monic polynomials in terms of the Bell polynomials of their zeroes. For instance, together with the Cayley–Hamilton theorem they lead to an expression of the determinant of an "n" × "n" square matrix "A" in terms of the traces of its powers: formula_83 Cycle index of symmetric groups. The cycle index of the symmetric group formula_84 can be expressed in terms of complete Bell polynomials as follows: formula_85 Moments and cumulants. The sum formula_86 is the "n"th raw moment of a probability distribution whose first "n" cumulants are "κ"1, ..., "κ""n". In other words, the "n"th moment is the "n"th complete Bell polynomial evaluated at the first "n" cumulants. Likewise, the "n"th cumulant can be given in terms of the moments as formula_87 Hermite polynomials. Hermite polynomials can be expressed in terms of Bell polynomials as formula_88 where "x""i" = 0 for all "i" > 2; thus allowing for a combinatorial interpretation of the coefficients of the Hermite polynomials. This can be seen by comparing the generating function of the Hermite polynomials formula_89 with that of Bell polynomials. Representation of polynomial sequences of binomial type. For any sequence "a"1, "a"2, …, "a""n" of scalars, let formula_90 Then this polynomial sequence is of binomial type, i.e. it satisfies the binomial identity formula_91 Example: For "a"1 = … = "a""n" = 1, the polynomials formula_92 represent Touchard polynomials. More generally, we have this result: Theorem: All polynomial sequences of binomial type are of this form. If we define a formal power series formula_93 then for all "n", formula_94 Software. Bell polynomials are implemented in a number of computer algebra systems. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" />
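Code example. The following Python/SymPy sketch (added here for illustration; the function name and the number of symbols are arbitrary choices) computes partial exponential Bell polynomials directly from the recurrence relation given above, and reproduces the examples quoted earlier in the article.
import sympy as sp

x = sp.symbols('x1:8')               # the symbols x1, ..., x7

def B(n, k):
    # partial exponential Bell polynomial B_{n,k} via the recurrence
    # B_{n+1,k+1} = sum_i C(n, i) x_{i+1} B_{n-i,k}, with B_{0,0} = 1 and B_{n,0} = B_{0,k} = 0 otherwise
    if n == 0 and k == 0:
        return sp.Integer(1)
    if n == 0 or k == 0:
        return sp.Integer(0)
    return sp.expand(sum(sp.binomial(n - 1, i) * x[i] * B(n - 1 - i, k - 1)
                         for i in range(n - k + 1)))

print(B(6, 2))    # 6*x1*x5 + 15*x2*x4 + 10*x3**2
print(B(6, 3))    # 15*x1**2*x4 + 60*x1*x2*x3 + 15*x2**3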
[ { "math_id": 0, "text": "\\begin{align}\nB_{n,k}(x_1,x_2,\\dots,x_{n-k+1}) &= \\sum{n! \\over j_1!j_2!\\cdots j_{n-k+1}!}\n\\left({x_1\\over 1!}\\right)^{j_1}\\left({x_2\\over 2!}\\right)^{j_2}\\cdots\\left({x_{n-k+1} \\over (n-k+1)!}\\right)^{j_{n-k+1}} \\\\\n&= n! \\sum \\prod_{i=1}^{n-k+1} \\frac{x_i^{j_i}}{(i!)^{j_i} j_i!},\n\\end{align}" }, { "math_id": 1, "text": "j_1 + j_2 + \\cdots + j_{n-k+1} = k, " }, { "math_id": 2, "text": "j_1 + 2 j_2 + 3 j_3 + \\cdots + (n-k+1)j_{n-k+1} = n." }, { "math_id": 3, "text": "\\begin{align}\nB_n(x_1,\\dots,x_n)&=\\sum_{k=1}^n B_{n,k}(x_1,x_2,\\dots,x_{n-k+1})\\\\\n&=n! \\sum_{1j_1 +\\ldots+ nj_n=n} \\prod_{i=1}^n \\frac{x_i^{j_i}}{(i!)^{j_i}j_i!}\n\\end{align}" }, { "math_id": 4, "text": "\\hat{B}_{n,k}(x_1,x_2,\\ldots,x_{n-k+1}) = \\sum \\frac{k!}{j_1! j_2! \\cdots j_{n-k+1}!} x_1^{j_1} x_2^{j_2} \\cdots x_{n-k+1}^{j_{n-k+1}}, " }, { "math_id": 5, "text": "j_1 + j_2 + \\cdots + j_{n-k+1} = k," }, { "math_id": 6, "text": "j_1 + 2 j_2 + \\cdots + (n-k+1)j_{n-k+1} = n." }, { "math_id": 7, "text": "\\hat{B}_{n,k}(x_1,x_2,\\ldots,x_{n-k+1}) = \\sum \\binom{k}{j_1, j_2, \\ldots, j_{n-k+1}} x_1^{j_1} x_2^{j_2} \\cdots x_{n-k+1}^{j_{n-k+1}}, " }, { "math_id": 8, "text": "\\hat{B}_{n,k}(x_1,x_2,\\ldots,x_{n-k+1}) = \\frac{k!}{n!}B_{n,k}(1!\\cdot x_1,2!\\cdot x_2,\\ldots,(n-k+1)!\\cdot x_{n-k+1})." }, { "math_id": 9, "text": "B_{3,2}(x_1,x_2) = 3 x_1 x_2. " }, { "math_id": 10, "text": "B_{6,2}(x_1,x_2,x_3,x_4,x_5)=6x_5x_1+15x_4x_2+10x_3^2." }, { "math_id": 11, "text": "B_{6,2}(x_1,x_2,x_3,x_4,x_5)=6x_5x_1+15x_4x_2+10x_3^2" }, { "math_id": 12, "text": "B_{6,3}(x_1,x_2,x_3,x_4)=15x_4x_1^2+60x_3x_2x_1+15x_2^3" }, { "math_id": 13, "text": "B_{n,k}(x_1,x_2,\\dots,x_{n-k+1})" }, { "math_id": 14, "text": " \n\\begin{align}\n\\Phi(t,u) &= \\exp\\left( u \\sum_{j=1}^\\infty x_j \\frac{t^j}{j!} \\right) = \\sum_{n\\geq k \\geq 0} B_{n,k}(x_1,\\ldots,x_{n-k+1}) \\frac{t^n}{n!} u^k\\\\\n &= 1 + \\sum_{n=1}^\\infty \\frac{t^n}{n!} \\sum_{k=1}^n u^k B_{n,k}(x_1,\\ldots,x_{n-k+1}).\n\\end{align}\n" }, { "math_id": 15, "text": " \\frac{1}{k!}\\left( \\sum_{j=1}^\\infty x_j \\frac{t^j}{j!} \\right)^k = \\sum_{n=k}^\\infty B_{n,k}(x_1,\\ldots,x_{n-k+1}) \\frac{t^n}{n!}, \\qquad k = 0, 1, 2, \\ldots " }, { "math_id": 16, "text": "\\Phi(t,1)" }, { "math_id": 17, "text": " \\Phi(t,1) = \\exp\\left( \\sum_{j=1}^\\infty x_j \\frac{t^j}{j!} \\right) = \\sum_{n=0}^\\infty B_n(x_1,\\ldots, x_n) \\frac{t^n}{n!}." }, { "math_id": 18, "text": " B_n(x_1,\\ldots, x_n) = \\left. \\left(\\frac{\\partial}{\\partial t}\\right)^n \\exp\\left( \\sum_{j=1}^n x_j \\frac{t^j}{j!} \\right) \\right|_{t=0}. " }, { "math_id": 19, "text": " \\hat{\\Phi}(t,u) = \\exp \\left( u \\sum_{j=1}^\\infty x_j t^j \\right) = \\sum_{n\\geq k\\geq 0} \\hat{B}_{n,k}(x_1,\\ldots,x_{n-k+1}) t^n \\frac{u^k}{k!}." }, { "math_id": 20, "text": "\\left(\\sum_{j=1}^\\infty x_j t^j\\right)^k = \\sum_{n=k}^\\infty \\hat{B}_{n,k}(x_1, \\ldots, x_{n-k+1}) t^n. " }, { "math_id": 21, "text": " B_{n+1}(x_1, \\ldots, x_{n+1}) = \\sum_{i=0}^n {n \\choose i} B_{n-i}(x_1, \\ldots, x_{n-i}) x_{i+1}" }, { "math_id": 22, "text": "B_0 = 1" }, { "math_id": 23, "text": " B_{n+1,k+1}(x_1, \\ldots, x_{n-k+1}) = \\sum_{i=0}^{n-k} \\binom{n}{i} x_{i+1} B_{n-i,k}(x_1, \\ldots, x_{n-k-i+1})" }, { "math_id": 24, "text": " B_{0,0} = 1; " }, { "math_id": 25, "text": " B_{n,0} = 0 \\text{ for } n \\geq 1; " }, { "math_id": 26, "text": " B_{0,k} = 0 \\text{ for } k \\geq 1. 
" }, { "math_id": 27, "text": "B_{n, k_1 + k_2}(x_1, \\ldots, x_{n-k_1-k_2+1}) = \\frac{k_1! \\, k_2!}{(k_1 + k_2)!} \\sum_{i=0}^n \\binom{n}{i} B_{i, k_1}(x_1, \\ldots, x_{i-k_1+1}) B_{n-i, k_2}(x_1, \\ldots, x_{n-i-k_2+1})." }, { "math_id": 28, "text": "1 \\le a < n" }, { "math_id": 29, "text": "B_{n, n-a}(x_1, \\ldots, x_{a+1}) = \\sum_{j = a+1}^{2a}\\frac{j!}{a!}\\binom{n}{j}x_1^{n-j} B_{a, j-a}\\Bigl(\\frac{x_2}{2}, \\frac{x_3}{3}, \\ldots, \\frac{x_{2(a+1)-j}}{2(a+1)-j}\\Bigr)." }, { "math_id": 30, "text": "\n\\begin{align}\nB_n(x_1, \\ldots, x_n) = \\frac{1}{n-1} \\left[ \\sum_{i=2}^n \\right. & \\sum_{j=1}^{i-1} (i-1) \\binom{i-2}{j-1} x_j x_{i-j}\\frac{\\partial B_{n-1}(x_1,\\dots,x_{n-1})}{\\partial x_{i-1}} \\\\[5pt]\n& \\left. {} + \\sum_{i=2}^n \\sum_{j=1}^{i-1} \\frac{x_{i+1}}{\\binom i j} \\frac{\\partial^2 B_{n-1}(x_1,\\dots,x_{n-1})}{\\partial x_j \\partial x_{i-j}} \\right. \\\\[5pt]\n& \\left. {} + \\sum_{i=2}^n x_i \\frac{\\partial B_{n-1}(x_1,\\dots,x_{n-1})}{\\partial x_{i-1}} \\right].\n\\end{align}\n" }, { "math_id": 31, "text": " \\frac{\\partial B_{n}}{\\partial x_i} (x_1, \\ldots, x_{n}) = \\binom{n}{i} B_{n-i}(x_1, \\ldots, x_{n-i})." }, { "math_id": 32, "text": " \\frac{\\partial B_{n,k}}{\\partial x_i} (x_1, \\ldots, x_{n-k+1}) = \\binom{n}{i} B_{n-i,k-1}(x_1, \\ldots, x_{n-i-k+2})." }, { "math_id": 33, "text": " \\frac{d}{dx} \\left(B_{n,k}(a_1(x), \\cdots, a_{n-k+1}(x))\\right) = \\sum_{i=1}^{n-k+1} \\binom{n}{i} a_i'(x) B_{n-i,k-1}(a_1(x), \\cdots, a_{n-i-k+2}(x))." }, { "math_id": 34, "text": "B_{n,k}(0!,1!,\\dots,(n-k)!)=c(n,k)=|s(n,k)| = \\left[{n\\atop k}\\right]." }, { "math_id": 35, "text": "B_n(0!,1!,\\dots,(n-1)!)=\\sum_{k=1}^n B_{n,k}(0!,1!,\\dots,(n-k)!) = \\sum_{k=1}^n \\left[{n\\atop k}\\right] = n!." }, { "math_id": 36, "text": "B_{n,k}(1,1,\\dots,1)=S(n,k)=\\left\\{{n\\atop k}\\right\\}." }, { "math_id": 37, "text": "B_n(1,1,\\dots,1)=\\sum_{k=1}^n B_{n,k}(1,1,\\dots,1) = \\sum_{k=1}^n \\left\\{{n\\atop k}\\right\\}," }, { "math_id": 38, "text": "B_{n,k}(1!,2!,\\ldots,(n-k+1)!) = \\binom{n-1}{k-1} \\frac{n!}{k!} = L(n,k)" }, { "math_id": 39, "text": "T_n(x) = \\sum_{k=0}^n \\left\\{{n\\atop k}\\right\\}\\cdot x^k" }, { "math_id": 40, "text": "T_n(x) = B_n(x,x,\\dots,x)." }, { "math_id": 41, "text": "y_n = \\sum_{k=1}^n B_{n,k}(x_1,\\ldots,x_{n-k+1})," }, { "math_id": 42, "text": "x_n = \\sum_{k=1}^n (-1)^{k-1} (k-1)! B_{n,k}(y_1,\\ldots,y_{n-k+1})." }, { "math_id": 43, "text": "f" }, { "math_id": 44, "text": "g = f^{-1}" }, { "math_id": 45, "text": "y_n = \\sum_{k=0}^n f^{(k)}(a) \\, B_{n,k}(x_1, \\ldots, x_{n-k+1}) \\quad \\Leftrightarrow \\quad x_n = \\sum_{k=0}^n g^{(k)}\\big(f(a)\\big) \\, B_{n,k}(y_1, \\ldots, y_{n-k+1}). 
" }, { "math_id": 46, "text": "B_n(x_1,\\dots,x_n) = \\det\\begin{bmatrix}\nx_1 & {n-1 \\choose 1} x_2 & {n-1 \\choose 2}x_3 & {n-1 \\choose 3} x_4 & \\cdots & \\cdots & x_n \\\\ \\\\\n-1 & x_1 & {n-2 \\choose 1} x_2 & {n-2 \\choose 2} x_3 & \\cdots & \\cdots & x_{n-1} \\\\ \\\\\n0 & -1 & x_1 & {n-3 \\choose 1} x_2 & \\cdots & \\cdots & x_{n-2} \\\\ \\\\\n0 & 0 & -1 & x_1 & \\cdots & \\cdots & x_{n-3} \\\\ \\\\\n0 & 0 & 0 & -1 & \\cdots & \\cdots & x_{n-4} \\\\ \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\ddots & \\vdots \\\\ \\\\\n0 & 0 & 0 & 0 & \\cdots & -1 & x_1 \\end{bmatrix}" }, { "math_id": 47, "text": "B_n(x_1,\\dots,x_n) = \\det\\begin{bmatrix}\n\\frac{x_1}{0!} & \\frac{x_2}{1!} & \\frac{x_3}{2!} & \\frac{x_4}{3!} & \\cdots & \\cdots & \\frac{x_n}{(n-1)!} \\\\ \\\\\n-1 & \\frac{x_1}{0!} & \\frac{x_2}{1!} & \\frac{x_3}{2!} & \\cdots & \\cdots & \\frac{x_{n-1}}{(n-2)!} \\\\ \\\\\n0 & -2 & \\frac{x_1}{0!} & \\frac{x_2}{1!} & \\cdots & \\cdots & \\frac{x_{n-2}}{(n-3)!} \\\\ \\\\\n0 & 0 & -3 & \\frac{x_1}{0!} & \\cdots & \\cdots & \\frac{x_{n-3}}{(n-4)!} \\\\ \\\\\n0 & 0 & 0 & -4 & \\cdots & \\cdots & \\frac{x_{n-4}}{(n-5)!} \\\\ \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\ddots & \\vdots \\\\ \\\\\n0 & 0 & 0 & 0 & \\cdots & -(n-1) & \\frac{x_1}{0!} \\end{bmatrix}." }, { "math_id": 48, "text": "(x \\mathbin{\\diamondsuit} y)_n = \\sum_{j=1}^{n-1} {n \\choose j} x_j y_{n-j}." }, { "math_id": 49, "text": "x_n^{k\\diamondsuit}\\," }, { "math_id": 50, "text": "\\displaystyle\\underbrace{x\\mathbin{\\diamondsuit}\\cdots\\mathbin{\\diamondsuit} x}_{k \\text{ factors}}.\\," }, { "math_id": 51, "text": "B_{n,k}(x_1,\\dots,x_{n-k+1}) = {x_n^{k\\diamondsuit} \\over k!}.\\," }, { "math_id": 52, "text": " B_{4,3}(x_1,x_2) " }, { "math_id": 53, "text": " x = ( x_1 \\ , \\ x_2 \\ , \\ x_3 \\ , \\ x_4 \\ , \\dots ) " }, { "math_id": 54, "text": " x \\mathbin{\\diamondsuit} x = ( 0,\\ 2 x_1^2 \\ ,\\ 6 x_1 x_2 \\ , \\ 8 x_1 x_3 + 6 x_2^2 \\ , \\dots ) " }, { "math_id": 55, "text": " x \\mathbin{\\diamondsuit} x \\mathbin{\\diamondsuit} x = ( 0 \\ ,\\ 0 \\ , \\ 6 x_1^3 \\ , \\ 36 x_1^2 x_2 \\ , \\dots ) " }, { "math_id": 56, "text": " B_{4,3}(x_1,x_2) = \\frac{ ( x \\mathbin{\\diamondsuit} x \\mathbin{\\diamondsuit} x)_4 }{3!} = 6 x_1^2 x_2. " }, { "math_id": 57, "text": "B_{n,k}(1,2,3,\\ldots,n-k+1) = \\binom{n}{k} k^{n-k} " }, { "math_id": 58, "text": "B_{n,k}(\\alpha \\beta x_1,\\alpha \\beta^2 x_2, \\ldots, \\alpha \\beta^{n-k+1}x_{n-k+1}) = \\alpha^k \\beta^n B_{n,k}(x_1,x_2,\\ldots,x_{n-k+1})" }, { "math_id": 59, "text": " B_n(x_1 + y_1, \\ldots, x_n + y_n) = \\sum_{i=0}^n {n \\choose i} B_{n-i}(x_1, \\ldots, x_{n-i})B_i(y_1, \\ldots, y_i)," }, { "math_id": 60, "text": " B_{n, k}\\Bigl(\\frac{x_{q+1}}{\\binom{q+1}{q}}, \\frac{x_{q+2}}{\\binom{q+2}{q}}, \\ldots\\Bigr) = \\frac{n!(q!)^k}{(n+qk)!} B_{n+qk, k}(\\ldots, 0, 0, x_{q+1}, x_{q+2}, \\ldots)." 
}, { "math_id": 61, "text": "(q!)^k" }, { "math_id": 62, "text": "\n\\begin{align}\nB_{n, 1}(x_1, \\ldots, x_n) ={}& x_n \\\\\nB_{n, 2}(x_1, \\ldots, x_{n-1}) ={}& \\frac{1}{2}\\sum_{k=1}^{n-1} \\binom{n}{k} x_kx_{n-k} \\\\\nB_{n, n}(x_1) ={}& x_1^n \\\\\nB_{n, n-1}(x_1, x_2) ={}& \\binom{n}{2}x_1^{n-2}x_2 \\\\\nB_{n, n-2}(x_1, x_2, x_3) ={}& \\binom{n}{3}x_1^{n-3}x_3 + 3\\binom{n}{4}x_1^{n-4}x_2^2 \\\\\nB_{n, n-3}(x_1, x_2, x_3, x_4) ={}& \\binom{n}{4}x_1^{n-4}x_4 + 10\\binom{n}{5}x_1^{n-5}x_2x_3 + 15\\binom{n}{6}x_1^{n-6}x_2^3\\\\\nB_{n, n-4}(x_1, x_2, x_3, x_4, x_5) ={}& \\binom{n}{5}x_1^{n-5}x_5 + 5\\binom{n}{6}x_1^{n-6}(3x_2x_4 + 2x_3^2) + 105\\binom{n}{7}x_1^{n-7}x_2^2x_3 \\\\\n& + 105\\binom{n}{8}x_1^{n-8}x_2^4.\n\\end{align} \n" }, { "math_id": 63, "text": "\n\\begin{align}\nB_0 = {} & 1, \\\\[8pt] \nB_1(x_1) = {} & x_1, \\\\[8pt]\nB_2(x_1,x_2) = {} & x_1^2 + x_2, \\\\[8pt]\nB_3(x_1,x_2,x_3) = {} & x_1^3 + 3x_1 x_2 + x_3, \\\\[8pt]\nB_4(x_1,x_2,x_3,x_4) = {} & x_1^4 + 6 x_1^2 x_2 + 4 x_1 x_3 + 3 x_2^2 + x_4, \\\\[8pt]\nB_5(x_1,x_2,x_3,x_4,x_5) = {} & x_1^5 + 10 x_2 x_1^3 + 15 x_2^2 x_1 + 10 x_3 x_1^2 + 10 x_3 x_2 + 5 x_4 x_1 + x_5 \\\\[8pt]\nB_6(x_1,x_2,x_3,x_4,x_5,x_6) = {} & x_1^6 + 15 x_2 x_1^4 + 20 x_3 x_1^3 + 45 x_2^2 x_1^2 + 15 x_2^3 + 60 x_3 x_2 x_1 \\\\\n& {} + 15 x_4 x_1^2 + 10 x_3^2 + 15 x_4 x_2 + 6 x_5 x_1 + x_6, \\\\[8pt]\nB_7(x_1,x_2,x_3,x_4,x_5,x_6,x_7) = {} & x_1^7 + 21 x_1^5 x_2 + 35 x_1^4 x_3 + 105 x_1^3 x_2^2 + 35 x_1^3 x_4 \\\\\n& {} + 210 x_1^2 x_2 x_3 + 105 x_1 x_2^3 + 21 x_1^2 x_5 + 105 x_1 x_2 x_4 \\\\\n& {} + 70 x_1 x_3^2 + 105 x_2^2 x_3 + 7 x_1 x_6 + 21 x_2 x_5 + 35 x_3 x_4 + x_7. \n\\end{align}" }, { "math_id": 64, "text": "{d^n \\over dx^n} f(g(x)) = \\sum_{k=0}^n f^{(k)}(g(x)) B_{n,k} \\left(g'(x),g''(x), \\dots, g^{(n-k+1)}(x)\\right)." }, { "math_id": 65, "text": "f(x)=\\sum_{n=1}^\\infty {a_n \\over n!} x^n \\qquad \\text{and} \\qquad g(x) = \\sum_{n=0}^\\infty {b_n \\over n!} x^n." }, { "math_id": 66, "text": "g(f(x)) = \\sum_{n=1}^\\infty\n\\frac{\\sum_{k=0}^n b_k B_{n,k}(a_1,\\dots,a_{n-k+1})}{n!} x^n." }, { "math_id": 67, "text": "\\exp\\left(\\sum_{i=1}^\\infty {a_i \\over i!} x^i \\right)\n= \\sum_{n=0}^\\infty {B_n(a_1,\\dots,a_n) \\over n!} x^n," }, { "math_id": 68, "text": "a_1, a_2, \\dots" }, { "math_id": 69, "text": "f(w) = \\sum_{k=0}^\\infty f_k \\frac{w^k}{k!}, \\qquad \\text{and} \\qquad g(z) = \\sum_{k=0}^\\infty g_k \\frac{z^k}{k!}," }, { "math_id": 70, "text": " g_n = \\frac{1}{f_1^n} \\sum_{k=1}^{n-1} (-1)^k n^{\\bar{k}} B_{n-1,k}(\\hat{f}_1,\\hat{f}_2,\\ldots,\\hat{f}_{n-k}), \\qquad n \\geq 2, " }, { "math_id": 71, "text": " \\hat{f}_k = \\frac{f_{k+1}}{(k+1)f_{1}}," }, { "math_id": 72, "text": "n^{\\bar{k}} = n(n+1)\\cdots (n+k-1) " }, { "math_id": 73, "text": "g_1 = \\frac{1}{f_{1}}. " }, { "math_id": 74, "text": "I(\\lambda) = \\int_a^b e^{-\\lambda f(x)} g(x) \\, \\mathrm{d}x, " }, { "math_id": 75, "text": " f(x) \\sim f(a) + \\sum_{k=0}^\\infty a_k (x-a)^{k+\\alpha}, " }, { "math_id": 76, "text": " g(x) \\sim \\sum_{k=0}^\\infty b_k (x-a)^{k+\\beta-1}, " }, { "math_id": 77, "text": " I(\\lambda) \\sim e^{-\\lambda f(a)} \\sum_{n=0}^\\infty \\Gamma \\Big(\\frac{n+\\beta}{\\alpha} \\Big) \\frac{c_n}{\\lambda^{(n+\\beta)/\\alpha}} \\qquad \\text{as} \\quad \\lambda \\rightarrow \\infty, " }, { "math_id": 78, "text": " c_n = \\frac{1}{\\alpha a_0^{(n+\\beta)/\\alpha}} \\sum_{k=0}^n b_{n-k} \\sum_{j=0}^k \\binom{-\\frac{n+\\beta}{\\alpha}}{j} \\frac{1}{a_0^j} \\hat{B}_{k,j}(a_1,a_2,\\ldots,a_{k-j+1}). 
" }, { "math_id": 79, "text": "e_n" }, { "math_id": 80, "text": "p_n" }, { "math_id": 81, "text": "\n\\begin{align}\ne_n & = \\frac{1}{n!}\\; B_{n}(p_1, -1! p_2, 2! p_3, -3! p_4, \\ldots, (-1)^{n-1}(n-1)! p_n ) \\\\\n& = \\frac{(-1)^n}{n!}\\; B_{n}(-p_1, -1! p_2, -2! p_3, -3! p_4, \\ldots, -(n-1)! p_n ),\n\\end{align}\n" }, { "math_id": 82, "text": "\n\\begin{align}\np_n & = \\frac{(-1)^{n-1}}{(n-1)!} \\sum_{k=1}^n (-1)^{k-1} (k-1)!\\; B_{n,k}(e_1,2! e_2, 3! e_3,\\ldots,(n-k+1)! e_{n-k+1}) \\\\\n& = (-1)^n\\; n\\; \\sum_{k=1}^n \\frac{1}{k} \\; \\hat{B}_{n,k}(-e_1,\\dots,-e_{n-k+1}).\n\\end{align}\n" }, { "math_id": 83, "text": " \\det (A) = \\frac{(-1)^{n}}{n!} B_n(s_1, s_2, \\ldots, s_n), ~\\qquad \\text{where } s_k = - (k - 1)! \\operatorname{tr}(A^k)." }, { "math_id": 84, "text": "S_n" }, { "math_id": 85, "text": " Z(S_n) = \\frac{B_n(0!\\,a_1, 1!\\,a_2, \\dots, (n-1)!\\,a_n)}{n!}." }, { "math_id": 86, "text": "\\mu_n' = B_n(\\kappa_1,\\dots,\\kappa_n)=\\sum_{k=1}^n B_{n,k}(\\kappa_1,\\dots,\\kappa_{n-k+1})" }, { "math_id": 87, "text": "\\kappa_n = \\sum_{k=1}^n (-1)^{k-1} (k-1)! B_{n,k}(\\mu'_1,\\ldots,\\mu'_{n-k+1})." }, { "math_id": 88, "text": "\\operatorname{He}_n(x) = B_n(x,-1,0,\\ldots,0)," }, { "math_id": 89, "text": "\\exp \\left(xt-\\frac{t^2}{2} \\right) = \\sum_{n=0}^\\infty \\operatorname{He}_n(x) \\frac {t^n}{n!}" }, { "math_id": 90, "text": "p_n(x)= B_n(a_1 x, \\ldots, a_n x) = \\sum_{k=1}^n B_{n,k}(a_1,\\dots,a_{n-k+1}) x^k." }, { "math_id": 91, "text": "p_n(x+y)=\\sum_{k=0}^n {n \\choose k} p_k(x) p_{n-k}(y)." }, { "math_id": 92, "text": "p_n(x)" }, { "math_id": 93, "text": "h(x)=\\sum_{k=1}^\\infty {a_k \\over k!} x^k," }, { "math_id": 94, "text": "h^{-1}\\left( {d \\over dx}\\right) p_n(x) = n p_{n-1}(x)." } ]
https://en.wikipedia.org/wiki?curid=572498
5725210
SQ-universal group
Type of countable group in group theory In mathematics, in the realm of group theory, a countable group is said to be SQ-universal if every countable group can be embedded in one of its quotient groups. SQ-universality can be thought of as a measure of largeness or complexity of a group. History. Many classic results of combinatorial group theory, going back to 1949, are now interpreted as saying that a particular group or class of groups is (are) SQ-universal. However the first explicit use of the term seems to be in an address given by Peter Neumann to The London Algebra Colloquium entitled "SQ-universal groups" on 23 May 1968. Examples of SQ-universal groups. In 1949 Graham Higman, Bernhard Neumann and Hanna Neumann proved that every countable group can be embedded in a two-generator group. Using the contemporary language of SQ-universality, this result says that "F"2, the free group (non-abelian) on two generators, is SQ-universal. This is the first known example of an SQ-universal group. Many more examples are now known: formula_0 In addition much stronger versions of the Higmann-Neumann-Neumann theorem are now known. Ould Houcine has proved: For every countable group "G" there exists a 2-generator SQ-universal group "H" such that "G" can be embedded in every non-trivial quotient of "H". Some elementary properties of SQ-universal groups. A free group on countably many generators "h"1, "h"2, ..., "hn", ... , say, must be embeddable in a quotient of an SQ-universal group "G". If formula_1 are chosen such that formula_2 for all "n", then they must freely generate a free subgroup of "G". Hence: Every SQ-universal group has as a subgroup, a free group on countably many generators. Since every countable group can be embedded in a countable simple group, it is often sufficient to consider embeddings of simple groups. This observation allows us to easily prove some elementary results about SQ-universal groups, for instance: If "G" is an SQ-universal group and "N" is a normal subgroup of "G" (i.e. formula_3) then either "N" is SQ-universal or the quotient group "G"/"N" is SQ-universal. To prove this suppose "N" is not SQ-universal, then there is a countable group "K" that cannot be embedded into a quotient group of "N". Let "H" be any countable group, then the direct product "H" × "K" is also countable and hence can be embedded in a countable simple group "S". Now, by hypothesis, "G" is SQ-universal so "S" can be embedded in a quotient group, "G"/"M", say, of "G". The second isomorphism theorem tells us: formula_4 Now formula_5 and "S" is a simple subgroup of "G"/"M" so either: formula_6 or: formula_7. The latter cannot be true because it implies "K" ⊆ "H" × "K" ⊆ "S" ⊆ "N"/("M" ∩ "N") contrary to our choice of "K". It follows that "S" can be embedded in ("G"/"M")/("MN"/"M"), which by the third isomorphism theorem is isomorphic to "G"/"MN", which is in turn isomorphic to ("G"/"N")/("MN"/"N"). Thus "S" has been embedded into a quotient group of "G"/"N", and since "H" ⊆ "S" was an arbitrary countable group, it follows that "G"/"N" is SQ-universal. Since every subgroup "H" of finite index in a group "G" contains a normal subgroup "N" also of finite index in "G", it easily follows that: If a group "G" is SQ-universal then so is any finite index subgroup "H" of "G". The converse of this statement is also true. Variants and generalizations of SQ-universality. Several variants of SQ-universality occur in the literature. 
The reader should be warned that terminology in this area is not yet completely stable and should read this section with this caveat in mind. Let formula_8 be a class of groups. (For the purposes of this section, groups are defined "up to isomorphism") A group "G" is called SQ-universal in the class formula_8 if formula_9 and every countable group in formula_8 is isomorphic to a subgroup of a quotient of "G". The following result can be proved: Let "n", "m" ∈ Z where "m" is odd, formula_10 and "m" &gt; 1, and let "B"("m", "n") be the free m-generator Burnside group, then every non-cyclic subgroup of "B"("m", "n") is SQ-universal in the class of groups of exponent "n". Let formula_8 be a class of groups. A group "G" is called SQ-universal for the class formula_8 if every group in formula_8 is isomorphic to a subgroup of a quotient of "G". Note that there is no requirement that formula_9 nor that any groups be countable. The standard definition of SQ-universality is equivalent to SQ-universality both "in" and "for" the class of countable groups. Given a countable group "G", call an SQ-universal group "H" "G"-stable, if every non-trivial factor group of "H" contains a copy of "G". Let formula_11 be the class of finitely presented SQ-universal groups that are "G"-stable for some "G" then Houcine's version of the HNN theorem that can be re-stated as: The free group on two generators is SQ-universal "for" formula_11. However, there are uncountably many finitely generated groups, and a countable group can only have countably many finitely generated subgroups. It is easy to see from this that: No group can be SQ-universal "in" formula_11. An infinite class formula_8 of groups is wrappable if given any groups formula_12 there exists a simple group "S" and a group formula_13 such that "F" and "G" can be embedded in "S" and "S" can be embedded in "H". The it is easy to prove: If formula_8 is a wrappable class of groups, "G" is an SQ-universal for formula_8 and formula_3 then either "N" is SQ-universal for formula_8 or "G"/"N" is SQ-universal for formula_8. If formula_8 is a wrappable class of groups and "H" is of finite index in "G" then "G" is SQ-universal for the class formula_8 if and only if "H" is SQ-universal for formula_8. The motivation for the definition of wrappable class comes from results such as the Boone-Higman theorem, which states that a countable group "G" has soluble word problem if and only if it can be embedded in a simple group "S" that can be embedded in a finitely presented group "F". Houcine has shown that the group "F" can be constructed so that it too has soluble word problem. This together with the fact that taking the direct product of two groups preserves solubility of the word problem shows that: The class of all finitely presented groups with soluble word problem is wrappable. Other examples of wrappable classes of groups are: The fact that a class formula_8 is wrappable does not imply that any groups are SQ-universal for formula_8. It is clear, for instance, that some sort of cardinality restriction for the members of formula_8 is required. If we replace the phrase "isomorphic to a subgroup of a quotient of" with "isomorphic to a subgroup of" in the definition of "SQ-universal", we obtain the stronger concept of S-universal (respectively S-universal for/in formula_8). The Higman Embedding Theorem can be used to prove that there is a finitely presented group that contains a copy of every finitely presented group. 
If formula_14 is the class of all finitely presented groups with soluble word problem, then it is known that there is no uniform algorithm to solve the word problem for groups in formula_14. It follows, although the proof is not as straightforward as one might expect, that no group in formula_14 can contain a copy of every group in formula_14. But it is clear that any SQ-universal group is "a fortiori" SQ-universal for formula_14. If we let formula_15 be the class of finitely presented groups, and "F"2 be the free group on two generators, we can sum this up as: The following questions are open (the second implies the first): While it is quite difficult to prove that "F"2 is SQ-universal, the fact that it is SQ-universal "for the class of finite groups" follows easily from these two facts: SQ-universality in other categories. If formula_16 is a category and formula_8 is a class of objects of formula_16, then the definition of "SQ-universal for formula_8" clearly makes sense. If formula_16 is a concrete category, then the definition of "SQ-universal in formula_8" also makes sense. As in the group theoretic case, we use the term SQ-universal for an object that is SQ-universal both "for" and "in" the class of countable objects of formula_16. Many embedding theorems can be restated in terms of SQ-universality. Shirshov's Theorem that a Lie algebra of finite or countable dimension can be embedded into a 2-generator Lie algebra is equivalent to the statement that the 2-generator free Lie algebra is SQ-universal (in the category of Lie algebras). This can be proved by proving a version of the Higman, Neumann, Neumann theorem for Lie algebras. However, versions of the HNN theorem can be proved for categories where there is no clear idea of a free object. For instance it can be proved that every separable topological group is isomorphic to a topological subgroup of a group having two topological generators (that is, having a dense 2-generator subgroup). A similar concept holds for free lattices. The free lattice in three generators is countably infinite. It has, as a sublattice, the free lattice in four generators, and, by induction, as a sublattice, the free lattice in a countable number of generators.
[ { "math_id": 0, "text": "P=\\left\\langle a,b,c,d\\,|\\, a^{2}=b^{2}=c^{2}=d^{2}=(ab)^{3}=(bc)^{3}=(ac)^{3}=(ad)^{3}=(cd)^{3}=(bd)^{3}=1\\right\\rangle" }, { "math_id": 1, "text": "h^*_1,h^*_2, \\dots ,h^*_n \\dots \\in G" }, { "math_id": 2, "text": "h^*_n \\mapsto h_n" }, { "math_id": 3, "text": "N\\triangleleft G" }, { "math_id": 4, "text": "MN/M \\cong N/(M \\cap N)" }, { "math_id": 5, "text": "MN/M\\triangleleft G/M" }, { "math_id": 6, "text": "MN/M \\cap S \\cong 1" }, { "math_id": 7, "text": "S\\subseteq MN/M \\cong N/(M \\cap N)" }, { "math_id": 8, "text": "\\mathcal{P}" }, { "math_id": 9, "text": "G\\in \\mathcal{P}" }, { "math_id": 10, "text": "n>10^{78}" }, { "math_id": 11, "text": "\\mathcal{G}" }, { "math_id": 12, "text": "F,G\\in \\mathcal{P}" }, { "math_id": 13, "text": "H\\in \\mathcal{P}" }, { "math_id": 14, "text": "\\mathcal{W}" }, { "math_id": 15, "text": "\\mathcal{F}" }, { "math_id": 16, "text": "\\mathcal{C}" } ]
https://en.wikipedia.org/wiki?curid=5725210
57253540
Pure inductive logic
Pure inductive logic (PIL) is the area of mathematical logic concerned with the philosophical and mathematical foundations of probabilistic inductive reasoning. It combines classical predicate logic and probability theory (Bayesian inference). Probability values are assigned to sentences of a first-order relational language to represent degrees of belief that should be held by a rational agent. Conditional probability values represent degrees of belief based on the assumption of some received evidence. PIL studies prior probability functions on the set of sentences and evaluates the rationality of such prior probability functions through principles that such functions should arguably satisfy. Each of the principles directs the function to assign probability values and conditional probability values to sentences "in some respect" rationally. Not all desirable principles of PIL are compatible, so no prior probability function exists that satisfies them all. Some prior probability functions however are distinguished through satisfying an important collection of principles. History. Inductive logic started to take a clearer shape in the early 20th century in the work of William Ernest Johnson and John Maynard Keynes, and was further developed by Rudolf Carnap. Carnap introduced the distinction between pure and applied inductive logic, and the modern Pure Inductive Logic evolves along the lines of the pure, uninterpreted approach envisaged by Carnap. Framework. General case. In its basic form, PIL uses first-order logic without equality, with the usual connectives formula_0 ("and, or, not" and "implies" respectively), quantifiers formula_1 finitely many predicate (relation) symbols, and countably many constant symbols formula_2. There are no function symbols. The predicate symbols can be unary, binary or of higher arities. The finite set of predicate symbols may vary while the rest of the language is fixed. It is a convention to refer to the language as formula_3 and write formula_4 where the formula_5 list the predicate symbols. The set of all sentences is denoted formula_6. If a sentence is written with constants appearing in it listed then it is assumed that the list includes at least all those that appear. formula_7 is the set of structures for formula_3 with universe formula_8 and with each constant symbol formula_9 interpreted as itself. A probability function for sentences of formula_3 is a function formula_10 with domain formula_6 and values in the unit interval formula_11 satisfying the following conditions: – any logically valid sentence formula_12 has probability formula_13 formula_14 – if sentences formula_12 and formula_15 are mutually exclusive then formula_16 – for a formula formula_17 with one free variable the probability of formula_18 is the limit of probabilities of formula_19 as formula_20 tends to formula_21. This last condition, which goes beyond the standard Kolmogorov axioms (for finite additivity) is referred to as Gaifman's Axiom and it is intended to capture the idea that the formula_9 exhaust the universe. For a probability function formula_10 and a sentence formula_15 with formula_22, the corresponding conditional probability function formula_23 is defined by formula_24 Unlike belief functions in "many valued logics", it is "not" the case that the probability value of a compound sentence is determined by the probability values of its components. Probability respects the classical semantics: logically equivalent sentences must be given the same probability. 
Hence logically equivalent sentences are often identified. A state description for a finite set of constants is a conjunction of atomic sentences (predicates or their negations) instantiated exclusively by these constants, such that for any eligible atomic sentence either it or its negation (but not both) appears in the conjunction. Any probability function is uniquely determined by its values on state descriptions. To define a probability function, it suffices to specify nonnegative values of all state descriptions for formula_25 (for all formula_20) so that the values of all state descriptions for formula_26 extending a given state description for formula_25 sum to the value of the state description they all extend, with the convention that the (only) state description for no constants is a tautology and that has value formula_27. If formula_28 is a state description for a set of constants including formula_29 then it is said that formula_29 are indistinguishable in formula_28, formula_30, just when upon adding equality to the language (and axioms of equality to the logic) the sentence formula_31 is consistent. formula_32 is an equivalence relation. Unary case. In the special case of Unary PIL, all the predicates formula_33 are unary. Formulae of the form formula_34 where formula_35 stands for one of formula_36, formula_37, are called atoms. It is assumed that they are listed in some fixed order as formula_38. A state description specifies an atom for each constant involved in it, and it can be written as a conjunction of these atoms instantiated by the corresponding constants. Two constants are indistinguishable in the state description if it specifies the same atom for both of them. Central question. Assume a rational agent inhabits a structure in formula_7 but knows nothing about which one it is. What probability function formula_10 should s/he adopt when formula_39 is to represent his/her degree of belief that a sentence formula_12 is true in this ambient structure? Rational principles. General rational principles. The following principles have been proposed as desirable properties of a rational prior probability function formula_10 for formula_3. The constant exchangeability principle, Ex. The probability of a sentence formula_40 does not change when the formula_41 in it are replaced by any other formula_42-tuple of (distinct) constants. The principle of predicate exchangeability, Px. If formula_43 are predicates of the same arity then for a sentence formula_12, formula_44 where formula_45 is the result of simultaneously replacing formula_36 by formula_46 and formula_46 by formula_36 throughout formula_12. The strong negation principle, SN. For a predicate formula_36 and sentence formula_47, formula_44 where formula_45 is the result of simultaneously replacing formula_36 by formula_37 and formula_37 by formula_36 throughout formula_12. The principle of regularity, Reg. If a quantifier-free sentence formula_47 is satisfiable then formula_48. The principle of super regularity (universal certainty), SReg. If a sentence formula_47 is satisfiable then formula_48. The constant irrelevance principle, IP. If sentences formula_49 have no constants in common then formula_50. The weak irrelevance principle, WIP. If sentences formula_49 have no constants nor predicates in common then formula_50. Language invariance principle, Li. 
There is a family of probability functions formula_51, one on each language formula_52, all satisfying Px and Ex, and such that formula_53 and if all predicates of formula_52 belong also to formula_54 then formula_55 and formula_56 agree on sentences of formula_52. The (strong) counterpart principle, CP. If formula_57 are sentences such that formula_45 is the result of replacing some constant/relation symbols in formula_12 by new constant/relation symbols of the same arity not occurring in formula_12 then formula_58 (SCP) If moreover formula_59 is the result of replacing the same and possibly also additional constant/relation symbols in formula_12 by new constant/relation symbols of the same arity not occurring in formula_12 then formula_60 The Invariance Principle, INV. If formula_61 is an isomorphism of the "Lindenbaum-Tarski algebra" of sentences of formula_3 supported by some permutation formula_62 of formula_63 in the sense that for sentences formula_64, formula_65 just when formula_66 then formula_67. The Permutation Invariance Principle, PIP. As INV except that formula_61 is additionally required to map (equivalence classes of) state descriptions to (equivalence classes of) state descriptions. The Spectrum Exchangeability Principle, Sx. The probability formula_68 of a state description formula_28 depends only on the "spectrum" of formula_28, that is, on the multiset of sizes of equivalence classes with respect to the equivalence relation formula_69. Li with Sx. As the Language Invariance Principle but all the probability functions in the family also satisfy Spectrum Exchangeability. The Principle of Induction, PI. Let formula_70 be a state description and formula_71 a constant not appearing in formula_28. Let formula_72, formula_73 be state descriptions extending formula_28 to include (just) formula_71. If formula_71 is formula_74-equivalent to some and at least as many constants as it is formula_75-equivalent to then formula_76. Further rational principles for unary PIL. The Principle of Instantial Relevance, PIR. For a sentence formula_12, atom formula_77 and constants formula_78 not appearing in formula_12, formula_79. The Generalized Principle of Instantial Relevance, GPIR. For quantifier-free sentences formula_80 with constants formula_78 not appearing in formula_12, if formula_81 then formula_82 Johnson Sufficientness Principle, JSP. For a state description formula_28 for formula_20 constants, atom formula_77 and constant formula_71 not appearing in formula_28, the probability formula_83 depends only on formula_20 and on the number of constants for which formula_28 specifies formula_77. The Principle of Atom Exchangeability, Ax. If formula_84 is a permutation of formula_85 and formula_28 is a state description expressed as a conjunction of instantiated atoms then formula_86 where formula_87 obtains from formula_28 upon replacing each formula_88 by formula_89. Reichenbach's Axiom, RA. Let formula_90 for formula_91 be an infinite sequence of atoms and formula_77 an atom. Then as formula_92 tends to formula_21, the difference between the conditional probability formula_93 and the proportion of occurrences of formula_77 amongst the formula_94 tends to formula_95. Principle of Induction for Unary languages, UPI. For a state description formula_28, atoms formula_96 and constant formula_71 not appearing in formula_28, if formula_28 specifies formula_88 for at least as many constants as formula_97 then formula_98 Recovery. 
Whenever formula_99 is a state description then there is another state description formula_100 such that formula_101 and for any quantifier-free sentence formula_102, formula_103 Unary Language Invariance Principle, ULi. As Li, but with the languages restricted to the unary ones. ULi with Ax. As ULi but with all the probability functions in the family also satisfying Atom Exchangeability. Relationships between principles. General Case. Sx implies Ex, Px and SN. PIP + Ex implies Sx. INV implies PIP and Ex. Li implies CP and SCP. Li with Sx implies PI. Unary case. Ex implies PIR. Ax is equivalent to PIP. Ax+Ex implies UPI. Ax+Ex is equivalent to Sx. ULi with Ax implies Li with Sx. Important probability functions. General probability functions. Functions formula_104. For a given structure formula_105 and formula_106, formula_107 Functions formula_108. For a given state description formula_109, formula_110 is defined via specifying its values for state descriptions as follows. formula_111 is the probability that when formula_112 are randomly picked from formula_113, "with replacement" and according to the uniform distribution, then formula_114 Functions formula_115. As above but employing a non-standard universe (starting with a possibly non-standard state description formula_73) to obtain the standard formula_115. formula_116 The formula_117 are the only probability functions that satisfy Ex and IP. Functions formula_118. For a given infinite sequence formula_119 of non-negative real numbers such that formula_120 and formula_121, formula_118 is defined via specifying its values for state descriptions as follows: For a sequence formula_122 of natural numbers and a state description formula_123, formula_28 is consistent with formula_124 if whenever formula_125 then formula_126. formula_127 is the number of state descriptions for formula_128 consistent with formula_124. formula_129 is the sum over those formula_130 with which formula_28 is compatible, of formula_131 formula_116 The formula_118 are the only probability functions that satisfy WIP and Li with Sx. (The language invariant family witnessing Li with Sx consists of the functions formula_132 with fixed formula_133, where formula_132 is as formula_118 but defined with language formula_52.) Further probability functions (unary PIL). Functions formula_10formula_124. For a vector formula_134 of non-negative real numbers summing to one, formula_10formula_124 is defined via specifying its values for state descriptions as follows: formula_10formula_124formula_135 where formula_136 the is number of constants for which formula_28 specifies formula_97. formula_116 The formula_10formula_124 are the only probability functions that satisfy Ex and IP (they are also expressible as formula_137 ). Carnap continuum functions formula_138 For formula_139, the probability function formula_140 is uniquely determined by the values formula_141 where formula_28 is a state description for formula_20 constants not including formula_71 and formula_136 is the number of constants for which formula_28 specifies formula_97. Furthermore, formula_142 is the probability function that assigns formula_143 to every state description for formula_20 constants and formula_144 is the probability function that assigns formula_145 to any state description in which all constants are indistinguishable, formula_95 to any other state description. formula_146 The formula_140 are the only probability functions that satisfy Ex and JSP. 
formula_146 They also satisfy Li – the functions formula_147 with fixed formula_148, where formula_147 is as formula_140 but defined with language formula_52 provide the unary language-invariant family members. Functions formula_149. For formula_150, formula_149 is the average of the formula_151 functions formula_10formula_124 where formula_124 has all but one coordinate equal to each other with the odd coordinate differing from them by formula_152, so formula_153formula_10formula_154 where formula_155, (formula_156 in formula_157th place) and formula_158. For formula_159, the formula_149 are equal to formula_160 for formula_161 and as such they satisfy Li. formula_116 The formula_149 are the only functions that satisfy GPIR, Ex, Ax and Reg. formula_116 The formula_149 with formula_162 are the only functions that satisfy Recovery, Reg and ULi with Ax. Representation theorems. A representation theorem for a class of probability functions provides means of expressing "every" probability function in the class in terms of generic, relatively simple probability functions from the same class. Representation Theorem for all probability functions. Every probability function formula_10 for formula_3 can be represented as formula_163 where formula_62 is a formula_164-additive measure on the formula_164-algebra of subsets of formula_63 generated by the sets formula_165 Representation Theorem for Ex (employing non-standard analysis and Loeb Integration Theory). Every probability function formula_10 for formula_3 satisfying Ex can be represented as formula_166 where formula_167 is an internal set of state descriptions for formula_168 (with formula_169 a fixed infinite natural number) and formula_62 is a formula_164-additive measure on a formula_164-algebra of subsets of formula_167 . Representation Theorem for Li with Sx. Every probability function formula_10 for formula_3 satisfying Li with Sx can be represented as formula_170 where formula_171 is the set of sequences formula_119 of non-negative reals summing to formula_27 and such that formula_172 and formula_62 is a formula_164-additive measure on the Borel subsets of formula_171 in the product topology. de Finetti's Representation Theorem (unary). In the unary case (where formula_3 is a language containing formula_173 unary predicates), the representation theorem for Ex is equivalent to: Every probability function formula_10 for formula_3 satisfying Ex can be represented as formula_174 where formula_175 is the set of vectors formula_176 of non-negative real numbers summing to one and formula_62 is a formula_164-additive measure on formula_175. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
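Illustrative code sketch. The Carnap continuum values above lend themselves to a direct numerical check. The following Python sketch is not part of the original treatment; the function name and the toy observation history are assumptions made purely for illustration. It evaluates the rule that the conditional probability of atom β_j holding of the next constant, given a state description for n constants in which β_j is specified for m_j of them, equals (m_j + λ·2^(-q))/(n + λ).
from collections import Counter

def carnap_conditional(atom_history, atom_j, q, lam):
    # Probability that the next constant satisfies atom_j, given the atoms
    # already specified for the constants observed so far.
    n = len(atom_history)
    m_j = Counter(atom_history)[atom_j]
    return (m_j + lam * 2 ** (-q)) / (n + lam)

# Toy example: one unary predicate R (q = 1), atoms 0 = R and 1 = not-R.
# After "observing" R(a1), R(a2), not-R(a3), belief that R(a4) holds:
history = [0, 0, 1]
for lam in (0.5, 2.0, 8.0):
    print(lam, carnap_conditional(history, 0, q=1, lam=lam))
Small values of λ track the observed relative frequency 2/3, while large values stay close to the symmetric prior 1/2, reflecting the continuum between purely empirical and purely a priori assignment.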
[ { "math_id": 0, "text": "\\wedge, \\vee, \\neg, \\to" }, { "math_id": 1, "text": "\\exist, \\forall," }, { "math_id": 2, "text": "a_1, a_2, a_3, \\ldots \\," }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "L = \\{R_1, R_2, \\ldots, R_q\\}" }, { "math_id": 5, "text": "R_i" }, { "math_id": 6, "text": "SL" }, { "math_id": 7, "text": "{\\cal T}L" }, { "math_id": 8, "text": "\\{a_1, a_2, a_3, \\ldots\\}" }, { "math_id": 9, "text": "a_i" }, { "math_id": 10, "text": "w" }, { "math_id": 11, "text": "[0,1]" }, { "math_id": 12, "text": "\\theta" }, { "math_id": 13, "text": "1\\!:\\," }, { "math_id": 14, "text": "w(\\theta)=1" }, { "math_id": 15, "text": "\\phi" }, { "math_id": 16, "text": "w(\\theta \\vee \\phi)= w(\\theta) + w(\\phi)" }, { "math_id": 17, "text": "\\psi(x)" }, { "math_id": 18, "text": "\\exists x \\, \\psi(x)" }, { "math_id": 19, "text": "\\psi(a_1) \\vee \\psi(a_2) \\vee \\ldots \\vee \\psi(a_n)" }, { "math_id": 20, "text": "n" }, { "math_id": 21, "text": "\\infty" }, { "math_id": 22, "text": "w(\\phi)>0" }, { "math_id": 23, "text": " w(\\,. |\\, \\phi)" }, { "math_id": 24, "text": "w(\\theta \\mid \\phi) = \\frac{w(\\theta \\wedge \\varphi)}{w(\\varphi)} \\quad\\ (\\theta \\in SL)." }, { "math_id": 25, "text": "a_1, \\ldots,a_n" }, { "math_id": 26, "text": "a_1, \\ldots,a_n, a_{n+1}" }, { "math_id": 27, "text": "1" }, { "math_id": 28, "text": "\\Theta" }, { "math_id": 29, "text": "a_i,a_j" }, { "math_id": 30, "text": "a_i \\sim_\\Theta a_j" }, { "math_id": 31, "text": "\\Theta \\wedge a_i=a_j" }, { "math_id": 32, "text": "\\,\\sim_\\Theta" }, { "math_id": 33, "text": "R_1, \\ldots, R_q" }, { "math_id": 34, "text": "~~~~~~~~~~~~\\beta(x) = \\pm R_1(x)\\wedge \\pm R_2(x) \\wedge \\ldots \\wedge \\pm R_q(x)" }, { "math_id": 35, "text": "\\pm R " }, { "math_id": 36, "text": "R" }, { "math_id": 37, "text": "\\neg R" }, { "math_id": 38, "text": "\\beta_1, \\beta_2,\\ldots, \\beta_{2^q}" }, { "math_id": 39, "text": "w(\\theta)" }, { "math_id": 40, "text": "\\theta(a_1,a_2, \\ldots, a_m) " }, { "math_id": 41, "text": "a_1, a_2, \\ldots, a_m" }, { "math_id": 42, "text": "m" }, { "math_id": 43, "text": "R,R'" }, { "math_id": 44, "text": "w(\\theta)=w(\\theta')" }, { "math_id": 45, "text": "\\theta'" }, { "math_id": 46, "text": "R'" }, { "math_id": 47, "text": "\\theta " }, { "math_id": 48, "text": "w(\\theta) >0" }, { "math_id": 49, "text": "\\theta, \\phi " }, { "math_id": 50, "text": "w(\\theta \\wedge \\phi) = w(\\theta) \\cdot w(\\phi)" }, { "math_id": 51, "text": "w^{J}" }, { "math_id": 52, "text": "J" }, { "math_id": 53, "text": "w^L=w" }, { "math_id": 54, "text": "K" }, { "math_id": 55, "text": "w^J" }, { "math_id": 56, "text": "w^K" }, { "math_id": 57, "text": "\\theta, \\theta' " }, { "math_id": 58, "text": "w(\\theta \\mid \\theta') \\geq w(\\theta). " }, { "math_id": 59, "text": "\\theta''" }, { "math_id": 60, "text": "w(\\theta \\mid \\theta') \\geq w(\\theta \\mid \\theta'') \\geq w(\\theta). 
" }, { "math_id": 61, "text": "F" }, { "math_id": 62, "text": "\\mu" }, { "math_id": 63, "text": "{\\cal T} L" }, { "math_id": 64, "text": "\\theta, \\phi" }, { "math_id": 65, "text": "F([\\theta]) = [\\phi]~" }, { "math_id": 66, "text": "~ M \\models \\theta \\Longleftrightarrow \\mu(M) \\models \\phi" }, { "math_id": 67, "text": "w(\\theta) = w(\\phi)" }, { "math_id": 68, "text": "w(\\Theta)" }, { "math_id": 69, "text": "\\sim_\\Theta" }, { "math_id": 70, "text": "\\Theta " }, { "math_id": 71, "text": "a_k" }, { "math_id": 72, "text": "\\Phi" }, { "math_id": 73, "text": "\\Psi" }, { "math_id": 74, "text": "\\sim_\\Phi" }, { "math_id": 75, "text": "\\sim_\\Psi" }, { "math_id": 76, "text": "w(\\Phi\\mid \\Theta) \\geq w(\\Psi \\mid \\Theta)" }, { "math_id": 77, "text": "\\beta" }, { "math_id": 78, "text": "a_k,a_m" }, { "math_id": 79, "text": "w(\\beta(a_k) \\mid \\beta(a_m) \\wedge \\theta) \\geq w(\\beta(a_k) \\mid \\theta)" }, { "math_id": 80, "text": "\\psi(a_k), \\phi(a_m), \\theta " }, { "math_id": 81, "text": "\\psi(x) \\models \\phi(x)" }, { "math_id": 82, "text": " w( \\psi(a_{k}) \\mid \\phi(a_{m}) \\wedge \\theta) \\geq w( \\psi(a_{k}) \\mid \\theta)." }, { "math_id": 83, "text": "w(\\beta(a_k)\\mid \\Theta)" }, { "math_id": 84, "text": "\\tau" }, { "math_id": 85, "text": "\\{1,2, \\ldots, 2^q\\}" }, { "math_id": 86, "text": "w(\\Theta)=w(\\Theta')" }, { "math_id": 87, "text": "\\Theta'" }, { "math_id": 88, "text": "\\beta_i" }, { "math_id": 89, "text": "\\beta_{\\tau(i)}" }, { "math_id": 90, "text": " \\beta_{h_i}" }, { "math_id": 91, "text": "i=1,2,3,\\ldots" }, { "math_id": 92, "text": "n " }, { "math_id": 93, "text": "w(\\beta(a_{n+1}) \\mid \\beta_{h_1}(a_1) \\wedge \\beta_{h_2}(a_2) \\wedge \\ldots \\wedge \\beta_{h_n}(a_n))" }, { "math_id": 94, "text": "\\beta_{h_1}, \\beta_{h_2}, \\ldots ,\\beta_{h_n}" }, { "math_id": 95, "text": "0" }, { "math_id": 96, "text": "\\beta_i, \\beta_j" }, { "math_id": 97, "text": "\\beta_j" }, { "math_id": 98, "text": "w(\\beta_i(a_k)\\mid \\Theta) \\geq w(\\beta_j(a_k)\\mid \\Theta)." }, { "math_id": 99, "text": "\\Psi(a_1,a_2, \\ldots, a_n)" }, { "math_id": 100, "text": "\\Phi(a_{n+1}, a_{n+2}, \\ldots, a_{h})" }, { "math_id": 101, "text": "w(\\Phi \\wedge \\Psi) \\neq 0" }, { "math_id": 102, "text": "\\theta(a_{h+1}, a_{h+2}, \\ldots, a_{h+g})" }, { "math_id": 103, "text": "w(\\theta(a_{h+1}, a_{h+2}, \\ldots, a_{h+g})\\,|\\,\\Phi \\wedge \\Psi) = w(\\theta(a_{h+1}, a_{h+2}, \\ldots, a_{h+g}))." }, { "math_id": 104, "text": "V_M" }, { "math_id": 105, "text": "M \\in {\\cal T} L" }, { "math_id": 106, "text": "\\theta \\in SL" }, { "math_id": 107, "text": "V_M(\\theta)= \\left\\{ \\begin{array}{ll} 1& {\\rm if}~ M\\models \\theta,\\\\ 0&{\\rm otherwise}.\\end{array} \\right." }, { "math_id": 108, "text": "\\omega^{\\Psi}" }, { "math_id": 109, "text": "\\Psi(a_1,a_2, \\ldots, a_K)" }, { "math_id": 110, "text": "\\,\\omega^{\\Psi}" }, { "math_id": 111, "text": "\\,\\omega^{\\Psi}(\\Theta(a_1,a_2, \\ldots, a_n))" }, { "math_id": 112, "text": "a_{h_1},a_{h_2}, \\ldots, a_{h_n}" }, { "math_id": 113, "text": "\\{a_1, \\ldots,a_K\\}" }, { "math_id": 114, "text": " \\Psi(a_1, \\ldots, a_K) \\models \\Theta(a_{h_1}, a_{h_2}, \\ldots, a_{h_n})." }, { "math_id": 115, "text": "^\\circ \\! (\\omega^\\Psi)" }, { "math_id": 116, "text": "\\bullet " }, { "math_id": 117, "text": "^\\circ \\! 
(\\omega^{\\Psi})" }, { "math_id": 118, "text": "u^{\\overline{p}}" }, { "math_id": 119, "text": "\\overline{p} = \\langle p_0,p_1,p_2,p_3, \\ldots \\rangle" }, { "math_id": 120, "text": "p_1 \\geq p_2 \\geq p_3 \\geq \\ldots \\geq 0\\, \\, " }, { "math_id": 121, "text": "~\\sum_{i=0}^\\infty p_i = 1" }, { "math_id": 122, "text": "\\vec{c} = \\langle c_1,c_2, \\ldots, c_n\\rangle" }, { "math_id": 123, "text": "\\Theta(a_{1}, a_{2}, \\ldots, a_{n})" }, { "math_id": 124, "text": "\\vec{c}" }, { "math_id": 125, "text": "c_s=c_t \\neq 0" }, { "math_id": 126, "text": "a_{s} \\sim_\\Theta a_{t}" }, { "math_id": 127, "text": "C(\\vec{c})" }, { "math_id": 128, "text": "a_{1}, a_{2}, \\ldots, a_{n}" }, { "math_id": 129, "text": "\\,u^{\\overline{p}}(\\Theta) " }, { "math_id": 130, "text": "\\vec{c} " }, { "math_id": 131, "text": " C(\\vec{c})^{-1} \\prod_{s=1}^n p_{c_s}." }, { "math_id": 132, "text": "u^{\\overline{p}, J}" }, { "math_id": 133, "text": "\\overline{p}" }, { "math_id": 134, "text": "\\vec{c} = \\langle c_1,c_2, \\ldots, c_{2^q}\\rangle" }, { "math_id": 135, "text": "(\\Theta )= \\prod_{j=1}^{2^q} c_{j}^{m_j}" }, { "math_id": 136, "text": "m_j" }, { "math_id": 137, "text": "^\\circ \\! (w^{\\Psi})" }, { "math_id": 138, "text": "c_{\\lambda}.\\," }, { "math_id": 139, "text": "\\lambda>0" }, { "math_id": 140, "text": "c_\\lambda" }, { "math_id": 141, "text": "c_\\lambda(\\beta_j(a_{n+1}) \\mid \\Theta) = \\frac{m_j + \\lambda2^{-q}}{n + \\lambda}" }, { "math_id": 142, "text": "c_\\infty" }, { "math_id": 143, "text": "2^{-nq}" }, { "math_id": 144, "text": "c_0" }, { "math_id": 145, "text": "2^{-q} " }, { "math_id": 146, "text": "\\bullet" }, { "math_id": 147, "text": "c^{J}_\\lambda" }, { "math_id": 148, "text": "\\lambda" }, { "math_id": 149, "text": "w^{\\delta}" }, { "math_id": 150, "text": "-(2^q-1)^{-1} \\leq \\delta \\leq 1" }, { "math_id": 151, "text": "2^q" }, { "math_id": 152, "text": "\\delta" }, { "math_id": 153, "text": " w^\\delta= 2^{-q} \\sum_{i=1}^{2^q} " }, { "math_id": 154, "text": "\\vec{e_i}" }, { "math_id": 155, "text": "\\vec{e_i} = \\langle \\gamma, \\gamma, \\ldots, \\gamma, \\gamma + \\delta, \\gamma, \\ldots, \\gamma \\rangle ~" }, { "math_id": 156, "text": "\\gamma+\\delta" }, { "math_id": 157, "text": "i" }, { "math_id": 158, "text": "\\gamma = 2^{-q}(1-\\delta)" }, { "math_id": 159, "text": "0\\leq \\delta \\leq 1" }, { "math_id": 160, "text": "u^{\\bar{p}}" }, { "math_id": 161, "text": "\\bar{p} = \\langle 1-\\delta, \\delta, 0,0,0,\\ldots \\rangle" }, { "math_id": 162, "text": "0\\leq\\delta <1" }, { "math_id": 163, "text": "w= \\int_{{\\cal T} L} V_M \\,d\\mu(M)" }, { "math_id": 164, "text": "\\sigma" }, { "math_id": 165, "text": "\\{\\, M \\in {\\cal T} L \\mid M \\vDash \\theta\\,\\} ~ ~~~ (\\theta \\in SL)." }, { "math_id": 166, "text": "w = \\int_A \\,^\\circ\\!(\\omega^{\\Psi}) \\, d\\mu(\\Psi)" }, { "math_id": 167, "text": "A" }, { "math_id": 168, "text": "a_1, a_2, \\ldots, a_\\nu" }, { "math_id": 169, "text": "\\nu" }, { "math_id": 170, "text": "w = \\int_{\\mathbb B} \\,u^{\\overline{p}}\\, d\\mu(\\overline{p}) " }, { "math_id": 171, "text": "{\\mathbb B}" }, { "math_id": 172, "text": "p_1 \\geq p_2 \\geq p_3 \\geq \\ldots \\,\\geq 0 \\," }, { "math_id": 173, "text": "q" }, { "math_id": 174, "text": " w= \\int_{\\mathbb D} w_{\\vec{x}}\\, d\\mu(\\vec{x})." }, { "math_id": 175, "text": "{\\mathbb D}" }, { "math_id": 176, "text": "\\vec{x} = \\langle x_1,x_2, \\ldots, x_{2^q}\\rangle" } ]
https://en.wikipedia.org/wiki?curid=57253540
57255362
Thermal boundary layer thickness and shape
This page describes some parameters used to characterize the properties of the thermal boundary layer formed by a heated (or cooled) fluid moving along a heated (or cooled) wall. In many ways, the thermal boundary layer description parallels the velocity (momentum) boundary layer description first conceptualized by Ludwig Prandtl. Consider a fluid of uniform temperature formula_0 and velocity formula_1 impinging onto a stationary plate uniformly heated to a temperature formula_2. Assume the flow and the plate are semi-infinite in the positive/negative direction perpendicular to the formula_3 plane. As the fluid flows along the wall, the fluid at the wall surface satisfies a no-slip boundary condition and has zero velocity, but as you move away from the wall, the velocity of the flow asymptotically approaches the free stream velocity formula_4. The temperature at the solid wall is formula_2 and gradually changes to formula_0 as one moves toward the free stream of the fluid. It is impossible to define a sharp point at which the thermal boundary layer fluid or the velocity boundary layer fluid becomes the free stream, yet these layers have a well-defined characteristic thickness given by formula_5 and formula_6. The parameters below provide a useful definition of this characteristic, measurable thickness for the thermal boundary layer. Also included in this boundary layer description are some parameters useful in describing the shape of the thermal boundary layer. 99% thermal boundary layer thickness. The thermal boundary layer thickness, formula_5, is the distance across a boundary layer from the wall to a point where the flow temperature has essentially reached the 'free stream' temperature, formula_7. This distance is defined normal to the wall in the formula_8-direction. The thermal boundary layer thickness is customarily defined as the point in the boundary layer, formula_9, where the temperature formula_10 reaches 99% of the free stream value formula_7: formula_11 such that formula_12 = 0.99 formula_7 at a position formula_13 along the wall. In a real fluid, this quantity can be estimated by measuring the temperature profile at a position formula_14 along the wall. The temperature profile is the temperature as a function of formula_8 at a fixed formula_13 position. For laminar flow over a flat plate at zero incidence, the thermal boundary layer thickness is given by: formula_15 formula_16 where formula_17 is the Prandtl Number formula_6 is the thickness of the velocity boundary layer thickness formula_4 is the freestream velocity formula_13 is the distance downstream from the start of the boundary layer formula_18 is the kinematic viscosity For turbulent flow over a flat plate, the thickness of the thermal boundary layer that is formed is not determined by thermal diffusion, but instead, it is random fluctuations in the outer region of the boundary layer of the fluid that is the driving force determining thermal boundary layer thickness. Thus the thermal boundary layer thickness for turbulent flow does not depend on the Prandtl number but instead on the Reynolds number. Hence, the turbulent thermal boundary layer thickness is given approximately by the turbulent velocity boundary layer thickness expression given by: formula_19 where formula_20 is the Reynolds number This turbulent boundary layer thickness formula assumes 1) the flow is turbulent right from the start of the boundary layer and 2) the turbulent boundary layer behaves in a geometrically similar manner (i.e. 
the velocity profiles are geometrically similar along the flow in the x-direction, differing only by stretching factors in formula_8 and formula_21). Neither one of these assumptions is true for the general turbulent boundary layer case so care must be exercised in applying this formula. Thermal displacement thickness. The thermal displacement thickness, formula_22 may be thought of in terms of the difference between a real fluid and a hypothetical fluid with thermal diffusion turned off but with velocity formula_4 and temperature formula_7. With no thermal diffusion, the temperature drop is abrupt. The thermal displacement thickness is the distance by which the hypothetical fluid surface would have to be moved in the formula_8-direction to give the same integrated temperature as occurs between the wall and the reference plane at formula_5 in the real fluid. It is a direct analog to the velocity displacement thickness which is often described in terms of an equivalent shift of a hypothetical inviscid fluid (see Schlichting for velocity displacement thickness). The definition of the thermal displacement thickness for incompressible flow is based on the integral of the reduced temperature: formula_23 where the dimensionless temperature is formula_24. In a wind tunnel, the velocity and temperature profiles are obtained by measuring the velocity and temperature at many discrete formula_8-values at a fixed formula_13-position. The thermal displacement thickness can then be estimated by numerically integrating the scaled temperature profile. Moment method. A relatively new method for describing the thickness and shape of the thermal boundary layer utilizes the moment method commonly used to describe a random variable's probability distribution. The moment method was developed from the observation that the plot of the second derivative of the thermal profile for laminar flow over a plate looks very much like a Gaussian distribution curve. It is straightforward to cast the properly scaled thermal profile into a suitable integral kernel. The thermal profile central moments are defined as: formula_25 where the mean location, formula_26, is given by: formula_27 There are some advantages to also include descriptions of moments of the boundary layer profile derivatives with respect to the height above the wall. Consider the first derivative temperature profile central moments given by: formula_28 where the mean location is the thermal displacement thickness formula_22. Finally the second derivative temperature profile central moments are given by: formula_29 where the mean location, formula_30, is given by: formula_31 With the moments and the thermal mean location defined, the boundary layer thickness and shape can be described in terms of the thermal boundary layer width (variance), thermal skewnesses, and thermal excess (excess kurtosis). For the Pohlhausen solution for laminar flow on a heated flat plate, it is found that thermal boundary layer thickness defined as formula_32 where formula_33, tracks the 99% thickness very well. For laminar flow, the three different moment cases all give similar values for the thermal boundary layer thickness. For turbulent flow, the thermal boundary layer can be divided into a region near the wall where thermal diffusion is important and an outer region where thermal diffusion effects are mostly absent. 
Taking a cue from the boundary layer energy balance equation, the second derivative boundary layer moments, formula_34, track the thickness and shape of that portion of the thermal boundary layer where thermal diffusivity formula_35 is significant. Hence the moment method makes it possible to track and quantify the region where thermal diffusivity is important using formula_34 moments, whereas the overall thermal boundary layer is tracked using formula_36 and formula_37 moments. Calculation of the derivative moments without the need to take derivatives is simplified by using integration by parts, which reduces the moments to simple integrals based on the thermal displacement thickness kernel: formula_38 This means that the second derivative skewness, for example, can be calculated as: formula_39 Notes. <templatestyles src="Reflist/styles.css" />
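Illustrative code sketch. The two thickness estimates quoted in the sections above can be evaluated numerically. The short Python sketch below is not from the source; the air-like property values and the plate position are assumptions chosen only for demonstration.
import math

def delta_t_laminar(x, u0, nu, Pr):
    # 99% thermal thickness for laminar flow over a flat plate:
    # delta_T = 5.0 * sqrt(nu * x / u0) * Pr**(-1/3)
    return 5.0 * math.sqrt(nu * x / u0) * Pr ** (-1.0 / 3.0)

def delta_t_turbulent(x, u0, nu):
    # Approximate turbulent thickness: delta ~ 0.37 * x / Re_x**(1/5)
    re_x = u0 * x / nu
    return 0.37 * x / re_x ** 0.2

x, u0, nu, Pr = 0.5, 10.0, 1.5e-5, 0.71  # m, m/s, m^2/s, dimensionless (assumed values)
print(delta_t_laminar(x, u0, nu, Pr))    # roughly 5 mm
print(delta_t_turbulent(x, u0, nu))      # roughly 15 mm
As noted above, the turbulent estimate should be applied with care, since it assumes turbulence from the leading edge and geometric similarity of the profiles.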
[ { "math_id": 0, "text": "T_o" }, { "math_id": 1, "text": "u_o" }, { "math_id": 2, "text": "T_s" }, { "math_id": 3, "text": "x-y" }, { "math_id": 4, "text": "u_0" }, { "math_id": 5, "text": "\\delta_T" }, { "math_id": 6, "text": "\\delta_v" }, { "math_id": 7, "text": "T_0" }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "y_{99}" }, { "math_id": 10, "text": "T(x,y)" }, { "math_id": 11, "text": "\\delta_T = y_{99}" }, { "math_id": 12, "text": "T(x,y_{99})" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "x \n" }, { "math_id": 15, "text": " \\delta_T = \\delta_v \\mathrm{Pr}^{-1/3} " }, { "math_id": 16, "text": " \\delta_T = 5.0 {}\\sqrt{ {\\nu x}\\over u_0} \\mathrm{Pr}^{-1/3}" }, { "math_id": 17, "text": "\\mathrm{Pr}" }, { "math_id": 18, "text": "\\nu" }, { "math_id": 19, "text": " \\delta_T \\approx \\delta \\approx 0.37x/ {\\mathrm{Re}_x}^{1/5} " }, { "math_id": 20, "text": "{\\mathrm{Re}_x}= u_0 x/\\nu" }, { "math_id": 21, "text": "u(x,y)" }, { "math_id": 22, "text": "\\beta^*" }, { "math_id": 23, "text": " {\\beta^*}= \\int_0^\\infty {\\theta(x,y) \\,\\mathrm{d}y}" }, { "math_id": 24, "text": "\\theta(x,y) = (T(x,y)-T_0)/(T_s-T_0)" }, { "math_id": 25, "text": " {\\xi_n} = {1\\over\\beta^*}\\int_0^\\infty { (y- m_T)^n \\theta(x,y) \\mathrm{d}y}" }, { "math_id": 26, "text": "m_T" }, { "math_id": 27, "text": " m_T = {1\\over\\beta^*}\\int_0^\\infty { y \\theta(x,y) \\mathrm{d}y}" }, { "math_id": 28, "text": " {\\epsilon_n} = \\int_0^\\infty { (y-{\\beta^*})^n {d \\theta(x,y) \\over dy} \\mathrm{d}y}" }, { "math_id": 29, "text": " {\\phi_n} = \\mu_T \\int_0^\\infty { (y-{\\mu_T})^n {d^2 \\theta(x,y) \\over dy^2} \\mathrm{d}y}" }, { "math_id": 30, "text": "\\mu_T" }, { "math_id": 31, "text": " {1 \\over \\mu_T} = -\\left( \\frac{d\\theta(x,y) }{d y}\\right)_{y=0}" }, { "math_id": 32, "text": "\\delta_T = m_T + 4\\sigma_T" }, { "math_id": 33, "text": "\\sigma_T=\\xi_2^{1/2}" }, { "math_id": 34, "text": "{\\phi_n}" }, { "math_id": 35, "text": "{\\alpha }" }, { "math_id": 36, "text": "{\\epsilon_n}" }, { "math_id": 37, "text": "{\\xi_n}" }, { "math_id": 38, "text": " {k_n}= \\int_0^\\infty {y^n\\theta(x,y) \\,\\mathrm{d}y}" }, { "math_id": 39, "text": "\\gamma_{T} = \\phi_3/\\phi_2^{3/2} = (2\\mu_T^3 - 6\\beta^*\\mu_T^2 + 6\\mu_T k_1)/(-\\mu_T^2 + 2\\mu_T\\beta^*)^{3/2} " } ]
https://en.wikipedia.org/wiki?curid=57255362
57256998
Gekko (optimization software)
Python package The GEKKO Python package solves large-scale mixed-integer and differential algebraic equations with nonlinear programming solvers (IPOPT, APOPT, BPOPT, SNOPT, MINOS). Modes of operation include machine learning, data reconciliation, real-time optimization, dynamic simulation, and nonlinear model predictive control. In addition, the package solves Linear programming (LP), Quadratic programming (QP), Quadratically constrained quadratic program (QCQP), Nonlinear programming (NLP), Mixed integer programming (MIP), and Mixed integer linear programming (MILP). GEKKO is available in Python and installed with pip from PyPI of the Python Software Foundation. pip install gekko GEKKO works on all platforms and with Python 2.7 and 3+. By default, the problem is sent to a public server where the solution is computed and returned to Python. There are Windows, MacOS, Linux, and ARM (Raspberry Pi) processor options to solve without an Internet connection. GEKKO is an extension of the APMonitor Optimization Suite but has integrated the modeling and solution visualization directly within Python. A mathematical model is expressed in terms of variables and equations such as the Hock &amp; Schittkowski Benchmark Problem #71 used to test the performance of nonlinear programming solvers. This particular optimization problem has an objective function formula_0 and subject to the inequality constraint formula_1 and equality constraint formula_2. The four variables must be between a lower bound of 1 and an upper bound of 5. The initial guess values are formula_3. This optimization problem is solved with GEKKO as shown below. from gekko import GEKKO m = GEKKO() # Initialize gekko x1 = m.Var(value=1, lb=1, ub=5) x2 = m.Var(value=5, lb=1, ub=5) x3 = m.Var(value=5, lb=1, ub=5) x4 = m.Var(value=1, lb=1, ub=5) m.Equation(x1 * x2 * x3 * x4 &gt;= 25) m.Equation(x1 ** 2 + x2 ** 2 + x3 ** 2 + x4 ** 2 == 40) m.Minimize(x1 * x4 * (x1 + x2 + x3) + x3) m.solve(disp=False) # Solve print("x1: " + str(x1.value)) print("x2: " + str(x2.value)) print("x3: " + str(x3.value)) print("x4: " + str(x4.value)) print("Objective: " + str(m.options.objfcnval)) Applications of GEKKO. Applications include cogeneration (power and heat), drilling automation, severe slugging control, solar thermal energy production, solid oxide fuel cells, flow assurance, Enhanced oil recovery, Essential oil extraction, and Unmanned Aerial Vehicles (UAVs). There are many other references to APMonitor and GEKKO as a sample of the types of applications that can be solved. GEKKO is developed from the National Science Foundation (NSF) research grant #1547110 and is detailed in a Special Issue collection on combined scheduling and control. Other notable mentions of GEKKO are the listing in the Decision Tree for Optimization Software, added support for APOPT and BPOPT solvers, projects reports of the online Dynamic Optimization course from international participants. GEKKO is a topic in online forums where users are solving optimization and optimal control problems. GEKKO is used for advanced control in the Temperature Control Lab (TCLab) for process control education at 20 universities. Machine learning. One application of machine learning is to perform regression from training data to build a correlation. In this example, deep learning generates a model from training data that is generated with the function formula_4. An artificial neural network with three layers is used for this example. 
The first layer is linear, the second layer has a hyperbolic tangent activation function, and the third layer is linear. The program produces parameter weights that minimize the sum of squared errors between the measured data points and the neural network predictions at those points. GEKKO uses gradient-based optimizers to determine the optimal weight values instead of standard methods such as backpropagation. The gradients are determined by automatic differentiation, similar to other popular packages. The problem is solved as a constrained optimization problem and is converged when the solver satisfies Karush–Kuhn–Tucker conditions. Using a gradient-based optimizer allows additional constraints that may be imposed with domain knowledge of the data or system. from gekko import brain import numpy as np b = brain.Brain() b.input_layer(1) b.layer(linear=3) b.layer(tanh=3) b.layer(linear=3) b.output_layer(1) x = np.linspace(-np.pi, 3 * np.pi, 20) y = 1 - np.cos(x) b.learn(x, y) The neural network model is tested across the range of training data as well as for extrapolation to demonstrate poor predictions outside of the training data. Predictions outside the training data set are improved with hybrid machine learning that uses fundamental principles (if available) to impose a structure that is valid over a wider range of conditions. In the example above, the hyperbolic tangent activation function (hidden layer 2) could be replaced with a sine or cosine function to improve extrapolation. The final part of the script displays the neural network model, the original function, and the sampled data points used for fitting. import matplotlib.pyplot as plt xp = np.linspace(-2 * np.pi, 4 * np.pi, 100) yp = b.think(xp) plt.figure() plt.plot(x, y, "bo") plt.plot(xp, yp[0], "r-") plt.show() Optimal control. Optimal control is the use of mathematical optimization to obtain a policy that is constrained by differential formula_5, equality formula_6, or inequality formula_7 equations and minimizes an objective/reward function formula_8. The basic optimal control is solved with GEKKO by integrating the objective and transcribing the differential equation into algebraic form with orthogonal collocation on finite elements. from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO() # initialize gekko nt = 101 m.time = np.linspace(0, 2, nt) x1 = m.Var(value=1) x2 = m.Var(value=0) u = m.Var(value=0, lb=-1, ub=1) p = np.zeros(nt) # mark final time point p[-1] = 1.0 final = m.Param(value=p) m.Equation(x1.dt() == u) m.Equation(x2.dt() == 0.5 * x1 ** 2) m.Minimize(x2 * final) m.options.IMODE = 6 # optimal control mode m.solve() # solve plt.figure(1) # plot results plt.plot(m.time, x1.value, "k-", label=r"$x_1$") plt.plot(m.time, x2.value, "b-", label=r"$x_2$") plt.plot(m.time, u.value, "r--", label=r"$u$") plt.legend(loc="best") plt.xlabel("Time") plt.ylabel("Value") plt.show() References. &lt;templatestyles src="Reflist/styles.css" /&gt;
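Mixed-integer example. The introduction notes that GEKKO also solves mixed-integer problems, although the examples above are continuous. The sketch below is a minimal illustration rather than part of the original documentation; the bounds and objective are invented, and it assumes the commonly used integer=True variable flag with APOPT selected as the mixed-integer solver (m.options.SOLVER = 1).
from gekko import GEKKO

m = GEKKO(remote=False)  # solve locally instead of on the public server
x = m.Var(value=1, lb=0, ub=10)
y = m.Var(value=1, lb=0, ub=10, integer=True)  # integer decision variable
m.Equation(x + 2 * y >= 7)
m.Minimize(x ** 2 + y)
m.options.SOLVER = 1  # APOPT handles the integer restriction
m.solve(disp=False)
print("x: " + str(x.value[0]))
print("y: " + str(y.value[0]))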
[ { "math_id": 0, "text": "\\min_{x\\in\\mathbb R}\\; x_1 x_4 (x_1+x_2+x_3)+x_3" }, { "math_id": 1, "text": "x_1 x_2 x_3 x_4 \\ge 25" }, { "math_id": 2, "text": "{x_1}^2 + {x_2}^2 + {x_3}^2 + {x_4}^2=40" }, { "math_id": 3, "text": "x_1 = 1, x_2=5, x_3=5, x_4=1" }, { "math_id": 4, "text": "1-\\cos(x)" }, { "math_id": 5, "text": "\\left(\\frac{d\\,x_1}{d\\,t}=u\\right)" }, { "math_id": 6, "text": "\\left(x_1(0) = 1\\right)" }, { "math_id": 7, "text": "\\left(-1 \\le u(t) \\le 1\\right)" }, { "math_id": 8, "text": "\\left(\\min_u \\frac{1}{2} \\int_0^2 x_1^2(t) \\, dt\\right)" } ]
https://en.wikipedia.org/wiki?curid=57256998
57258626
Jouanolou's trick
Theorem in algebraic geometry that builds a homotopy equivalent affine variety In algebraic geometry, Jouanolou's trick is a theorem that asserts, for an algebraic variety "X", the existence of a surjection with affine fibers from an affine variety "W" to "X". The variety "W" is therefore homotopy-equivalent to "X", but it has the technically advantageous property of being affine. Jouanolou's original statement of the theorem required that "X" be quasi-projective over an affine scheme, but this has since been considerably weakened. Jouanolou's construction. Jouanolou's original statement was: If "X" is a scheme quasi-projective over an affine scheme, then there exists a vector bundle "E" over "X" and an affine "E"-torsor "W". By the definition of a torsor, "W" comes with a surjective map to "X" and is Zariski-locally on "X" an affine space bundle. Jouanolou's proof used an explicit construction. Let "S" be an affine scheme and formula_0. Interpret the affine space formula_1 as the space of ("r" + 1) × ("r" + 1) matrices over "S". Within this affine space, there is a subvariety "W" consisting of idempotent matrices of rank one. The image of such a matrix is therefore a point in "X", and the map formula_2 that sends a matrix to the point corresponding to its image is the map claimed in the statement of the theorem. To show that this map has the desired properties, Jouanolou notes that there is a short exact sequence of vector bundles: formula_3 where the first map is defined by multiplication by a basis of sections of formula_4 and the second map is the cokernel. Jouanolou then asserts that "W" is a torsor for formula_5. Jouanolou deduces the theorem in general by reducing to the above case. If "X" is projective over an affine scheme "S", then it admits a closed immersion into some projective space formula_6. Pulling back the variety "W" constructed above for formula_6 along this immersion yields the desired variety "W" for "X". Finally, if "X" is quasi-projective, then it may be realized as an open subscheme of a projective "S"-scheme. Blow up the complement of "X" to get formula_7, and let formula_8 denote the inclusion morphism. The complement of "X" in formula_7 is a Cartier divisor, and therefore "i" is an affine morphism. Now perform the previous construction for formula_7 and pull back along "i". Thomason's construction. Robert Thomason observed that, by making a less explicit construction, it was possible to obtain the same conclusion under significantly weaker hypotheses. Thomason's construction first appeared in a paper of Weibel. Thomason's theorem asserts: Let "X" be a quasicompact and quasiseparated scheme with an ample family of line bundles. Then an affine vector bundle torsor over "X" exists. Having an ample family of line bundles was first defined in SGA 6 Exposé II Définition 2.2.4. Any quasi-projective scheme over an affine scheme has an ample family of line bundles, as does any separated locally factorial Noetherian scheme. Thomason's proof abstracts the key features of Jouanolou's. By hypothesis, "X" admits a set of line bundles "L"0, ..., "L""N" and sections "s"0, ..., "s""N" whose non-vanishing loci are affine and cover "X". Define "X""i" to be the non-vanishing locus of "s""i", and define formula_9 to be the direct sum of "L"0, ..., "L""N". The sections define a morphism of vector bundles formula_10. Define formula_11 to be the cokernel of "s". On "X""i", "s" is a split monomorphism since it is inverted by the inverse of "s""i". 
Therefore formula_11 is a vector bundle over "X""i", and because these open sets cover "X", formula_11 is a vector bundle. Define formula_12 and similarly for formula_13. Let "W" be the complement of formula_13 in formula_14. There is an equivalent description of "W" as formula_15, and from this description, it is easy to check that it is a torsor for formula_11. Therefore the projection formula_16 is affine. To see that "W" is itself affine, apply a criterion of Serre (EGA II 5.2.1(b), EGA IV1 1.7.17). Each "s""i" determines a global section "f""i" of "W". The non-vanishing locus "W""i" of "f""i" is contained in formula_17, which is affine, and hence "W""i" is affine. The sum of the sections "f"0, ..., "f""N" is 1, so the ideal they generate is the ring of global sections. Serre's criterion now implies that "W" is affine. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X = \\mathbf{P}^r_S" }, { "math_id": 1, "text": "\\mathbf{A}^{(r+1)^2}_S" }, { "math_id": 2, "text": "W \\to X" }, { "math_id": 3, "text": "0 \\to \\mathcal{O}_X(-1) \\to \\mathcal{O}_X^{\\oplus r + 1} \\to \\mathcal{F} \\to 0," }, { "math_id": 4, "text": "\\mathcal{O}_X(1)" }, { "math_id": 5, "text": "\\mathcal{E} = \\operatorname{Hom}(\\mathcal{F}, \\mathcal{O}_X(-1))" }, { "math_id": 6, "text": "\\mathbf{P}^r_S" }, { "math_id": 7, "text": "\\bar X" }, { "math_id": 8, "text": "i \\colon X \\to \\bar X" }, { "math_id": 9, "text": "\\mathcal{E}" }, { "math_id": 10, "text": "s = (s_0, \\ldots, s_N) \\colon \\mathcal{O}_X \\to \\mathcal{E}" }, { "math_id": 11, "text": "\\mathcal{F}" }, { "math_id": 12, "text": "\\mathbf{P}(\\mathcal{E}) = \\operatorname{Proj} \\operatorname{Sym}^* \\mathcal{E}" }, { "math_id": 13, "text": "\\mathbf{P}(\\mathcal{F})" }, { "math_id": 14, "text": "\\mathbf{P}(\\mathcal{E})" }, { "math_id": 15, "text": "\\operatorname{Spec}(\\operatorname{Sym}^* \\mathcal{E} / (s - 1))" }, { "math_id": 16, "text": "\\pi \\colon W \\to X" }, { "math_id": 17, "text": "\\pi^{-1}(X_i)" } ]
https://en.wikipedia.org/wiki?curid=57258626
57261507
RAMnets
RAMnets is one of the oldest practical neurally inspired classification algorithms. A RAMnet is also known as a type of ""n"-tuple recognition method" or "weightless neural network". Algorithm. Consider "N" sets of "n" distinct bit locations, selected randomly. These are the "n"-tuples. The restriction of a pattern to an "n"-tuple can be regarded as an "n"-bit number which, together with the identity of the "n"-tuple, constitutes a "feature" of the pattern. The standard "n"-tuple recognizer operates simply as follows: "A pattern is classified as belonging to the class for which it has the most features in common with at least one training pattern of that class." This is the formula_0 = 0 case of a more general rule whereby the class assigned to an unclassified pattern u is formula_1 where Dc is the set of training patterns in class c, formula_2 = x for formula_3, formula_4 for formula_5, formula_6 is the Kronecker delta (formula_6 = 1 if i = j and 0 otherwise), and formula_7 is the ith feature of the pattern u: formula_8 Here uk is the kth bit of u and formula_9 is the jth bit location of the ith n-tuple. With C classes to distinguish, the system can be implemented as a network of NC nodes, each of which is a random access memory (RAM); hence the term "RAMnet." The memory content formula_10 at address formula_11 of the ith node allocated to class c is set to formula_10 = formula_12 In the usual formula_13 = 1 case, the 1-bit content of formula_10 is set if any pattern of Dc has feature formula_14 and unset otherwise. Recognition is accomplished by summing the contents of the nodes of each class at the addresses given by the features of the unclassified pattern. That is, pattern u is assigned to class formula_15 RAM-discriminators and WiSARD. RAMnets formed the basis of a commercial product known as WiSARD (Wilkie, Stonham and Aleksander Recognition Device), which was the first artificial neural network machine to be patented. A RAM-discriminator consists of a set of X one-bit word RAMs with n inputs and a summing device (Σ). Any such RAM-discriminator can receive a binary pattern of X⋅n bits as input. The RAM input lines are connected to the input pattern by means of a biunivocal pseudo-random mapping. The summing device enables this network of RAMs to exhibit – just like other ANN models based on synaptic weights – generalization and noise tolerance. In order to train the discriminator, one has to set all RAM memory locations to 0 and choose a training set formed by binary patterns of X⋅n bits. For each training pattern, a 1 is stored in the memory location of each RAM addressed by this input pattern. Once the training of patterns is completed, the RAM memory contents will be set to a certain number of 0's and 1's. The information stored by the RAMs during the training phase is used to deal with previously unseen patterns. When one of these is given as input, the RAM memory contents addressed by the input pattern are read and summed by Σ. The number r thus obtained, which is called the discriminator response, is equal to the number of RAMs that output 1. r reaches the maximum X if the input belongs to the training set. r is equal to 0 if no "n"-bit component of the input pattern appears in the training set (not a single RAM outputs 1). Intermediate values of r express a kind of "similarity measure" of the input pattern with respect to the patterns in the training set. A system formed by various RAM-discriminators is called WiSARD.
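Before turning to the multi-discriminator (WiSARD) arrangement described next, the training and recall procedure above can be made concrete with a short sketch. The following Python code is only an illustrative reconstruction of the standard "n"-tuple recognizer in the usual formula_13 = 1 case and is not taken from the original sources; the random tuple selection, the toy 8-bit patterns, and the use of Python sets in place of one-bit RAM words are simplifications chosen for readability.
import random

def make_tuples(pattern_bits, n, num_tuples, seed=0):
    # Randomly choose num_tuples groups of n distinct bit locations.
    rng = random.Random(seed)
    return [rng.sample(range(pattern_bits), n) for _ in range(num_tuples)]

def feature(pattern, locations):
    # Read the selected bit locations of a 0/1 pattern as an n-bit address.
    address = 0
    for bit_index in locations:
        address = (address << 1) | pattern[bit_index]
    return address

def train(patterns_by_class, tuples):
    # One 2^n-entry RAM per (class, tuple); a Python set stands in for the
    # one-bit RAM words and records every address written during training.
    rams = {label: [set() for _ in tuples] for label in patterns_by_class}
    for label, patterns in patterns_by_class.items():
        for pattern in patterns:
            for i, locations in enumerate(tuples):
                rams[label][i].add(feature(pattern, locations))
    return rams

def classify(pattern, rams, tuples):
    # The response of a class is the number of tuples whose address was seen
    # during training; the pattern is assigned to the class with the highest response.
    scores = {label: sum(feature(pattern, locations) in class_rams[i]
                         for i, locations in enumerate(tuples))
              for label, class_rams in rams.items()}
    return max(scores, key=scores.get), scores

# Toy usage: 8-bit patterns, two classes, four 2-bit tuples.
tuples = make_tuples(pattern_bits=8, n=2, num_tuples=4)
training = {"A": [[1, 1, 1, 1, 0, 0, 0, 0]], "B": [[0, 0, 0, 0, 1, 1, 1, 1]]}
rams = train(training, tuples)
print(classify([1, 1, 1, 0, 0, 0, 0, 0], rams, tuples))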
Each RAM-discriminator is trained on a particular class of patterns, and classification by the multi-discriminator system is performed in the following way. When a pattern is given as input, each RAM-discriminator gives a response to that input. The various responses are evaluated by an algorithm which compares them and computes the relative confidence c of the highest response (e.g., the difference d between the highest response and the second highest response, divided by the highest response). A schematic representation of a RAM-discriminator and a 10 RAM-discriminator WiSARD is shown in Figure 1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Theta" }, { "math_id": 1, "text": "\\begin{align} \\underset{c}argmax(\\sum_{i=1}^N\\Theta(\\sum_{v\\in D_{c}}\\delta(\\alpha_{i}(u),\\alpha_{i}(v))))\\end{align}" }, { "math_id": 2, "text": "\\Theta(x)" }, { "math_id": 3, "text": "0\\leq x\\leq \\theta" }, { "math_id": 4, "text": "\\Theta(x)=\\theta" }, { "math_id": 5, "text": "x\\geq\\theta" }, { "math_id": 6, "text": "\\delta_{i,j}" }, { "math_id": 7, "text": "(\\alpha_{i}(u))" }, { "math_id": 8, "text": "\\sum_{j=0}^{n-1}u_\\eta i(j)2^{j}" }, { "math_id": 9, "text": "u_\\eta i (j)" }, { "math_id": 10, "text": "m_{ci\\alpha}" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "\\Theta(\\sum_{v\\in D_{c}}\\delta(\\alpha,\\alpha_{i}(v)))" }, { "math_id": 13, "text": "\\theta" }, { "math_id": 14, "text": "\\alpha " }, { "math_id": 15, "text": "\\begin{align} \\underset{c}argmax(\\sum_{i=1}^N m_{ci\\alpha}(u)) \\end{align}" } ]
https://en.wikipedia.org/wiki?curid=57261507
57264039
Einstein's thought experiments
Albert Einstein's hypothetical situations to argue scientific points A hallmark of Albert Einstein's career was his use of visualized thought experiments (German: "Gedankenexperimente") as a fundamental tool for understanding physical issues and for elucidating his concepts to others. Einstein's thought experiments took diverse forms. In his youth, he mentally chased beams of light. For special relativity, he employed moving trains and flashes of lightning to explain his most penetrating insights. For general relativity, he considered a person falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his debates with Niels Bohr on the nature of reality, he proposed imaginary devices that attempted to show, at least in concept, how the Heisenberg uncertainty principle might be evaded. In a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement. Introduction. A thought experiment is a logical argument or mental model cast within the context of an imaginary (hypothetical or even counterfactual) scenario. A scientific thought experiment, in particular, may examine the implications of a theory, law, or set of principles with the aid of fictive and/or natural particulars (demons sorting molecules, cats whose lives hinge upon a radioactive disintegration, men in enclosed elevators) in an idealized environment (massless trapdoors, absence of friction). They describe experiments that, except for some specific and necessary idealizations, could conceivably be performed in the real world. As opposed to "physical" experiments, thought experiments do not report new empirical data. They can only provide conclusions based on deductive or inductive reasoning from their starting assumptions. Thought experiments invoke particulars that are irrelevant to the generality of their conclusions. It is the invocation of these particulars that gives thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D. Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument." When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their "intuitions" to their understanding of a scenario. Thought experiments have a long history. Perhaps the best known in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This has sometimes been taken to be an actual physical demonstration, involving his climbing up the Leaning Tower of Pisa and dropping two heavy weights off it. In fact, it was a logical demonstration described by Galileo in "Discorsi e dimostrazioni matematiche" (1638). Einstein had a highly visual understanding of physics. His work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." These aspects of his thinking style inspired him to fill his papers with vivid practical detail, making them quite different from, say, the papers of Lorentz or Maxwell. This included his use of thought experiments. Special relativity. Pursuing a beam of light.
Late in life, Einstein recalled &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;...a paradox upon which I had already hit at the age of sixteen: If I pursue a beam of light with the velocity c (velocity of light in a vacuum), I should observe such a beam of light as an electromagnetic field at rest though spatially oscillating. There seems to be no such thing, however, neither on the basis of experience nor according to Maxwell's equations. From the very beginning it appeared to me intuitively clear that, judged from the standpoint of such an observer, everything would have to happen according to the same laws as for an observer who, relative to the earth, was at rest. For how should the first observer know or be able to determine, that he is in a state of fast uniform motion? One sees in this paradox the germ of the special relativity theory is already contained. Einstein's recollections of his youthful musings are widely cited because of the hints they provide of his later great discovery. However, Norton has noted that Einstein's reminiscences were probably colored by a half-century of hindsight. Norton lists several problems with Einstein's recounting, both historical and scientific: 1. At 16 years old and a student at the Gymnasium in Aarau, Einstein would have had the thought experiment in late 1895 to early 1896. But various sources note that Einstein did not learn Maxwell's theory until 1898, in university. 2. A 19th century aether theorist would have had no difficulties with the thought experiment. Einstein's statement, "...there seems to be no such thing...on the basis of experience," would not have counted as an objection, but would have represented a mere statement of fact, since no one had ever traveled at such speeds. 3. An aether theorist would have regarded "...nor according to Maxwell's equations" as simply representing a misunderstanding on Einstein's part. Unfettered by any notion that the speed of light represents a cosmic limit, the aether theorist would simply have set velocity equal to "c", noted that yes indeed, the light would appear to be frozen, and then thought no more of it. Rather than the thought experiment being at all incompatible with aether theories (which it is not), the youthful Einstein appears to have reacted to the scenario out of an intuitive sense of wrongness. He felt that the laws of optics should obey the principle of relativity. As he grew older, his early thought experiment acquired deeper levels of significance: Einstein felt that Maxwell's equations should be the same for all observers in inertial motion. From Maxwell's equations, one can deduce a single speed of light, and there is nothing in this computation that depends on an observer's speed. Einstein sensed a conflict between Newtonian mechanics and the constant speed of light determined by Maxwell's equations. Regardless of the historical and scientific issues described above, Einstein's early thought experiment was part of the repertoire of test cases that he used to check on the viability of physical theories. Norton suggests that the real importance of the thought experiment was that it provided a powerful objection to emission theories of light, which Einstein had worked on for several years prior to 1905. Magnet and conductor. 
In the very first paragraph of Einstein's seminal 1905 work introducing special relativity, he writes: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;It is well known that Maxwell's electrodynamics—as usually understood at present—when applied to moving bodies, leads to asymmetries that do not seem to attach to the phenomena. Let us recall, for example, the electrodynamic interaction between a magnet and a conductor. The observable phenomenon depends here only on the relative motion of conductor and magnet, while according to the customary conception the two cases, in which, respectively, either the one or the other of the two bodies is the one in motion, are to be strictly differentiated from each other. For if the magnet is in motion and the conductor is at rest, there arises in the surroundings of the magnet an electric field endowed with a certain energy value that produces a current in the places where parts of the conductor are located. But if the magnet is at rest and the conductor is in motion, no electric field arises in the surroundings of the magnet, while in the conductor an electromotive force will arise, to which in itself there does not correspond any energy, but which, provided that the relative motion in the two cases considered is the same, gives rise to electrical currents that have the same magnitude and the same course as those produced by the electric forces in the first-mentioned case. This opening paragraph recounts well-known experimental results obtained by Michael Faraday in 1831. The experiments describe what appeared to be two different phenomena: the "motional EMF" generated when a wire moves through a magnetic field (see Lorentz force), and the "transformer EMF" generated by a changing magnetic field (due to the Maxwell–Faraday equation). James Clerk Maxwell himself drew attention to this fact in his 1861 paper "On Physical Lines of Force". In the latter half of Part II of that paper, Maxwell gave a separate physical explanation for each of the two phenomena. Although Einstein calls the asymmetry "well-known", there is no evidence that any of Einstein's contemporaries considered the distinction between motional EMF and transformer EMF to be in any way odd or pointing to a lack of understanding of the underlying physics. Maxwell, for instance, had repeatedly discussed Faraday's laws of induction, stressing that the magnitude and direction of the induced current was a function only of the relative motion of the magnet and the conductor, without being bothered by the clear distinction between conductor-in-motion and magnet-in-motion in the underlying theoretical treatment. Yet Einstein's reflection on this experiment represented the decisive moment in his long and tortuous path to special relativity. Although the equations describing the two scenarios are entirely different, there is no measurement that can distinguish whether the magnet is moving, the conductor is moving, or both. In a 1920 review on the "Fundamental Ideas and Methods of the Theory of Relativity" (unpublished), Einstein related how disturbing he found this asymmetry: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Einstein needed to extend the relativity of motion that he perceived between magnet and conductor in the above thought experiment to a full theory. For years, however, he did not know how this might be done. The exact path that Einstein took to resolve this issue is unknown. 
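The symmetry that Einstein insisted upon can be made quantitative in the low-velocity limit. The short comparison below is an editorial illustration drawn from standard textbook electrodynamics, not a passage from Einstein's paper; the symbols are the usual ones (charge q in the conductor, relative velocity v, magnetic field B).
\text{magnet at rest, conductor moving:}\qquad \mathbf{F} = q\,\mathbf{v}\times\mathbf{B}
\text{conductor at rest, magnet moving:}\qquad \mathbf{E}' \approx \mathbf{v}\times\mathbf{B}, \qquad \mathbf{F}' = q\,\mathbf{E}' \approx q\,\mathbf{v}\times\mathbf{B}
To first order in v/c the force on the charge, and hence the induced current, is the same in both descriptions; only the bookkeeping into "electric" and "magnetic" contributions differs, which is exactly the asymmetry of interpretation that troubled Einstein.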
We do know, however, that Einstein spent several years pursuing an emission theory of light, encountering difficulties that eventually led him to give up the attempt. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; That decision ultimately led to his development of special relativity as a theory founded on two postulates of which he could be sure. Expressed in contemporary physics vocabulary, his postulates were as follows: 1. The laws of physics take the same form in all inertial frames. 2. In any given inertial frame, the velocity of light "c" is the same "whether the light be emitted by a body at rest or by a body in uniform motion." [Emphasis added by editor] Einstein's wording of the second postulate was one with which nearly all theorists of his day could agree. His wording is a far more intuitive form of the second postulate than the stronger version frequently encountered in popular writings and college textbooks. Trains, embankments, and lightning flashes. The topic of how Einstein arrived at special relativity has been a fascinating one to many scholars: A lowly, twenty-six-year-old patent officer (third class), largely self-taught in physics and completely divorced from mainstream research, nevertheless in the year 1905 produced four extraordinary works ("Annus Mirabilis" papers), only one of which (his paper on Brownian motion) appeared related to anything that he had ever published before. Einstein's paper, "On the Electrodynamics of Moving Bodies", is a polished work that bears few traces of its gestation. Documentary evidence concerning the development of the ideas that went into it consists of, quite literally, only two sentences in a handful of preserved early letters, and various later historical remarks by Einstein himself, some of them known only second-hand and at times contradictory. In regard to the relativity of simultaneity, Einstein's 1905 paper develops the concept vividly by carefully considering the basics of how time may be disseminated through the exchange of signals between clocks. In his popular work, "Relativity: The Special and General Theory," Einstein translates the formal presentation of his paper into a thought experiment using a train, a railway embankment, and lightning flashes. The essence of the thought experiment is as follows: two bolts of lightning strike the embankment at widely separated points "A" and "B". An observer standing on the embankment midway between "A" and "B" receives the two flashes at the same instant and therefore judges the strikes to be simultaneous. A second observer, riding at the midpoint of a long train moving along the embankment towards "B", is hastening towards the flash coming from "B" and away from the flash coming from "A"; he receives the light from "B" first and must therefore judge the strike at "B" to have occurred earlier. Events that are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa: each reference body carries its own particular time. A routine supposition among historians of science is that, in accordance with the analysis given in his 1905 special relativity paper and in his popular writings, Einstein discovered the relativity of simultaneity by thinking about how clocks could be synchronized by light signals. The Einstein synchronization convention was originally developed by telegraphers in the middle 19th century. The dissemination of precise time was an increasingly important topic during this period. Trains needed accurate time to schedule use of track, cartographers needed accurate time to determine longitude, while astronomers and surveyors dared to consider the worldwide dissemination of time to accuracies of thousandths of a second. Following this line of argument, Einstein's position in the patent office, where he specialized in evaluating electromagnetic and electromechanical patents, would have exposed him to the latest developments in time technology, which would have guided him in his thoughts towards understanding the relativity of simultaneity. However, all of the above is supposition.
In later recollections, when Einstein was asked about what inspired him to develop special relativity, he would mention his riding a light beam and his magnet and conductor thought experiments. He would also mention the importance of the Fizeau experiment and the observation of stellar aberration. "They were enough", he said. He never mentioned thought experiments about clocks and their synchronization. The routine analyses of the Fizeau experiment and of stellar aberration, that treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, which are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed, through Einstein's examination of Fizeau's experiment and stellar aberration. We therefore do not know just how important clock synchronization and the train and embankment thought experiment were to Einstein's development of the concept of the relativity of simultaneity. We do know, however, that the train and embankment thought experiment was the preferred means whereby he chose to teach this concept to the general public. Relativistic center-of-mass theorem. Einstein proposed the equivalence of mass and energy in his final Annus Mirabilis paper. Over the next several decades, the understanding of energy and its relationship with momentum were further developed by Einstein and other physicists including Max Planck, Gilbert N. Lewis, Richard C. Tolman, Max von Laue (who in 1911 gave a comprehensive proof of "M"0 = "E"0/"c"2 from the stress–energy tensor), and Paul Dirac (whose investigations of negative solutions in his 1928 formulation of the energy–momentum relation led to the 1930 prediction of the existence of antimatter). Einstein's relativistic center-of-mass theorem of 1906 is a case in point. In 1900, Henri Poincaré had noted a paradox in modern physics as it was then understood: When he applied well-known results of Maxwell's equations to the equality of action and reaction, he could describe a cyclic process which would result in creation of a reactionless drive, "i.e." a device which could displace its center of mass without the exhaust of a propellant, in violation of the conservation of momentum. Poincaré resolved this paradox by imagining electromagnetic energy to be a fluid having a given density, which is created and destroyed with a given momentum as energy is absorbed and emitted. The motions of this fluid would oppose displacement of the center of mass in such fashion as to preserve the conservation of momentum. Einstein demonstrated that Poincaré's artifice was superfluous. Rather, he argued that mass-energy equivalence was a necessary and sufficient condition to resolve the paradox. In his demonstration, Einstein provided a derivation of mass-energy equivalence that was distinct from his original derivation. Einstein began by recasting Poincaré's abstract mathematical argument into the form of a thought experiment: Einstein considered (a) an initially stationary, closed, hollow cylinder free-floating in space, of mass formula_0 and length formula_1, (b) with some sort of arrangement for sending a quantity of radiative energy (a burst of photons) formula_2 from the left to the right. 
The radiation has momentum formula_3 Since the total momentum of the system is zero, the cylinder recoils with a speed formula_4 (c) The radiation hits the other end of the cylinder in time formula_5 (assuming formula_6), bringing the cylinder to a stop after it has moved through a distance formula_7 (d) The energy deposited on the right wall of the cylinder is transferred to a massless shuttle mechanism formula_8 (e) which transports the energy to the left wall (f) and then returns to re-create the starting configuration of the system, except with the cylinder displaced to the left. The cycle may then be repeated. The reactionless drive described here violates the laws of mechanics, according to which the center of mass of a body at rest cannot be displaced in the absence of external forces. Einstein argued that the shuttle formula_9 cannot be massless while transferring energy from the right to the left. If energy formula_2 possesses the inertia formula_10 the contradiction disappears. Modern analysis suggests that neither Einstein's original 1905 derivation of mass-energy equivalence nor the alternate derivation implied by his 1906 center-of-mass theorem are definitively correct. For instance, the center-of-mass thought experiment regards the cylinder as a completely rigid body. In reality, the impulse provided to the cylinder by the burst of light in step (b) cannot travel faster than light, so that when the burst of photons reaches the right wall in step (c), the wall has not yet begun to move. Ohanian has credited von Laue (1911) as having provided the first truly definitive derivation of "M"0 = "E"0/"c"2. Impossibility of faster-than-light signaling. In 1907, Einstein noted that from the composition law for velocities, one could deduce that there cannot exist an effect that allows faster-than-light signaling. Einstein imagined a strip of material that allows propagation of signals at the faster-than-light speed of formula_11 (as viewed from the material strip). Imagine two observers, A and B, standing on the "x"-axis and separated by the distance formula_1. They stand next to the material strip, which is not at rest, but rather is moving in the "negative" "x"-direction with speed formula_12. A uses the strip to send a signal to B. From the velocity composition formula, the signal propagates from A to B with speed formula_13. The time formula_14 required for the signal to propagate from A to B is given by formula_15 The strip can move at any speed formula_16. Given the starting assumption formula_17, one can always set the strip moving at a speed formula_12 such that formula_18. In other words, given the existence of a means of transmitting signals faster-than-light, scenarios can be envisioned whereby the recipient of a signal will receive the signal before the transmitter has transmitted it. About this thought experiment, Einstein wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; General relativity. Falling painters and accelerating elevators. In his unpublished 1920 review, Einstein related the genesis of his thoughts on the equivalence principle: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;When I was busy (in 1907) writing a summary of my work on the theory of special relativity for the "Jahrbuch der Radioaktivität und Elektronik" [Yearbook for Radioactivity and Electronics], I also had to try to modify the Newtonian theory of gravitation such as to fit its laws into the theory. 
While attempts in this direction showed the practicability of this enterprise, they did not satisfy me because they would have had to be based upon unfounded physical hypotheses. At that moment I got the happiest thought of my life in the following form: In an example worth considering, the gravitational field has a relative existence only in a manner similar to the electric field generated by magneto-electric induction. "Because for an observer in free-fall from the roof of a house there is during the fall"—at least in his immediate vicinity—"no gravitational field." Namely, if the observer lets go of any bodies, they remain relative to him, in a state of rest or uniform motion, independent of their special chemical or physical nature. The observer, therefore, is justified in interpreting his state as being "at rest." The realization "startled" Einstein, and inspired him to begin an eight-year quest that led to what is considered to be his greatest work, the theory of general relativity. Over the years, the story of the falling man has become an iconic one, much embellished by other writers. In most retellings of Einstein's story, the falling man is identified as a painter. In some accounts, Einstein was inspired after he witnessed a painter falling from the roof of a building adjacent to the patent office where he worked. This version of the story leaves unanswered the question of why Einstein might consider his observation of such an unfortunate accident to represent the happiest thought in his life. Einstein later refined his thought experiment to consider a man inside a large enclosed chest or elevator falling freely in space. While in free fall, the man would consider himself weightless, and any loose objects that he emptied from his pockets would float alongside him. Then Einstein imagined a rope attached to the roof of the chamber. A powerful "being" of some sort begins pulling on the rope with constant force. The chamber begins to move "upwards" with a uniformly accelerated motion. Within the chamber, all of the man's perceptions are consistent with his being in a uniform gravitational field. Einstein asked, "Ought we to smile at the man and say that he errs in his conclusion?" Einstein answered no. Rather, the thought experiment provided "good grounds for extending the principle of relativity to include bodies of reference which are accelerated with respect to each other, and as a result we have gained a powerful argument for a generalised postulate of relativity." Through this thought experiment, Einstein addressed an issue that was so well known, scientists rarely worried about it or considered it puzzling: Objects have "gravitational mass," which determines the force with which they are attracted to other objects. Objects also have "inertial mass," which determines the relationship between the force applied to an object and how much it accelerates. Newton had pointed out that, even though they are defined differently, gravitational mass and inertial mass always seem to be equal. But until Einstein, no one had conceived a good explanation as to why this should be so. From the correspondence revealed by his thought experiment, Einstein concluded that "it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether...the observed effects are due to a gravitational field." This correspondence between gravitational mass and inertial mass is the "equivalence principle". 
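A rough editorial estimate, with a chamber width and acceleration assumed purely for illustration rather than taken from Einstein's text, shows why the occupant of the chest cannot casually distinguish the two situations by watching light itself: a ray crossing a chamber of width L while the chamber accelerates at g appears to the occupant to fall through
\delta = \tfrac{1}{2}\,g\left(\frac{L}{c}\right)^{2} = \frac{g L^{2}}{2 c^{2}} \approx \frac{(9.8\ \mathrm{m\,s^{-2}})\,(3\ \mathrm{m})^{2}}{2\,(3\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}} \approx 4.9\times 10^{-16}\ \mathrm{m},
an utterly negligible sag. This is why the curvilinear propagation of light noted in the next paragraph only becomes observable over astronomical path lengths, as in the deflection of starlight passing near the Sun.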
An extension to his accelerating observer thought experiment allowed Einstein to deduce that "rays of light are propagated curvilinearly in gravitational fields." Early applications of the equivalence principle. Einstein's formulation of special relativity was in terms of kinematics (the study of moving bodies without reference to forces). Late in 1907, his former mathematics professor, Hermann Minkowski, presented an alternative, geometric interpretation of special relativity in a lecture to the Göttingen Mathematical society, introducing the concept of spacetime. Einstein was initially dismissive of Minkowski's geometric interpretation, regarding it as "überflüssige Gelehrsamkeit" (superfluous learnedness). As with special relativity, Einstein's early results in developing what was ultimately to become general relativity were accomplished using kinematic analysis rather than geometric techniques of analysis. In his 1907 "Jahrbuch" paper, Einstein first addressed the question of whether the propagation of light is influenced by gravitation, and whether there is any effect of a gravitational field on clocks. In 1911, Einstein returned to this subject, in part because he had realized that certain predictions of his nascent theory were amenable to experimental test. By the time of his 1911 paper, Einstein and other scientists had offered several alternative demonstrations that the inertial mass of a body increases with its energy content: If the energy increase of the body is formula_2, then the increase in its inertial mass is formula_19 Einstein asked whether there is an increase of gravitational mass corresponding to the increase in inertial mass, and if there is such an increase, is the increase in gravitational mass "precisely" the same as its increase in inertial mass? Using the equivalence principle, Einstein concluded that this must be so. To show that the equivalence principle necessarily implies the gravitation of energy, Einstein considered a light source formula_20 separated along the "z"-axis by a distance formula_21 above a receiver formula_22 in a homogeneous gravitational field having a force per unit mass of 1 formula_23 A certain amount of electromagnetic energy formula_2 is emitted by formula_20 towards formula_24 According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration formula_25 in the direction of the positive "z"-axis, with formula_20 separated by a constant distance formula_21 from formula_24 In the accelerated system, light emitted from formula_20 takes (to a first approximation) formula_26 to arrive at formula_24 But in this time, the velocity of formula_22 will have increased by formula_27 from its velocity when the light was emitted. The energy arriving at formula_22 will therefore not be the energy formula_28 but the greater energy formula_29 given by formula_30 According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where we replace formula_31 by the gravitational potential difference formula_32 between formula_20 and formula_22 so that formula_33 The energy formula_29 arriving at formula_22 is greater than the energy formula_34 emitted by formula_20 by the potential energy of the mass formula_35 in the gravitational field. Hence formula_36 corresponds to the gravitational mass as well as the inertial mass of a quantity of energy. 
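To give a sense of scale, the argument above yields, to first order, the standard result E_received ≈ E_emitted(1 + gh/c^2); the numbers below are an editorial illustration and are not part of Einstein's paper. For a drop of h = 22.5 m, the height later used in the Pound–Rebka experiment mentioned below, the fractional gain is
\frac{g h}{c^{2}} \approx \frac{(9.8\ \mathrm{m\,s^{-2}})\,(22.5\ \mathrm{m})}{(3\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}} \approx 2.5\times 10^{-15},
a few parts in 10^15, which indicates why such delicate techniques were needed before the effect could be confirmed experimentally.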
To further clarify that the energy of gravitational mass must equal the energy of inertial mass, Einstein proposed the following cyclic process: (a) A light source formula_20 is situated a distance formula_21 above a receiver formula_22 in a uniform gravitational field. A movable mass formula_0 can shuttle between formula_20 and formula_24 (b) A pulse of electromagnetic energy formula_2 is sent from formula_20 to formula_24 The energy formula_37 is absorbed by formula_24 (c) Mass formula_0 is lowered from formula_20 to formula_38 releasing an amount of work equal to formula_39 (d) The energy absorbed by formula_22 is transferred to formula_40 This increases the gravitational mass of formula_0 to a new value formula_41 (e) The mass is lifted back to formula_20, requiring the input of work formula_42 (e) The energy carried by the mass is then transferred to formula_43 completing the cycle. Conservation of energy demands that the difference in work between raising the mass and lowering the mass, formula_44, must equal formula_45 or one could potentially define a perpetual motion machine. Therefore, formula_46 In other words, the increase in gravitational mass predicted by the above arguments is precisely equal to the increase in inertial mass predicted by special relativity. Einstein then considered sending a continuous electromagnetic beam of frequency formula_47 (as measured at formula_20) from formula_20 to formula_22 in a homogeneous gravitational field. The frequency of the light as measured at formula_22 will be a larger value formula_48 given by formula_49 Einstein noted that the above equation seemed to imply something absurd: Given that the transmission of light from formula_20 to formula_22 is continuous, how could the number of periods emitted per second from formula_20 be different from that received at formula_50 It is impossible for wave crests to appear on the way down from formula_20 to formula_22. The simple answer is that this question presupposes an absolute nature of time, when in fact there is nothing that compels us to assume that clocks situated at different gravitational potentials must be conceived of as going at the same rate. The principle of equivalence implies gravitational time dilation. It is important to realize that Einstein's arguments predicting gravitational time dilation are valid for "any" theory of gravity that respects the principle of equivalence. This includes Newtonian gravitation. Experiments such as the Pound–Rebka experiment, which have firmly established gravitational time dilation, therefore do not serve to distinguish general relativity from Newtonian gravitation. In the remainder of Einstein's 1911 paper, he discussed the bending of light rays in a gravitational field, but given the incomplete nature of Einstein's theory as it existed at the time, the value that he predicted was half the value that would later be predicted by the full theory of general relativity. Non-Euclidean geometry and the rotating disk. By 1912, Einstein had reached an impasse in his kinematic development of general relativity, realizing that he needed to go beyond the mathematics that he knew and was familiar with. Stachel has identified Einstein's analysis of the rigid relativistic rotating disk as being key to this realization. The rigid rotating disk had been a topic of lively discussion since Max Born and Paul Ehrenfest, in 1909, both presented analyses of rigid bodies in special relativity. 
An observer on the edge of a rotating disk experiences an apparent ("fictitious" or "pseudo") force called "centrifugal force". By 1912, Einstein had become convinced of a close relationship between gravitation and pseudo-forces such as centrifugal force:&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Such a system "K", according to the equivalence principle, is strictly equivalent to a system at rest in which a matter-free static gravitational field of a certain kind exists. In the accompanying illustration, A represents a circular disk of 10 units diameter at rest in an inertial reference frame. The circumference of the disk is formula_51 times the diameter, and the illustration shows 31.4 rulers laid out along the circumference. B represents a circular disk of 10 units diameter that is spinning rapidly. According to a non-rotating observer, each of the rulers along the circumference is length-contracted along its line of motion. More rulers are required to cover the circumference, while the number of rulers required to span the diameter is unchanged. Note that we have not stated that we set A spinning to get B. In special relativity, it is not possible to set spinning a disk that is "rigid" in Born's sense of the term. Since spinning up disk A would cause the material to contract in the circumferential direction but not in the radial direction, a rigid disk would become fragmented from the induced stresses. In later years, Einstein repeatedly stated that consideration of the rapidly rotating disk was of "decisive importance" to him because it showed that a gravitational field causes non-Euclidean arrangements of measuring rods. Einstein realized that he did not have the mathematical skills to describe the non-Euclidean view of space and time that he envisioned, so he turned to his mathematician friend, Marcel Grossmann, for help. After researching in the library, Grossman found a review article by Ricci and Levi-Civita on absolute differential calculus (tensor calculus). Grossman tutored Einstein on the subject, and in 1913 and 1914, they published two joint papers describing an initial version of a generalized theory of gravitation. Over the next several years, Einstein used these mathematical tools to generalize Minkowski's geometric approach to relativity so as to encompass curved spacetime. Quantum mechanics. Background: Einstein and the quantum. Many myths have grown up about Einstein's relationship with quantum mechanics. Freshman physics students are aware that Einstein explained the photoelectric effect and introduced the concept of the photon. But students who have grown up with the photon may not be aware of how revolutionary the concept was for his time. The best-known factoids about Einstein's relationship with quantum mechanics are his statement, "God does not play dice with the universe" and the indisputable fact that he just did not like the theory in its final form. This has led to the general impression that, despite his initial contributions, Einstein was out of touch with quantum research and played at best a secondary role in its development. Concerning Einstein's estrangement from the general direction of physics research after 1925, his well-known scientific biographer, Abraham Pais, wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; In hindsight, we know that Pais was incorrect in his assessment. Einstein was arguably the greatest single contributor to the "old" quantum theory. 
Therefore, Einstein before 1925 originated most of the key concepts of quantum theory: light quanta, wave–particle duality, the fundamental randomness of physical processes, the concept of indistinguishability, and the probability density interpretation of the wave equation. In addition, Einstein can arguably be considered the father of solid state physics and condensed matter physics. He provided a correct derivation of the blackbody radiation law and sparked the notion of the laser. What of "after" 1925? In 1935, working with two younger colleagues, Einstein issued a final challenge to quantum mechanics, attempting to show that it could not represent a final solution. Despite the questions raised by this paper, it made little or no difference to how physicists employed quantum mechanics in their work. Of this paper, Pais was to write: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The only part of this article that will ultimately survive, I believe, is this last phrase [i.e. ""No reasonable definition of reality could be expect to permit this" where "this"" refers to the instantaneous transmission of information over a distance], which so poignantly summarizes Einstein's views on quantum mechanics in his later years...This conclusion has not affected subsequent developments in physics, and it is doubtful that it ever will. In contrast to Pais' negative assessment, this paper, outlining the EPR paradox, has become one of the most widely cited articles in the entire physics literature. It is considered the centerpiece of the development of quantum information theory, which has been termed the "third quantum revolution." Wave–particle duality. All of Einstein's major contributions to the old quantum theory were arrived at via statistical argument. This includes his 1905 paper arguing that light has particle properties, his 1906 work on specific heats, his 1909 introduction of the concept of wave–particle duality, his 1916 work presenting an improved derivation of the blackbody radiation formula, and his 1924 work that introduced the concept of indistinguishability. Einstein's 1909 arguments for the wave–particle duality of light were based on a thought experiment. Einstein imagined a mirror in a cavity containing particles of an ideal gas and filled with black-body radiation, with the entire system in thermal equilibrium. The mirror is constrained in its motions to a direction perpendicular to its surface. The mirror jiggles from Brownian motion due to collisions with the gas molecules. Since the mirror is in a radiation field, the moving mirror transfers some of its kinetic energy to the radiation field as a result of the difference in the radiation pressure between its forwards and reverse surfaces. This implies that there must be fluctuations in the black-body radiation field, and hence fluctuations in the black-body radiation pressure. Reversing the argument shows that there must be a route for the return of energy from the fluctuating black-body radiation field back to the gas molecules. Given the known shape of the radiation field given by Planck's law, Einstein could calculate the mean square energy fluctuation of the black-body radiation. 
He found the root mean square energy fluctuation formula_53 in a small volume formula_12 of a cavity filled with thermal radiation in the frequency interval between formula_54 and formula_55 to be a function of frequency and temperature: formula_56 where formula_57 would be the average energy of the volume in contact with the thermal bath. The above expression has two terms, the second corresponding to the classical Rayleigh-Jeans law ("i.e." a wavelike term), and the first corresponding to the Wien distribution law (which from Einstein's 1905 analysis, would result from point-like quanta with energy formula_52). From this, Einstein concluded that radiation had simultaneous wave and particle aspects. Bubble paradox. From 1905 to 1923, Einstein was virtually the only physicist who took light-quanta seriously. Throughout most of this period, the physics community treated the light-quanta hypothesis with "skepticism bordering on derision" and maintained this attitude even after Einstein's photoelectric law was validated. The citation for Einstein's 1922 Nobel Prize very deliberately avoided all mention of light-quanta, instead stating that it was being awarded for "his services to theoretical physics and especially for his discovery of the law of the photoelectric effect". This dismissive stance contrasts sharply with the enthusiastic manner in which Einstein's other major contributions were accepted, including his work on Brownian motion, special relativity, general relativity, and his numerous other contributions to the "old" quantum theory. Various explanations have been given for this neglect on the part of the physics community. First and foremost was wave theory's long and indisputable success in explaining purely optical phenomena. Second was the fact that his 1905 paper, which pointed out that certain phenomena would be more readily explained under the assumption that light is particulate, presented the hypothesis only as a "heuristic viewpoint". The paper offered no compelling, comprehensive alternative to existing electromagnetic theory. Third was the fact that his 1905 paper introducing light quanta and his two 1909 papers that argued for a wave–particle fusion theory approached their subjects via statistical arguments that his contemporaries "might accept as theoretical exercise—crazy, perhaps, but harmless". Most of Einstein's contemporaries adopted the position that light is ultimately a wave, but appears particulate in certain circumstances only because atoms absorb wave energy in discrete units. Among the thought experiments that Einstein presented in his 1909 lecture on the nature and constitution of radiation was one that he used to point out the implausibility of the above argument. He used this thought experiment to argue that atoms emit light as discrete particles rather than as continuous waves: (a) An electron in a cathode ray beam strikes an atom in a target. The intensity of the beam is set so low that we can consider one electron at a time as impinging on the target. (b) The atom emits a spherically radiating electromagnetic wave. (c) This wave excites an atom in a secondary target, causing it to release an electron of energy comparable to that of the original electron. The energy of the secondary electron depends only on the energy of the original electron and not at all on the distance between the primary and secondary targets. 
All the energy spread around the circumference of the radiating electromagnetic wave would appear to be instantaneously focused on the target atom, an action that Einstein considered implausible. Far more plausible would be to say that the first atom emitted a particle in the direction of the second atom. Although Einstein originally presented this thought experiment as an argument for light having a particulate nature, it has been noted that this thought experiment, which has been termed the "bubble paradox", foreshadows the famous 1935 EPR paper. In his 1927 Solvay debate with Bohr, Einstein employed this thought experiment to illustrate that according to the Copenhagen interpretation of quantum mechanics that Bohr championed, the quantum wavefunction of a particle would abruptly collapse like a "popped bubble" no matter how widely dispersed the wavefunction. The transmission of energy from opposite sides of the bubble to a single point would occur faster than light, violating the principle of locality. In the end, it was experiment, not any theoretical argument, that finally enabled the concept of the light quantum to prevail. In 1923, Arthur Compton was studying the scattering of high energy X-rays from a graphite target. Unexpectedly, he found that the scattered X-rays were shifted in wavelength, corresponding to inelastic scattering of the X-rays by the electrons in the target. His observations were totally inconsistent with wave behavior, but instead could only be explained if the X-rays acted as particles. This observation of the Compton effect rapidly brought about a change in attitude, and by 1926, the concept of the "photon" was generally accepted by the physics community. Einstein's light box. Einstein did not like the direction in which quantum mechanics had turned after 1925. Although excited by Heisenberg's matrix mechanics, Schroedinger's wave mechanics, and Born's clarification of the meaning of the Schroedinger wave equation ("i.e." that the absolute square of the wave function is to be interpreted as a probability density), his instincts told him that something was missing. In a letter to Born, he wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; The Solvay Debates between Bohr and Einstein began in dining-room discussions at the "Fifth Solvay International Conference on Electrons and Photons" in 1927. Einstein's issue with the new quantum mechanics was not just that, with the probability interpretation, it rendered invalid the notion of rigorous causality. After all, as noted above, Einstein himself had introduced random processes in his 1916 theory of radiation. Rather, by defining and delimiting the maximum amount of information obtainable in a given experimental arrangement, the Heisenberg uncertainty principle denied the existence of any knowable reality in terms of a complete specification of the momenta and description of individual particles, an objective reality that would exist whether or not we could ever observe it. Over dinner, during after-dinner discussions, and at breakfast, Einstein debated with Bohr and his followers on the question whether quantum mechanics in its present form could be called complete. Einstein illustrated his points with increasingly clever thought experiments intended to prove that position and momentum could in principle be simultaneously known to arbitrary precision. 
For example, one of his thought experiments involved sending a beam of electrons through a shuttered screen, recording the positions of the electrons as they struck a photographic screen. Bohr and his allies would always be able to counter Einstein's proposal, usually by the end of the same day. On the final day of the conference, Einstein revealed that the uncertainty principle was not the only aspect of the new quantum mechanics that bothered him. Quantum mechanics, at least in the Copenhagen interpretation, appeared to allow action at a distance, the ability for two separated objects to communicate at speeds greater than light. By 1928, the consensus was that Einstein had lost the debate, and even his closest allies during the Fifth Solvay Conference, for example Louis de Broglie, conceded that quantum mechanics appeared to be complete. At the Sixth Solvay International Conference on Magnetism (1930), Einstein came armed with a new thought experiment. This involved a box with a shutter that operated so quickly, it would allow only one photon to escape at a time. The box would first be weighed exactly. Then, at a precise moment, the shutter would open, allowing a photon to escape. The box would then be re-weighed. The well-known relationship between mass and energy formula_58 would allow the energy of the particle to be precisely determined. With this gadget, Einstein believed that he had demonstrated a means to obtain, simultaneously, a precise determination of the energy of the photon as well as its exact time of departure from the system. Bohr was shaken by this thought experiment. Unable to think of a refutation, he went from one conference participant to another, trying to convince them that Einstein's thought experiment could not be true, that if it were true, it would literally mean the end of physics. After a sleepless night, he finally worked out a response which, ironically, depended on Einstein's general relativity. Consider the illustration of Einstein's light box: 1. After emitting a photon, the loss of weight causes the box to rise in the gravitational field. 2. The observer returns the box to its original height by adding weights until the pointer points to its initial position. It takes a certain amount of time formula_59 for the observer to perform this procedure. How long it takes depends on the strength of the spring and on how well-damped the system is. If undamped, the box will bounce up and down forever. If over-damped, the box will return to its original position sluggishly (See Damped spring-mass system). 3. The longer that the observer allows the damped spring-mass system to settle, the closer the pointer will reach its equilibrium position. At some point, the observer will conclude that his setting of the pointer to its initial position is within an allowable tolerance. There will be some residual error formula_60 in returning the pointer to its initial position. Correspondingly, there will be some residual error formula_61 in the weight measurement. 4. Adding the weights imparts a momentum formula_62 to the box which can be measured with an accuracy formula_63 delimited by formula_64 It is clear that formula_65 where formula_25 is the gravitational constant. Plugging in yields formula_66 5. General relativity informs us that while the box has been at a height different than its original height, it has been ticking at a rate different than its original rate. 
The red shift formula informs us that there will be an uncertainty formula_67 in the determination of formula_68 the emission time of the photon. 6. Hence, formula_69 The accuracy with which the energy of the photon is measured restricts the precision with which its moment of emission can be measured, following the Heisenberg uncertainty principle. After finding his last attempt at finding a loophole around the uncertainty principle refuted, Einstein quit trying to search for inconsistencies in quantum mechanics. Instead, he shifted his focus to the other aspects of quantum mechanics with which he was uncomfortable, focusing on his critique of action at a distance. His next paper on quantum mechanics foreshadowed his later paper on the EPR paradox. Einstein was gracious in his defeat. The following September, Einstein nominated Heisenberg and Schroedinger for the Nobel Prize, stating, "I am convinced that this theory undoubtedly contains a part of the ultimate truth." EPR paradox. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Both Bohr and Einstein were subtle men. Einstein tried very hard to show that quantum mechanics was inconsistent; Bohr, however, was always able to counter his arguments. But in his final attack Einstein pointed to something so deep, so counterintuitive, so troubling, and yet so exciting, that at the beginning of the twenty-first century it has returned to fascinate theoretical physicists. Bohr's only answer to Einstein's last great discovery—the discovery of entanglement—was to ignore it. Einstein's fundamental dispute with quantum mechanics was not about whether God rolled dice, whether the uncertainty principle allowed simultaneous measurement of position and momentum, or even whether quantum mechanics was complete. It was about reality. Does a physical reality exist independent of our ability to observe it? To Bohr and his followers, such questions were meaningless. All that we can know are the results of measurements and observations. It makes no sense to speculate about an ultimate reality that exists beyond our perceptions. Einstein's beliefs had evolved over the years from those that he had held when he was young, when, as a logical positivist heavily influenced by his reading of David Hume and Ernst Mach, he had rejected such unobservable concepts as absolute time and space. Einstein believed: 1. A reality exists independent of our ability to observe it. 2. Objects are located at distinct points in spacetime and have their own independent, real existence. In other words, he believed in "separability and locality." 3. Although at a superficial level, quantum events may appear random, at some ultimate level, strict causality underlies all processes in nature. Einstein considered that realism and localism were fundamental underpinnings of physics. After leaving Nazi Germany and settling in Princeton at the Institute for Advanced Study, Einstein began writing up a thought experiment that he had been mulling over since attending a lecture by Léon Rosenfeld in 1933. Since the paper was to be in English, Einstein enlisted the help of the 46-year-old Boris Podolsky, a fellow who had moved to the institute from Caltech; he also enlisted the help of the 26-year-old Nathan Rosen, also at the institute, who did much of the math. The result of their collaboration was the four page EPR paper, which in its title asked the question "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?" 
After seeing the paper in print, Einstein found himself unhappy with the result. His clear conceptual visualization had been buried under layers of mathematical formalism. Einstein's thought experiment involved two particles that have collided or which have been created in such a way that they have properties which are correlated. The total wave function for the pair links the positions of the particles as well as their linear momenta. The figure depicts the spreading of the wave function from the collision point. However, observation of the position of the first particle allows us to determine precisely the position of the second particle no matter how far the pair have separated. Likewise, measuring the momentum of the first particle allows us to determine precisely the momentum of the second particle. "In accordance with our criterion for reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality." Einstein concluded that the second particle, which we have never directly observed, must have at any moment a position that is real and a momentum that is real. Quantum mechanics does not account for these features of reality. Therefore, quantum mechanics is not complete. It is known, from the uncertainty principle, that position and momentum cannot be measured at the same time. But even though their values can only be determined in distinct contexts of measurement, can they both be definite at the same time? Einstein concluded that the answer must be yes. The only alternative, claimed Einstein, would be to assert that measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. "No reasonable definition of reality could be expected to permit this." Bohr was stunned when he read Einstein's paper and spent more than six weeks framing his response, which he gave exactly the same title as the EPR paper. The EPR paper forced Bohr to make a major revision in his understanding of complementarity in the Copenhagen interpretation of quantum mechanics. Prior to EPR, Bohr had maintained that disturbance caused by the act of observation was the physical explanation for quantum uncertainty. In the EPR thought experiment, however, Bohr had to admit that "there is no question of a mechanical disturbance of the system under investigation." On the other hand, he noted that the two particles were one system described by one quantum function. Furthermore, the EPR paper did nothing to dispel the uncertainty principle. Later commentators have questioned the strength and coherence of Bohr's response. As a practical matter, however, physicists for the most part did not pay much attention to the debate between Bohr and Einstein, since the opposing views did not affect one's ability to apply quantum mechanics to practical problems, but only affected one's interpretation of the quantum formalism. If they thought about the problem at all, most working physicists tended to follow Bohr's leadership. In 1964, John Stewart Bell made the groundbreaking discovery that Einstein's local realist world view made experimentally verifiable predictions that would be in conflict with those of quantum mechanics. Bell's discovery shifted the Einstein–Bohr debate from philosophy to the realm of experimental physics. 
Bell's theorem showed that, for any local realist formalism, there exist limits on the predicted correlations between pairs of particles in an experimental realization of the EPR thought experiment. In 1972, the first experimental tests were carried out that demonstrated violation of these limits. Successive experiments improved the accuracy of observation and closed loopholes. To date, it is virtually certain that local realist theories have been falsified. The EPR paper has recently been recognized as prescient, since it identified the phenomenon of quantum entanglement, which has inspired approaches to quantum mechanics different from the Copenhagen interpretation, and has been at the forefront of major technological advances in quantum computing, quantum encryption, and quantum information theory. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Primary sources. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "E/c." }, { "math_id": 4, "text": "v = -E/(Mc)." }, { "math_id": 5, "text": "\\Delta t = L/c," }, { "math_id": 6, "text": "v << c" }, { "math_id": 7, "text": "\\Delta x = - \\frac{{EL}}{{M c^2}} ." }, { "math_id": 8, "text": "k," }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "m = E/c^2," }, { "math_id": 11, "text": "W" }, { "math_id": 12, "text": "v" }, { "math_id": 13, "text": " {(W-v) /(1 -(Wv/c^2))} " }, { "math_id": 14, "text": "T" }, { "math_id": 15, "text": " T = L { 1 - (Wv/c^2) \\over W-v } . " }, { "math_id": 16, "text": "v < c" }, { "math_id": 17, "text": " W > c" }, { "math_id": 18, "text": "T < 0" }, { "math_id": 19, "text": "E / c^2 ." }, { "math_id": 20, "text": "S_2" }, { "math_id": 21, "text": "h" }, { "math_id": 22, "text": "S_1" }, { "math_id": 23, "text": "g ." }, { "math_id": 24, "text": "S_1." }, { "math_id": 25, "text": "g" }, { "math_id": 26, "text": "h/c" }, { "math_id": 27, "text": " v = gh/c " }, { "math_id": 28, "text": "E_2," }, { "math_id": 29, "text": "E_1" }, { "math_id": 30, "text": "E_1 \\approx E_2 \\left( 1 + \\frac{v}{c}\\right) = E_2 \\left( 1 + \\frac{gh}{c^2}\\right) ." }, { "math_id": 31, "text": "gh" }, { "math_id": 32, "text": " \\Phi " }, { "math_id": 33, "text": "E_1 = E_2 + \\frac{E_2}{c^2}\\Phi." }, { "math_id": 34, "text": "E_2" }, { "math_id": 35, "text": "E_2/c^2" }, { "math_id": 36, "text": "E/c^2" }, { "math_id": 37, "text": "E ( 1 + gh/c^2)" }, { "math_id": 38, "text": "S_1," }, { "math_id": 39, "text": "Mgh." }, { "math_id": 40, "text": "M." }, { "math_id": 41, "text": "M'." }, { "math_id": 42, "text": "M'gh." }, { "math_id": 43, "text": "S_2," }, { "math_id": 44, "text": " M'gh - Mgh," }, { "math_id": 45, "text": "Egh/c^2, " }, { "math_id": 46, "text": "M' - M = \\frac{E}{c^2} . " }, { "math_id": 47, "text": "v_2" }, { "math_id": 48, "text": "v_1" }, { "math_id": 49, "text": "v_1 = v_2 \\left(1 + \\frac{\\Phi}{c^2}\\right) ." }, { "math_id": 50, "text": "S_1 ?" }, { "math_id": 51, "text": "\\pi" }, { "math_id": 52, "text": " h \\nu " }, { "math_id": 53, "text": " \\left\\langle \\epsilon ^2 \\right\\rangle " }, { "math_id": 54, "text": "\\nu" }, { "math_id": 55, "text": "\\nu + d\\nu" }, { "math_id": 56, "text": " \\left\\langle \\epsilon ^2 (\\nu, T) \\right\\rangle = \\left( h \\nu \\rho + \\frac{c^3}{8 \\pi \\nu ^2} \\rho^2 \\right) v d\\nu ," }, { "math_id": 57, "text": "\\rho v d\\nu" }, { "math_id": 58, "text": "E = m c^2" }, { "math_id": 59, "text": "t" }, { "math_id": 60, "text": "\\Delta q" }, { "math_id": 61, "text": "\\Delta m" }, { "math_id": 62, "text": "p" }, { "math_id": 63, "text": "\\Delta p" }, { "math_id": 64, "text": "\\Delta p \\Delta q \\approx h ." }, { "math_id": 65, "text": "\\Delta p < gt \\Delta m ," }, { "math_id": 66, "text": "gt \\Delta m \\Delta q > h ." }, { "math_id": 67, "text": "\\Delta t = c^{-2} g t \\Delta q" }, { "math_id": 68, "text": "t_0 ," }, { "math_id": 69, "text": "c^2 \\Delta m \\Delta t = \\Delta E \\Delta t > h ." } ]
https://en.wikipedia.org/wiki?curid=57264039
57265177
Lifelong Planning A*
Algorithm LPA* or Lifelong Planning A* is an incremental heuristic search algorithm based on A*. It was first described by Sven Koenig and Maxim Likhachev in 2001. Description. LPA* is an incremental version of A*, which can adapt to changes in the graph without recalculating the entire graph, by updating the g-values (distance from start) from the previous search during the current search to correct them when necessary. Like A*, LPA* uses a heuristic, which is a lower boundary for the cost of the path from a given node to the goal. A heuristic is admissible if it is guaranteed to be non-negative (zero being admissible) and never greater than the cost of the cheapest path to the goal. Predecessors and successors. With the exception of the start and goal node, each node "n" has "predecessors" and "successors": In the following description, these two terms refer only to the "immediate" predecessors and successors, not to predecessors of predecessors or successors of successors. Start distance estimates. LPA* maintains two estimates of the start distance "g"*("n") for each node: For the start node, the following always holds true: formula_0 If "rhs"("n") equals "g"("n"), then "n" is called "locally consistent". If all nodes are locally consistent, then a shortest path can be determined as with A*. However, when edge costs change, local consistency needs to be re-established only for those nodes which are relevant for the route. Priority queue. When a node becomes locally inconsistent (because the cost of its predecessor or the edge linking it to a predecessor has changed), it is placed in a priority queue for re-evaluation. LPA* uses a two-dimensional key: formula_1 Entries are ordered by "k"1 (which corresponds directly to the f-values used in A*), then by "k"2. Node expansion. The top node in the queue is expanded as follows: Since changing the g-value of a node may also change the rhs-values of its successors (and thus their local consistence), they are evaluated and their queue membership and key is updated if necessary. Expansion of nodes continues with the next node at the top of the queue until two conditions are met: Initial run. The graph is initialized by setting the rhs-value of the start node to 0 and its g-value to infinity. For all other nodes, both the g-value and the rhs-value are assumed to be infinity until assigned otherwise. This initially makes the start node the only locally inconsistent node, and thus the only node in the queue. After that, node expansion begins. The first run of LPA* thus behaves in the same manner as A*, expanding the same nodes in the same order. Cost changes. When the cost of an edge changes, LPA* examines all nodes affected by the change, i.e. all nodes at which one of the changed edges terminates (if an edge can be traversed in both directions and the change affects both directions, both nodes connected by the edge are examined): After that, node expansion resumes until the end condition has been reached. Finding the shortest path. Once node expansion has finished (i.e. the exit conditions are met), the shortest path is evaluated. If the cost for the goal equals infinity, there is no finite-cost path from start to goal. Otherwise, the shortest path can be determined by moving backwards: Pseudocode. 
This code assumes a priority queue codice_0, which supports the following operations: getTopKey(), which returns the smallest key of any node in the queue; pop(), which removes and returns the node with the smallest key; insert(node, key); remove(node); and contains(node).

void main() {
  initialize();
  while (true) {
    computeShortestPath();
    while (!hasCostChanges())
      sleep;
    for (edge : getChangedEdges()) {
      edge.setCost(getNewCost(edge));
      updateNode(edge.endNode);
    }
  }
}

void initialize() {
  queue = new PriorityQueue();
  for (node : getAllNodes()) {
    node.g = INFINITY;
    node.rhs = INFINITY;
  }
  start.rhs = 0;
  queue.insert(start, calculateKey(start));
}

/** Expands the nodes in the priority queue. */
void computeShortestPath() {
  while ((queue.getTopKey() < calculateKey(goal)) || (goal.rhs != goal.g)) {
    node = queue.pop();
    if (node.g > node.rhs) {
      node.g = node.rhs;
    } else {
      node.g = INFINITY;
      updateNode(node);
    }
    for (successor : node.getSuccessors())
      updateNode(successor);
  }
}

/** Recalculates rhs for a node and removes it from the queue.
 * If the node has become locally inconsistent, it is (re-)inserted into the queue with its new key. */
void updateNode(node) {
  if (node != start) {
    node.rhs = INFINITY;
    for (predecessor: node.getPredecessors())
      node.rhs = min(node.rhs, predecessor.g + predecessor.getCostTo(node));
  }
  if (queue.contains(node))
    queue.remove(node);
  if (node.g != node.rhs)
    queue.insert(node, calculateKey(node));
}

int[] calculateKey(node) {
  return {min(node.g, node.rhs) + node.getHeuristic(goal),
          min(node.g, node.rhs)};
}

Properties. Being algorithmically similar to A*, LPA* shares many of its properties. For an A* implementation which breaks ties between two nodes with equal f-values in favor of the node with the smaller g-value (which is not well-defined in A*), the following statements are also true: LPA* additionally has the following properties: References. <templatestyles src="Reflist/styles.css" />
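The following Python sketch re-expresses the first run of the pseudocode above; it is not part of the original description. The example graph, edge costs and heuristic values are assumptions chosen for illustration, and instead of an explicit remove() it uses lazy deletion (stale heap entries are simply skipped when popped), a common design choice when the priority queue is a binary heap.

import heapq
from math import inf

graph = {                         # directed edge costs (an assumed example)
    'start': {'a': 1, 'b': 4},
    'a': {'b': 1, 'goal': 5},
    'b': {'goal': 1},
    'goal': {},
}
h = {'start': 2, 'a': 2, 'b': 1, 'goal': 0}     # admissible heuristic (assumed)

preds = {n: {} for n in graph}                  # predecessor -> cost, for each node
for u, edges in graph.items():
    for v, c in edges.items():
        preds[v][u] = c

g = {n: inf for n in graph}
rhs = {n: inf for n in graph}
rhs['start'] = 0
queue = []                                      # heap of (key, node); stale entries skipped lazily

def calculate_key(n):
    k2 = min(g[n], rhs[n])
    return (k2 + h[n], k2)                      # (k1, k2), compared lexicographically

def update_node(n):
    if n != 'start':
        rhs[n] = min((g[p] + c for p, c in preds[n].items()), default=inf)
    if g[n] != rhs[n]:                          # locally inconsistent: (re-)queue with current key
        heapq.heappush(queue, (calculate_key(n), n))

def compute_shortest_path():
    heapq.heappush(queue, (calculate_key('start'), 'start'))
    while queue:
        key, n = heapq.heappop(queue)
        if key != calculate_key(n) or g[n] == rhs[n]:
            continue                            # outdated entry, or node already consistent
        if key >= calculate_key('goal') and rhs['goal'] == g['goal']:
            break                               # goal consistent and no smaller keys remain
        if g[n] > rhs[n]:
            g[n] = rhs[n]                       # over-consistent: accept the shorter estimate
        else:
            g[n] = inf                          # under-consistent: invalidate and re-evaluate
            update_node(n)
        for successor in graph[n]:
            update_node(successor)

compute_shortest_path()
print(g['goal'])                                # 3, via start -> a -> b -> goal

On this first run the sketch expands nodes in the same order A* would, as described above; edge-cost changes would be handled by calling update_node on the affected end nodes and running compute_shortest_path again.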
[ { "math_id": 0, "text": "rhs(start) = g(start) = 0" }, { "math_id": 1, "text": "k(n) =\n \\begin{bmatrix}\n k_1(n)\\\\\n k_2(n)\\\\\n \\end{bmatrix} =\n \\begin{bmatrix}\n min(g(n), rhs(n)) + h(n, goal)\\\\\n min(g(n), rhs(n))\\\\\n \\end{bmatrix}\n" } ]
https://en.wikipedia.org/wiki?curid=57265177
5726601
Family of curves
Set of curves from a function with variable parameter(s) In geometry, a family of curves is a set of curves, each of which is given by a function or parametrization in which one or more of the parameters is variable. In general, the parameter(s) influence the shape of the curve in a way that is more complicated than a simple linear transformation. Sets of curves given by an implicit relation may also represent families of curves. Families of curves appear frequently in solutions of differential equations; when an additive constant of integration is introduced, it will usually be manipulated algebraically until it no longer represents a simple linear transformation. Families of curves may also arise in other areas. For example, all non-degenerate conic sections can be represented using a single polar equation with one parameter, the eccentricity of the curve: formula_0 as the value of e changes, the appearance of the curve varies in a relatively complicated way. Applications. Families of curves may arise in various topics in geometry, including the envelope of a set of curves and the caustic of a given curve. Generalizations. In algebraic geometry, an algebraic generalization is given by the notion of a linear system of divisors.
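As an illustration, the following Python sketch (not part of the article) evaluates the polar equation r(θ) = l / (1 + e cos θ) for several eccentricities, showing how a single varying parameter sweeps out a family of conics: e = 0 gives a circle, 0 < e < 1 an ellipse, e = 1 a parabola, and e > 1 a hyperbola. The semi-latus rectum l = 1 is an assumed value.

import math

def r(theta, e, l=1.0):
    # polar equation of a non-degenerate conic with eccentricity e
    return l / (1 + e * math.cos(theta))

for e in (0.0, 0.5, 1.0, 2.0):
    # sample the curve at a few angles (avoiding the asymptotes when e >= 1)
    samples = [round(r(t, e), 3) for t in (0.0, math.pi / 3, math.pi / 2)]
    print(f"e = {e}: r at 0, 60, 90 degrees -> {samples}")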
[ { "math_id": 0, "text": "r(\\theta) = {l \\over 1+e \\cos \\theta}" } ]
https://en.wikipedia.org/wiki?curid=5726601
572813
Sphericon
Type of rollable 3D shape In solid geometry, the sphericon is a solid that has a continuous developable surface with two congruent, semi-circular edges, and four vertices that define a square. It is a member of a special family of rollers that, while being rolled on a flat surface, bring all the points of their surface to contact with the surface they are rolling on. It was discovered independently by carpenter Colin Roberts (who named it) in the UK in 1969, by dancer and sculptor Alan Boeding of MOMIX in 1979, and by inventor David Hirsch, who patented it in Israel in 1980. Construction. The sphericon may be constructed from a bicone (a double cone) with an apex angle of 90 degrees, by splitting the bicone along a plane through both apexes, rotating one of the two halves by 90 degrees, and reattaching the two halves. Alternatively, the surface of a sphericon can be formed by cutting and gluing a paper template in the form of four circular sectors (with central angles formula_0) joined edge-to-edge. Geometric properties. The surface area of a sphericon with radius formula_1 is given by formula_2. The volume is given by formula_3, exactly half the volume of a sphere with the same radius. History. Around 1969, Colin Roberts (a carpenter from the UK) made a sphericon out of wood while attempting to carve a Möbius strip without a hole. In 1979, David Hirsch invented a device for generating a meander motion. The device consisted of two perpendicular half discs joined at their axes of symmetry. While examining various configurations of this device, he discovered that the form created by joining the two half discs, exactly at their diameter centers, is actually a skeletal structure of a solid made of two half bicones, joined at their square cross-sections with an offset angle of 90 degrees, and that the two objects have exactly the same meander motion. Hirsch filed a patent in Israel in 1980, and a year later, a pull toy named Wiggler Duck, based on Hirsch's device, was introduced by Playskool Company. In 1999, Colin Roberts sent Ian Stewart a package containing a letter and two sphericon models. In response, Stewart wrote an article "Cone with a Twist" in his Mathematical Recreations column of Scientific American. This sparked quite a bit of interest in the shape, and has been used by Tony Phillips to develop theories about mazes. Roberts' name for the shape, the sphericon, was taken by Hirsch as the name for his company, Sphericon Ltd. In popular culture. In 1979, modern dancer Alan Boeding designed his "Circle Walker" sculpture from two crosswise semicircles, a skeletal version of the sphericon. He began dancing with a scaled-up version of the sculpture in 1980 as part of an MFA program in sculpture at Indiana University, and after he joined the MOMIX dance company in 1984 the piece became incorporated into the company's performances. The company's later piece "Dream Catcher" is based around a similar Boeding sculpture whose linked teardrop shapes incorporate the skeleton and rolling motion of the oloid, a similar rolling shape formed from two perpendicular circles each passing through the center of the other. In 2008, renowned British woodturner David Springett published the book "Woodturning Full Circle", which explains how sphericons (and other unusual solid forms, such as streptohedrons) can be made on a wood lathe. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
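The formulas above can be checked numerically; the short Python sketch below (not part of the original article) computes the surface area, the volume, the ratio of the volume to that of a sphere of the same radius, and the central angle of each circular sector in the paper template.

import math

def sphericon_surface_area(r):
    return 2 * math.sqrt(2) * math.pi * r ** 2

def sphericon_volume(r):
    return (2.0 / 3.0) * math.pi * r ** 3

r = 1.0
print(sphericon_surface_area(r))                               # ~8.886
print(sphericon_volume(r))                                     # ~2.094
print(sphericon_volume(r) / ((4.0 / 3.0) * math.pi * r ** 3))  # 0.5, half the sphere's volume
print(math.degrees(math.pi / math.sqrt(2)))                    # ~127.3 degrees per template sector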
[ { "math_id": 0, "text": "\\pi/\\sqrt{2}" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "S = 2\\sqrt{2}\\pi r^2" }, { "math_id": 3, "text": "V = \\frac{2}{3}\\pi r^3" } ]
https://en.wikipedia.org/wiki?curid=572813
572903
Exponential tree
An exponential tree is a type of search tree where the number of children of its nodes decreases doubly-exponentially with increasing depth. Values are stored only in the leaf nodes. Each node contains a splitter, a value less than or equal to all values in the subtree which is used during search. Exponential trees use another data structure in inner nodes containing the splitters from children, allowing fast lookup. Exponential trees achieve optimal asymptotic complexity on some operations. They have mainly theoretical importance. Tree structure. An exponential tree is a rooted tree where every node contains a splitter and every leaf node contains a value. The value may be different from the splitter. An exponential tree with formula_0 values is defined recursively: An additional condition is that searching for a value using the splitters must yield the correct node (i.e. the one containing the value). Therefore, if a root of a subtree contains the splitter formula_3 and its right sibling contains the splitter formula_4, then this subtree can only contain keys in the range formula_5. Local data structure. The tree uses a static data structure in every inner node to allow fast lookup of values. It must be possible to build this structure with formula_6 values in time formula_7. The lookup time in this structure is denoted formula_8. A Fusion tree can be used as this data structure. Operations. Search. The exponential tree can be searched in the same way as a normal search tree. In each node, the local data structure can be used to find the next child quickly. Let formula_9 denote the time complexity of the search. Then it satisfies the following recurrence: formula_10
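To illustrate the recurrence, the following Python sketch (an assumption-laden illustration, not taken from the article) fixes k = 2 and prints the successive subtree sizes n, n^(1-1/k), (n^(1-1/k))^(1-1/k), ..., showing how quickly the recursion bottoms out; for constant k the number of levels grows only on the order of log log n.

def subtree_sizes(n, k=2):
    # successive problem sizes under the recurrence T(n) <= T(n^(1 - 1/k)) + O(S(n))
    sizes = [n]
    while sizes[-1] > 2:
        sizes.append(int(round(sizes[-1] ** (1 - 1.0 / k))))
    return sizes

print(subtree_sizes(10 ** 9))   # [1000000000, 31623, 178, 13, 4, 2]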
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\Theta(n^{1/k})" }, { "math_id": 2, "text": "\\Theta(n^{1-1/k})" }, { "math_id": 3, "text": "s" }, { "math_id": 4, "text": "s'" }, { "math_id": 5, "text": "[s,s')" }, { "math_id": 6, "text": "d" }, { "math_id": 7, "text": "O(d^{k-1})" }, { "math_id": 8, "text": "S(d)" }, { "math_id": 9, "text": "T(n)" }, { "math_id": 10, "text": "T(n) \\le T(n^{1-1/k}) + O(S(n))" } ]
https://en.wikipedia.org/wiki?curid=572903
5729639
Chopsticks (hand game)
Hand game for two or more players Chopsticks (sometimes called Calculator, or just Sticks) is a hand game for two or more players, in which players extend a number of fingers from each hand and transfer those scores by taking turns tapping one hand against another. Chopsticks is an example of a combinatorial game, and is solved in the sense that with perfect play, an optimal strategy from any point is known. Description. Gameplay. Chopsticks consists of players tallying points using the fingers on their hands. Each player starts with two points (one finger on each hand). Taking turns, players tap the opponent's hand, which adds points to it equal to the value of the tapping hand. A player’s hands do not change when the opponent’s hand is tapped. For example, if an opposing player has the maximum number of points on their hand, they may not subtract points from it if they decide to knock out the other player's hand, such as if a player has five points and the other has two, the player with five points cannot give the other player a portion of their points to avoid being knocked out. When a hand gets five points only, it is "knocked out" and called a "dead hand". A dead hand cannot attack or be attacked. A player wins by knocking out both of their opponent's hands. Instead of attacking on their turn, a player may "split" points among their hands. A split can be either a transfer or a division. A transfer involves moving a certain number of points from one living hand to another; transferring all points off one hand knocks it out ("suicide"), and is allowed in some variations. A division can resurrect a dead hand by moving points from a living hand, bringing it back into play. The new distribution must be distinct from the original distribution; a player may not simply swap points between hands. Due to the game's simple basic structure, there are many variations with additional rules. In some variations, a sum greater than 5 "rolls over" to a smaller value by subtracting 5 from the sum (modular arithmetic); a hand is eliminated only when it has exactly five points. In other variations, more complex transfer and division moves are allowed. Abbreviation. Each position in a two-player game of Chopsticks can be encoded as a four-digit number, with each digit ranging from 0 to 4, representing the number of active fingers on each hand. This can be notated as [ABCD], where A and B are the hands of the player who is about to take their turn, and C and D are the hands of the player who is not about to take their turn. Each pair of hands is notated in ascending order, so every distinct position is represented by one and only one four-digit number. For example, the code 1023 is not allowed, and should be notated 0123. The starting position is 1111. Unless any special transfers are used, the next position must be 1211. The next position must be either 1212 or 1312. During the game, the smallest position is 0001, and the largest is 4434. This abbreviation can be expanded to games with more players. A three-player game can be represented by six digits (e.g. [111211]), where each pair of adjacent digits represents a single player, and each pair is ordered based on when players will take their turns. The leftmost pair represents the hands of the player about to take his turn; the middle pair represents the player who will go next, and so on. The rightmost pair represents the player who must wait the longest before his turn (usually because he just went). Moves. 
Under normal rules, there are a maximum of 14 possible moves: However, only 5 or fewer of these are available on a given turn. For example, the early position 1312 can become 2213, 1313, 2413, 0113, or 1222. Game lengths. The shortest possible game is five moves. There is one instance: Without revisitation (repeating a position), the longest possible game is nine moves. There are two instances: With revisitation, the longest possible game is indefinite. Positions. Since the roll-over amount is 5, Chopsticks is a base-5 game. In a two-player game, each position is four digits long. Counting from 0000 to 4444 (in base 5) yields 625 positions. However, this includes redundancies—most of these positions are incorrect notations (e.g. 0132, 1023, and 1032 are incorrect notations of 0123), which appear different but are functionally the same in gameplay. To find the number of functionally distinct positions, note that each player can be one of 15 distinct pairs (00, 01, 02, 03, 04, 11, 12, 13, 14, 22, 23, 24, 33, 34, and 44). With two players, there are 15*15 = 225 functionally distinct positions. In general, for formula_0 players, there are formula_1 functionally distinct positions. However, there are 21 unreachable positions: 0000, 0100, 0200, 0300, 0400, 1100, 1101, 1200, 1300, 1400, 2200, 2202, 2300, 2400, 3300, 3303, 3400, 3444, 4400, 4404, and 4444. This gives a total of 204 unique, reachable positions. There are 14 reachable endgames: 0001, 0002, 0003, 0004, 0011, 0012, 0013, 0014, 0022, 0023, 0024, 0033, 0034, 0044. Satisfyingly enough, these are all the 14 possible endgames; in other words, someone can win using any of the 14 distinct live pairs. Out of these 14 endgames, the first player wins 8 of them, assuming that the games are ended in the minimum number of moves. Generalisations. Chopsticks can be generalized into a formula_3-type game, where "formula_4" is the number of players and "formula_5" is the rollover amount. Fewer than two players. In a one-player game, the player trivially wins for virtue of being the last player in the game. A game with zero players is likewise trivial as there can be no winners. Two players. Given "formula_6" and a rollover of formula_5, Thus, for formula_10, there are formula_22 reachable positions. More than two players. Given a rollover of 5, Degenerate cases. A game with a rollover amount of 1 is the trivial game, because all hands start dead. A game with a rollover amount of 2 is degenerate, because splitting is impossible, and the rollover and cutoff variations result in the same game. Hands are either alive and dead, with no middle state, and attacking a hand kills the hand. In fact, one could simply keep count of the number of 'hands' a player has (by using fingers or some other method of counting), and when a player attacks an opponent, the number of hands that opponent has decreases by one. There are a total of formula_23 reachable positions in the game, and a game length of formula_24. The two player game is strongly solved as a first person win. When two players have only one hand, the game becomes degenerate, because splits cannot occur and each player only has one move. Given a rollover of formula_5, each position after formula_17 moves in the game can be represented by the tuple formula_25, where formula_26 is the formula_17-th Fibonacci number with formula_27 and formula_28. The number of positions is given by least positive number formula_17 such that formula_5 divides formula_29. 
This variant is strongly solved as a win for either side depending upon formula_5 and the divisibility properties of Fibonacci numbers. The length of the game is formula_30. Optimal strategy. Using the rules above, two perfect players will play indefinitely; the game will continue in a loop. In fact, even very inexperienced players can avoid losing by simply looking one move ahead. In the cutoff variation, the first player can force a win. One winning strategy is to always reach one of the following configurations after each move (preferentially choosing the first one): Conversely, in the Division and Suicide only variation, then the second player has a winning strategy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
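The position count given above can be reproduced with a short Python sketch (not part of the original article): each player's two hands form one of 15 unordered pairs drawn from 0–4, so a two-player game has 15 × 15 = 225 functionally distinct positions in the [ABCD] notation, of which the article notes 21 are unreachable.

from itertools import combinations_with_replacement, product

# the 15 unordered hand pairs (0,0), (0,1), ..., (4,4), each written in ascending order
pairs = list(combinations_with_replacement(range(5), 2))

# every functionally distinct two-player position [ABCD]
positions = ["%d%d%d%d" % (a, b, c, d)
             for (a, b), (c, d) in product(pairs, pairs)]

print(len(pairs))        # 15
print(len(positions))    # 225
print(positions[:3])     # ['0000', '0001', '0002']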
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "15^n" }, { "math_id": 2, "text": "0 < k < 5" }, { "math_id": 3, "text": "(p,r)" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "r" }, { "math_id": 6, "text": "p=2" }, { "math_id": 7, "text": "r^{2p}=r^4" }, { "math_id": 8, "text": "{r + 1 \\choose 2}" }, { "math_id": 9, "text": "{r + 1 \\choose 2}^p" }, { "math_id": 10, "text": "r > 2" }, { "math_id": 11, "text": "{r + 1 \\choose 2} + (r - 1) + 2" }, { "math_id": 12, "text": "(r - 1)" }, { "math_id": 13, "text": "A=B=k" }, { "math_id": 14, "text": "C=0" }, { "math_id": 15, "text": "D=k" }, { "math_id": 16, "text": "0 < k < r" }, { "math_id": 17, "text": "k" }, { "math_id": 18, "text": "r - 1" }, { "math_id": 19, "text": "A=r-2" }, { "math_id": 20, "text": "B=r-1" }, { "math_id": 21, "text": "C=D=r-1" }, { "math_id": 22, "text": "{r + 1 \\choose 2}^p - \\left({r + 1 \\choose 2} + (r - 1) + 2\\right)" }, { "math_id": 23, "text": "2^p - 1" }, { "math_id": 24, "text": "2p - 1" }, { "math_id": 25, "text": "\\left(F_{k + 2} \\bmod r, F_{k + 1} \\bmod r \\right)" }, { "math_id": 26, "text": "F_k" }, { "math_id": 27, "text": "F_0 = 0" }, { "math_id": 28, "text": "F_1 = 1" }, { "math_id": 29, "text": "F_{k + 2}" }, { "math_id": 30, "text": "k + 1" }, { "math_id": 31, "text": "r = 5" } ]
https://en.wikipedia.org/wiki?curid=5729639
57297034
Pauthenier equation
The Pauthenier equation states that the maximum charge accumulated by a particle modelled by a small sphere passing through an electric field is given by: formula_0 where formula_1 is the permittivity of free space, formula_2 is the radius of the sphere, formula_3 is the electric field strength, and formula_4 is a material dependent constant. For conductors, formula_5. For dielectrics: formula_6 where formula_7 is the relative permittivity. Low charges on nanoparticles and microparticles are stable over more than 103 second time scales. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
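As a worked illustration (the particle radius, field strength and permittivity below are assumed example values, not from the article), the following Python sketch evaluates the Pauthenier limit for a dielectric sphere.

import math

EPSILON_0 = 8.8541878128e-12      # F/m, permittivity of free space

def pauthenier_charge(radius_m, field_V_per_m, eps_r=None):
    # p = 3 for a conductor, 3*eps_r/(eps_r + 2) for a dielectric
    p = 3.0 if eps_r is None else 3.0 * eps_r / (eps_r + 2.0)
    return 4.0 * math.pi * EPSILON_0 * radius_m ** 2 * p * field_V_per_m

q = pauthenier_charge(radius_m=1e-6, field_V_per_m=5e5, eps_r=4.0)
print(q)                          # ~1.1e-16 C, roughly 700 elementary charges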
[ { "math_id": 0, "text": "Q_{\\mathrm{max}}=4\\pi R^2\\epsilon_0pE" }, { "math_id": 1, "text": "\\epsilon_0" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "p=3" }, { "math_id": 6, "text": "p = 3\\epsilon_r/(\\epsilon_r + 2)" }, { "math_id": 7, "text": "\\epsilon_r" } ]
https://en.wikipedia.org/wiki?curid=57297034
5730974
Stability derivatives
Stability derivatives, and also control derivatives, are measures of how particular forces and moments on an aircraft change as other parameters related to stability change (parameters such as airspeed, altitude, angle of attack, etc.). For a defined "trim" flight condition, changes and oscillations occur in these parameters. "Equations of motion" are used to analyze these changes and oscillations. Stability and control derivatives are used to linearize (simplify) these equations of motion so the stability of the vehicle can be more readily analyzed. Stability and control derivatives change as flight conditions change. The collection of stability and control derivatives as they change over a range of flight conditions is called an aero model. Aero models are used in engineering flight simulators to analyze stability, and in real-time flight simulators for training and entertainment. "Stability" derivative vs. "control" derivative. "Stability" derivatives and "control" derivatives are related because they both are measures of forces and moments on a vehicle as other parameters change. Often the words are used together and abbreviated in the term "S&amp;C derivatives." They differ in that stability derivatives measure the effects of changes in flight conditions while control derivatives measure effects of changes in the control surface positions: Uses. Linearization (simplification) of stability analysis. Stability and control derivatives change as flight conditions change. That is, the forces and moments on the vehicle are seldom simple (linear) functions of its states. Because of this, the dynamics of atmospheric flight vehicles can be difficult to analyze. The following are two methods used to tackle this complexity. Use in flight simulators. In addition to engineering simulators, aero models are often used in "real time flight simulators" for home use and professional flight training. Names for the axes of vehicles. Air vehicles use a coordinate system of axes to help name important parameters used in the analysis of stability. All the axes run through the center of gravity (called the "CG"): Two slightly different alignments of these axes are used depending on the situation: "body-fixed axes", and "stability axes". Body-fixed axes. Body-fixed axes, or "body axes", are defined and fixed relative to the body of the vehicle.: Stability axes. Aircraft (usually not missiles) operate at a nominally constant "trim" angle of attack. The angle of the nose (the X Axis) does not align with the direction of the oncoming air. The difference in these directions "is" the "angle of attack". So, for many purposes, parameters are defined in terms of a slightly modified axis system called "stability axes". The stability axis system is used to get the X axis aligned with the oncoming flow direction. Essentially, the body axis system is rotated about the Y body axis by the trim angle of attack and then "re-fixed" to the body of the aircraft: Names for forces, moments, and velocities. Forces and velocities along each of the axes. Forces on the vehicle along the body axes are called "Body-axis Forces": It is helpful to think of these speeds as projections of the relative wind vector on to the three body axes, rather than in terms of the translational motion of the vehicle relative to the fluid. As the body rotates relative to direction of the relative wind, these components change, even when there is no net change in speed. Equations of motion. 
The use of stability derivatives is most conveniently demonstrated with missile or rocket configurations, because these exhibit greater symmetry than aeroplanes, and the equations of motion are correspondingly simpler. If it is assumed that the vehicle is roll-controlled, the pitch and yaw motions may be treated in isolation. It is common practice to consider the yaw plane, so that only 2D motion need be considered. Furthermore, it is assumed that thrust equals drag, and the longitudinal equation of motion may be ignored. The body is oriented at angle formula_0 (psi) with respect to inertial axes. The body is oriented at an angle formula_1 (beta) with respect to the velocity vector, so that the components of velocity in body axes are: formula_2 formula_3 where formula_4 is the speed. The aerodynamic forces are generated with respect to body axes, which is not an inertial frame. In order to calculate the motion, the forces must be referred to inertial axes. This requires the body components of velocity to be resolved through the heading angle formula_5 into inertial axes. Resolving into fixed (inertial) axes: formula_6 formula_7 The acceleration with respect to inertial axes is found by differentiating these components of velocity with respect to time: formula_8 formula_9 From Newton's Second Law, this is equal to the force acting divided by the mass. Now forces arise from the pressure distribution over the body, and hence are generated in body axes, and not in inertial axes, so the body forces must be resolved to inertial axes, as Newton's Second Law does not apply in its simplest form to an accelerating frame of reference. Resolving the body forces: formula_10 formula_11 Newton's Second Law, assuming constant mass: formula_12 formula_13 where "m" is the mass. Equating the inertial values of acceleration and force, and resolving back into body axes, yields the equations of motion: formula_14 formula_15 The sideslip, formula_1, is a small quantity, so the small perturbation equations of motion become: formula_16 formula_17 The first resembles the usual expression of Newton's Second Law, whilst the second is essentially the centrifugal acceleration. The equation of motion governing the rotation of the body is derived from the time derivative of angular momentum: formula_18 where C is the moment of inertia about the yaw axis. Assuming constant speed, there are only two state variables; formula_1 and formula_19, which will be written more compactly as the yaw rate r. There is one force and one moment, which for a given flight condition will each be functions of formula_1, r and their time derivatives. For typical missile configurations the forces and moments depend, in the short term, on formula_1 and r. The forces may be expressed in the form: formula_20 where formula_21 is the force corresponding to the equilibrium condition (usually called the trim) whose stability is being investigated. It is common practice to employ a shorthand: formula_22 The partial derivative formula_23 and all similar terms characterising the increments in forces and moments due to increments in the state variables are called stability derivatives. Typically, formula_24 is insignificant for missile configurations, so the equations of motion reduce to: formula_25 formula_26 Stability derivative contributions. Each stability derivative is determined by the position, size, shape and orientation of the missile components. 
In aircraft, the directional stability determines such features as dihedral of the main planes, size of fin and area of tailplane, but the large number of important stability derivatives involved precludes a detailed discussion within this article. The missile is characterised by only three stability derivatives, and hence provides a useful introduction to the more complex aeroplane dynamics. Consider first formula_27, a body at an angle of attack formula_1 generates a lift force in the opposite direction to the motion of the body. For this reason formula_27 is always negative. At low angles of attack, the lift is generated primarily by the wings, fins and the nose region of the body. The total lift acts at a distance formula_28 ahead of the centre of gravity (it has a negative value in the figure), this, in missile parlance, is the centre of pressure . If the lift acts ahead of the centre of gravity, the yawing moment will be negative, and will tend to increase the angle of attack, increasing both the lift and the moment further. It follows that the centre of pressure must lie aft of the centre of gravity for static stability. formula_28 is the static margin and must be negative for longitudinal static stability. Alternatively, positive angle of attack must generate positive yawing moment on a statically stable missile, i.e. formula_29 must be positive. It is common practice to design manoeuvrable missiles with near zero static margin (i.e. neutral static stability). The need for positive formula_29 explains why arrows and darts have flights and unguided rockets have fins. The effect of angular velocity is mainly to decrease the nose lift and increase the tail lift, both of which act in a sense to oppose the rotation. formula_30 is therefore always negative. There is a contribution from the wing, but since missiles tend to have small static margins (typically less than a calibre), this is usually small. Also the fin contribution is greater than that of the nose, so there is a net force formula_31, but this is usually insignificant compared with formula_27 and is usually ignored. Response. Manipulation of the equations of motion yields a second order homogeneous linear differential equation in the angle of attack formula_1: formula_32 The qualitative behavior of this equation is considered in the article on directional stability. Since formula_27 and formula_30 are both negative, the damping is positive. The stiffness does not only depend on the static stability term formula_29, it also contains a term which effectively determines the angle of attack due to the body rotation. The distance of the center of lift, including this term, ahead of the centre of gravity is called the maneuver margin. It must be negative for stability. This damped oscillation in angle of attack and yaw rate, following a disturbance, is called the 'weathercock' mode, after the tendency of a weathercock to point into wind. Comments. The state variables were chosen to be the angle of attack formula_1 and the yaw rate r, and have omitted the speed perturbation u, together with the associated derivatives e.g. formula_33. This may appear arbitrary. However, since the timescale of the speed variation is much greater than that of the variation in angle of attack, its effects are negligible as far as the directional stability of the vehicle is concerned. 
Similarly, the effect of roll on yawing motion was also ignored, because missiles generally have low aspect ratio configurations and the roll inertia is much less than the yaw inertia, consequently the roll loop is expected to be much faster than the yaw response, and is ignored. These simplifications of the problem based on "a priori" knowledge, represent an engineer's approach. Mathematicians prefer to keep the problem as general as possible and only simplify it at the end of the analysis, if at all. Aircraft dynamics is more complex than missile dynamics, mainly because the simplifications, such as separation of fast and slow modes, and the similarity between pitch and yaw motions, are not obvious from the equations of motion, and are consequently deferred until a late stage of the analysis. Subsonic transport aircraft have high aspect ratio configurations, so that yaw and roll cannot be treated as decoupled. However, this is merely a matter of degree; the basic ideas needed to understand aircraft dynamics are covered in this simpler analysis of missile motion. Control derivatives. Deflection of control surfaces modifies the pressure distribution over the vehicle, and these are dealt with by including perturbations in forces and moments due to control deflection. The fin deflection is normally denoted formula_34 (zeta). Including these terms, the equations of motion become: formula_35 formula_36 Including the control derivatives enables the response of the vehicle to be studied, and the equations of motion used to design the autopilot. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
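The weathercock mode described above can be illustrated numerically. In the Python sketch below, every numerical value (mass, speed, yaw inertia and the three stability derivatives) is an assumption chosen only for illustration; the second-order equation is rearranged as β̈ = (Y_β/(mU) + N_r/C) β̇ − (N_β/C + (Y_β/(mU))(N_r/C)) β and integrated with a simple fixed-step scheme. Negative Y_β and N_r give positive damping, and positive N_β gives static stability, so an initial sideslip disturbance dies away.

m, U, C = 100.0, 300.0, 50.0                     # mass [kg], speed [m/s], yaw inertia [kg m^2] (assumed)
Y_beta, N_beta, N_r = -2.0e4, 6.0e3, -4.0e2      # assumed stability derivatives

a = Y_beta / (m * U) + N_r / C                   # damping coefficient (negative here)
b = N_beta / C + (Y_beta / (m * U)) * (N_r / C)  # stiffness coefficient (positive here)

beta, beta_dot, dt = 0.1, 0.0, 1.0e-4            # 0.1 rad initial sideslip disturbance
for _ in range(int(2.0 / dt)):                   # integrate for two seconds (semi-implicit Euler)
    beta_ddot = a * beta_dot - b * beta
    beta_dot += beta_ddot * dt
    beta += beta_dot * dt

print(f"sideslip after 2 s: {beta:.2e} rad")     # the disturbance has almost completely decayed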
[ { "math_id": 0, "text": "\\psi" }, { "math_id": 1, "text": "\\beta" }, { "math_id": 2, "text": "u=U \\cos\\beta" }, { "math_id": 3, "text": "v=U \\sin\\beta" }, { "math_id": 4, "text": "U" }, { "math_id": 5, "text": "(\\beta)" }, { "math_id": 6, "text": "u_f=U\\cos(\\beta)\\cos(\\psi)-U\\sin(\\beta)\\sin(\\psi)=U\\cos(\\beta+\\psi)" }, { "math_id": 7, "text": "v_f=U\\sin(\\beta)\\cos(\\psi)+U\\cos(\\beta)\\sin(\\psi)=U\\sin(\\beta+\\psi)" }, { "math_id": 8, "text": " \\frac {du_f}{dt}=\\frac {dU} {dt} \\cos(\\beta+\\psi)-U\\frac {d(\\beta+\\psi)} {dt} \\sin(\\beta+\\psi) " }, { "math_id": 9, "text": "\\frac{dv_f}{dt}=\\frac{dU}{dt}\\sin(\\beta+\\psi)+U\\frac{d(\\beta+\\psi)}{dt}\\cos(\\beta+\\psi)" }, { "math_id": 10, "text": "X_f=X\\cos(\\psi)-Y\\sin(\\psi)" }, { "math_id": 11, "text": "Y_f=Y\\cos(\\psi)+X\\sin(\\psi)" }, { "math_id": 12, "text": "X_f=m\\frac{du_f}{dt}" }, { "math_id": 13, "text": "Y_f=m\\frac{dv_f}{dt}" }, { "math_id": 14, "text": "X=m\\frac{dU}{dt}\\cos(\\beta)-mU\\frac{d(\\beta+\\psi)}{dt}\\sin(\\beta)" }, { "math_id": 15, "text": "Y=m\\frac{dU}{dt}\\sin(\\beta)+mU\\frac{d(\\beta+\\psi)}{dt}\\cos(\\beta)" }, { "math_id": 16, "text": "X=m\\frac{dU}{dt}" }, { "math_id": 17, "text": "Y=mU\\frac{d(\\beta+\\psi)}{dt}" }, { "math_id": 18, "text": "N=C\\frac{d^2\\psi}{dt^2}" }, { "math_id": 19, "text": "\\frac{d\\psi}{dt}" }, { "math_id": 20, "text": "Y=Y_0 + \\frac {\\partial Y}{\\partial \\beta} \\beta +\\frac {\\partial Y}{\\partial r}r" }, { "math_id": 21, "text": "Y_0" }, { "math_id": 22, "text": "\\frac{\\partial Y}{\\partial \\beta}=Y_\\beta" }, { "math_id": 23, "text": "\\frac{\\partial Y}{\\partial \\beta} " }, { "math_id": 24, "text": "\\frac{\\partial Y}{\\partial r}" }, { "math_id": 25, "text": "\\frac{d\\beta}{dt}=\\frac{Y_\\beta}{mU}\\beta-r" }, { "math_id": 26, "text": "\\frac{dr}{dt}=\\frac{N_\\beta}{C}\\beta+\\frac{N_r}{C}r" }, { "math_id": 27, "text": "Y_\\beta" }, { "math_id": 28, "text": "x_{cp}" }, { "math_id": 29, "text": "N_\\beta" }, { "math_id": 30, "text": "N_r" }, { "math_id": 31, "text": "Y_r" }, { "math_id": 32, "text": "\\frac{d^2\\beta}{dt^2}-\\left(\\frac{Y_\\beta}{mU}+\\frac{N_r}{C}\\right)\\frac{d\\beta}{dt}+\\left(\\frac{N_\\beta}{C}+\\frac{Y_\\beta}{mU}\\frac{N_r}{C}\\right)\\beta=0" }, { "math_id": 33, "text": "Y_u" }, { "math_id": 34, "text": "\\zeta" }, { "math_id": 35, "text": "\\frac{d\\beta}{dt}=\\frac{Y_\\beta}{mU}\\beta-r+\\frac{Y_\\zeta}{mU}\\zeta" }, { "math_id": 36, "text": "\\frac{dr}{dt}=\\frac{N_\\beta}{C}\\beta+\\frac{N_r}{C}r+\\frac{N_\\zeta}{C}\\zeta" } ]
https://en.wikipedia.org/wiki?curid=5730974
5730990
Mean absolute difference
The mean absolute difference (univariate) is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean absolute difference, which is the mean absolute difference divided by the arithmetic mean, and equal to twice the Gini coefficient. The mean absolute difference is also known as the absolute mean difference (not to be confused with the absolute value of the mean signed difference) and the Gini mean difference (GMD). The mean absolute difference is sometimes denoted by Δ or as MD. Definition. The mean absolute difference is defined as the "average" or "mean", formally the expected value, of the absolute difference of two random variables "X" and "Y" independently and identically distributed with the same (unknown) distribution henceforth called "Q". formula_0 Calculation. Specifically, in the discrete case, formula_1 formula_2 In the continuous case, formula_3 An alternative form of the equation is given by: formula_4 formula_5 Relative mean absolute difference. When the probability distribution has a finite and nonzero arithmetic mean AM, the relative mean absolute difference, sometimes denoted by Δ or RMD, is defined by formula_6 The relative mean absolute difference quantifies the mean absolute difference in comparison to the size of the mean and is a dimensionless quantity. The relative mean absolute difference is equal to twice the Gini coefficient which is defined in terms of the Lorenz curve. This relationship gives complementary perspectives to both the relative mean absolute difference and the Gini coefficient, including alternative ways of calculating their values. Properties. The mean absolute difference is invariant to translations and negation, and varies proportionally to positive scaling. That is to say, if X is a random variable and "c" is a constant: The relative mean absolute difference is invariant to positive scaling, commutes with negation, and varies under translation in proportion to the ratio of the original and translated arithmetic means. That is to say, if X is a random variable and c is a constant: If a random variable has a positive mean, then its relative mean absolute difference will always be greater than or equal to zero. If, additionally, the random variable can only take on values that are greater than or equal to zero, then its relative mean absolute difference will be less than 2. Compared to standard deviation. The mean absolute difference is twice the L-scale (the second L-moment), while the standard deviation is the square root of the variance about the mean (the second conventional central moment). The differences between L-moments and conventional moments are first seen in comparing the mean absolute difference and the standard deviation (the first L-moment and first conventional moment are both the mean). Both the standard deviation and the mean absolute difference measure dispersion—how spread out are the values of a population or the probabilities of a distribution. The mean absolute difference is not defined in terms of a specific measure of central tendency, whereas the standard deviation is defined in terms of the deviation from the arithmetic mean. Because the standard deviation squares its differences, it tends to give more weight to larger differences and less weight to smaller differences compared to the mean absolute difference. 
When the arithmetic mean is finite, the mean absolute difference will also be finite, even when the standard deviation is infinite. See the examples for some specific comparisons. The recently introduced distance standard deviation plays a similar role to the mean absolute difference, but the distance standard deviation works with centered distances. See also E-statistics. Sample estimators. For a random sample "S" from a random variable X, consisting of "n" values "y"i, the statistic formula_7 is a consistent and unbiased estimator of MD(X). The statistic: formula_8 is a consistent estimator of RMD(X), but is not, in general, unbiased. Confidence intervals for RMD(X) can be calculated using bootstrap sampling techniques. There does not exist, in general, an unbiased estimator for RMD(X), in part because of the difficulty of finding an unbiased estimate when multiplying by the inverse of the mean. For example, even where the sample is known to be taken from a random variable X("p") for an unknown "p", and X("p") − 1 has the Bernoulli distribution, so that Pr(X("p") = 1) = 1 − "p" and Pr(X("p") = 2) = "p", then RMD(X("p")) = 2"p"(1 − "p")/(1 + "p"). But the expected value of any estimator "R"(S) of RMD(X("p")) will be of the form: formula_9 where the "r"i are constants. So E("R"(S)) can never equal RMD(X("p")) for all "p" between 0 and 1. † formula_10 is the Beta function References. <templatestyles src="Reflist/styles.css" />
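The sample estimators above are straightforward to compute; the following Python sketch (the sample values are an arbitrary assumed example, not from the article) evaluates MD(S) = Σᵢ Σⱼ |yᵢ − yⱼ| / (n(n − 1)) and RMD(S) = MD(S) / mean(S).

def mean_absolute_difference(values):
    # double sum of absolute differences over all ordered pairs, divided by n(n-1)
    n = len(values)
    total = sum(abs(a - b) for a in values for b in values)
    return total / (n * (n - 1))

def relative_mean_absolute_difference(values):
    return mean_absolute_difference(values) / (sum(values) / len(values))

sample = [1.0, 2.0, 3.0, 4.0]
print(mean_absolute_difference(sample))            # ~1.667
print(relative_mean_absolute_difference(sample))   # ~0.667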
[ { "math_id": 0, "text": "\\mathrm{MD} := E[|X - Y|] ." }, { "math_id": 1, "text": "\\mathrm{MD} = E[|X - Y|] = E_X[E_{Y|X}[|X - Y|]]=\\frac{1}{n^2} \\sum_{i=1}^n \\sum_{j=1}^n | x_i - y_j | ." }, { "math_id": 2, "text": "\\mathrm{MD} = \\sum_{i=1}^n \\sum_{j=1}^n f(y_i) f(y_j) | y_i - y_j | ." }, { "math_id": 3, "text": "\\mathrm{MD} = \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty f(x)\\,f(y)\\,|x-y|\\,dx\\,dy ." }, { "math_id": 4, "text": "\\mathrm{MD} = \\int_{0}^\\infty \\int_{-\\infty}^\\infty 2\\,f(x)\\,f(x+\\delta)\\,\\delta\\,dx\\,d\\delta ." }, { "math_id": 5, "text": "\\mathrm{MD} = \\int_0^1 \\int_0^1 |Q(F_1)-Q(F_2)|\\,dF_1\\,dF_2 ." }, { "math_id": 6, "text": "\\mathrm{RMD} = \\frac{\\mathrm{MD}}{\\mathrm{AM}}." }, { "math_id": 7, "text": "\\mathrm{MD}(S) = \\frac{\\sum_{i=1}^n \\sum_{j=1}^n | y_i - y_j |}{n(n-1)}" }, { "math_id": 8, "text": "\\mathrm{RMD}(S) = \\frac{\\sum_{i=1}^n \\sum_{j=1}^n | y_i - y_j |}{(n-1)\\sum_{i=1}^n y_i}" }, { "math_id": 9, "text": "\\operatorname{E}(R(S)) = \\sum_{i=0}^n p^i (1-p)^{n-i} r_i ," }, { "math_id": 10, "text": "\\Beta(x,y)" } ]
https://en.wikipedia.org/wiki?curid=5730990
5731287
Slope rating
Relative measure of golf course difficulty The slope rating of a golf course is a measure of its relative difficulty for a bogey golfer compared to a scratch golfer. It is used by handicapping systems to equalize the field by accounting for the likelihood that, when playing on more difficult courses, higher handicap players' scores will rise more quickly than their handicaps would otherwise predict. The term was invented by the United States Golf Association. History of slope rating. With the aim of developing their handicap system in order to account for variances in golf course playing difficulty for golfers of different abilities, in 1979 the USGA setup the Handicap Research Team (HRT). Two years earlier, in 1977, then Lt. Commander Dean Knuth, a graduate student at the Naval Postgraduate School, had devised improvements to the course rating system, including weighted ratings of ten characteristics for each hole, to provide an adjustment to the distance rating for the course. It was to be the basis for the present USGA Course Rating System. Later, while living in Norfolk, Virginia, he developed a method for Bogey Rating by analyzing data gathered from average ability volunteers scores played on the local courses. Knuth went on to serve as the USGA's Senior Director of Handicapping for 16 years, beginning in 1981. The result of the Knuth's and HRT's work was a calculation based on the difference between the course rating and bogey rating to give a numerical measure of the difference in difficulty for the scratch and bogey golfer that could be used to adjust golfer's handicaps dependent on the course being played. This remains the basis of what is now called the slope system. In 1982, the Colorado Golf Association rated all of its courses using the new procedure, under the leadership of HRT member Dr. Byron Williamson. In 1983, Colorado tested the Slope System with positive results. Five other states joined Colorado in the test during 1984, before the slope system began being implemented nationally from 1987. Since January 1, 1990, every golf association in the United States that rates golf courses uses the USGA Course Rating System. The USGA Course and Slope Rating System forms the basis for many of the world's foremost handicapping systems, including the World Handicap System, jointly developed by the USGA and The R&amp;A, that was introduced globally in 2020. USGA Slope Rating. The USGA Slope Rating is a numerical value that indicates the relative difficulty of a set of tees on a golf course for a bogey golfer in comparison to a scratch golfer. It describes the fact that when playing on a more difficult course, the scores of higher-handicapped players will rise more quickly than those of lower handicapped golfers. The slope rating of a set of tees predicts the straight-line rise in anticipated score versus USGA course handicap, as in the mathematical slope of a graph. Slope ratings are calculated as a multiple of the difference between the expected good score for a bogey golfer (handicap in the range 20 to 24), called the bogey rating, and the expected good score for a scratch golfer (zero handicap), called the USGA Course Rating. The course and bogey ratings are determined by course raters, who measure and record more than 460 variables on a standard course rating form for each set of tees. Slope ratings are in the range from 55 to 155, with a course of standard playing difficulty having a rating of 113. 
The higher the slope rating, the more difficult the course will play for a bogey golfer. In order to calculate the slope rating, the difference between the bogey and scratch rating is multiplied by 5.381 for men and 4.240 for women. formula_0 formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
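The arithmetic is simple enough to show directly; in the Python sketch below the bogey and course ratings are made-up example values, not ratings of any real course.

def slope_rating(bogey_rating, course_rating, men=True):
    # bogey-minus-scratch difference scaled by 5.381 (men) or 4.24 (women)
    factor = 5.381 if men else 4.24
    return round(factor * (bogey_rating - course_rating))

print(slope_rating(bogey_rating=93.1, course_rating=72.1, men=True))    # 113, standard difficulty
print(slope_rating(bogey_rating=97.0, course_rating=72.5, men=False))   # 104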
[ { "math_id": 0, "text": " \\mbox{Slope Rating (men)} : \\mbox{5.381} \\times (\\mbox{Bogey Rating} - \\mbox{USGA Course Rating}) " }, { "math_id": 1, "text": " \\mbox{Slope Rating (women)} : \\mbox{4.24} \\times (\\mbox{Bogey Rating} - \\mbox{USGA Course Rating}) " } ]
https://en.wikipedia.org/wiki?curid=5731287
5731468
Trajpar
trajpar is a system parameter in Creo Elements/Pro that varies from 0 to 1 across the length of a given path. It is used to create complex geometric and non-geometric shapes that vary in dimension along the length of any given path. Naming. trajpar derives from the words: trajectory parameter. It is a pseudo-variable, as it is controlled not by any mathematical representation but by the physical representation of a path, i.e. a "trajectory". Syntax. sd3=evalgraph('graph1',trajpar*100) where sd3 is a variable inside the variable section sweep that will be driven by a graph feature called 'graph1'. Uses and notability. trajpar can be used with various mathematical functions to create alternating, flaring, bowing or sinusoidal protrusions. It is primarily used in conjunction with the "variable section sweep" command in Creo, but can also remove material if desired. trajpar typifies what makes Creo different from many other CAD software systems: the level of complexity and control given to the user is much greater than in, for example, other design or engineering software such as SolidWorks or Alias Samples. To create an undulating wave, a sinusoid might be desired: formula_0 A flare: formula_1 etc., where d1 is the dimension to be controlled.
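The relations above can be tabulated outside Creo to see how the driven dimension behaves; the Python sketch below is purely illustrative (it is not Creo relation code, and d1 here is just a number being sampled, not a real model dimension) and evaluates the wave and flare expressions as trajpar runs from 0 to 1.

import math

def wave(trajpar):
    return math.sin(trajpar * 8 * math.pi)       # undulating wave along the trajectory

def flare(trajpar):
    return 1 + trajpar ** 2                      # section that flares toward the end of the path

for i in range(6):
    t = i / 10                                   # trajpar samples 0.0, 0.1, ..., 0.5
    print(f"trajpar={t:.1f}  wave={wave(t):+.3f}  flare={flare(t):.3f}")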
[ { "math_id": 0, "text": "d1=sin(trajpar \\cdot 8\\pi)" }, { "math_id": 1, "text": "d1=1 + trajpar^2" } ]
https://en.wikipedia.org/wiki?curid=5731468
57316019
Tsetlin machine
Artificial intelligence algorithm A Tsetlin machine is an artificial intelligence algorithm based on propositional logic. &lt;templatestyles src="Machine learning/styles.css"/&gt; Background. A Tsetlin machine is a form of learning automaton collective for learning patterns using propositional logic. Ole-Christoffer Granmo created and gave the method its name after Michael Lvovitch Tsetlin, who invented the Tsetlin automaton and worked on Tsetlin automata collectives and games. Collectives of Tsetlin automata were originally constructed, implemented, and studied theoretically by Vadim Stefanuk in 1962. The Tsetlin machine uses computationally simpler and more efficient primitives compared to more ordinary artificial neural networks. As of April 2018 it has shown promising results on a number of test sets. Original Tsetlin machine. Tsetlin automaton. The Tsetlin automaton is the fundamental learning unit of the Tsetlin machine. It tackles the multi-armed bandit problem, learning the optimal action in an environment from penalties and rewards. Computationally, it can be seen as a finite-state machine (FSM) that changes its states based on the inputs. The FSM will generate its outputs based on the current states. formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 Boolean input. A basic Tsetlin machine takes a vector formula_7 of o Boolean features as input, to be classified into one of two classes, formula_8 or formula_9. Together with their negated counterparts, formula_10, the features form a literal set formula_11. Clause computing module. A Tsetlin machine pattern is formulated as a conjunctive clause formula_12, formed by ANDing a subset formula_13 of the literal set: formula_14. For example, the clause formula_15 consists of the literals formula_16 and outputs 1 iff formula_17 and formula_18. Summation and thresholding module. The number of clauses employed is a user-configurable parameter n. Half of the clauses are assigned positive polarity. The other half is assigned negative polarity. The clause outputs, in turn, are combined into a classification decision through summation and thresholding using the unit step function formula_19: formula_20 In other words, classification is based on a majority vote, with the positive clauses voting for formula_9 and the negative for formula_8. The classifier formula_21, for instance, captures the XOR-relation. Feedback module. Resource allocation. Resource allocation dynamics ensure that clauses distribute themselves across the frequent patterns, rather than missing some and overconcentrating on others. That is, for any input X, the probability of reinforcing a clause gradually drops to zero as the clause output sum formula_22 approaches a user-set target T for formula_9 (formula_23 for formula_8). If a clause is not reinforced, it does not give feedback to its Tsetlin automata, and these are thus left unchanged. In the extreme, when the voting sum v equals or exceeds the target T (the Tsetlin Machine has successfully recognized the input X), no clauses are reinforced. Accordingly, they are free to learn new patterns, naturally balancing the pattern representation resources.
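The XOR example above can be evaluated directly; the following Python sketch (not from the original description) hard-codes that fixed clause set and applies the summation-and-thresholding rule, with clause outputs computed as products of literals and u as the unit step function.

def u(v):
    # unit step: 1 if v >= 0, else 0
    return 1 if v >= 0 else 0

def classify(x1, x2):
    not_x1, not_x2 = 1 - x1, 1 - x2
    positive = (x1 * not_x2) + (not_x1 * x2)     # positive-polarity clauses vote for y = 1
    negative = (x1 * x2) + (not_x1 * not_x2)     # negative-polarity clauses vote for y = 0
    return u(positive - negative)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", classify(x1, x2))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0: the XOR relation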
[ { "math_id": 0, "text": "2n" }, { "math_id": 1, "text": "\\{\\underline{\\Phi}, \\underline{\\alpha}, \\underline{\\beta}, F(\\cdot,\\cdot), G(\\cdot)\\}." }, { "math_id": 2, "text": "\\underline{\\Phi} = \\{\\phi_1, \\phi_2, \\phi_3, \\phi_4, \\phi_5, \\phi_6\\}" }, { "math_id": 3, "text": "\\underline{\\beta} = \\{\\beta_{\\mathrm{Penalty}}, \\beta_{\\mathrm{Reward}}\\}" }, { "math_id": 4, "text": "F(\\phi_u, \\beta_v) = \\begin{cases}\n \\phi_{u+1},& \\text{if}~ 1 \\le u \\le 3 ~\\text{and}~ v = \\text{Penalty}\\\\\n \\phi_{u-1},& \\text{if}~ 4 \\le u \\le 6 ~\\text{and}~ v = \\text{Penalty}\\\\\n \\phi_{u-1},& \\text{if}~ 1 < u \\le 3 ~\\text{and}~ v = \\text{Reward}\\\\\n \\phi_{u+1},& \\text{if}~ 4 \\le u < 6 ~\\text{and}~ v = \\text{Reward}\\\\\n \\phi_{u},& \\text{otherwise}.\n \\end{cases}" }, { "math_id": 5, "text": "\\underline{\\alpha} = \\{\\alpha_1, \\alpha_2\\}" }, { "math_id": 6, "text": "G(\\phi_u) = \\begin{cases}\n \\alpha_1, & \\text{if}~ 1 \\le u \\le 3\\\\\n \\alpha_2, & \\text{if}~ 4 \\le u \\le 6.\n \\end{cases}" }, { "math_id": 7, "text": "X=[x_1,\\ldots,x_o]" }, { "math_id": 8, "text": "y=0" }, { "math_id": 9, "text": "y=1" }, { "math_id": 10, "text": "\\bar{x}_k = {\\lnot} {x}_k = 1-x_k" }, { "math_id": 11, "text": "L = \\{x_1,\\ldots,x_o,\\bar{x}_1,\\ldots,\\bar{x}_o\\}" }, { "math_id": 12, "text": "C_j" }, { "math_id": 13, "text": "L_j {\\subseteq} L" }, { "math_id": 14, "text": "C_j (X)=\\bigwedge_{{{l}} {\\in} L_j} l = \\prod_{{{l}} {\\in} L_j} l" }, { "math_id": 15, "text": "C_j(X)=x_1\\land{\\lnot}x_2=x_1 \\bar{x}_2" }, { "math_id": 16, "text": "L_j = \\{x_1,\\bar{x}_2\\}" }, { "math_id": 17, "text": "x_1 = 1" }, { "math_id": 18, "text": "x_2 = 0" }, { "math_id": 19, "text": "u(v) = 1 ~\\text{if}~ v \\ge 0 ~\\text{else}~ 0" }, { "math_id": 20, "text": "\n\\hat{y} = u\\left(\\sum_{j=1}^{n/2} C_j^+(X) - \\sum_{j=1}^{n/2} C_j^-(X)\\right).\n" }, { "math_id": 21, "text": "\\hat{y} = u\\left(x_1 \\bar{x}_2 + \\bar{x}_1 x_2 - x_1 x_2 - \\bar{x}_1 \\bar{x}_2\\right)" }, { "math_id": 22, "text": "\nv = \\sum_{j=1}^{n/2} C_j^+(X) - \\sum_{j=1}^{n/2} C_j^-(X)\n" }, { "math_id": 23, "text": "-T" } ]
https://en.wikipedia.org/wiki?curid=57316019
5731754
Ehresmann connection
Differential geometry construct on fiber bundles In differential geometry, an Ehresmann connection (after the French mathematician Charles Ehresmann who first formalized this concept) is a version of the notion of a connection, which makes sense on any smooth fiber bundle. In particular, it does not rely on the possible vector bundle structure of the underlying fiber bundle, but nevertheless, linear connections may be viewed as a special case. Another important special case of Ehresmann connections are principal connections on principal bundles, which are required to be equivariant in the principal Lie group action. Introduction. A covariant derivative in differential geometry is a linear differential operator which takes the directional derivative of a section of a vector bundle in a covariant manner. It also allows one to formulate a notion of a parallel section of a bundle in the direction of a vector: a section "s" is parallel along a vector formula_0 if formula_1. So a covariant derivative provides at least two things: a differential operator, "and" a notion of what it means to be parallel in each direction. An Ehresmann connection drops the differential operator completely and defines a connection axiomatically in terms of the sections parallel in each direction . Specifically, an Ehresmann connection singles out a vector subspace of each tangent space to the total space of the fiber bundle, called the "horizontal space". A section formula_2 is then horizontal (i.e., parallel) in the direction "formula_0" if formula_3 lies in a horizontal space. Here we are regarding "formula_2" as a function formula_4 from the base "formula_5" to the fiber bundle "formula_6", so that formula_7 is then the pushforward of tangent vectors. The horizontal spaces together form a vector subbundle of formula_8. This has the immediate benefit of being definable on a much broader class of structures than mere vector bundles. In particular, it is well-defined on a general fiber bundle. Furthermore, many of the features of the covariant derivative still remain: parallel transport, curvature, and holonomy. The missing ingredient of the connection, apart from linearity, is "covariance". With the classical covariant derivatives, covariance is an "a posteriori" feature of the derivative. In their construction one specifies the transformation law of the Christoffel symbols – which is not covariant – and then general covariance of the "derivative" follows as a result. For an Ehresmann connection, it is possible to impose a generalized covariance principle from the beginning by introducing a Lie group acting on the fibers of the fiber bundle. The appropriate condition is to require that the horizontal spaces be, in a certain sense, equivariant with respect to the group action. The finishing touch for an Ehresmann connection is that it can be represented as a differential form, in much the same way as the case of a connection form. If the group acts on the fibers and the connection is equivariant, then the form will also be equivariant. Furthermore, the connection form allows for a definition of curvature as a curvature form as well. Formal definition. Let formula_9 be a smooth fiber bundle. Let formula_10 be the vertical bundle consisting of the vectors "tangent to the fibers" of "E", i.e. the fiber of "V" at formula_11 is formula_12. This subbundle of formula_8 is canonically defined even when there is no canonical subspace tangent to the base space "M". 
(Of course, this asymmetry comes from the very definition of a fiber bundle, which "only has one projection" formula_9 while a product formula_13 would have two.) Definition via horizontal subspaces. An Ehresmann connection on "formula_6" is a smooth subbundle "formula_14" of formula_8, called the horizontal bundle of the connection, which is complementary to "V", in the sense that it defines a direct sum decomposition formula_15. In more detail, the horizontal bundle has the following properties: for each point formula_18 of "E", the fiber formula_16 is a vector subspace of the tangent space formula_17, called the "horizontal subspace" of the connection at formula_18; formula_16 depends smoothly on formula_18; and, for each formula_18, formula_19 and formula_20, so that every tangent vector to "E" splits uniquely into a horizontal and a vertical component. In more sophisticated terms, such an assignment of horizontal spaces satisfying these properties corresponds precisely to a smooth section of the jet bundle "J"1"E" → "E". Definition via a connection form. Equivalently, let Φ be the projection onto the vertical bundle "V" along "H" (so that "H" = ker Φ). This is determined by the above "direct sum" decomposition of "TE" into horizontal and vertical parts and is sometimes called the connection form of the Ehresmann connection. Thus Φ is a vector bundle homomorphism from "TE" to itself with the following properties (of projections in general): it is idempotent, Φ ∘ Φ = Φ, and its image is "V" (so that, in particular, Φ restricts to the identity on "V"). Conversely, if Φ is a vector bundle endomorphism of "TE" satisfying these two properties, then "H" = ker Φ is the horizontal subbundle of an Ehresmann connection. Finally, note that Φ, being a linear mapping of each tangent space into itself, may also be regarded as a "TE"-valued 1-form on "E". This will be a useful perspective in sections to come. Parallel transport via horizontal lifts. An Ehresmann connection also prescribes a manner for lifting curves from the base manifold "M" into the total space of the fiber bundle "E" so that the tangents to the curve are horizontal. These horizontal lifts are a direct analogue of parallel transport for other versions of the connection formalism. Specifically, suppose that "γ"("t") is a smooth curve in "M" through the point "x" = "γ"(0). Let "e" ∈ "E""x" be a point in the fiber over "x". A lift of "γ" through "e" is a curve formula_21 in the total space "E" such that formula_22, and formula_23 A lift is horizontal if, in addition, every tangent of the curve lies in the horizontal subbundle of "TE": formula_24 It can be shown using the rank–nullity theorem applied to "π" and Φ that each vector "X"∈"T""x""M" has a unique horizontal lift to a vector formula_25. In particular, the tangent field to "γ" generates a horizontal vector field in the total space of the pullback bundle "γ"*"E". By the Picard–Lindelöf theorem, this vector field is integrable. Thus, for any curve "γ" and point "e" over "x" = "γ"(0), there exists a "unique horizontal lift" of "γ" through "e" for small time "t". Note that, for general Ehresmann connections, the horizontal lift is path-dependent. When two smooth curves in "M", coinciding at "γ"1(0) = "γ"2(0) = "x"0 and also intersecting at another point "x"1 ∈ "M", are lifted horizontally to "E" through the same "e" ∈ "π"−1("x"0), they will generally pass through different points of "π"−1("x"1). This has important consequences for the differential geometry of fiber bundles: the space of sections of "H" is not a Lie subalgebra of the space of vector fields on "E", because it is not (in general) closed under the Lie bracket of vector fields. This failure of closure under Lie bracket is measured by the "curvature". Properties. Curvature. Let Φ be an Ehresmann connection. Then the curvature of Φ is given by formula_26 where [-,-] denotes the Frölicher-Nijenhuis bracket of Φ ∈ Ω1("E","TE") with itself.
Thus "R" ∈ Ω2("E","TE") is the two-form on "E" with values in "TE" defined by formula_27, or, in other terms, formula_28, where "X" = "X"H + "X"V denotes the direct sum decomposition into "H" and "V" components, respectively. From this last expression for the curvature, it is seen to vanish identically if, and only if, the horizontal subbundle is Frobenius integrable. Thus the curvature is the integrability condition for the horizontal subbundle to yield transverse sections of the fiber bundle "E" → "M". The curvature of an Ehresmann connection also satisfies a version of the Bianchi identity: formula_29 where again [-,-] is the Frölicher-Nijenhuis bracket of Φ ∈ Ω1("E","TE") and "R" ∈ Ω2("E","TE"). Completeness. An Ehresmann connection allows curves to have unique horizontal lifts locally. For a complete Ehresmann connection, a curve can be horizontally lifted over its entire domain. Holonomy. Flatness of the connection corresponds locally to the Frobenius integrability of the horizontal spaces. At the other extreme, non-vanishing curvature implies the presence of holonomy of the connection. Special cases. Principal bundles and principal connections. Suppose that "E" is a smooth principal "G"-bundle over "M". Then an Ehresmann connection "H" on "E" is said to be a principal (Ehresmann) connection if it is invariant with respect to the "G" action on "E" in the sense that formula_30 for any "e"∈"E" and "g"∈"G"; here formula_31 denotes the differential of the right action of "g" on "E" at "e". The one-parameter subgroups of "G" act vertically on "E". The differential of this action allows one to identify the subspace formula_32 with the Lie algebra g of group "G", say by map formula_33. The connection form "Φ" of the Ehresmann connection may then be viewed as a 1-form "ω" on "E" with values in g defined by "ω"("X")="ι"("Φ"("X")). Thus reinterpreted, the connection form "ω" satisfies the following two properties: Conversely, it can be shown that such a g-valued 1-form on a principal bundle generates a horizontal distribution satisfying the aforementioned properties. Given a local trivialization one can reduce "ω" to the horizontal vector fields (in this trivialization). It defines a 1-form "ω' " on "M" via pullback. The form "ω determines "ω" completely, but it depends on the choice of trivialization. (This form is often also called a connection form"' and denoted simply by "ω".) Vector bundles and covariant derivatives. Suppose that "E" is a smooth vector bundle over "M". Then an Ehresmann connection "H" on "E" is said to be a linear (Ehresmann) connection if "H""e" depends linearly on "e" ∈ "E""x" for each "x" ∈ "M". To make this precise, let "S""λ" denote scalar multiplication by "λ" on "E". Then "H" is linear if and only if formula_35for any "e" ∈ "E" and scalar λ. Since "E" is a vector bundle, its vertical bundle "V" is isomorphic to "π"*"E". Therefore if "s" is a section of "E", then "Φ"(d"s"):"TM"→"s"*"V"="s"*"π"*"E"="E". It is a vector bundle morphism, and is therefore given by a section ∇"s" of the vector bundle Hom("TM","E"). The fact that the Ehresmann connection is linear implies that in addition it verifies for every function formula_36 on formula_5 the Leibniz rule, i.e. formula_37, and therefore is a covariant derivative of "s". 
Conversely a covariant derivative "∇" on a vector bundle defines a linear Ehresmann connection by defining "H""e", for "e" ∈ "E" with "x"="π"("e"), to be the image d"s""x"("T""x""M") where "s" is a section of "E" with "s"("x") = "e" and ∇"X""s" = 0 for all "X" ∈ "T""x""M". Note that (for historical reasons) the term "linear" when applied to connections, is sometimes used (like the word "affine" – see Affine connection) to refer to connections defined on the tangent bundle or frame bundle. Associated bundles. An Ehresmann connection on a fiber bundle (endowed with a structure group) sometimes gives rise to an Ehresmann connection on an associated bundle. For instance, a (linear) connection in a vector bundle "E", thought of giving a parallelism of "E" as above, induces a connection on the associated bundle of frames P"E" of "E". Conversely, a connection in P"E" gives rise to a (linear) connection in "E" provided that the connection in P"E" is equivariant with respect to the action of the general linear group on the frames (and thus a principal connection). It is "not always" possible for an Ehresmann connection to induce, in a natural way, a connection on an associated bundle. For example, a non-equivariant Ehresmann connection on a bundle of frames of a vector bundle may not induce a connection on the vector bundle. Suppose that "E" is an associated bundle of "P", so that "E" = "P" ×G "F". A "G"-connection on "E" is an Ehresmann connection such that the parallel transport map τ : "F"x → "F"x′ is given by a "G"-transformation of the fibers (over sufficiently nearby points "x" and "x"′ in "M" joined by a curve). Given a principal connection on "P", one obtains a "G"-connection on the associated fiber bundle "E" = "P" ×G "F" via pullback. Conversely, given a "G"-connection on "E" it is possible to recover the principal connection on the associated principal bundle "P". To recover this principal connection, one introduces the notion of a "frame" on the typical fiber "F". Since "G" is a finite-dimensional Lie group acting effectively on "F", there must exist a finite configuration of points ("y"1...,"y"m) within "F" such that the "G"-orbit "R" = {("gy"1...,"gy"m) | "g" ∈ "G"} is a principal homogeneous space of "G". One can think of "R" as giving a generalization of the notion of a frame for the "G"-action on "F". Note that, since "R" is a principal homogeneous space for "G", the fiber bundle "E"("R") associated to "E" with typical fiber "R" is (equivalent to) the principal bundle associated to "E". But it is also a subbundle of the "m"-fold product bundle of "E" with itself. The distribution of horizontal spaces on "E" induces a distribution of spaces on this product bundle. Since the parallel transport maps associated to the connection are "G"-maps, they preserve the subspace "E"("R"), and so the "G"-connection descends to a principal "G"-connection on "E"("R"). In summary, there is a one-to-one correspondence (up to equivalence) between the descents of principal connections to associated fiber bundles, and "G"-connections on associated fiber bundles. For this reason, in the category of fiber bundles with a structure group "G", the principal connection contains all relevant information for "G"-connections on the associated bundles. Hence, unless there is an overriding reason to consider connections on associated bundles (as there is, for instance, in the case of Cartan connections) one usually works directly with the principal connection.
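The path-dependence of horizontal lifts discussed above can be made concrete in the simplest linear setting. The sketch below is purely illustrative and not taken from any reference: it uses the trivial line bundle over R2 with covariant derivative ∇ = d + "A", where "A" = "k"(−"y" d"x" + "x" d"y") is a connection form chosen here, and numerically accumulates the horizontal-lift equation "e"′("t") = −"A"("γ"("t"))["γ"′("t")] "e"("t") along two different paths joining the same endpoints. Because the curvature d"A" is non-zero, the two lifts end at different points of the fiber, and their ratio is the holonomy of the loop formed by the two paths.

import numpy as np

def transport(path, k=1.0, steps=2000):
    # Accumulate e(1) = e(0) * exp(-integral of A along the path) by a Riemann sum,
    # which integrates the horizontal-lift ODE e'(t) = -A(gamma(t))[gamma'(t)] e(t).
    t = np.linspace(0.0, 1.0, steps + 1)
    xy = np.array([path(s) for s in t])          # sampled base curve gamma(t)
    e = 1.0                                      # start at e = 1 in the fiber over gamma(0)
    for i in range(steps):
        x, y = xy[i]
        dx, dy = xy[i + 1] - xy[i]               # increment of the base curve
        e *= np.exp(-k * (-y * dx + x * dy))     # first-order factor for this small step
    return e

# Two paths from (1, 0) to (-1, 0): the upper and lower unit semicircles.
upper = lambda s: (np.cos(np.pi * s), np.sin(np.pi * s))
lower = lambda s: (np.cos(np.pi * s), -np.sin(np.pi * s))

e_up, e_down = transport(upper), transport(lower)
print(e_up, e_down)                              # different endpoints: the lift is path-dependent
print(e_up / e_down, np.exp(-2.0 * np.pi))       # ratio ~ exp(-2*pi*k): the holonomy of the unit circle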
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\nabla_X s = 0" }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "{\\rm d}s(X)" }, { "math_id": 4, "text": "s\\colon M\\to E" }, { "math_id": 5, "text": "M" }, { "math_id": 6, "text": "E" }, { "math_id": 7, "text": "{\\rm d}s\\colon TM\\to TE" }, { "math_id": 8, "text": "TE" }, { "math_id": 9, "text": "\\pi\\colon E\\to M" }, { "math_id": 10, "text": "V= \\ker (\\operatorname{d} \\pi \\colon TE\\to TM)" }, { "math_id": 11, "text": "e\\in E" }, { "math_id": 12, "text": "V_e =T_e(E_{\\pi(e)})" }, { "math_id": 13, "text": "E=M\\times F" }, { "math_id": 14, "text": "H" }, { "math_id": 15, "text": "TE=H\\oplus V" }, { "math_id": 16, "text": "H_e" }, { "math_id": 17, "text": "T_e E" }, { "math_id": 18, "text": "e" }, { "math_id": 19, "text": "H_e \\cap V_e = \\{0\\}" }, { "math_id": 20, "text": "T_eE\n=H_e+V_e" }, { "math_id": 21, "text": "\\tilde{\\gamma}(t)" }, { "math_id": 22, "text": "\\tilde{\\gamma}(0) = e" }, { "math_id": 23, "text": "\\pi(\\tilde{\\gamma}(t)) = \\gamma(t)." }, { "math_id": 24, "text": "\\tilde{\\gamma}'(t) \\in H_{\\tilde{\\gamma}(t)}." }, { "math_id": 25, "text": "\\tilde{X} \\in T_e E" }, { "math_id": 26, "text": "R = \\tfrac{1}{2}[\\varPhi,\\varPhi]" }, { "math_id": 27, "text": "R(X,Y) = \\varPhi\\left([(\\mathrm{id} - \\varPhi)X,(\\mathrm{id} - \\varPhi)Y]\\right)" }, { "math_id": 28, "text": "R\\left(X,Y\\right) = \\left[X_H,Y_H\\right]_V" }, { "math_id": 29, "text": "\\left[\\varPhi, R\\right] = 0" }, { "math_id": 30, "text": "H_{eg}=\\mathrm d(R_g)_e (H_{e})" }, { "math_id": 31, "text": "\\mathrm d(R_g)_e" }, { "math_id": 32, "text": "V_e" }, { "math_id": 33, "text": "\\iota\\colon V_e\\to \\mathfrak g" }, { "math_id": 34, "text": "R_h^*\\omega=\\hbox{Ad}(h^{-1})\\omega" }, { "math_id": 35, "text": "H_{\\lambda e} = \\mathrm d(S_{\\lambda})_e (H_{e})" }, { "math_id": 36, "text": "f" }, { "math_id": 37, "text": "\\nabla(f s) = f\\nabla (s) + d(f)\\otimes s" } ]
https://en.wikipedia.org/wiki?curid=5731754
5732075
Arnold's cat map
Chaotic map from the torus into itself In mathematics, Arnold's cat map is a chaotic map from the torus into itself, named after Vladimir Arnold, who demonstrated its effects in the 1960s using an image of a cat, hence the name. It is a simple and pedagogical example of hyperbolic toral automorphisms. Thinking of the torus formula_0 as the quotient space formula_1, Arnold's cat map is the transformation formula_2 given by the formula formula_3 Equivalently, in matrix notation, this is formula_4 That is, with a unit equal to the width of the square image, the image is sheared one unit up, then two units to the right, and all that lies outside that unit square is shifted back by the unit until it is within the square. Name. The map receives its name from Arnold's 1967 manuscript with André Avez, "Problèmes ergodiques de la mécanique classique", in which the outline of a cat was used to illustrate the action of the map on the torus. In the original book it was captioned by a humorous footnote: "The Société Protectrice des Animaux has given permission to reproduce this image, as well as others." In Arnold's native Russian, the map is known as "okroshka (cold soup) from a cat", in reference to the map's mixing properties; the Russian name also forms a play on words. Arnold later wrote that he found the name "Arnold's Cat", by which the map is known in English and other languages, to be "strange". The discrete cat map. It is possible to define a discrete analogue of the cat map. One of this map's features is that the image appears to be randomized by the transformation but returns to its original state after a number of steps. As can be seen in the adjacent picture, the original image of the cat is sheared and then wrapped around in the first iteration of the transformation. After some iterations, the resulting image appears rather random or disordered, yet after further iterations the image appears to have further order—ghost-like images of the cat, multiple smaller copies arranged in a repeating structure and even upside-down copies of the original image—and ultimately returns to the original image. The discrete cat map describes the phase space flow corresponding to the discrete dynamics of a bead hopping from site "q""t" (0 ≤ "q""t" < "N") to site "q""t"+1 on a circular ring with circumference "N", according to the second order equation: formula_9 Defining the momentum variable "p""t" = "q""t" − "q""t"−1, the above second order dynamics can be re-written as a mapping of the square 0 ≤ "q", "p" < "N" (the phase space of the discrete dynamical system) onto itself: formula_10 formula_11 This Arnold cat mapping shows mixing behavior typical of chaotic systems. However, since the transformation has a determinant equal to unity, it is area-preserving and therefore invertible, the inverse transformation being: formula_12 formula_13 For real variables "q" and "p", it is common to set "N" = 1. In that case a mapping of the unit square with periodic boundary conditions onto itself results. When N is set to an integer value, the position and momentum variables can be restricted to integers and the mapping becomes a mapping of a toroidal square grid of points onto itself. Such an integer cat map is commonly used to demonstrate mixing behavior with Poincaré recurrence utilising digital images. The number of iterations needed to restore the image can be shown never to exceed 3N.
For an image, the relationship between iterations could be expressed as follows: formula_14 Models. Python code for Arnold's Cat Map.

import os
from PIL.Image import open as load_pic, new as new_pic

def main(path, iterations, keep_all=False, name="arnold_cat-{name}-{index}.png"):
    """
    Apply the discrete Arnold cat map to an image.
    Params
        path:str
            path to photograph
        iterations:int
            number of iterations to compute
        keep_all:bool
            whether to keep the intermediate images
        name:str
            formattable string to use as template for file names
    """
    title = os.path.splitext(os.path.split(path)[1])[0]
    counter = 0
    while counter < iterations:
        with load_pic(path) as image:
            dim = width, height = image.size
            canvas = new_pic(image.mode, dim)
            for x in range(width):
                for y in range(height):
                    # Cat map (x, y) -> (2x + y, x + y) modulo the image size;
                    # this is the true cat map only for a square image (width == height).
                    nx = (2 * x + y) % width
                    ny = (x + y) % height
                    # Image coordinates grow downwards, so flip the y-axis when reading and writing.
                    canvas.putpixel((nx, height - ny - 1), image.getpixel((x, height - y - 1)))
        if counter > 0 and not keep_all:
            os.remove(path)   # discard the previous intermediate image
        counter += 1
        print(counter, end="\r")
        path = name.format(name=title, index=counter)
        canvas.save(path)
    return canvas

if __name__ == "__main__":
    path = input("Enter the path to an image:\n\t")
    while not os.path.exists(path):
        path = input("Couldn't find your chosen image, please try again:\n\t")
    result = main(path, 3)
    result.show()
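The recurrence can also be checked directly, without manipulating images, by computing the order of the cat-map matrix modulo N. The following supplementary sketch (not part of the article's original code) returns the smallest number of iterations after which every point of an N × N grid returns to its starting position; the grid sizes in the loop are arbitrary examples, and each result can be compared against the 3N bound quoted above.

import numpy as np

def cat_map_period(n):
    # Smallest k such that [[2, 1], [1, 1]]^k is the identity matrix modulo n,
    # i.e. the number of iterations after which an n-by-n image is restored.
    m = np.array([[2, 1], [1, 1]], dtype=np.int64)
    identity = np.eye(2, dtype=np.int64)
    acc = m % n
    k = 1
    while not np.array_equal(acc, identity):
        acc = (acc @ m) % n
        k += 1
    return k

for n in (2, 5, 57, 100, 150):
    print(n, cat_map_period(n))   # each period is at most 3 * n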
[ { "math_id": 0, "text": "\\mathbb{T}^2" }, { "math_id": 1, "text": "\\mathbb{R}^2/\\mathbb{Z}^2" }, { "math_id": 2, "text": "\\Gamma : \\mathbb{T}^2 \\to \\mathbb{T}^2" }, { "math_id": 3, "text": "\\Gamma (x,y) = (2x+y,x+y) \\bmod 1." }, { "math_id": 4, "text": "\\Gamma \\left( \\begin{bmatrix} x \\\\ y \\end{bmatrix} \\right) = \\begin{bmatrix} 2 & 1 \\\\ 1 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} \\bmod 1 = \\begin{bmatrix} 1 & 1 \\\\ 0 & 1 \\end{bmatrix} \\begin{bmatrix} 1 & 0 \\\\ 1 & 1 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} \\bmod 1." }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "|\\lambda_1^n+\\lambda_2^n-2|" }, { "math_id": 7, "text": "\\lambda_1" }, { "math_id": 8, "text": "\\lambda_2" }, { "math_id": 9, "text": "q_{t+1} - 3q_t + q_{t-1} = 0 \\mod N" }, { "math_id": 10, "text": "q_{t+1} = 2q_{t} + p_t \\mod N" }, { "math_id": 11, "text": "p_{t+1} = q_{t} + p_t \\mod N" }, { "math_id": 12, "text": "q_{t-1} = q_t - p_t \\mod N" }, { "math_id": 13, "text": "p_{t-1} = -q_t + 2p_t \\mod N" }, { "math_id": 14, "text": "\n\\begin{array}{rrcl}\nn=0: \\quad & T^0 (x,y) &= & \\text{Input Image}(x,y) \\\\\nn=1: \\quad & T^1 (x,y) &= & T^0 \\left( \\bmod(2x+y, N), \\bmod(x+y, N) \\right) \\\\\n& &\\vdots \\\\\nn=k: \\quad & T^k (x,y) &= & T^{k-1} \\left( \\bmod(2x+y, N), \\bmod(x+y, N) \\right) \\\\\n& &\\vdots \\\\\nn=m: \\quad & \\text{Output Image}(x,y) &=& T^m (x,y)\n\\end{array}\n" } ]
https://en.wikipedia.org/wiki?curid=5732075
5732212
MUSCL scheme
Finite volume method in partial differential equations In the study of partial differential equations, the MUSCL scheme is a finite volume method that can provide highly accurate numerical solutions for a given system, even in cases where the solutions exhibit shocks, discontinuities, or large gradients. MUSCL stands for "Monotonic Upstream-centered Scheme for Conservation Laws" (van Leer, 1979), and the term was introduced in a seminal paper by Bram van Leer (van Leer, 1979). In this paper he constructed the first "high-order", "total variation diminishing" (TVD) scheme where he obtained second order spatial accuracy. The idea is to replace the piecewise constant approximation of Godunov's scheme by reconstructed states, derived from cell-averaged states obtained from the previous time-step. For each cell, slope limited, reconstructed left and right states are obtained and used to calculate fluxes at the cell boundaries (edges). These fluxes can, in turn, be used as input to a "Riemann solver", following which the solutions are averaged and used to advance the solution in time. Alternatively, the fluxes can be used in "Riemann-solver-free" schemes, which are basically Rusanov-like schemes. Linear reconstruction. We will consider the fundamentals of the MUSCL scheme by considering the following simple first-order, scalar, 1D system, which is assumed to have a wave propagating in the positive direction, formula_0 Where formula_1 represents a state variable and formula_2 represents a flux variable. The basic scheme of Godunov uses piecewise constant approximations for each cell, and results in a first-order upwind discretisation of the above problem with cell centres indexed as formula_3. A semi-discrete scheme can be defined as follows, formula_4 This basic scheme is not able to handle shocks or sharp discontinuities as they tend to become smeared. An example of this effect is shown in the diagram opposite, which illustrates a 1D advective equation with a step wave propagating to the right. The simulation was carried out with a mesh of 200 cells and used a 4th order Runge–Kutta time integrator (RK4). To provide higher resolution of discontinuities, Godunov's scheme can be extended to use piecewise linear approximations of each cell, which results in a "central difference" scheme that is "second-order" accurate in space. The piecewise linear approximations are obtained from formula_5 Thus, evaluating fluxes at the cell edges we get the following semi-discrete scheme formula_6 where formula_7 and formula_8 are the piecewise approximate values of cell edge variables, "i.e.", formula_9 formula_10 Although the above second-order scheme provides greater accuracy for smooth solutions, it is not a total variation diminishing (TVD) scheme and introduces spurious oscillations into the solution where discontinuities or shocks are present. An example of this effect is shown in the diagram opposite, which illustrates a 1D advective equation formula_11, with a step wave propagating to the right. This loss of accuracy is to be expected due to Godunov's theorem. The simulation was carried out with a mesh of 200 cells and used RK4 for time integration. MUSCL based numerical schemes extend the idea of using a linear piecewise approximation to each cell by using "slope limited" left and right extrapolated states. 
This results in the following high resolution, TVD discretisation scheme, formula_12 Which, alternatively, can be written in the more succinct form, formula_13 The numerical fluxes formula_14 correspond to a nonlinear combination of first and second-order approximations to the continuous flux function. The symbols formula_15 and formula_16 represent scheme dependent functions (of the limited extrapolated cell edge variables), "i.e.", formula_17 where, using downwind slopes: formula_18 formula_19 and formula_20 The function formula_21 is a limiter function that limits the slope of the piecewise approximations to ensure the solution is TVD, thereby avoiding the spurious oscillations that would otherwise occur around discontinuities or shocks - see Flux limiter section. The limiter is equal to zero when formula_22 and is equal to unity when formula_23. Thus, the accuracy of a TVD discretization degrades to first order at local extrema, but tends to second order over smooth parts of the domain. The algorithm is straight forward to implement. Once a suitable scheme for formula_24 has been chosen, such as the "Kurganov and Tadmor scheme" (see below), the solution can proceed using standard numerical integration techniques. Kurganov and Tadmor central scheme. A precursor to the "Kurganov and Tadmor" (KT) "central scheme", (Kurganov and Tadmor, 2000), is the "Nessyahu and Tadmor" (NT) a staggered "central scheme", (Nessyahu and Tadmor, 1990). It is a Riemann-solver-free, second-order, high-resolution scheme that uses MUSCL reconstruction. It is a fully discrete method that is straight forward to implement and can be used on scalar and vector problems, and can be viewed as a Rusanov flux (also called the local Lax-Friedrichs flux) supplemented with high order reconstructions. The algorithm is based upon central differences with comparable performance to Riemann type solvers when used to obtain solutions for PDE's describing systems that exhibit high-gradient phenomena. The KT scheme extends the NT scheme and has a smaller amount of numerical viscosity than the original NT scheme. It also has the added advantage that it can be implemented as either a "fully discrete" or "semi-discrete" scheme. Here we consider the semi-discrete scheme. The calculation is shown below: formula_25 formula_26 Where the "local propagation speed", formula_27, is the maximum absolute value of the eigenvalue of the Jacobian of formula_28 over cells formula_29 given by formula_30 and formula_31 represents the spectral radius of formula_32 Beyond these CFL related speeds, no characteristic information is required. The above flux calculation is most frequently called "Lax-Friedrichs flux" (though it's worth mentioning that such flux expression does not appear in Lax, 1954 but rather on Rusanov, 1961). An example of the effectiveness of using a high resolution scheme is shown in the diagram opposite, which illustrates the 1D advective equation formula_33, with a step wave propagating to the right. The simulation was carried out on a mesh of 200 cells, using the Kurganov and Tadmor central scheme with Superbee limiter and used RK-4 for time integration. This simulation result contrasts extremely well against the above first-order upwind and second-order central difference results shown above. This scheme also provides good results when applied to sets of equations - see results below for this scheme applied to the Euler equations. 
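As a concrete illustration of the semi-discrete scheme above, the following sketch advances the 1D linear advection equation (u_t + u_x = 0, the same test problem as in the figures) using MUSCL linear reconstruction with a minmod limiter and the Kurganov and Tadmor (Rusanov / local Lax-Friedrichs) numerical flux. It is illustrative only: the grid size, CFL number, step-wave initial data and the two-stage (SSP-RK2) time integrator are choices made here for brevity and are not taken from the cited papers, whose examples use RK4.

import numpy as np

def minmod(a, b):
    # Minmod limiter: zero at local extrema, otherwise the smaller of the two slopes (keeps the scheme TVD).
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_rhs(u, dx, a=1.0):
    # Semi-discrete MUSCL update du/dt for u_t + a u_x = 0 on a periodic grid:
    # limited linear reconstruction of left/right states at each face, then a Rusanov-type flux.
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slope in each cell
    uL = u + 0.5 * s                                    # left state at face i+1/2
    uR = np.roll(u, -1) - 0.5 * np.roll(s, -1)          # right state at face i+1/2
    flux = 0.5 * (a * uL + a * uR) - 0.5 * abs(a) * (uR - uL)
    return -(flux - np.roll(flux, 1)) / dx              # -(F*_{i+1/2} - F*_{i-1/2}) / dx

# Advect a step profile once around a periodic unit domain.
n_cells, cfl = 200, 0.4
dx = 1.0 / n_cells
x = (np.arange(n_cells) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)
dt = cfl * dx
for _ in range(int(round(1.0 / dt))):
    u_star = u + dt * muscl_rhs(u, dx)                  # two-stage SSP Runge-Kutta step
    u = 0.5 * (u + u_star + dt * muscl_rhs(u_star, dx))
print(u.min(), u.max())   # the limiter keeps the solution essentially within [0, 1]

Swapping minmod for a more compressive limiter (such as superbee) sharpens the captured discontinuity, which connects to the limiter choice discussed next.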
However, care has to be taken in choosing an appropriate limiter because, for example, the Superbee limiter can cause unrealistic sharpening for some smooth waves. The scheme can readily include diffusion terms, if they are present. For example, if the above 1D scalar problem is extended to include a diffusion term, we get formula_34 for which Kurganov and Tadmor propose the following central difference approximation, formula_35 Where, formula_36 formula_37 Full details of the algorithm ("full" and "semi-discrete" versions) and its derivation can be found in the original paper (Kurganov and Tadmor, 2000), along with a number of 1D and 2D examples. Additional information is also available in the earlier related paper by Nessyahu and Tadmor (1990). Note: This scheme was originally presented by Kurganov and Tadmor as a 2nd order scheme based upon "linear extrapolation". A later paper (Kurganov and Levy, 2000) demonstrates that it can also form the basis of a third order scheme. A 1D advective example and an Euler equation example of their scheme, using parabolic reconstruction (3rd order), are shown in the "parabolic reconstruction" and "Euler equation" sections below. Piecewise parabolic reconstruction. It is possible to extend the idea of linear-extrapolation to higher order reconstruction, and an example is shown in the diagram opposite. However, for this case the left and right states are estimated by interpolation of a second-order, upwind biased, difference equation. This results in a parabolic reconstruction scheme that is third-order accurate in space. We follow the approach of Kermani (Kermani, et al., 2003), and present a third-order upwind biased scheme, where the symbols formula_38 and formula_39 again represent scheme dependent functions (of the limited reconstructed cell edge variables). But for this case they are based upon parabolically reconstructed states, "i.e.", formula_40 and formula_41 formula_42 formula_43 formula_44 Where formula_45 = 1/3 and, formula_46 formula_47 and the limiter function formula_48, is the same as above. Parabolic reconstruction is straight forward to implement and can be used with the Kurganov and Tadmor scheme in lieu of the linear extrapolation shown above. This has the effect of raising the spatial solution of the KT scheme to 3rd order. It performs well when solving the Euler equations, see below. This increase in spatial order has certain advantages over 2nd order schemes for smooth solutions, however, for shocks it is more dissipative - compare diagram opposite with above solution obtained using the KT algorithm with linear extrapolation and Superbee limiter. This simulation was carried out on a mesh of 200 cells using the same KT algorithm but with parabolic reconstruction. Time integration was by RK-4, and the alternative form of van Albada limiter, formula_49, was used to avoid spurious oscillations. Example: 1D Euler equations. For simplicity we consider the 1D case without heat transfer and without body force. Therefore, in conservation vector form, the general Euler equations reduce to formula_50 where formula_51 and where formula_52 is a vector of states and formula_53 is a vector of fluxes. The equations above represent conservation of mass, momentum, and energy. There are thus three equations and four unknowns, formula_54 (density) formula_55 (fluid velocity), formula_56 (pressure) and formula_57 (total energy). The total energy is given by, formula_58 where formula_59 represents specific internal energy. 
In order to close the system an equation of state is required. One that suits our purpose is formula_60 where formula_61 is equal to the ratio of specific heats formula_62 for the fluid. We can now proceed, as shown above in the simple 1D example, by obtaining the left and right extrapolated states for each state variable. Thus, for density we obtain formula_63 where formula_64 formula_65 Similarly, for momentum formula_66, and total energy formula_57. Velocity formula_55 is calculated from momentum, and pressure formula_56 is calculated from the equation of state. Having obtained the limited extrapolated states, we then proceed to construct the edge fluxes using these values. With the edge fluxes known, we can now construct the semi-discrete scheme, "i.e.", formula_67 The solution can now proceed by integration using standard numerical techniques. The above illustrates the basic idea of the MUSCL scheme. However, for a practical solution to the Euler equations, a suitable scheme (such as the above KT scheme) also has to be chosen in order to define the function formula_68. The diagram opposite shows a 2nd order solution to G A Sod's shock tube problem (Sod, 1978) using the above high resolution Kurganov and Tadmor Central Scheme (KT) with Linear Extrapolation and Ospre limiter. This clearly demonstrates the effectiveness of the MUSCL approach to solving the Euler equations. The simulation was carried out on a mesh of 200 cells using Matlab code (Wesseling, 2001), adapted to use the KT algorithm and Ospre limiter. Time integration was performed by a 4th order SHK (equivalent performance to RK-4) integrator. The initial conditions (SI units) were those of Sod's shock tube problem. The diagram opposite shows a 3rd order solution to G A Sod's shock tube problem (Sod, 1978) using the above high resolution Kurganov and Tadmor Central Scheme (KT) but with parabolic reconstruction and van Albada limiter. This again illustrates the effectiveness of the MUSCL approach to solving the Euler equations. The simulation was carried out on a mesh of 200 cells using Matlab code (Wesseling, 2001), adapted to use the KT algorithm with Parabolic Extrapolation and van Albada limiter. The alternative form of van Albada limiter, formula_49, was used to avoid spurious oscillations. Time integration was performed by a 4th order SHK integrator. The same initial conditions were used. Various other high resolution schemes have been developed that solve the Euler equations with good accuracy. Examples of such schemes, and more information on these and other methods, can be found in the references below. An open source implementation of the Kurganov and Tadmor central scheme can be found in the external links below. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "u_t + F_x\\left(u \\right)=0. \\, " }, { "math_id": 1, "text": "u" }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "\\frac{\\mathrm{d} u_i}{\\mathrm{d} t} + \\frac{1}{\\Delta x_i} \\left[ \nF \\left( u_{i} \\right) - F \\left( u_{i-1} \\right) \\right] =0. " }, { "math_id": 5, "text": " u \\left( x \\right) = u_{i} +\n \\frac{\\left( x - x_{i} \\right) }{ \\left( x_{i+1} - x_{i} \\right)}\n \\left( u_{i+1} - u_{i} \\right) \\qquad \\forall x \\in (x_{i}, x_{i+1}]." }, { "math_id": 6, "text": "\\frac{\\mathrm{d} u_i}{\\mathrm{d} t} + \\frac{1}{\\Delta x_i} \\left[ \nF \\left( u_{i + 1/2} \\right) - F \\left( u_{i - 1/2} \\right) \\right] =0, " }, { "math_id": 7, "text": " u_{i + 1/2} " }, { "math_id": 8, "text": " u_{i - 1/2} " }, { "math_id": 9, "text": " u_{i + 1/2} = 0.5 \\left( u_{i} + u_{i + 1} \\right), " }, { "math_id": 10, "text": " u_{i - 1/2} = 0.5 \\left( u_{i-1} + u_{i} \\right). " }, { "math_id": 11, "text": "\\, u_t+u_x=0 " }, { "math_id": 12, "text": "\\frac{\\mathrm{d} u_i}{\\mathrm{d} t} + \\frac{1}{\\Delta x_i} \\left[ \nF \\left( u^*_{i + 1/2} \\right) - F \\left( u^*_{i - 1/2} \\right) \\right] =0. " }, { "math_id": 13, "text": "\\frac{\\mathrm{d} u_i}{\\mathrm{d} t} + \\frac{1}{\\Delta x_i} \\left[ \nF^*_{i + 1/2} - F^*_{i - 1/2} \\right] =0. " }, { "math_id": 14, "text": "F^*_{i \\pm 1/2} " }, { "math_id": 15, "text": " u^*_{i + 1/2} " }, { "math_id": 16, "text": " u^*_{i - 1/2} " }, { "math_id": 17, "text": " u^*_{i + 1/2} = u^*_{i + 1/2} \\left( u^L_{i + 1/2} , u^R_{i + 1/2} \\right),\n\n u^*_{i - 1/2} = u^*_{i - 1/2} \\left( u^L_{i - 1/2} , u^R_{i - 1/2} \\right), " }, { "math_id": 18, "text": " u^L_{i + 1/2} = u_{i} + 0.5 \\phi \\left( r_i \\right) \\left( u_{i+1} - u_{i} \\right),\n u^R_{i + 1/2} = u_{i+1} - 0.5 \\phi \\left( r_{i+1} \\right) \\left( u_{i+2} - u_{i+1} \\right)," }, { "math_id": 19, "text": " u^L_{i - 1/2} = u_{i-1} + 0.5 \\phi \\left( r_{i-1} \\right) \\left( u_{i} - u_{i-1} \\right),\n u^R_{i - 1/2} = u_i - 0.5 \\phi \\left( r_i \\right) \\left( u_{i+1} - u_i \\right)," }, { "math_id": 20, "text": " r_{i} = \\frac{u_i - u_{i-1}}{u_{i+1} - u_i}." }, { "math_id": 21, "text": "\\phi \\left( r_i \\right)" }, { "math_id": 22, "text": "r \\le 0" }, { "math_id": 23, "text": "r = 1" }, { "math_id": 24, "text": "F^*_{i + 1/2}" }, { "math_id": 25, "text": "F^*_{i-\\frac{1}{2}} =\\frac{1}{2} \\left\\{\n\\left[ F \\left(u^R_{i - \\frac{1}{2}} \\right) + F \\left(u^L_{i - \\frac{1}{2}} \\right) \\right]\n- a_{i - \\frac{1}{2} } \\left[u^R_{i - \\frac{1}{2}} - u^L_{i - \\frac{1}{2}} \\right] \\right\\}. " }, { "math_id": 26, "text": "F^*_{i+\\frac{1}{2}} =\\frac{1}{2} \\left\\{\n\\left[ F \\left(u^R_{i + \\frac{1}{2}} \\right) + F \\left(u^L_{i + \\frac{1}{2}} \\right) \\right]\n- a_{i + \\frac{1}{2} } \\left[u^R_{i + \\frac{1}{2}} - u^L_{i + \\frac{1}{2}} \\right] \\right\\}. 
" }, { "math_id": 27, "text": " a_{i \\pm \\frac{1}{2}} \\ " }, { "math_id": 28, "text": " F \\left( u \\left(x, t \\right) \\right)" }, { "math_id": 29, "text": "{i} , {i \\pm 1}" }, { "math_id": 30, "text": " a_{i + \\frac{1}{2} } \\left( t \\right) = \\max \\left[ \n\\rho \\left( \\frac{\\partial F \\left( u^L_{i+1/2} \\left( t \\right) \\right)}{\\partial u} \\right) ,\n\\rho \\left( \\frac{\\partial F \\left( u^R_{i+1/2} \\left( t \\right) \\right)}{\\partial u} \\right), \n\\right] " }, { "math_id": 31, "text": " \\rho\\left(\\frac{\\partial F \\left( u \\left( t \\right) \\right)}{ \\partial u}\\right) \\ " }, { "math_id": 32, "text": " \\frac{\\partial F \\left( u \\left( t \\right) \\right)}{ \\partial u}. " }, { "math_id": 33, "text": "u_t+u_x=0 \\ " }, { "math_id": 34, "text": "u_t + F_x\\left(u \\right) = Q_x \\left( u , u_x \\right), " }, { "math_id": 35, "text": "\\frac{\\mathrm{d} u_i}{\\mathrm{d} t} = \n- \\frac{1}{\\Delta x_i} \\left[ F^*_{i + \\frac{1}{2}} - F^*_{i - \\frac{1}{2}} \\right] \n+ \\frac{1}{\\Delta x_i} \\left[ P_{i + \\frac{1}{2}} - P_{i - \\frac{1}{2}} \\right]. " }, { "math_id": 36, "text": "P_{i + \\frac{1}{2}} = \\frac{1}{2} \\left[ \nQ \\left( u_{i} , \\frac{u_{i+1} - u_i}{\\Delta x_i} \\right) + \nQ \\left( u_{i+1} , \\frac{u_{i+1} - u_i}{\\Delta x_i} \\right)\n \\right], " }, { "math_id": 37, "text": "P_{i - \\frac{1}{2}} = \\frac{1}{2} \\left[ \nQ \\left( u_{i-1} , \\frac{u_{i} - u_{i-1}}{\\Delta x_{i-1}} \\right) + \nQ \\left( u_{i} , \\frac{u_{i} - u_{i-1}}{\\Delta x_{i-1}} \\right).\n \\right] " }, { "math_id": 38, "text": " u^*_{i + \\frac{1}{2}} " }, { "math_id": 39, "text": " u^*_{i - \\frac{1}{2}} " }, { "math_id": 40, "text": " u^*_{i + \\frac{1}{2}} = f \\left( u^L_{i + \\frac{1}{2}} , u^R_{i + \\frac{1}{2}} \\right),\\quad\n\n u^*_{i - \\frac{1}{2}} = f \\left( u^L_{i - \\frac{1}{2}} , u^R_{i - \\frac{1}{2}} \\right), " }, { "math_id": 41, "text": " u^L_{i + \\frac{1}{2}} = u_{i} + \\frac{\\phi \\left( r_{i} \\right)}{4} \\left[ \n\\left( 1 - \\kappa \\right) \\delta u_{i - \\frac{1}{2} } + \n\\left( 1 + \\kappa \\right) \\delta u_{i + \\frac{1}{2} } \n\\right]," }, { "math_id": 42, "text": "u^R_{i + \\frac{1}{2}} = u_{i+1} - \\frac{\\phi \\left( r_{i+1} \\right)}{4} \\left[ \n\\left( 1 - \\kappa \\right) \\delta u_{i + \\frac{3}{2} } + \n\\left( 1 + \\kappa \\right) \\delta u_{i + \\frac{1}{2} } \n\\right], " }, { "math_id": 43, "text": " u^L_{i - \\frac{1}{2}} = u_{i-1} + \\frac{\\phi \\left( r_{i-1} \\right)}{4} \\left[ \n\\left( 1 - \\kappa \\right) \\delta u_{i - \\frac{3}{2}} + \n\\left( 1 + \\kappa \\right) \\delta u_{i - \\frac{1}{2} } \n\\right]," }, { "math_id": 44, "text": "u^R_{i - \\frac{1}{2}} = u_{i} - \\frac{\\phi \\left( r_{i} \\right)}{4} \\left[ \n\\left( 1 - \\kappa \\right) \\delta u_{i + \\frac{1}{2} } + \n\\left( 1 + \\kappa \\right) \\delta u_{i - \\frac{1}{2} } \n\\right]." 
}, { "math_id": 45, "text": " \\kappa \\ " }, { "math_id": 46, "text": " \\delta u_{i + \\frac{1}{2} } = \\left( u_{i+1} - u_{i} \\right) ,\\quad \n \\delta u_{i - \\frac{1}{2} } = \\left( u_{i} - u_{i-1} \\right)," }, { "math_id": 47, "text": " \\delta u_{i + \\frac{3}{2} } = \\left( u_{i+2} - u_{i+1} \\right) ,\\quad \n \\delta u_{i - \\frac{3}{2} } = \\left( u_{i-1} - u_{i-2} \\right)," }, { "math_id": 48, "text": " \\phi \\left( r \\right)\\ " }, { "math_id": 49, "text": " \\phi_{va} (r) = \\frac{2 r}{1 + r^2 } \\ " }, { "math_id": 50, "text": " \n\\frac{\\partial \\mathbf{U}}{\\partial t}+\n\\frac{\\partial \\mathbf{F}}{\\partial x}=0,\n" }, { "math_id": 51, "text": "\n\\mathbf{U}=\\begin{pmatrix}\\rho \\\\ \\rho u \\\\ E\\end{pmatrix}\\qquad\n\\mathbf{F}=\\begin{pmatrix}\\rho u\\\\p+\\rho u^2\\\\ u(E+p)\\end{pmatrix},\\qquad\n" }, { "math_id": 52, "text": " \\mbox{U} " }, { "math_id": 53, "text": " \\mbox{F} " }, { "math_id": 54, "text": " \\rho " }, { "math_id": 55, "text": " u " }, { "math_id": 56, "text": " p " }, { "math_id": 57, "text": " E " }, { "math_id": 58, "text": "E=\\rho e + \\frac{1}{2} \\rho u^2," }, { "math_id": 59, "text": " e\\ " }, { "math_id": 60, "text": "p=\\rho \\left(\\gamma-1 \\right)e," }, { "math_id": 61, "text": " \\gamma\\ " }, { "math_id": 62, "text": " \\left[ c_p/c_v \\right] " }, { "math_id": 63, "text": " \\rho^*_{i + \\frac{1}{2}} = \\rho^*_{i + \\frac{1}{2}} \\left( \\rho^L_{i + \\frac{1}{2}} , \\rho^R_{i + \\frac{1}{2}} \\right), \\quad\n\n \\rho^*_{i - \\frac{1}{2}} = \\rho^*_{i - \\frac{1}{2}} \\left( \\rho^L_{i - \\frac{1}{2}} , \\rho^R_{i - \\frac{1}{2}} \\right), " }, { "math_id": 64, "text": " \\rho^L_{i + \\frac{1}{2}} = \\rho_{i} + 0.5 \\phi \\left( r_{i} \\right) \\left( \\rho_{i} - \\rho_{i-1} \\right), \\quad \n \\rho^R_{i + \\frac{1}{2}} = \\rho_{i+1} - 0.5 \\phi \\left( r_{i+1} \\right) \\left( \\rho_{i+1} - \\rho_{i} \\right)," }, { "math_id": 65, "text": " \\rho^L_{i - \\frac{1}{2}} = \\rho_{i-1} + 0.5 \\phi \\left( r_{i-1} \\right) \\left( \\rho_{i} - \\rho_{i-1} \\right), \\quad\n \\rho^R_{i - \\frac{1}{2}} = \\rho_{i} - 0.5 \\phi \\left( r_{i} \\right) \\left( \\rho_{i+1} - \\rho_{i} \\right)." }, { "math_id": 66, "text": " \\rho u " }, { "math_id": 67, "text": "\\frac{\\mathrm{d} \\mathbf{U}_i}{\\mathrm{d} t} = - \\frac{1}{\\Delta x_i} \\left[ \n\\mathbf{F}^*_{i + \\frac{1}{2} } - \\mathbf{F}^*_{i - \\frac{1}{2}} \\right]. " }, { "math_id": 68, "text": "\\mathbf{F}^*_{i \\pm \\frac{1}{2} } " } ]
https://en.wikipedia.org/wiki?curid=5732212
57322715
Epitaxial graphene growth on silicon carbide
Epitaxial graphene growth on silicon carbide (SiC) by thermal decomposition is a method to produce large-scale few-layer graphene (FLG). Graphene is one of the most promising nanomaterials for the future because of its characteristics, such as high stiffness and high electrical and thermal conductivity. Still, reproducible production of graphene is difficult, so many different techniques have been developed. The main advantage of epitaxial graphene growth on silicon carbide over other techniques is that the graphene layers are obtained directly on a semiconducting or semi-insulating substrate which is commercially available. History. The thermal decomposition of bulk SiC was first reported in 1965 by Badami. He annealed the SiC in vacuum at around 2180 °C for an hour to obtain a graphite lattice. In 1975, Bommel et al. then succeeded in forming monolayer graphite on the C-face as well as the Si-face of hexagonal SiC. The experiment was carried out under UHV at a temperature of 800 °C, and hints of a graphene structure could be found in LEED patterns and in the change of the carbon Auger peak from a carbide character to a graphite character. New insights into the electronic and physical properties of graphene, such as the Dirac nature of the charge carriers, the half-integer quantum Hall effect and the observation of 2D electron gas behaviour, were first obtained on multilayer graphene by de Heer et al. at the Georgia Institute of Technology in 2004. Still, the 2010 Nobel Prize in Physics ″for groundbreaking experiments regarding the two-dimensional material graphene″ was awarded to Andre Geim and Konstantin Novoselov. An official online document of the Royal Swedish Academy of Sciences about this award came under fire. Walter de Heer raised several objections to the work of Geim and Novoselov, who apparently performed their measurements on many-layer graphene, also called graphite, which has different electronic and mechanical properties. Emtsev et al. improved the procedure in 2009 by annealing the SiC samples at temperatures above 1650 °C in an argon environment to obtain morphologically superior graphene. Process. The underlying process is the desorption of atoms from an annealed surface, in this case a SiC sample. Because the vapor pressure of carbon is negligible compared to that of silicon, the Si atoms desorb at high temperatures and leave behind the carbon atoms, which form graphitic layers, also called few-layer graphene (FLG). Different heating mechanisms, such as e-beam heating or resistive heating, lead to the same result. The heating process takes place in a vacuum to avoid contamination. Approximately three bilayers of SiC must decompose to set free the carbon atoms needed for the formation of one graphene layer. This number can be calculated from the areal (molar) densities, as illustrated in the short estimate below. Today's challenge is to improve this process for industrial fabrication. The FLG obtained so far has a non-uniform thickness distribution, which leads to different electronic properties. Because of this, there is a demand for growing uniform, large-area FLG with the desired thickness in a reproducible way. Also, the impact of the SiC substrate on the physical properties of FLG is not yet fully understood. The thermal decomposition process of SiC in high / ultra high vacuum works well and appears promising for large-scale production of graphene-based devices, but some problems still have to be solved.
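The "approximately three bilayers" figure can be reproduced with a back-of-the-envelope areal-density estimate, sketched below. The calculation is illustrative only: the lattice parameters (a graphene C-C bond length of about 1.42 Å and a SiC basal-plane lattice constant of about 3.07 Å) are nominal literature values, not figures taken from this article, and each Si-C bilayer of hexagonal SiC is assumed to contribute one carbon atom per basal-plane unit cell.

import math

# Areal carbon density of graphene: 2 atoms per hexagonal unit cell of area (3*sqrt(3)/2)*a_cc^2.
a_cc = 1.42e-10                                    # C-C bond length in graphene [m] (nominal value)
graphene_cell_area = 1.5 * math.sqrt(3) * a_cc**2
n_graphene = 2.0 / graphene_cell_area              # about 3.8e19 carbon atoms per m^2

# Carbon supplied by one Si-C bilayer of SiC: 1 atom per basal-plane unit cell of area (sqrt(3)/2)*a^2.
a_sic = 3.07e-10                                   # SiC basal-plane lattice constant [m] (nominal value)
sic_cell_area = (math.sqrt(3) / 2.0) * a_sic**2
n_bilayer = 1.0 / sic_cell_area                    # about 1.2e19 carbon atoms per m^2 per bilayer

print(n_graphene / n_bilayer)                      # about 3: roughly three bilayers per graphene layer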
Using this technique, the resulting graphene consists of small grains with varying thickness (30–200 nm). These grains occur due to morphological changes of the SiC surface at high temperatures. On the other hand, at relatively low temperatures, poor quality results from the high sublimation rate. The growth procedure was improved to a more controllable technique by annealing the SiC samples at temperatures above 1650 °C in an argon environment. Silicon atoms desorbed from the surface collide with the argon atoms, and a few are reflected back to the surface. This leads to a decrease in the Si evaporation rate. Carrying out the experiment at high temperatures further enhances surface diffusion. This leads to a restructuring of the surface which is completed before the formation of the graphene layer. As an additional advantage, the graphene domains are larger than in the initial process, increasing from 3 x 50 μm2 up to 50 x 50 μm2. The technology continues to evolve to improve the graphene quality. One development is the so-called confinement controlled sublimation (CCS) method. Here, the SiC sample is placed in a graphite enclosure equipped with a small leak. By controlling the evaporation rate of the silicon through this leak, the graphene growth rate can be regulated. In this way, high-quality graphene layers are obtained in a near-equilibrium environment. The quality of the graphene can also be controlled by annealing in the presence of an external silicon flux. By using disilane gas, the silicon vapor pressure can be controlled. Crystallographic orientation between the SiC and graphene layers. SiC is a polar crystal with two inequivalent basal faces, and growth can therefore take place on either the SiC(0001) (silicon-terminated) or the SiC(0001) (carbon-terminated) face of 4H-SiC and 6H-SiC wafers. The different faces result in different growth rates and electronic properties. Silicon-terminated face. On the SiC(0001) face, large-area single-crystalline monolayer graphene can be grown at a low growth rate, and with good reproducibility. In this case, the graphene layer grows not directly on top of the substrate but on a complex formula_0 structure. This structure is non-conducting, rich in carbon and partially covalently bonded to the underlying SiC substrate, and therefore provides a template for subsequent graphene growth and acts as an electronic ″buffer layer″. This buffer layer forms a non-interacting interface with the graphene layer on top of it. Therefore, the monolayer graphene grown on SiC(0001) is electronically identical to a freestanding monolayer of graphene. By changing growth parameters such as annealing temperature and time, the number of graphene layers on SiC(0001) can be controlled. The graphene always maintains its epitaxial relationship with the SiC substrate, and the topmost graphene, which originates from the initial buffer layer, is continuous everywhere across the substrate steps and across the boundary between regions with different numbers of graphene layers. The buffer layer does not exhibit the intrinsic electronic structure of graphene but induces considerable n-doping in the overlying monolayer graphene film. This is a source of electronic scattering and therefore leads to major problems for future electronic device applications based on SiC-supported graphene structures. This buffer layer can be transformed into monolayer graphene by decoupling it from the SiC substrate using an intercalation process.
It is also possible to grow graphene off-axis on 6H-SiC(0001) wafers. Ouerghi obtained a perfectly uniform graphene monolayer on the terraces by limiting the silicon sublimation rate with N2 and silicon fluxes in UHV at an annealing temperature of 1300 °C. Growth on the 3C-SiC(111) face is also possible. For this, annealing temperatures above 1200 °C are necessary. First, the SiC loses silicon atoms and the top layer rearranges into a SiCformula_1 structure. A loss of further silicon atoms leads to a new intermediate distorted stage of SiCformula_2, which almost matches the graphene (2 x 2) structure. With the loss of the residual silicon atoms, this evolves into graphene. The first four layers of cubic SiC(111) are arranged in the same order as SiC(0001), so the findings are applicable to both structures. Carbon-terminated face. The growth on the SiC(0001) face is much faster than on the SiC(0001) face. The number of layers is also higher, around 5 to 100 layers, and the films have a polycrystalline nature. In early reports, the regions of graphene growth have been described as ″islands″ since they appear on microscopy images as pockets of graphene on the substrate surface. Hite et al., however, found that these islands are positioned at a lower level than the surrounding surface and referred to them as graphene covered basins (GCBs). The suggestion is that crystallographic defects in the substrate act as nucleation sites for these GCBs. During the growth of the graphene layers, the GCBs coalesce with each other. Because of their different possible orientations, sizes and thickness, the resulting graphene film contains misoriented grains with varying thickness. This leads to large orientational disorder. When graphene is grown on the carbon-terminated face, every layer is rotated relative to the previous one, with angles between 0° and 30° relative to the substrate. Due to this, the symmetry between the atoms in the unit cell is not broken in multilayers and every layer has the electronic properties of an isolated monolayer of graphene. Evaluation of number of graphene layers. To optimize the growth conditions, it is important to know the number of graphene layers. This number can be determined by using the quantized oscillations of the electron reflectivity. Electrons have a wave character. If they are directed at the graphene surface, they can be reflected either from the graphene surface or from the graphene-SiC interface. The reflected electrons (waves) can interfere with each other. The electron reflectivity itself changes periodically as a function of the incident electron energy and the FLG thickness. For example, thinner FLG provides longer oscillation periods. The most suitable technique for these measurements is low-energy electron microscopy (LEEM). A fast method to evaluate the number of layers is to use an optical microscope in combination with contrast-enhancing techniques. Single-layer graphene domains and substrate terraces can be resolved on the surface of SiC. The method is particularly suitable for quick evaluation of the surface. Applications. Epitaxial graphene on SiC is considered a potential material for high-end electronics. It is considered to surpass silicon in terms of key parameters like feature size, speed and power consumption and is therefore one of the most promising materials for future applications. Saturable absorber. Using a two-inch 6H-SiC wafer as substrate, the graphene grown by thermal decomposition can be used to modulate a large energy pulse laser.
Because of its saturable absorption properties, the graphene can be used as a passive Q-switch. Metrology. The quantum Hall effect in epitaxial graphene can serve as a practical standard for electrical resistance. The potential of epitaxial graphene on SiC for quantum metrology has been shown since 2010, displaying quantum Hall resistance quantization accuracy of three parts per billion in monolayer epitaxial graphene. Over the years precisions of parts-per-trillion in the Hall resistance quantization and giant quantum Hall plateaus have been demonstrated. Developments in encapsulation and doping of epitaxial graphene have led to the commercialisation of epitaxial graphene quantum resistance standards. Other. The graphene on SiC can also be an ideal platform for structured graphene (transducers, membranes). Open problems. Limitations in terms of wafer sizes, wafer costs and availability of micromachining processes have to be taken into account when using SiC wafers. Another problem is directly coupled with the advantage of growing the graphene directly on a semiconducting or semi-insulating substrate which is commercially available: there is not yet a perfect method to transfer the graphene to other substrates. For this purpose, epitaxial growth on copper is a promising method. The solubility of carbon in copper is extremely low, and therefore mainly surface diffusion and nucleation of carbon atoms are involved. Because of this and the growth kinetics, the graphene thickness is limited to predominantly a monolayer. The big advantage is that the graphene can be grown on Cu foil and subsequently transferred to, for example, SiO2. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "(6 \\cdot \\sqrt{3} \\times 6 \\cdot \\sqrt{3}) \\mathrm{R}30^\\circ" }, { "math_id": 1, "text": "(\\sqrt{3} \\times \\sqrt{3}) \\mathrm{R}30^\\circ" }, { "math_id": 2, "text": "(\\frac{3}{2} \\times \\sqrt{3}) \\mathrm{R}30^\\circ" } ]
https://en.wikipedia.org/wiki?curid=57322715
5732433
Curved mirror
Mirror with a curved reflecting surface A curved mirror is a mirror with a curved reflecting surface. The surface may be either "convex" (bulging outward) or "concave" (recessed inward). Most curved mirrors have surfaces that are shaped like part of a sphere, but other shapes are sometimes used in optical devices. The most common non-spherical type are parabolic reflectors, found in optical devices such as reflecting telescopes that need to image distant objects, since spherical mirror systems, like spherical lenses, suffer from spherical aberration. Distorting mirrors are used for entertainment. They have convex and concave regions that produce deliberately distorted images. They also provide highly magnified or highly diminished (smaller) images when the object is placed at certain distances. Convex mirrors. A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source. Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focal point ("F") and the centre of curvature ("2F") are both imaginary points "inside" the mirror, that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror. A collimated (parallel) beam of light diverges (spreads out) after reflection from a convex mirror, since the normal to the surface differs at each spot on the mirror. Uses of convex mirrors. The passenger-side mirror on a car is typically a convex mirror. In some countries, these are labeled with the safety warning "Objects in mirror are closer than they appear", to warn the driver of the convex mirror's distorting effects on distance perception. Convex mirrors are preferred in vehicles because they give an upright (not inverted), though diminished (smaller), image and because they provide a wider field of view as they are curved outwards. These mirrors are often found in the hallways of various buildings (commonly known as "hallway safety mirrors"), including hospitals, hotels, schools, stores, and apartment buildings. They are usually mounted on a wall or ceiling where hallways intersect each other, or where they make sharp turns. They are useful for people to look at any obstruction they will face on the next hallway or after the next turn. They are also used on roads, driveways, and alleys to provide safety for road users where there is a lack of visibility, especially at curves and turns. Convex mirrors are used in some automated teller machines as a simple and handy security feature, allowing the users to see what is happening behind them. Similar devices are sold to be attached to ordinary computer monitors. Convex mirrors make everything seem smaller but cover a larger area of surveillance. Round convex mirrors called "Oeil de Sorcière" (French for "sorcerer's eye") were a popular luxury item from the 15th century onwards, shown in many depictions of interiors from that time. With 15th century technology, it was easier to make a regular curved mirror (from blown glass) than a perfectly flat one. They were also known as "bankers' eyes" due to the fact that their wide field of vision was useful for security. Famous examples in art include the "Arnolfini Portrait" by Jan van Eyck and the left wing of the "Werl Altarpiece" by Robert Campin. Convex mirror image. 
The image on a convex mirror is always "virtual" (rays haven't actually passed through the image; their extensions do, like in a regular mirror), "diminished" (smaller), and "upright" (not inverted). As the object gets closer to the mirror, the image gets larger, until it reaches approximately the size of the object as the object touches the mirror. As the object moves away, the image diminishes in size and gets gradually closer to the focus, until it is reduced to a point at the focus when the object is at an infinite distance. These features make convex mirrors very useful: since everything appears smaller in the mirror, they cover a wider field of view than a normal plane mirror, so they are useful for looking at cars behind a driver's car on a road, watching a wider area for surveillance, etc. Concave mirrors. A concave mirror, or converging mirror, has a reflecting surface that is recessed inward (away from the incident light). Concave mirrors reflect light inward to one focal point. They are used to focus light. Unlike convex mirrors, concave mirrors show different image types depending on the distance between the object and the mirror. The mirrors are called "converging mirrors" because they tend to collect light that falls on them, refocusing parallel incoming rays toward a focus. This is because the light is reflected at different angles at different spots on the mirror, as the normal to the mirror surface differs at each spot. Uses of concave mirrors. Concave mirrors are used in reflecting telescopes. They are also used to provide a magnified image of the face for applying make-up or shaving. In illumination applications, concave mirrors are used to gather light from a small source and direct it outward in a beam as in torches, headlamps and spotlights, or to collect light from a large area and focus it into a small spot, as in concentrated solar power. Concave mirrors are used to form optical cavities, which are important in laser construction. Some dental mirrors use a concave surface to provide a magnified image. The mirror landing aid system of modern aircraft carriers also uses a concave mirror. Mirror shape. Most curved mirrors have a spherical profile. These are the simplest to make, and this is the best shape for general-purpose use. Spherical mirrors, however, suffer from spherical aberration—parallel rays reflected from such mirrors do not focus to a single point. For parallel rays, such as those coming from a very distant object, a parabolic reflector can do a better job. Such a mirror can focus incoming parallel rays to a much smaller spot than a spherical mirror can. A toroidal reflector is a form of parabolic reflector which has a different focal distance depending on the angle of the mirror. Analysis. Mirror equation, magnification, and focal length. The Gaussian mirror equation, also known as the mirror and lens equation, relates the object distance formula_0 and image distance formula_1 to the focal length formula_2: formula_3. The sign convention used here is that the focal length is positive for concave mirrors and negative for convex ones, and formula_0 and formula_1 are positive when the object and image are in front of the mirror, respectively. (They are positive when the object or image is real.) For convex mirrors, if one moves the formula_4 term to the right side of the equation to solve for formula_5, then the result is always a negative number, meaning that the image distance is negative—the image is virtual, located "behind" the mirror.
This is consistent with the behavior described above. For concave mirrors, whether the image is virtual or real depends on how large the object distance is compared to the focal length. If the formula_6 term is larger than the formula_4 term, then formula_5 is positive and the image is real. Otherwise, the term is negative and the image is virtual. Again, this validates the behavior described above. The magnification of a mirror is defined as the height of the image divided by the height of the object: formula_7. By convention, if the resulting magnification is positive, the image is upright. If the magnification is negative, the image is inverted (upside down). Ray tracing. The image location and size can also be found by graphical ray tracing, as illustrated in the figures above. A ray drawn from the top of the object to the mirror surface vertex (where the optical axis meets the mirror) will form an angle with the optical axis. The reflected ray has the same angle to the axis, but on the opposite side (See Specular reflection). A second ray can be drawn from the top of the object, parallel to the optical axis. This ray is reflected by the mirror and passes through its focal point. The point at which these two rays meet is the image point corresponding to the top of the object. Its distance from the optical axis defines the height of the image, and its location along the axis is the image location. The mirror equation and magnification equation can be derived geometrically by considering these two rays. A ray that goes from the top of the object through the focal point can be considered instead. Such a ray reflects parallel to the optical axis and also passes through the image point corresponding to the top of the object. Ray transfer matrix of spherical mirrors. The mathematical treatment is done under the paraxial approximation, meaning that under the first approximation a spherical mirror is a parabolic reflector. The ray matrix of a concave spherical mirror is shown here. The formula_8 element of the matrix is formula_9, where formula_2 is the focal point of the optical device. Boxes 1 and 3 feature summing the angles of a triangle and comparing to π radians (or 180°). Box 2 shows the Maclaurin series of formula_10 up to order 1. The derivations of the ray matrices of a convex spherical mirror and a thin lens are very similar. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
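As an illustrative sketch (not part of the original article), the mirror equation and magnification defined above can be evaluated numerically. The following Python code solves formula_3 for the image distance and applies the stated sign convention; the helper name mirror_image is chosen here for clarity and is not a standard API.
def mirror_image(d_o, f):
    # Return (d_i, m) for object distance d_o and focal length f.
    # Sign convention: f > 0 for a concave mirror, f < 0 for a convex one;
    # d_i > 0 is a real image in front of the mirror, d_i < 0 a virtual image.
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)   # rearranged from 1/d_o + 1/d_i = 1/f
    m = -d_i / d_o                      # magnification: negative means inverted
    return d_i, m
# Concave mirror (f = 10) with the object at 30: real, inverted, diminished image.
print(mirror_image(30.0, 10.0))    # (15.0, -0.5)
# Convex mirror (f = -10) with the object at 30: virtual, upright, diminished image.
print(mirror_image(30.0, -10.0))   # (-7.5, 0.25)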
[ { "math_id": 0, "text": "d_\\mathrm{o}" }, { "math_id": 1, "text": "d_\\mathrm{i}" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "\\frac{1}{d_\\mathrm{o}}+ \\frac{1}{d_\\mathrm{i}} = \\frac{1}{f}" }, { "math_id": 4, "text": "1/d_\\mathrm{o}" }, { "math_id": 5, "text": "1/d_\\mathrm{i}" }, { "math_id": 6, "text": "1/f" }, { "math_id": 7, "text": "m \\equiv \\frac{h_\\mathrm{i}}{h_\\mathrm{o}} = - \\frac{d_\\mathrm{i}}{d_\\mathrm{o}}" }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "-\\frac{1}{f}" }, { "math_id": 10, "text": "\\arccos\\left(-\\frac{r}{R}\\right)" } ]
https://en.wikipedia.org/wiki?curid=5732433
5732549
Gingerbreadman map
Chaotic map In dynamical systems theory, the Gingerbreadman map is a chaotic two-dimensional map. It is given by the piecewise linear transformation: formula_0 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
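As an illustrative sketch (not drawn from the cited references), the piecewise linear transformation above can be iterated with a few lines of Python:
def gingerbreadman(x, y, steps):
    # Iterate x_{n+1} = 1 - y_n + |x_n|, y_{n+1} = x_n.
    points = [(x, y)]
    for _ in range(steps):
        x, y = 1 - y + abs(x), x   # simultaneous update uses the old x for y
        points.append((x, y))
    return points
# Print the first few iterates starting from (x_0, y_0) = (-0.1, 0.0):
for p in gingerbreadman(-0.1, 0.0, 5):
    print(p)
Plotting many such iterates for suitable starting points traces out the gingerbread-man-shaped region that gives the map its name.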
[ { "math_id": 0, "text": "\n\\begin{cases}\nx_{n+1} = 1 - y_n + |x_n|\\\\\ny_{n+1} = x_n\n\\end{cases}\n" } ]
https://en.wikipedia.org/wiki?curid=5732549
57326
De Moivre's formula
Theorem: (cos x + i sin x)^n = cos nx + i sin nx In mathematics, de Moivre's formula (also known as de Moivre's theorem and de Moivre's identity) states that for any real number x and integer n it holds that formula_0 where i is the imaginary unit ("i"2 = −1). The formula is named after Abraham de Moivre, although he never stated it in his works. The expression cos "x" + "i" sin "x" is sometimes abbreviated to cis "x". The formula is important because it connects complex numbers and trigonometry. By expanding the left hand side and then comparing the real and imaginary parts under the assumption that x is real, it is possible to derive useful expressions for cos "nx" and sin "nx" in terms of cos "x" and sin "x". As written, the formula is not valid for non-integer powers n. However, there are generalizations of this formula valid for other exponents. These can be used to give explicit expressions for the nth roots of unity, that is, complex numbers z such that "zn" = 1. Using the standard extensions of the sine and cosine functions to complex numbers, the formula is valid even when x is an arbitrary complex number. Example. For formula_1 and formula_2, de Moivre's formula asserts that formula_3 or equivalently that formula_4 In this example, it is easy to check the validity of the equation by multiplying out the left side. Relation to Euler's formula. De Moivre's formula is a precursor to Euler's formula formula_5 with x expressed in radians rather than degrees, which establishes the fundamental relationship between the trigonometric functions and the complex exponential function. One can derive de Moivre's formula using Euler's formula and the exponential law for integer powers formula_6 since Euler's formula implies that the left side is equal to formula_7 while the right side is equal to formula_8 Proof by induction. The truth of de Moivre's theorem can be established by using mathematical induction for natural numbers, and extended to all integers from there. For an integer n, call the following statement S("n"): formula_9 For "n" &gt; 0, we proceed by mathematical induction. S(1) is clearly true. For our hypothesis, we assume S("k") is true for some natural k. That is, we assume formula_10 Now, considering S("k" + 1): formula_11 See angle sum and difference identities. We deduce that S("k") implies S("k" + 1). By the principle of mathematical induction it follows that the result is true for all natural numbers. Now, S(0) is clearly true since cos(0"x") + "i" sin(0"x") = 1 + 0"i" = 1. Finally, for the negative integer cases, we consider an exponent of −"n" for natural n. formula_12 The equation (*) is a result of the identity formula_13 for "z" = cos "nx" + "i" sin "nx". Hence, S("n") holds for all integers n. Formulae for cosine and sine individually. For an equality of complex numbers, one necessarily has equality both of the real parts and of the imaginary parts of both members of the equation. If x, and therefore also cos "x" and sin "x", are real numbers, then the identity of these parts can be written using binomial coefficients. This formula was given by 16th century French mathematician François Viète: formula_14 In each of these two equations, the final trigonometric function equals one or minus one or zero, thus removing half the entries in each of the sums. 
These equations are in fact valid even for complex values of x, because both sides are entire (that is, holomorphic on the whole complex plane) functions of x, and two such functions that coincide on the real axis necessarily coincide everywhere. Here are the concrete instances of these equations for "n" = 2 and "n" = 3: formula_15 The right-hand side of the formula for cos "nx" is in fact the value "T""n"(cos "x") of the Chebyshev polynomial "T""n" at cos "x". Failure for non-integer powers, and generalization. De Moivre's formula does not hold for non-integer powers. The derivation of de Moivre's formula above involves a complex number raised to the integer power n. If a complex number is raised to a non-integer power, the result is multiple-valued (see failure of power and logarithm identities). Roots of complex numbers. A modest extension of the version of de Moivre's formula given in this article can be used to find the n-th roots of a complex number for a non-zero integer n. (This is equivalent to raising to a power of 1 / "n"). If z is a complex number, written in polar form as formula_16 then the n-th roots of z are given by formula_17 where k varies over the integer values from 0 to "n" − 1. This formula is also sometimes known as de Moivre's formula. Complex numbers raised to an arbitrary power. Generally, if formula_18 (in polar form) and w are arbitrary complex numbers, then the set of possible values is formula_19 (Note that if w is a rational number that equals "p" / "q" in lowest terms then this set will have exactly q distinct values rather than infinitely many. In particular, if w is an integer then the set will have exactly one value, as previously discussed.) In contrast, de Moivre's formula gives formula_20 which is just the single value from this set corresponding to "k" = 0. Analogues in other settings. Hyperbolic trigonometry. Since cosh "x" + sinh "x" = "ex", an analog to de Moivre's formula also applies to hyperbolic trigonometry. For all integers n, formula_21 If n is a rational number (but not necessarily an integer), then cosh "nx" + sinh "nx" will be one of the values of (cosh "x" + sinh "x")"n". Extension to complex numbers. For any integer n, the formula holds for any complex number formula_22 formula_23 where formula_24 Quaternions. To find the roots of a quaternion there is an analogous form of de Moivre's formula. A quaternion in the form formula_25 can be represented in the form formula_26 In this representation, formula_27 and the trigonometric functions are defined as formula_28 In the case that "a"2 + "b"2 + "c"2 ≠ 0, formula_29 that is, the unit vector. This leads to the variation of De Moivre's formula: formula_30 Example. To find the cube roots of formula_31 write the quaternion in the form formula_32 Then the cube roots are given by: formula_33 2 × 2 matrices. With matrices, formula_34 when n is an integer. This is a direct consequence of the isomorphism between the matrices of type formula_35 and the complex plane. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
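The root formula above is straightforward to check numerically. The following Python sketch (an illustrative example; the helper name nth_roots is chosen here) computes the n-th roots of a complex number from its polar form and verifies that each root raised to the n-th power recovers the original number:
import cmath
def nth_roots(z, n):
    # Return the n distinct n-th roots of a nonzero complex number z.
    r, x = abs(z), cmath.phase(z)   # polar form: z = r (cos x + i sin x)
    return [r ** (1.0 / n) * cmath.exp(1j * (x + 2 * cmath.pi * k) / n)
            for k in range(n)]
for w in nth_roots(1 + 1j, 3):
    print(w, w ** 3)   # each cube is (up to rounding) 1 + 1j again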
[ { "math_id": 0, "text": "\\big(\\cos x + i \\sin x\\big)^n = \\cos nx + i \\sin nx," }, { "math_id": 1, "text": " x = 30^\\circ" }, { "math_id": 2, "text": " n = 2" }, { "math_id": 3, "text": "\\left(\\cos(30^\\circ) + i \\sin(30^\\circ)\\right)^2 = \\cos(2 \\cdot 30^\\circ) + i \\sin (2 \\cdot 30^\\circ)," }, { "math_id": 4, "text": "\\left(\\frac{\\sqrt{3}}{2} + \\frac{i}{2}\\right)^2 = \\frac{1}{2} + \\frac{i\\sqrt{3}}{2}." }, { "math_id": 5, "text": "e^{ix} = \\cos x + i\\sin x," }, { "math_id": 6, "text": "\\left( e^{ix} \\right)^n = e^{inx}, " }, { "math_id": 7, "text": "\\left(\\cos x + i\\sin x\\right)^n" }, { "math_id": 8, "text": "\\cos nx + i\\sin nx." }, { "math_id": 9, "text": "(\\cos x + i \\sin x)^n = \\cos nx + i \\sin nx." }, { "math_id": 10, "text": "\\left(\\cos x + i \\sin x\\right)^k = \\cos kx + i \\sin kx. " }, { "math_id": 11, "text": "\\begin{alignat}{2}\n \\left(\\cos x+i\\sin x\\right)^{k+1} & = \\left(\\cos x+i\\sin x\\right)^{k} \\left(\\cos x+i\\sin x\\right)\\\\\n & = \\left(\\cos kx + i\\sin kx \\right) \\left(\\cos x+i\\sin x\\right) &&\\qquad \\text{by the induction hypothesis}\\\\\n & = \\cos kx \\cos x - \\sin kx \\sin x + i \\left(\\cos kx \\sin x + \\sin kx \\cos x\\right)\\\\\n & = \\cos ((k+1)x) + i\\sin ((k+1)x) &&\\qquad \\text{by the trigonometric identities}\n\\end{alignat}" }, { "math_id": 12, "text": "\\begin{align}\n \\left(\\cos x + i\\sin x\\right)^{-n} & = \\big( \\left(\\cos x + i\\sin x\\right)^n \\big)^{-1} \\\\\n & = \\left(\\cos nx + i\\sin nx\\right)^{-1} \\\\\n & = \\cos nx - i\\sin nx \\qquad\\qquad(*)\\\\\n & = \\cos(-nx) + i\\sin (-nx).\\\\\n\\end{align}" }, { "math_id": 13, "text": "z^{-1} = \\frac{\\bar z}{|z|^2}," }, { "math_id": 14, "text": "\\begin{align}\n\\sin nx &= \\sum_{k=0}^n \\binom{n}{k} (\\cos x)^k\\,(\\sin x)^{n-k}\\,\\sin\\frac{(n-k)\\pi}{2} \\\\\n\\cos nx &= \\sum_{k=0}^n \\binom{n}{k} (\\cos x)^k\\,(\\sin x)^{n-k}\\,\\cos\\frac{(n-k)\\pi}{2}.\n\\end{align}" }, { "math_id": 15, "text": "\\begin{alignat}{2}\n \\cos 2x &= \\left(\\cos x\\right)^2 +\\left(\\left(\\cos x\\right)^2-1\\right) &{}={}& 2\\left(\\cos x\\right)^2-1 \\\\\n \\sin 2x &= 2\\left(\\sin x\\right)\\left(\\cos x\\right) & & \\\\\n \\cos 3x &= \\left(\\cos x\\right)^3 +3\\cos x\\left(\\left(\\cos x\\right)^2-1\\right) &{}={}& 4\\left(\\cos x\\right)^3-3\\cos x \\\\\n \\sin 3x &= 3\\left(\\cos x\\right)^2\\left(\\sin x\\right)-\\left(\\sin x\\right)^3 &{}={}& 3\\sin x-4\\left(\\sin x\\right)^3.\n\\end{alignat}" }, { "math_id": 16, "text": "z=r\\left(\\cos x+i\\sin x\\right)," }, { "math_id": 17, "text": "r^\\frac1n \\left( \\cos \\frac{x+2\\pi k}{n} + i\\sin \\frac{x+2\\pi k}{n} \\right)" }, { "math_id": 18, "text": "z=r\\left(\\cos x+i\\sin x\\right)" }, { "math_id": 19, "text": "z^w = r^w \\left(\\cos x + i\\sin x\\right)^w = \\lbrace r^w \\cos(xw + 2\\pi kw) + i r^w \\sin(xw + 2\\pi kw) | k \\in \\mathbb{Z}\\rbrace\\,." }, { "math_id": 20, "text": "r^w (\\cos xw + i\\sin xw)\\,," }, { "math_id": 21, "text": "(\\cosh x + \\sinh x)^n = \\cosh nx + \\sinh nx." }, { "math_id": 22, "text": "z=x+iy" }, { "math_id": 23, "text": "( \\cos z + i \\sin z)^n = \\cos {nz} + i \\sin {nz}." }, { "math_id": 24, "text": "\\begin{align} \\cos z = \\cos(x + iy) &= \\cos x \\cosh y - i \\sin x \\sinh y\\, , \\\\\n\\sin z = \\sin(x + iy) &= \\sin x \\cosh y + i \\cos x \\sinh y\\, . 
\\end{align}" }, { "math_id": 25, "text": "d + a\\mathbf{\\hat i} + b\\mathbf{\\hat j} + c\\mathbf{\\hat k}" }, { "math_id": 26, "text": "q = k(\\cos \\theta + \\varepsilon \\sin \\theta) \\qquad \\mbox{for } 0 \\leq \\theta < 2 \\pi." }, { "math_id": 27, "text": "k = \\sqrt{d^2 + a^2 + b^2 + c^2}," }, { "math_id": 28, "text": "\\cos \\theta = \\frac{d}{k} \\quad \\mbox{and} \\quad \\sin \\theta = \\pm \\frac{\\sqrt{a^2 + b^2 + c^2}}{k}." }, { "math_id": 29, "text": "\\varepsilon = \\pm \\frac{a\\mathbf{\\hat i} + b\\mathbf{\\hat j} + c\\mathbf{\\hat k}}{\\sqrt{a^2 + b^2 + c^2}}," }, { "math_id": 30, "text": "q^n = k^n(\\cos n \\theta + \\varepsilon \\sin n \\theta)." }, { "math_id": 31, "text": "Q = 1 + \\mathbf{\\hat i} + \\mathbf{\\hat j}+ \\mathbf{\\hat k}," }, { "math_id": 32, "text": "Q = 2\\left(\\cos \\frac{\\pi}{3} + \\varepsilon \\sin \\frac{\\pi}{3}\\right) \\qquad \\mbox{where } \\varepsilon = \\frac{\\mathbf{\\hat i} + \\mathbf{\\hat j}+ \\mathbf{\\hat k}}{\\sqrt 3}." }, { "math_id": 33, "text": "\\sqrt[3]{Q} = \\sqrt[3]{2}(\\cos \\theta + \\varepsilon \\sin \\theta) \\qquad \\mbox{for } \\theta = \\frac{\\pi}{9}, \\frac{7\\pi}{9}, \\frac{13\\pi}{9}." }, { "math_id": 34, "text": "\\begin{pmatrix}\\cos\\phi & -\\sin\\phi \\\\ \\sin\\phi & \\cos\\phi \\end{pmatrix}^n=\\begin{pmatrix}\\cos n\\phi & -\\sin n\\phi \\\\ \\sin n\\phi & \\cos n\\phi \\end{pmatrix}" }, { "math_id": 35, "text": "\\begin{pmatrix}a & -b \\\\ b & a \\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=57326
57327
Abraham de Moivre
French mathematician (1667–1754) Abraham de Moivre FRS (26 May 1667 – 27 November 1754) was a French mathematician known for de Moivre's formula, a formula that links complex numbers and trigonometry, and for his work on the normal distribution and probability theory. He moved to England at a young age due to the religious persecution of Huguenots in France, which reached a climax in 1685 with the Edict of Fontainebleau. He was a friend of Isaac Newton, Edmond Halley, and James Stirling. Among his fellow Huguenot exiles in England, he was a colleague of the editor and translator Pierre des Maizeaux. De Moivre wrote a book on probability theory, "The Doctrine of Chances", said to have been prized by gamblers. De Moivre first discovered Binet's formula, the closed-form expression for Fibonacci numbers linking the "n"th power of the golden ratio "φ" to the "n"th Fibonacci number. He also was the first to postulate the central limit theorem, a cornerstone of probability theory. Life. Early years. Abraham de Moivre was born in Vitry-le-François in Champagne on 26 May 1667. His father, Daniel de Moivre, was a surgeon who believed in the value of education. Though Abraham de Moivre's parents were Protestant, he first attended Christian Brothers' Catholic school in Vitry, which was unusually tolerant given religious tensions in France at the time. When he was eleven, his parents sent him to the Protestant Academy at Sedan, where he spent four years studying Greek under Jacques du Rondel. The Protestant Academy of Sedan had been founded in 1579 at the initiative of Françoise de Bourbon, the widow of Henri-Robert de la Marck. In 1682 the Protestant Academy at Sedan was suppressed, and de Moivre enrolled to study logic at Saumur for two years. Although mathematics was not part of his course work, de Moivre read several works on mathematics on his own, including Éléments des mathématiques by the French Oratorian priest and mathematician Jean Prestet and a short treatise on games of chance, "De Ratiociniis in Ludo Aleae", by Christiaan Huygens, the Dutch physicist, mathematician, astronomer and inventor. In 1684, de Moivre moved to Paris to study physics, and for the first time had formal mathematics training with private lessons from Jacques Ozanam. Religious persecution in France became severe when King Louis XIV issued the Edict of Fontainebleau in 1685, which revoked the Edict of Nantes, which had given substantial rights to French Protestants. It forbade Protestant worship and required that all children be baptised by Catholic priests. De Moivre was sent to Prieuré Saint-Martin-des-Champs, a school that the authorities sent Protestant children to for indoctrination into Catholicism. It is unclear when de Moivre left the Prieure de Saint-Martin and moved to England, since the records of the Prieure de Saint-Martin indicate that he left the school in 1688, but de Moivre and his brother presented themselves as Huguenots admitted to the Savoy Church in London on 28 August 1687. Middle years. By the time he arrived in London, de Moivre was a competent mathematician with a good knowledge of many of the standard texts. To make a living, de Moivre became a private tutor of mathematics, visiting his pupils or teaching in the coffee houses of London. De Moivre continued his studies of mathematics after visiting the Earl of Devonshire and seeing Newton's recent book, "Principia Mathematica". 
Looking through the book, he realised that it was far deeper than the books that he had studied previously, and he became determined to read and understand it. However, as he was required to take extended walks around London to travel between his students, de Moivre had little time for study, so he tore pages from the book and carried them around in his pocket to read between lessons. According to a possibly apocryphal story, Newton, in the later years of his life, used to refer people posing mathematical questions to him to de Moivre, saying, "He knows all these things better than I do." By 1692, de Moivre became friends with Edmond Halley and soon after with Isaac Newton himself. In 1695, Halley communicated de Moivre's first mathematics paper, which arose from his study of fluxions in the "Principia Mathematica", to the Royal Society. This paper was published in the "Philosophical Transactions" that same year. Shortly after publishing this paper, de Moivre also generalised Newton's noteworthy binomial theorem into the multinomial theorem. The Royal Society became apprised of this method in 1697, and it elected de Moivre a Fellow on 30 November 1697. After de Moivre had been accepted, Halley encouraged him to turn his attention to astronomy. In 1705, de Moivre discovered, intuitively, that "the centripetal force of any planet is directly related to its distance from the centre of the forces and reciprocally related to the product of the diameter of the evolute and the cube of the perpendicular on the tangent." In other words, if a planet, M, follows an elliptical orbit around a focus F and has a point P where PM is tangent to the curve and FPM is a right angle so that FP is the perpendicular to the tangent, then the centripetal force at point P is proportional to FM/(R*(FP)3) where R is the radius of the curvature at M. The mathematician Johann Bernoulli proved this formula in 1710. Despite these successes, de Moivre was unable to obtain an appointment to a chair of mathematics at any university, which would have released him from his dependence on time-consuming tutoring that burdened him more than it did most other mathematicians of the time. At least a part of the reason was a bias against his French origins. In November 1697 he was elected a Fellow of the Royal Society and in 1712 was appointed to a commission set up by the society, alongside MM. Arbuthnot, Hill, Halley, Jones, Machin, Burnet, Robarts, Bonet, Aston, and Taylor to review the claims of Newton and Leibniz as to who discovered calculus. The full details of the controversy can be found in the Leibniz and Newton calculus controversy article. Throughout his life de Moivre remained poor. It is reported that he was a regular customer of old Slaughter's Coffee House, St. Martin's Lane at Cranbourn Street, where he earned a little money from playing chess. Later years. De Moivre continued studying the fields of probability and mathematics until his death in 1754 and several additional papers were published after his death. As he grew older, he became increasingly lethargic and needed longer sleeping hours. It is a common claim that De Moivre noted he was sleeping an extra 15 minutes each night and correctly calculated the date of his death as the day when the sleep time reached 24 hours, 27 November 1754. On that day he did in fact die, in London and his body was buried at St Martin-in-the-Fields, although his body was later moved. 
The claim of him predicting his own death, however, has been disputed as not having been documented anywhere at the time of its occurrence. Probability. De Moivre pioneered the development of analytic geometry and the theory of probability by expanding upon the work of his predecessors, particularly Christiaan Huygens and several members of the Bernoulli family. He also produced the second textbook on probability theory, "The Doctrine of Chances: a method of calculating the probabilities of events in play". (The first book about games of chance, "Liber de ludo aleae" ("On Casting the Die"), was written by Girolamo Cardano in the 1560s, but it was not published until 1663.) This book came out in four editions, 1711 in Latin, and in English in 1718, 1738, and 1756. In the later editions of his book, de Moivre included his unpublished result of 1733, which is the first statement of an approximation to the binomial distribution in terms of what we now call the normal or Gaussian function. This was the first method of finding the probability of the occurrence of an error of a given size when that error is expressed in terms of the variability of the distribution as a unit, and the first identification of the calculation of probable error. In addition, he applied these theories to gambling problems and actuarial tables. An expression commonly found in probability is "n"!, but before the days of calculators, calculating "n"! for a large "n" was time-consuming. In 1733 de Moivre proposed the formula for estimating a factorial as "n"! = "c" "n"^("n" + 1/2) "e"^(−"n"). He obtained an approximate expression for the constant "c", but it was James Stirling who found that "c" was √(2π). De Moivre also published an article called "Annuities upon Lives" in which he revealed the normal distribution of the mortality rate over a person's age. From this he produced a simple formula for approximating the revenue produced by annual payments based on a person's age. This is similar to the types of formulas used by insurance companies today. Priority regarding the Poisson distribution. Some results on the Poisson distribution were first introduced by de Moivre in "De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus" in Philosophical Transactions of the Royal Society, p. 219. As a result, some authors have argued that the Poisson distribution should bear the name of de Moivre. De Moivre's formula. In 1707, de Moivre derived an equation from which one can deduce: formula_0 which he was able to prove for all positive integers "n". In 1722, he presented equations from which one can deduce the better known form of de Moivre's Formula: formula_1 In 1749 Euler proved this formula for any real n using Euler's formula, which makes the proof quite straightforward. This formula is important because it relates complex numbers and trigonometry. Additionally, this formula allows the derivation of useful expressions for cos("nx") and sin("nx") in terms of cos("x") and sin("x"). Stirling's approximation. De Moivre had been studying probability, and his investigations required him to calculate binomial coefficients, which in turn required him to calculate factorials. In 1730 de Moivre published his book "Miscellanea Analytica de Seriebus et Quadraturis" [Analytic Miscellany of Series and Integrals], which included tables of log ("n"!). For large values of "n", de Moivre approximated the coefficients of the terms in a binomial expansion. 
Specifically, given a positive integer "n", where "n" is even and large, then the coefficient of the middle term of (1 + 1)"n" is approximated by the equation: formula_2 On June 19, 1729, James Stirling sent to de Moivre a letter, which illustrated how he calculated the coefficient of the middle term of a binomial expansion (a + b)n for large values of n. In 1730, Stirling published his book "Methodus Differentialis" [The Differential Method], in which he included his series for log("n"!): formula_3 so that for large formula_4, formula_5. On November 12, 1733, de Moivre privately published and distributed a pamphlet – "Approximatio ad Summam Terminorum Binomii (a + b)n in Seriem expansi" [Approximation of the Sum of the Terms of the Binomial (a + b)n expanded into a Series] – in which he acknowledged Stirling's letter and proposed an alternative expression for the central term of a binomial expansion. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
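The approximation n! ≈ c n^(n + 1/2) e^(−n) with c = √(2π) discussed above is easy to test numerically. The following Python sketch (an illustrative comparison, not part of the original article) compares the exact factorial with this approximation for a few values of n; the ratio tends to 1 as n grows:
import math
def stirling(n):
    # De Moivre-Stirling approximation: sqrt(2*pi) * n**(n + 1/2) * exp(-n)
    return math.sqrt(2 * math.pi) * n ** (n + 0.5) * math.exp(-n)
for n in (5, 10, 20):
    exact = math.factorial(n)
    approx = stirling(n)
    print(n, exact, round(approx, 1), round(approx / exact, 4))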
[ { "math_id": 0, "text": " \\cos x = \\tfrac{1}{2} (\\cos(nx) + i\\sin(nx))^{1/n} + \\tfrac{1}{2}(\\cos(nx) - i\\sin(nx))^{1/n} " }, { "math_id": 1, "text": " (\\cos x + i\\sin x)^n = \\cos(nx) + i\\sin(nx). \\, " }, { "math_id": 2, "text": "{n \\choose n/2} = \\frac {n!}{((\\frac {n}{2})!)^2} \\approx {2^n} \\frac{{2}\\frac{21}{125} {(n-1)}^{n-\\frac{1}{2}}}{{n}^n}" }, { "math_id": 3, "text": "\\log_{10} (n + \\frac {1}{2})! \\approx \\log_{10} \\sqrt{2\\pi} + n \\log_{10} n - \\frac {n} {\\ln 10}," }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "n! \\approx \\sqrt{2\\pi} \\left(\\frac {n}{e}\\right)^n" } ]
https://en.wikipedia.org/wiki?curid=57327
5732881
Stewart's theorem
Geometric relation between a triangle's side lengths and cevian length In geometry, Stewart's theorem yields a relation between the lengths of the sides and the length of a cevian in a triangle. Its name is in honour of the Scottish mathematician Matthew Stewart, who published the theorem in 1746. Statement. Let a, b, c be the lengths of the sides of a triangle. Let d be the length of a cevian to the side of length a. If the cevian divides the side of length a into two segments of length m and n, with m adjacent to c and n adjacent to b, then Stewart's theorem states that formula_0 A common mnemonic used by students to memorize this equation (after rearranging the terms) is: formula_1 The theorem may be written more symmetrically using signed lengths of segments. That is, take the length AB to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line. In this formulation, the theorem states that if A, B, C are collinear points, and P is any point, then formula_2 In the special case that the cevian is the median (that is, it divides the opposite side into two segments of equal length), the result is known as Apollonius' theorem. Proof. The theorem can be proved as an application of the law of cosines. Let θ be the angle between m and d and θ' the angle between n and d. Then θ' is the supplement of θ, and so cos "θ' "= −cos "θ". Applying the law of cosines in the two small triangles using angles θ and θ' produces formula_3 Multiplying the first equation by n and the third equation by m and adding them eliminates cos "θ". One obtains formula_4 which is the required equation. Alternatively, the theorem can be proved by drawing a perpendicular from the vertex of the triangle to the base and using the Pythagorean theorem to write the distances b, c, d in terms of the altitude. The left and right hand sides of the equation then reduce algebraically to the same expression. History. Stewart published the result in 1746, when he was a candidate to replace Colin Maclaurin as Professor of Mathematics at the University of Edinburgh. Some authors state that the result was probably known to Archimedes around 300 B.C.E. They go on to say (mistakenly) that the first known proof was provided by R. Simson in 1751. Others state that the result is used by Simson in 1748 and by Simpson in 1752, and that its first appearance in Europe was given by Lazare Carnot in 1803.
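As a quick numerical check of the relation above, the following Python sketch (an illustrative example, not taken from the cited sources) computes the cevian length d from the other quantities and verifies the identity for the median to the hypotenuse of a 3–4–5 right triangle, which has length 2.5:
import math
def cevian_length(a, b, c, m, n):
    # Length d of the cevian dividing side a into m (adjacent to c) and n
    # (adjacent to b), rearranged from b^2 m + c^2 n = a (d^2 + m n).
    assert abs(m + n - a) < 1e-12, "m and n must partition side a"
    return math.sqrt((b * b * m + c * c * n) / a - m * n)
d = cevian_length(5.0, 3.0, 4.0, 2.5, 2.5)
print(d)   # 2.5
print(3**2 * 2.5 + 4**2 * 2.5, 5 * (d**2 + 2.5 * 2.5))   # both sides give 62.5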
[ { "math_id": 0, "text": "b^2m + c^2n = a(d^2 + mn)." }, { "math_id": 1, "text": "\\underset{\\text{A }man\\text{ and his }dad}{man\\ +\\ dad} = \\!\\!\\!\\!\\!\\! \\underset{\\text{put a }bomb\\text{ in the }sink.}{bmb\\ +\\ cnc}" }, { "math_id": 2, "text": "\\left(\\overline{PA}^2\\cdot \\overline{BC}\\right) + \\left(\\overline{PB}^2\\cdot \\overline{CA}\\right) + \\left(\\overline{PC}^2\\cdot \\overline{AB}\\right) + \\left(\\overline{AB}\\cdot \\overline{BC}\\cdot \\overline{CA}\\right) =0." }, { "math_id": 3, "text": "\\begin{align}\nc^2 &= m^2 + d^2 - 2dm\\cos\\theta, \\\\\nb^2 &= n^2 + d^2 - 2dn\\cos\\theta' \\\\\n&= n^2 + d^2 + 2dn\\cos\\theta.\n\\end{align}" }, { "math_id": 4, "text": "\\begin{align}\nb^2m + c^2n &= nm^2 + n^2m + (m+n)d^2 \\\\\n&= (m+n)(mn + d^2) \\\\\n&= a(mn + d^2), \\\\\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=5732881
57330071
Tigzirt District
The Tigzirt district is an Algerian administrative district in the Tizi-Ouzou province and the region of Kabylie. Its chief town is the eponymous Tigzirt. Communes. The district is composed of three communes. The total population of the district is 35,743 inhabitants for an area of formula_0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "166.38 km^2" } ]
https://en.wikipedia.org/wiki?curid=57330071
5733096
List of British banknotes and coins
List of British banknotes and coins, with commonly used terms. Coins. Pre-decimal. Prior to decimalisation in 1971, there were 12 pence (written as 12d) in a shilling (written as 1s or 1/-) and 20 shillings in a pound, written as £1 (occasionally "L" was used instead of the pound sign, £). There were therefore 240 pence in a pound. For example, 2 pounds 14 shillings and 5 pence could have been written as £2 14s 5d or £2/14/5. The origins of £/formula_0, s, and d were the Latin terms Libra, meaning a pound weight (with the £ sign developing as an elaborate L), solidus (pl. solidi), 20 of which made up one Libra, and denarius (pl. denarii), 240 of which made up one Libra, with 12 being equal to one solidus. These terms and divisions of currency were in use from the 7th century. The value of some coins fluctuated, particularly in the reigns of James I and Charles I. The value of a guinea fluctuated between 20 and 30 shillings before being fixed at 21 shillings in December 1717. These are denominations of British, or earlier English, coins – Scottish coins had different values. "Notes:" &lt;templatestyles src="Reflist/styles.css" /&gt; Decimal. Since decimalisation on "Decimal Day", 15 February 1971, the pound has been divided into 100 pence. Originally the term "new pence" was used; the word "new" was dropped from the coinage in 1983. The old shilling equated to five (new) pence, and, for example, £2 10s 6d became £2.52½. The symbol for the (old) penny, "d", was replaced by "p" (or initially sometimes "np", for "n"ew "p"ence). Thus 72 pence can be written as £0.72 or 72p; both were commonly read as "seventy-two pee". "Main articles: Banknotes of the pound sterling and Bank of England note issues." Banknotes. Note: The description of banknotes given here relates to notes issued by the Bank of England. Three banks in Scotland and four banks in Northern Ireland also issue notes, in some or all of the denominations: £1, £5, £10, £20, £50, £100. Bank of England notes are periodically redesigned and reissued, with the old notes being withdrawn from circulation and destroyed. Each redesign is allocated a "series". Currently the £50 note is "series F" issue whilst the £5, £10 and £20 notes are "series G" issue. Series G is the latest round of redesign, which commenced in September 2016 with the polymer £5 note, September 2017 with the polymer £10 note, and February 2020 with the polymer £20 note. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
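The pre-decimal arithmetic above (12 pence to the shilling, 20 shillings to the pound) can be illustrated with a short Python sketch (an illustrative example, not part of the original list), which converts an amount in pounds, shillings and pence into old pence and decimal pounds:
def lsd_to_decimal(pounds, shillings, pence):
    # 12d = 1s and 20s = £1, so £1 = 240d.
    total_pence = pence + 12 * shillings + 240 * pounds
    return total_pence, total_pence / 240.0
print(lsd_to_decimal(2, 14, 5))   # £2 14s 5d = 653 old pence, about £2.72
print(lsd_to_decimal(2, 10, 6))   # £2 10s 6d = 606 old pence, exactly £2.525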
[ { "math_id": 0, "text": "\\mathfrak{L}" } ]
https://en.wikipedia.org/wiki?curid=5733096
57338111
Golomb graph
Undirected unit-distance graph requiring four colors In graph theory, the Golomb graph is a polyhedral graph with 10 vertices and 18 edges. It is named after Solomon W. Golomb, who constructed it (with a non-planar embedding) as a unit distance graph that requires four colors in any graph coloring. Thus, like the simpler Moser spindle, it provides a lower bound for the Hadwiger–Nelson problem: coloring the points of the Euclidean plane so that each unit line segment has differently-colored endpoints requires at least four colors. Construction. The method of construction of the Golomb graph as a unit distance graph, by drawing an outer regular polygon connected to an inner twisted polygon or star polygon, has also been used for unit distance representations of the Petersen graph and of generalized Petersen graphs. As with the Moser spindle, the coordinates of the unit-distance embedding of the Golomb graph can be represented in the quadratic field formula_0. Fractional coloring. The fractional chromatic number of the Golomb graph is 10/3. The fact that this number is at least this large follows from the fact that the graph has 10 vertices, at most three of which can be in any independent set. The fact that the number is at most this large follows from the fact that one can find 10 three-vertex independent sets, such that each vertex is in exactly three of these sets. This fractional chromatic number is less than the number 7/2 for the Moser spindle and less than the fractional chromatic number of the unit distance graph of the plane, which is bounded between 3.6190 and 4.3599. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Q}[\\sqrt{33}]" } ]
https://en.wikipedia.org/wiki?curid=57338111
57340621
Odd cycle transversal
In graph theory, an odd cycle transversal of an undirected graph is a set of vertices of the graph that has a nonempty intersection with every odd cycle in the graph. Removing the vertices of an odd cycle transversal from a graph leaves a bipartite graph as the remaining induced subgraph. Relation to vertex cover. A given formula_0-vertex graph formula_1 has an odd cycle transversal of size formula_2, if and only if the Cartesian product of graphs formula_3 (a graph consisting of two copies of formula_1, with corresponding vertices of each copy connected by the edges of a perfect matching) has a vertex cover of size formula_4. The odd cycle transversal can be transformed into a vertex cover by including both copies of each vertex from the transversal and one copy of each remaining vertex, selected from the two copies according to which side of the bipartition contains it. In the other direction, a vertex cover of formula_3 can be transformed into an odd cycle transversal by keeping only the vertices for which both copies are in the cover. The vertices outside of the resulting transversal can be bipartitioned according to which copy of the vertex was used in the cover. Algorithms and complexity. The problem of finding the smallest odd cycle transversal, or equivalently the largest bipartite induced subgraph, is also called odd cycle transversal, and abbreviated as OCT. It is NP-hard, as a special case of the problem of finding the largest induced subgraph with a hereditary property (as the property of being bipartite is hereditary). All such problems for nontrivial properties are NP-hard. The equivalence between the odd cycle transversal and vertex cover problems has been used to develop fixed-parameter tractable algorithms for odd cycle transversal, meaning that there is an algorithm whose running time can be bounded by a polynomial function of the size of the graph multiplied by a larger function of formula_2. The development of these algorithms led to the method of iterative compression, a more general tool for many other parameterized algorithms. The parameterized algorithms known for these problems take nearly-linear time for any fixed value of formula_2. Alternatively, with polynomial dependence on the graph size, the dependence on formula_2 can be made as small as formula_5. In contrast, the analogous problem for directed graphs does not admit a fixed-parameter tractable algorithm under standard complexity-theoretic assumptions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
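The correspondence with vertex cover described above can be made concrete. The following Python sketch (an illustrative construction; the function names are chosen here, not taken from the references) builds the product formula_3 as two copies of the graph joined by a perfect matching, and turns an odd cycle transversal, together with a bipartition of the remaining vertices, into a vertex cover of size formula_4:
def product_with_k2(vertices, edges):
    # Two copies of the graph plus a matching edge between the copies of each vertex.
    new_vertices = [(v, side) for v in vertices for side in (0, 1)]
    new_edges = [((u, side), (v, side)) for (u, v) in edges for side in (0, 1)]
    new_edges += [((v, 0), (v, 1)) for v in vertices]
    return new_vertices, new_edges
def cover_from_transversal(vertices, transversal, side_of):
    # Both copies of each transversal vertex, plus one copy of every other vertex,
    # chosen according to its side of the bipartition of the remaining graph.
    cover = [(v, s) for v in transversal for s in (0, 1)]
    cover += [(v, side_of[v]) for v in vertices if v not in transversal]
    return cover
# Example: a 5-cycle; deleting vertex 0 leaves the bipartite path 1-2-3-4.
vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
side_of = {1: 0, 2: 1, 3: 0, 4: 1}
new_v, new_e = product_with_k2(vertices, edges)
print(len(new_v), len(new_e))   # 10 vertices and 15 edges in the product graph
print(len(cover_from_transversal(vertices, [0], side_of)))   # n + k = 5 + 1 = 6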
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "G\\square K_2" }, { "math_id": 4, "text": "n+k" }, { "math_id": 5, "text": "2.3146^k" } ]
https://en.wikipedia.org/wiki?curid=57340621
57342784
Emptiness problem
In theoretical computer science and formal language theory, a formal language is empty if its set of valid sentences is the empty set. The emptiness problem is the question of determining whether a language is empty given some representation of it, such as a finite-state automaton. For an automaton having formula_0 states, this is a decision problem that can be solved in formula_1 time, or in time formula_2 if the automaton has "n" states and "m" transitions. However, variants of that question, such as the emptiness problem for non-erasing stack automata, are PSPACE-complete. The emptiness problem is undecidable for context-sensitive grammars, a fact that follows from the undecidability of the halting problem. It is, however, decidable for context-free grammars. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
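For finite automata, one standard way to obtain the linear-time bound mentioned above is a reachability search: the language is non-empty exactly when some accepting state is reachable from the start state. The following Python sketch (an illustrative example; the automaton representation is an assumption made here) implements this test with a breadth-first search over the transition graph:
from collections import deque
def is_empty(start, accepting, successors):
    # successors maps a state to the states reachable in one transition;
    # the input symbols themselves are irrelevant for emptiness.
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state in accepting:
            return False          # some accepted string exists
        for nxt in successors.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True                   # no accepting state is reachable
# A three-state automaton whose only accepting state (2) is unreachable from 0:
print(is_empty(0, {2}, {0: [1], 1: [0], 2: [2]}))   # True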
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "O(n^2)" }, { "math_id": 2, "text": "O(n+m)" } ]
https://en.wikipedia.org/wiki?curid=57342784
573452
Trapezoidal rule
Numerical integration method In calculus, the trapezoidal rule (also known as the trapezoid rule or trapezium rule) is a technique for numerical integration, i.e., approximating the definite integral: formula_0 The trapezoidal rule works by approximating the region under the graph of the function formula_1 as a trapezoid and calculating its area. It follows that formula_2 The trapezoidal rule may be viewed as the result obtained by averaging the left and right Riemann sums, and is sometimes defined this way. The integral can be even better approximated by partitioning the integration interval, applying the trapezoidal rule to each subinterval, and summing the results. In practice, this "chained" (or "composite") trapezoidal rule is usually what is meant by "integrating with the trapezoidal rule". Let formula_3 be a partition of formula_4 such that formula_5 and formula_6 be the length of the formula_7-th subinterval (that is, formula_8), then formula_9 When the partition has a regular spacing, as is often the case, that is, when all the formula_6 have the same value formula_10 the formula can be simplified for calculation efficiency by factoring formula_11 out: formula_12 The approximation becomes more accurate as the resolution of the partition increases (that is, for larger formula_13, all formula_6 decrease). As discussed below, it is also possible to place error bounds on the accuracy of the value of a definite integral estimated using a trapezoidal rule. History. A 2016 "Science" paper reports that the trapezoid rule was in use in Babylon before 50 BCE for integrating the velocity of Jupiter along the ecliptic. Numerical implementation. Non-uniform grid. When the grid spacing is non-uniform, one can use the formula formula_14 wherein formula_15 Uniform grid. For a domain discretized into formula_13 equally spaced panels, considerable simplification may occur. Let formula_16 Then the approximation to the integral becomes formula_17 Error analysis. The error of the composite trapezoidal rule is the difference between the value of the integral and the numerical result: formula_18 There exists a number "ξ" between "a" and "b", such that formula_19 It follows that if the integrand is concave up (and thus has a positive second derivative), then the error is negative and the trapezoidal rule overestimates the true value. This can also be seen from the geometric picture: the trapezoids include all of the area under the curve and extend over it. Similarly, a concave-down function yields an underestimate because area is unaccounted for under the curve, but none is counted above. If the interval of the integral being approximated includes an inflection point, the sign of the error is harder to identify. An asymptotic error estimate for "N" → ∞ is given by formula_20 Further terms in this error estimate are given by the Euler–Maclaurin summation formula. Several techniques can be used to analyze the error. It is argued that the speed of convergence of the trapezoidal rule reflects and can be used as a definition of classes of smoothness of the functions. Proof. First suppose that formula_21 and formula_22. Let formula_23 be the function such that formula_24 is the error of the trapezoidal rule on one of the intervals, formula_25. Then formula_26 and formula_27 Now suppose that formula_28 which holds if formula_29 is sufficiently smooth. 
It then follows that formula_30 which is equivalent to formula_31, or formula_32 Since formula_33 and formula_34, formula_35 and formula_36 Using these results, we find formula_37 and formula_38 Letting formula_39 we find formula_40 Summing all of the local error terms we find formula_41 But we also have formula_42 and formula_43 so that formula_44 Therefore the total error is bounded by formula_45 Periodic and peak functions. The trapezoidal rule converges rapidly for periodic functions. This is an easy consequence of the Euler-Maclaurin summation formula, which says that if formula_46 is formula_47 times continuously differentiable with period formula_48, then formula_49 where formula_50 and formula_51 is the periodic extension of the formula_47th Bernoulli polynomial. Due to the periodicity, the derivatives at the endpoints cancel and we see that the error is formula_52. A similar effect is available for peak-like functions, such as the Gaussian, the exponentially modified Gaussian, and other functions with derivatives at integration limits that can be neglected. The evaluation of the full integral of a Gaussian function by the trapezoidal rule with 1% accuracy can be made using just 4 points. Simpson's rule requires 1.8 times more points to achieve the same accuracy. Although some effort has been made to extend the Euler-Maclaurin summation formula to higher dimensions, the most straightforward proof of the rapid convergence of the trapezoidal rule in higher dimensions is to reduce the problem to that of convergence of Fourier series. This line of reasoning shows that if formula_46 is periodic on a formula_53-dimensional space with formula_47 continuous derivatives, the speed of convergence is formula_54. For very large dimension, the curse of dimensionality shows that Monte-Carlo integration is most likely a better choice, but for 2 and 3 dimensions, equispaced sampling is efficient. This is exploited in computational solid-state physics, where equispaced sampling over primitive cells in the reciprocal lattice is known as "Monkhorst-Pack integration". "Rough" functions. For functions that are not in "C"2, the error bound given above is not applicable. Still, error bounds for such rough functions can be derived, which typically show a slower convergence with the number of function evaluations formula_13 than the formula_55 behaviour given above. Interestingly, in this case the trapezoidal rule often has sharper bounds than Simpson's rule for the same number of function evaluations. Applicability and alternatives. The trapezoidal rule is one of a family of formulas for numerical integration called Newton–Cotes formulas, of which the midpoint rule is similar to the trapezoid rule. Simpson's rule is another member of the same family, and in general has faster convergence than the trapezoidal rule for functions which are twice continuously differentiable, though not in all specific cases. However, for various classes of rougher functions (ones with weaker smoothness conditions), the trapezoidal rule has faster convergence in general than Simpson's rule. Moreover, the trapezoidal rule tends to become extremely accurate when periodic functions are integrated over their periods, which can be analyzed in various ways. A similar effect is available for peak functions. 
For non-periodic functions, however, methods with unequally spaced points such as Gaussian quadrature and Clenshaw–Curtis quadrature are generally far more accurate; Clenshaw–Curtis quadrature can be viewed as a change of variables to express arbitrary integrals in terms of periodic integrals, at which point the trapezoidal rule can be applied accurately. Example. The following integral is given: formula_56 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
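As a worked illustration (not part of the original article), the following Python sketch applies the composite trapezoidal rule with a uniform grid to the integral given in the Example section above; the exact value obtained from the antiderivative is approximately 0.8939, and the estimates approach it at the expected formula_55 rate:
import math
def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n equal subintervals of width h = (b - a)/n.
    h = (b - a) / n
    interior = sum(f(a + k * h) for k in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)
def f(x):
    return 5 * x * math.exp(-2 * x)
for n in (2, 8, 32, 128):
    print(n, trapezoid(f, 0.1, 1.3, n))   # converges towards roughly 0.8939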
[ { "math_id": 0, "text": "\\int_a^b f(x) \\, dx." }, { "math_id": 1, "text": "f(x)" }, { "math_id": 2, "text": "\\int_{a}^{b} f(x) \\, dx \\approx (b-a) \\cdot \\tfrac{1}{2}(f(a)+f(b))." }, { "math_id": 3, "text": "\\{x_k\\}" }, { "math_id": 4, "text": "[a,b]" }, { "math_id": 5, "text": "a=x_0 < x_1 < \\cdots < x_{N-1} < x_N = b" }, { "math_id": 6, "text": "\\Delta x_k" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "\\Delta x_k = x_k - x_{k-1}" }, { "math_id": 9, "text": "\\int_a^b f(x) \\, dx \\approx \\sum_{k=1}^N \\frac{f(x_{k-1}) + f(x_k)}{2} \\Delta x_k." }, { "math_id": 10, "text": "\\Delta x," }, { "math_id": 11, "text": "\\Delta x" }, { "math_id": 12, "text": "\\int_a^b f(x) \\, dx \\approx \\frac{\\Delta x}{2} \\left(f(x_0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + 2f(x_4) + \\cdots + 2f(x_{N-1}) + f(x_N)\\right)." }, { "math_id": 13, "text": "N" }, { "math_id": 14, "text": " \\int_{a}^{b} f(x)\\, dx \\approx \\sum_{k=1}^N \\frac{f(x_{k-1}) + f(x_k)}{2} \\Delta x_k ," }, { "math_id": 15, "text": "\\Delta x_k = x_{k} - x_{k-1} ." }, { "math_id": 16, "text": "\\Delta x_k = \\Delta x = \\frac{b-a}{N}" }, { "math_id": 17, "text": "\\begin{align}\n\\int_{a}^{b} f(x)\\, dx\n&\\approx \\frac{\\Delta x}{2} \\sum_{k=1}^{N} \\left( f(x_{k-1}) + f(x_{k}) \\right) \\\\[1ex]\n&= \\frac{\\Delta x}{2} \\Biggl( f(x_0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + \\dotsb + 2f(x_{N-1}) + f(x_N) \\Biggr) \\\\[1ex]\n&= \\Delta x \\left( \\frac{f(x_N) + f(x_0) }{2} + \\sum_{k=1}^{N-1} f(x_k) \\right) .\n\\end{align}" }, { "math_id": 18, "text": " \\text{E} = \\int_a^b f(x)\\,dx - \\frac{b-a}{N} \\left[ {f(a) + f(b) \\over 2} + \\sum_{k=1}^{N-1} f \\left( a+k \\frac{b-a}{N} \\right) \\right]" }, { "math_id": 19, "text": " \\text{E} = -\\frac{(b-a)^3}{12N^2} f''(\\xi)" }, { "math_id": 20, "text": " \\text{E} = -\\frac{(b-a)^2}{12N^2} \\big[ f'(b)-f'(a) \\big] + O(N^{-3}). " }, { "math_id": 21, "text": "h=\\frac{b-a}{N}" }, { "math_id": 22, "text": "a_k=a+(k-1)h" }, { "math_id": 23, "text": " g_k(t) = \\frac{1}{2} t[f(a_k)+f(a_k+t)] - \\int_{a_k}^{a_k+t} f(x) \\, dx" }, { "math_id": 24, "text": " |g_k(h)| " }, { "math_id": 25, "text": " [a_k, a_k+h] " }, { "math_id": 26, "text": " {dg_k \\over dt}={1 \\over 2}[f(a_k)+f(a_k+t)]+{1\\over2}t\\cdot f'(a_k+t)-f(a_k+t)," }, { "math_id": 27, "text": " {d^2g_k \\over dt^2}={1\\over 2}t\\cdot f''(a_k+t)." }, { "math_id": 28, "text": " \\left| f''(x) \\right| \\leq \\left| f''(\\xi) \\right|, " }, { "math_id": 29, "text": " f " }, { "math_id": 30, "text": " \\left| f''(a_k+t) \\right| \\leq f''(\\xi)" }, { "math_id": 31, "text": " -f''(\\xi) \\leq f''(a_k+t) \\leq f''(\\xi)" }, { "math_id": 32, "text": " -\\frac{f''(\\xi)t}{2} \\leq g_k''(t) \\leq \\frac{f''(\\xi)t}{2}." }, { "math_id": 33, "text": " g_k'(0)=0" }, { "math_id": 34, "text": " g_k(0)=0" }, { "math_id": 35, "text": " \\int_0^t g_k''(x) dx = g_k'(t)" }, { "math_id": 36, "text": " \\int_0^t g_k'(x) dx = g_k(t)." }, { "math_id": 37, "text": " -\\frac{f''(\\xi)t^2}{4} \\leq g_k'(t) \\leq \\frac{f''(\\xi)t^2}{4}" }, { "math_id": 38, "text": " -\\frac{f''(\\xi)t^3}{12} \\leq g_k(t) \\leq \\frac{f''(\\xi)t^3}{12}" }, { "math_id": 39, "text": " t = h " }, { "math_id": 40, "text": " -\\frac{f''(\\xi)h^3}{12} \\leq g_k(h) \\leq \\frac{f''(\\xi)h^3}{12}." }, { "math_id": 41, "text": " \\sum_{k=1}^{N} g_k(h) = \\frac{b-a}{N} \\left[ {f(a) + f(b) \\over 2} + \\sum_{k=1}^{N-1} f \\left( a+k \\frac{b-a}{N} \\right) \\right] - \\int_a^b f(x)dx." 
}, { "math_id": 42, "text": " - \\sum_{k=1}^N \\frac{f''(\\xi)h^3}{12} \\leq \\sum_{k=1}^N g_k(h) \\leq \\sum_{k=1}^N \\frac{f''(\\xi)h^3}{12}" }, { "math_id": 43, "text": " \\sum_{k=1}^N \\frac{f''(\\xi)h^3}{12}=\\frac{f''(\\xi)h^3N}{12}," }, { "math_id": 44, "text": " -\\frac{f''(\\xi)h^3N}{12} \\leq \\frac{b-a}{N} \\left[ {f(a) + f(b) \\over 2} + \\sum_{k=1}^{N-1} f \\left( a+k \\frac{b-a}{N} \\right) \\right]-\\int_a^bf(x)dx \\leq \\frac{f''(\\xi)h^3N}{12}." }, { "math_id": 45, "text": " \\text{error} = \\int_a^b f(x)\\,dx - \\frac{b-a}{N} \\left[ {f(a) + f(b) \\over 2} + \\sum_{k=1}^{N-1} f \\left( a+k \\frac{b-a}{N} \\right) \\right] = \\frac{f''(\\xi)h^3N}{12}=\\frac{f''(\\xi)(b-a)^3}{12N^2}." }, { "math_id": 46, "text": "f" }, { "math_id": 47, "text": "p" }, { "math_id": 48, "text": "T" }, { "math_id": 49, "text": "\\sum_{k=0}^{N-1} f(kh)h =\n \\int_0^T f(x)\\,dx +\n \\sum_{k=1}^{\\lfloor p/2\\rfloor} \\frac{B_{2k}}{(2k)!} (f^{(2k - 1)}(T) - f^{(2k - 1)}(0)) - (-1)^p h^p \\int_0^T\\tilde{B}_{p}(x/T)f^{(p)}(x) \\, dx\n" }, { "math_id": 50, "text": "h:=T/N" }, { "math_id": 51, "text": "\\tilde{B}_{p}" }, { "math_id": 52, "text": "O(h^p)" }, { "math_id": 53, "text": "n" }, { "math_id": 54, "text": "O(h^{p/d})" }, { "math_id": 55, "text": "O(N^{-2})" }, { "math_id": 56, "text": " \\int_{0.1}^{1.3}{5xe^{- 2x}{dx}} " } ]
https://en.wikipedia.org/wiki?curid=573452
5735440
Virial stress
Virial stress is a measure of mechanical stress on an atomic scale for homogeneous systems. The name is derived from the Latin word "vis", meaning force: "Virial is then derived from Latin as well, stemming from the word virias (plural of vis) meaning forces." The expression of the (local) virial stress can be derived as the functional derivative of the free energy of a molecular system with respect to the deformation tensor. Volume averaged Definition. The instantaneous volume averaged virial stress is given by formula_0 where formula_1 and formula_2 label atoms in the domain, formula_3 is the volume of the domain, formula_4 is the mass of atom formula_1, formula_5 is the "i"th component of the velocity of atom formula_1, formula_6 is the "j"th component of the average velocity of atoms in the volume, formula_7 is the "i"th component of the position of atom formula_1, and formula_8 is the "i"th component of the force applied on atom formula_1 by atom formula_2. At zero kelvin, all velocities are zero so we have formula_9. This can be thought of as follows. The τ11 component of stress is the force in the "x"1-direction divided by the area of a plane perpendicular to that direction. Consider two adjacent volumes separated by such a plane. The 11-component of stress on that interface is the sum of all pairwise forces between atoms on the two sides. The volume averaged virial stress is then the ensemble average of the instantaneous volume averaged virial stress. In a three dimensional, isotropic system at equilibrium, the "instantaneous" atomic pressure is usually defined as the average of the diagonal components of the negative stress tensor: formula_10 The pressure then is the ensemble average of the instantaneous pressure formula_11 This pressure is the average pressure in the volume formula_3. Equivalent Definition. Some articles and textbooks use a slightly different but equivalent version of the equation formula_12 where formula_13 is the "i"th component of the vector oriented from the formula_2th atom to the formula_1th atom, calculated via the difference formula_14 Although the two equations are strictly equivalent, the definition of the separation vector can still lead to confusion. Derivation. The virial pressure can be derived using the virial theorem and splitting forces between particles and the container, or, alternatively, via direct application of the defining equation formula_15 using scaled coordinates in the calculation. Inhomogeneous Systems. If the system is not homogeneous in a given volume, the above (volume averaged) pressure is not a good measure of the pressure. In inhomogeneous systems the pressure depends on the position and orientation of the surface on which the pressure acts. Therefore, in inhomogeneous systems a definition of a local pressure is needed. A familiar example of a system with inhomogeneous pressure is the Earth's atmosphere, in which the pressure varies with height. Instantaneous local virial stress. The (local) instantaneous virial stress is given by: formula_16 Measuring the virial pressure in molecular simulations. The virial pressure can be measured via the formulas above or using volume rescaling trial moves. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
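To make the volume-averaged expression concrete, the following Python sketch evaluates the stress tensor for a handful of particles. The spring-like pairwise force, the particle data, the function names, and the use of an unweighted mean as the average velocity are illustrative assumptions of this sketch rather than part of the definition above.
import numpy as np
def virial_stress(positions, velocities, masses, pair_force, volume):
    # Instantaneous volume-averaged virial stress (a 3x3 tensor), following the
    # volume-averaged definition above. pair_force(r) is assumed to return the force
    # on atom k due to atom l for the separation vector r = x_k - x_l.
    n = len(positions)
    u_bar = velocities.mean(axis=0)   # average (streaming) velocity in the volume
    tau = np.zeros((3, 3))
    for k in range(n):
        du = velocities[k] - u_bar
        tau += -masses[k] * np.outer(du, du)                      # kinetic contribution
        for l in range(n):
            if l == k:
                continue
            f_kl = pair_force(positions[k] - positions[l])        # force on k from l
            tau += 0.5 * np.outer(positions[l] - positions[k], f_kl)
    return tau / volume
def spring_force(r, k_spring=1.0, r0=1.0):
    # Toy pairwise interaction: a linear spring of rest length r0 (purely illustrative).
    d = np.linalg.norm(r)
    return -k_spring * (d - r0) * r / d
pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.2, 0.0]])
vel = np.array([[0.01, 0.0, 0.0], [-0.01, 0.0, 0.0], [0.0, 0.0, 0.0]])
m = np.array([1.0, 1.0, 1.0])
tau = virial_stress(pos, vel, m, spring_force, volume=8.0)
pressure = -np.trace(tau) / 3.0   # instantaneous atomic pressure, as defined above
print(tau)
print(pressure)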
[ { "math_id": 0, "text": "\\tau_{ij} = \\frac{1}{\\Omega} \\sum_{k \\in \\Omega} \\left(-m^{(k)} (u_i^{(k)}- \\bar{u}_i) (u_j^{(k)}- \\bar{u}_j) + \\frac{1}{2} \\sum_{\\ell \\in \\Omega} ( x_i^{(\\ell)} - x_i^{(k)}) f_j^{(k\\ell)}\\right)" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "\\ell" }, { "math_id": 3, "text": "\\Omega" }, { "math_id": 4, "text": "m^{(k)}" }, { "math_id": 5, "text": "u_i^{(k)}" }, { "math_id": 6, "text": "\\bar{u}_j" }, { "math_id": 7, "text": "x_i^{(k)}" }, { "math_id": 8, "text": "f_i^{(k\\ell)}" }, { "math_id": 9, "text": "\\tau_{ij} = \\frac{1}{2\\Omega} \\sum_{k,\\ell \\in \\Omega} ( x_i^{(\\ell)} - x_i^{(k)}) f_j^{(k\\ell)}" }, { "math_id": 10, "text": "\\mathcal{P}_{at} = -\\frac{1}{3}Tr(\\tau)." }, { "math_id": 11, "text": "P_{at} =\\langle \\mathcal{P}_{at} \\rangle." }, { "math_id": 12, "text": "\\tau_{ij} = \\frac{1}{\\Omega} \\sum_{k \\in \\Omega} \\left(-m^{(k)} (u_i^{(k)}- \\bar{u}_i) (u_j^{(k)}- \\bar{u}_j) - \\frac{1}{2} \\sum_{\\ell \\in \\Omega} x_i^{(k\\ell)} f_j^{(k\\ell)}\\right)" }, { "math_id": 13, "text": " x_i^{(k\\ell)} " }, { "math_id": 14, "text": " x_i^{k\\ell} = x_i^{(k)} - x_i^{(\\ell)} " }, { "math_id": 15, "text": "P=-\\frac{\\partial F(N,V,T)}{\\partial V}" }, { "math_id": 16, "text": "\\tau_{ab}(\\vec{r})=- \\sum_{i=1}^N \\delta(\\vec{r}-\\vec{r}^{(i)}) \\left(m^{(i)} u^{(i)}_a u^{(i)}_b + \\frac{1}{2} \\sum_{j=1, j \\neq i}^{N} (\\vec{r}^{(i)}-\\vec{r}^{(j)})_a \\vec{f}^{(ij)}_b \\right)," } ]
https://en.wikipedia.org/wiki?curid=5735440
5735510
Hydrostatic stress
Component of mechanical stress without shear In continuum mechanics, hydrostatic stress, also known as isotropic stress or volumetric stress, is a component of stress which contains uniaxial stresses, but not shear stresses. A special case of hydrostatic stress is isotropic compressive stress, which changes the volume of a body but not its shape. Pure hydrostatic stress can be experienced by a point in a fluid such as water. It is often used interchangeably with "mechanical pressure" and is also known as confining stress, particularly in the field of geomechanics. Hydrostatic stress is equivalent to the average of the uniaxial stresses along three orthogonal axes, so it is one third of the first invariant of the stress tensor (i.e. the trace of the stress tensor): formula_0 For example, in Cartesian coordinates (x,y,z) the hydrostatic stress is simply: formula_1 Hydrostatic stress and thermodynamic pressure. In the particular case of an incompressible fluid, the thermodynamic pressure coincides with the mechanical pressure (i.e. the opposite of the hydrostatic stress): formula_2 In the general case of a compressible fluid, the thermodynamic pressure "p" is no longer proportional to the isotropic stress term (the mechanical pressure), since there is an additional term dependent on the trace of the strain rate tensor: formula_3 where the coefficient formula_4 is the bulk viscosity. The trace of the strain rate tensor corresponds to the flow compression (the divergence of the flow velocity): formula_5 So the thermodynamic pressure is usually expressed as: formula_6 where the mechanical pressure has been denoted with formula_7. In some cases, the second viscosity formula_8 can be assumed to be constant, in which case the effect of the volume viscosity formula_8 is that the mechanical pressure is not equivalent to the thermodynamic pressure, as stated above: formula_9 However, this difference is usually neglected (that is, whenever the flow does not involve processes such as sound absorption and the attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming formula_10. The assumption formula_10 is known as the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for monatomic gases both experimentally and from kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. Potential external field in a fluid. The magnitude of the hydrostatic stress in a fluid, formula_11, can be given by Stevin's Law: formula_12 where formula_13 is the density of the "i"th layer of material above the point of interest, formula_14 is the gravitational acceleration, and formula_15 is the height (thickness) of that layer. For example, the magnitude of the hydrostatic stress felt at a point under ten meters of fresh water would be formula_16 where the index w indicates "water". Because the hydrostatic stress is isotropic, it acts equally in all directions. In tensor form, the hydrostatic stress is equal to formula_17 where formula_18 is the 3-by-3 identity matrix. Hydrostatic compressive stress is used for the determination of the bulk modulus for materials. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
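The two formulas above are straightforward to evaluate numerically. The following Python sketch computes the hydrostatic stress as one third of the trace of a stress tensor and evaluates Stevin's Law for the ten-metre water example; the particular stress tensor values and function names are illustrative assumptions.
import numpy as np
def hydrostatic_stress(sigma):
    # Hydrostatic (volumetric) part of a stress tensor: one third of its trace.
    return np.trace(sigma) / 3.0
sigma = np.array([[2.0e5, 1.0e4, 0.0],
                  [1.0e4, 1.5e5, 0.0],
                  [0.0, 0.0, 1.0e5]])      # illustrative stress tensor, in Pa
print(hydrostatic_stress(sigma))           # (2.0e5 + 1.5e5 + 1.0e5) / 3 = 1.5e5
def stevin(layers, g=9.8):
    # Stevin's Law: sum of rho_i * g * h_i over the layers above the point.
    return sum(rho * g * h for rho, h in layers)
# Ten metres of fresh water, as in the example above: about 9.8e4 Pa.
print(stevin([(1000.0, 10.0)]))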
[ { "math_id": 0, "text": "\\sigma_h = \\frac{I_i}{3}= \\frac 1 3 \\operatorname{tr} (\\boldsymbol \\sigma)" }, { "math_id": 1, "text": "\\sigma_h = \\frac{\\sigma_{xx} + \\sigma_{yy} + \\sigma_{zz}}{3}" }, { "math_id": 2, "text": "p = - \\sigma_h = - \\frac 1 3 \\operatorname{tr} (\\boldsymbol \\sigma)" }, { "math_id": 3, "text": "p = - \\frac 1 3 \\operatorname{tr} (\\boldsymbol \\sigma) + \\zeta \\operatorname{tr} (\\boldsymbol \\epsilon)" }, { "math_id": 4, "text": "\\zeta" }, { "math_id": 5, "text": "\\operatorname{tr} (\\boldsymbol \\epsilon) = \\operatorname{tr} \\left(\\frac 1 2 (\\nabla \\mathbf u + (\\nabla \\mathbf u)^T) \\right) = \\nabla\\cdot\\mathbf{u}" }, { "math_id": 6, "text": "p = - \\sigma_h + \\zeta \\nabla\\cdot\\mathbf{u} = \\bar p + \\zeta \\nabla\\cdot\\mathbf{u}" }, { "math_id": 7, "text": "\\bar p" }, { "math_id": 8, "text": "\\zeta" }, { "math_id": 9, "text": " \\bar{p} \\equiv p - \\zeta \\, \\nabla \\cdot \\mathbf{u} ," }, { "math_id": 10, "text": "\\zeta = 0" }, { "math_id": 11, "text": "\\sigma_h" }, { "math_id": 12, "text": "\\sigma_h = \\displaystyle\\sum_{i=1}^n \\rho_i g h_i" }, { "math_id": 13, "text": "\\rho_i" }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "h_i" }, { "math_id": 16, "text": "\\sigma_h = \\rho_w g h_w =1000 \\,\\text{kg m}^{-3} \\cdot 9.8 \\,\\text{m s}^{-2} \\cdot 10 \\,\\text{m} =9.8 \\cdot {10^4} \\text{ kg m}^{-1} \\text{s}^{-2} =9.8 \\cdot 10^4 \\text{ N m}^{-2} " }, { "math_id": 17, "text": "\\sigma_h \\cdot I_3 =\n\\sigma_h \\left[ \\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\end{array} \\right] = \n\\left[ \\begin{array}{ccc}\n\\sigma_h & 0 & 0 \\\\\n0 & \\sigma_h & 0 \\\\\n0 & 0 & \\sigma_h \\end{array} \\right]\n" }, { "math_id": 18, "text": "I_3" } ]
https://en.wikipedia.org/wiki?curid=5735510
5736076
Brauer–Nesbitt theorem
In mathematics, the Brauer–Nesbitt theorem can refer to several different theorems proved by Richard Brauer and Cecil J. Nesbitt in the representation theory of finite groups. In modular representation theory, the Brauer–Nesbitt theorem on blocks of defect zero states that an irreducible character whose degree is divisible by the highest power of a prime "p" dividing the order of a finite group remains irreducible when reduced mod "p" and vanishes on all elements whose order is divisible by "p". Moreover, it belongs to a block of defect zero. A block of defect zero contains only one ordinary character and only one modular character. Another version states that if "k" is a field of characteristic zero, "A" is a "k"-algebra, "V", "W" are semisimple "A"-modules which are finite dimensional over "k", and Tr"V" = Tr"W" as elements of Homk("A",k), then "V" and "W" are isomorphic as "A"-modules. Let formula_0 be a group and formula_1 be some field. If formula_2 are two finite-dimensional semisimple representations such that the characteristic polynomials of formula_3 and formula_4 coincide for all formula_5, then formula_6 and formula_7 are isomorphic representations. If formula_8 or formula_9, then the condition on the characteristic polynomials can be changed to the condition that Trformula_3 = Trformula_4 for all formula_5. As a consequence, let formula_10 be a semisimple (continuous) formula_11-adic representation of the absolute Galois group of some field formula_12, unramified outside some finite set of primes formula_13. Then the representation is uniquely determined by the values of the traces of formula_14 for formula_15 (also using the Chebotarev density theorem).
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "\\rho_i:G\\to GL_n(E),i=1,2" }, { "math_id": 3, "text": "\\rho_1(g)" }, { "math_id": 4, "text": "\\rho_2(g)" }, { "math_id": 5, "text": "g\\in G" }, { "math_id": 6, "text": "\\rho_1" }, { "math_id": 7, "text": "\\rho_2" }, { "math_id": 8, "text": "char(E)=0" }, { "math_id": 9, "text": "char(E)>n" }, { "math_id": 10, "text": "\\rho:Gal(K^{\\rm{sep}}/K)\\to GL_n(\\overline{\\mathbb{Q}}_l)" }, { "math_id": 11, "text": "l" }, { "math_id": 12, "text": "K" }, { "math_id": 13, "text": "S\\subset M_K" }, { "math_id": 14, "text": "\\rho(Frob_p)" }, { "math_id": 15, "text": "p\\in M_K^0-S" } ]
https://en.wikipedia.org/wiki?curid=5736076
57371589
Kleiman's theorem
In algebraic geometry, Kleiman's theorem, introduced by Steven Kleiman, concerns dimension and smoothness of scheme-theoretic intersection after some perturbation of factors in the intersection. Precisely, it states: given a connected algebraic group "G" acting transitively on an algebraic variety "X" over an algebraically closed field "k" and formula_0 morphisms of varieties, "G" contains a nonempty open subset such that for each "g" in the set, (1) formula_1 is either empty or purely of the expected dimension formula_2, where formula_3 is formula_4; and (2) if the characteristic of "k" is zero and formula_5 are smooth varieties, then formula_1 is smooth. Statement 1 establishes a version of Chow's moving lemma: after some perturbation of cycles on "X", their intersection has expected dimension. Sketch of proof. We write formula_6 for formula_7. Let formula_8 be the composition that is formula_9 followed by the group action formula_10. Let formula_11 be the fiber product of formula_12 and formula_13; its set of closed points is formula_14. We want to compute the dimension of formula_15. Let formula_16 be the projection. It is surjective since formula_17 acts transitively on "X". Each fiber of "p" is a coset of stabilizers on "X" and so formula_18. Consider the projection formula_19; the fiber of "q" over "g" is formula_20 and has the expected dimension unless empty. This completes the proof of Statement 1. For Statement 2, since "G" acts transitively on "X" and the smooth locus of "X" is nonempty (by characteristic zero), "X" itself is smooth. Since "G" is smooth, each geometric fiber of "p" is smooth and thus formula_21 is a smooth morphism. It follows that a general fiber of formula_22 is smooth by generic smoothness. formula_23 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V_i \\to X, i = 1, 2" }, { "math_id": 1, "text": "gV_1 \\times_X V_2" }, { "math_id": 2, "text": "\\dim V_1 + \\dim V_2 - \\dim X" }, { "math_id": 3, "text": "g V_1" }, { "math_id": 4, "text": "V_1 \\to X \\overset{g}\\to X" }, { "math_id": 5, "text": "V_i" }, { "math_id": 6, "text": "f_i" }, { "math_id": 7, "text": "V_i \\to X" }, { "math_id": 8, "text": "h: G \\times V_1 \\to X" }, { "math_id": 9, "text": "(1_G, f_1): G \\times V_1 \\to G \\times X" }, { "math_id": 10, "text": "\\sigma: G \\times X \\to X" }, { "math_id": 11, "text": "\\Gamma = (G \\times V_1) \\times_X V_2 " }, { "math_id": 12, "text": "h" }, { "math_id": 13, "text": "f_2: V_2 \\to X" }, { "math_id": 14, "text": "\\Gamma = \\{ (g, v, w) | g \\in G, v \\in V_1, w \\in V_2, g \\cdot f_1(v) = f_2(w) \\}" }, { "math_id": 15, "text": "\\Gamma" }, { "math_id": 16, "text": "p: \\Gamma \\to V_1 \\times V_2" }, { "math_id": 17, "text": "G" }, { "math_id": 18, "text": "\\dim \\Gamma = \\dim V_1 + \\dim V_2 + \\dim G - \\dim X" }, { "math_id": 19, "text": "q: \\Gamma \\to G" }, { "math_id": 20, "text": "g V_1 \\times_X V_2" }, { "math_id": 21, "text": "p_0 : \\Gamma_0 := (G \\times V_{1, \\text{sm}}) \\times_X V_{2, \\text{sm}} \\to V_{1, \\text{sm}} \\times V_{2, \\text{sm}}" }, { "math_id": 22, "text": "q_0 : \\Gamma_0 \\to G" }, { "math_id": 23, "text": "\\square" } ]
https://en.wikipedia.org/wiki?curid=57371589
57373293
Matching wildcards
Algorithm to compare text strings using wildcard syntax In computer science, an algorithm for matching wildcards (also known as globbing) is useful in comparing text strings that may contain wildcard syntax. Common uses of these algorithms include command-line interfaces, e.g. the Bourne shell or Microsoft Windows command-line or text editor or file manager, as well as the interfaces for some search engines and databases. Wildcard matching is a subset of the problem of matching regular expressions and string matching in general. The problem. A wildcard matcher tests a wildcard pattern "p" against an input string "s". It performs an "anchored" match, returning true only when "p" matches the entirety of "s". The pattern can be based on any common syntax (see globbing), but on Windows programmers tend to only discuss a simplified syntax supported by the native C runtime: "?" matches exactly one occurrence of any character, while "*" matches arbitrarily many (including zero) occurrences of any character. This article mainly discusses the Windows formulation of the problem, unless otherwise stated. Definition. Stated in zero-based indices, the wildcard-matching problem can be defined recursively as: formula_0 where "mij" is the result of matching the pattern "p" against the text "t" truncated at "i" and "j" characters respectively. This is the formulation used by Richter's algorithm and the "Snippets" algorithm found in Cantatore's collection. This description is similar to the Levenshtein distance. Related problems. Directly related problems in computer science include regular expression matching and string matching in general, as noted above. History. Early algorithms for matching wildcards often relied on recursion, but the technique was criticized on grounds of performance and reliability. Non-recursive algorithms for matching wildcards have gained favor in light of these considerations. Among both recursive and non-recursive algorithms, strategies for performing the pattern matching operation vary widely, as evidenced among the variety of example algorithms referenced below. Test case development and performance optimization techniques have been demonstrably brought to bear on certain algorithms, particularly those developed by critics of the recursive algorithms. Recursive algorithms. The recursion generally happens on matching codice_0 when there is more suffix to match against. This is a form of backtracking, also done by some regular expression matchers. The general form of these algorithms is the same. On recursion the algorithm slices the input into substrings, and considers a match to have happened when one of the substrings returns a positive match. When a "*" is encountered, such a matcher greedily recurses on each possible split of the remaining text in turn. The implementations usually differ in less important respects, such as support for extra features, and in more important ones, such as minor but highly effective optimizations. Some of them include the "ABORT" technique, which cuts off further recursion as soon as a match is determined to be impossible. Martin Richter's algorithm is an exception to this pattern, although the overall operation is equivalent. On "*" it recurses by incrementing either of the indexes, following the dynamic programming formulation of the problem. The "ABORT" technique is applicable to it as well. On typical patterns (as tested by Cantatore) it is slower than the greedy-call implementations. The recursive algorithms are in general easier to reason about, and with the ABORT modification they perform acceptably in terms of worst-case complexity. On strings without "*" they take time linear in the string size to match, since there is a fixed one-to-one relation. Non-recursive algorithms. 
Several non-recursive matchers have been developed by critics of the recursive algorithms, and others independently of that critique. These iterative functions implement backtracking by saving an old set of pattern/text pointers, and reverting to it should a match fail. According to Kurt, since only one successful match is required, only one such set needs to be saved. In addition, the problem of wildcard matching can be converted into regular expression matching using a naive text-replacement approach. Although non-recursive regular expression matchers such as Thompson's construction are less used in practice due to lack of backreference support, wildcard matching in general does not come with a similarly rich set of features. (In fact, many of the algorithms above only have support for "?" and "*".) The Russ Cox implementation of the Thompson NFA can be trivially modified for such use. Gonzalo Navarro's BDM-based nrgrep algorithm provides a more streamlined implementation with emphasis on efficient suffixes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
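As a concrete illustration of the dynamic-programming formulation described above, the following Python sketch fills in the table of subproblems bottom-up. The function name and the convention that an empty pattern matches only an empty text are assumptions of this sketch rather than a transcription of any of the published algorithms discussed here.
def wildcard_match(pattern: str, text: str) -> bool:
    # Anchored wildcard match: '?' matches any single character,
    # '*' matches any run of characters, including the empty run.
    P, T = len(pattern), len(text)
    # m[i][j] answers: does pattern[:i] match text[:j]?
    m = [[False] * (T + 1) for _ in range(P + 1)]
    m[0][0] = True                      # empty pattern matches empty text
    for i in range(1, P + 1):           # a leading run of '*' can match nothing
        m[i][0] = m[i - 1][0] and pattern[i - 1] == '*'
    for i in range(1, P + 1):
        for j in range(1, T + 1):
            if pattern[i - 1] == '*':
                # '*' either matches the empty string (drop it) or absorbs one more character
                m[i][j] = m[i - 1][j] or m[i][j - 1]
            elif pattern[i - 1] == '?' or pattern[i - 1] == text[j - 1]:
                m[i][j] = m[i - 1][j - 1]
            # otherwise m[i][j] stays False
    return m[P][T]
assert wildcard_match("*", "anything")
assert wildcard_match("a?c", "abc")
assert wildcard_match("a*d", "abcd")
assert not wildcard_match("a*d", "abce")
assert not wildcard_match("abc", "ab")
print("all checks passed")
This tabular approach trades the backtracking of the recursive matchers for O(|p| x |s|) time and space, mirroring the relationship to the Levenshtein distance noted above.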
[ { "math_id": 0, "text": "\n\\begin{aligned}\nm_{00} &= (p_{0} = t_{0}) \\\\\nm_{0j} &= (p_{j-1} = \\text{‘*’}) \\land m_{0,j-1}\\\\\nm_{i0} & = \\text{false} \\\\\nm_{ij} &=\n\\begin{cases}\n m_{i-1, j-1} & \\text{for}\\; p_{i-1} = t_{j-1} \\lor p_{i-1} = \\text{‘?’}\\\\\n m_{i, j-1} \\lor m_{i-1, j} & \\text{for}\\; p_{i-1} = \\text{‘*’}\\\\\n \\text{false} & \\text{for}\\; p_{i-1} \\neq t_{j-1}\n\\end{cases} & & \\quad \\text{for}\\; 1 \\leq i \\le |p|, 1 \\leq j \\le |t|.\n\\end{aligned}\n" } ]
https://en.wikipedia.org/wiki?curid=57373293
57374297
26-fullerene graph
Polyhedral graph with 26 vertices and 39 edges In the mathematical field of graph theory, the 26-fullerene graph is a polyhedral graph with "V" = 26 vertices and "E" = 39 edges. Its planar embedding has three hexagonal faces (including the one shown as the external face of the illustration) and twelve pentagonal faces. As a planar graph with only pentagonal and hexagonal faces, meeting in three faces per vertex, this graph is a fullerene. The existence of this fullerene has been known since at least 1968. Properties. The 26-fullerene graph has formula_0 prismatic symmetry, the same group of symmetries as the triangular prism. This symmetry group has 12 elements; it has six symmetries that arbitrarily permute the three hexagonal faces of the graph and preserve the orientation of its planar embedding, and another six orientation-reversing symmetries. The number of fullerenes with a given even number of vertices grows quickly in the number of vertices; 26 is the largest number of vertices for which the fullerene structure is unique. The only two smaller fullerenes are the graph of the regular dodecahedron (a fullerene with 20 vertices) and the graph of the truncated hexagonal trapezohedron (a 24-vertex fullerene), which are the two types of cells in the Weaire–Phelan structure. The 26-fullerene graph has many perfect matchings. One must remove at least five edges from the graph in order to obtain a subgraph that has exactly one perfect matching. This is a unique property of this graph among fullerenes in the sense that, for every other number of vertices of a fullerene, there exists at least one fullerene from which one can remove four edges to obtain a subgraph with a unique perfect matching. The vertices of the 26-fullerene graph can be labeled with sequences of 12 bits, in such a way that distance in the graph equals half of the Hamming distance between these bitvectors. This can also be interpreted as an isometric embedding from the graph into a 12-dimensional taxicab geometry. The 26-fullerene graph is one of only five fullerenes with such an embedding. In popular culture. In 2009, "The New York Times" published a puzzle involving Hamiltonian paths in this graph, taking advantage of the correspondence between its 26 vertices and the 26 letters of the English alphabet. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "D_{3h}" } ]
https://en.wikipedia.org/wiki?curid=57374297
57375305
Attributable fraction for the population
Epidemiology statistic In epidemiology, attributable fraction for the population (AFp) is the proportion of incidents in the population that are attributable to the risk factor. The term attributable risk percent for the population is used if the fraction is expressed as a percentage. It is calculated as formula_0, where formula_1 is the incidence in the population, and formula_2 is the incidence in the unexposed group. Equivalently it can be calculated as formula_3, where formula_4 is the exposed proportion of the population and formula_5 is the relative risk not adjusted for confounders. It is used when an exposure increases the risk, as opposed to reducing it, in which case its symmetrical notion is preventable fraction for the population. Synonyms. Multiple synonyms of the attributable fraction for the population are in use: attributable proportion for the population, population attributable proportion, Levin's attributable risk, population attributable risk, and population attributable fraction. Similarly, population attributable risk percent (PAR) is used as a synonym for the attributable risk percent for the population. Interpretation. Attributable fraction for the population combines both the relative risk of an incident with respect to the factor and the prevalence of the factor in the population. Values of AFp close to 1 indicate both that the relative risk is high and that the risk factor is prevalent. In such a case, removal of the risk factor will greatly reduce the number of the incidents in the population. Values of AFp close to 0, on the other hand, indicate that either the relative risk is low, or that the factor is not prevalent (or both). Removal of such a factor from the population will have little effect. Because of this interpretation, AFp is considered useful for guiding public health policy. For example, in 1953 Levin's paper estimated that lung cancer has a relative risk of 3.6–13.4 in smokers compared to non-smokers, and that the proportion of the population exposed to smoking was 0.5–0.96, resulting in the high AFp value of 0.56–0.92. Recently, it has been shown that the population attributable fraction for anthropogenic risk factors strongly correlates with the number of oncogenic mutations in multiple cancer types, both sexes, and three countries – US, UK and Australia. Generalizations. Attributable fraction for the population can be generalized to the case of multilevel exposure to the risk factor. In that case formula_6 where formula_7 is the proportion of the population exposed to the level formula_8, formula_9 is the desired (ideal) proportion of the population exposed to the level formula_8, and formula_10 is the relative risk at exposure level formula_8. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
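The formulas above are simple to compute. The following Python sketch evaluates AFp from incidences, from the exposed proportion and relative risk, and for the multilevel generalization; the function names are illustrative, and the printed values only approximately reproduce the range quoted for Levin's example.
def af_population_from_incidence(i_population, i_unexposed):
    # AFp from the incidence in the whole population and in the unexposed group.
    return (i_population - i_unexposed) / i_population
def af_population_from_rr(p_exposed, rr):
    # Levin's formula: AFp from the exposed proportion and the unadjusted relative risk.
    return p_exposed * (rr - 1.0) / (1.0 + p_exposed * (rr - 1.0))
def af_population_multilevel(p, p_ideal, rr):
    # Generalization to multilevel exposure; p, p_ideal and rr are lists indexed by level.
    observed = sum(pi * ri for pi, ri in zip(p, rr))
    ideal = sum(pi * ri for pi, ri in zip(p_ideal, rr))
    return (observed - ideal) / observed
# Levin's lung-cancer example, using the extremes of the quoted ranges.
print(af_population_from_rr(0.5, 3.6))    # lower end, close to 0.56
print(af_population_from_rr(0.96, 13.4))  # upper end, close to 0.92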
[ { "math_id": 0, "text": "AF_p = (I_p - I_u)/I_p" }, { "math_id": 1, "text": "I_p" }, { "math_id": 2, "text": "I_u" }, { "math_id": 3, "text": "AF_p = \\frac{P_e(RR - 1)}{1 + P_e(RR-1)}" }, { "math_id": 4, "text": "P_e" }, { "math_id": 5, "text": "RR" }, { "math_id": 6, "text": "AF_p = \\frac{\\sum_i P_i RR_i - \\sum_i P_i'RR_i}{\\sum_i P_i RR_i}" }, { "math_id": 7, "text": "P_i" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "P_i'" }, { "math_id": 10, "text": "RR_i" } ]
https://en.wikipedia.org/wiki?curid=57375305
57381799
Dynamic causal modeling
Statistical modeling framework Dynamic causal modeling (DCM) is a framework for specifying models, fitting them to data and comparing their evidence using Bayesian model comparison. It uses nonlinear state-space models in continuous time, specified using stochastic or ordinary differential equations. DCM was initially developed for testing hypotheses about neural dynamics. In this setting, differential equations describe the interaction of neural populations, which directly or indirectly give rise to functional neuroimaging data, e.g., functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG) or electroencephalography (EEG). Parameters in these models quantify the directed influences or effective connectivity among neuronal populations, which are estimated from the data using Bayesian statistical methods. Procedure. DCM is typically used to estimate the coupling among brain regions and the changes in coupling due to experimental changes (e.g., time or context). A model of interacting neural populations is specified, with a level of biological detail dependent on the hypotheses and available data. This is coupled with a forward model describing how neural activity gives rise to measured responses. Estimating the generative model identifies the parameters (e.g. connection strengths) from the observed data. Bayesian model comparison is used to compare models based on their evidence, which can then be characterised in terms of parameters. DCM studies typically involve the following stages: experimental design, model specification, model estimation and model comparison. The key stages are briefly reviewed below. Experimental design. Functional neuroimaging experiments are typically either task-based or examine brain activity at rest (resting state). In task-based experiments, brain responses are evoked by known deterministic inputs (experimentally controlled stimuli). These experimental variables can change neural activity through direct influences on specific brain regions, such as evoked potentials in the early visual cortex, or via a modulation of coupling among neural populations; for example, the influence of attention. These two types of input - driving and modulatory - are parameterized separately in DCM. To enable efficient estimation of driving and modulatory effects, a 2x2 factorial experimental design is often used - with one factor serving as the driving input and the other as the modulatory input. Resting state experiments have no experimental manipulations within the period of the neuroimaging recording. Instead, hypotheses are tested about the coupling of endogenous fluctuations in neuronal activity, or in the differences in connectivity between sessions or subjects. The DCM framework includes models and procedures for analysing resting state data, described in the next section. Model specification. All models in DCM have the following basic form: formula_0 The first equality describes the change in neural activity formula_1 with respect to time (i.e. formula_2), which cannot be directly observed using non-invasive functional imaging modalities. The evolution of neural activity over time is controlled by a neural function formula_3 with parameters formula_4 and experimental inputs formula_5. The neural activity in turn causes the timeseries formula_6 (second equality), which are generated via an observation function formula_7 with parameters formula_8. Additive observation noise formula_9 completes the observation model.
Usually, the neural parameters formula_4 are of key interest, which for example represent connection strengths that may change under different experimental conditions. Specifying a DCM requires selecting a neural model formula_3 and observation model formula_7 and setting appropriate priors over the parameters; e.g. selecting which connections should be switched on or off. Functional MRI. The neural model in DCM for fMRI is a Taylor approximation that captures the gross causal influences between brain regions and their change due to experimental inputs (see picture). This is coupled with a detailed biophysical model of the generation of the blood oxygen level dependent (BOLD) response and the MRI signal, based on the Balloon model of Buxton et al., which was supplemented with a model of neurovascular coupling. Additions to the neural model have included interactions between excitatory and inhibitory neural populations and non-linear influences of neural populations on the coupling between other populations. DCM for resting state studies was first introduced in Stochastic DCM, which estimates both neural fluctuations and connectivity parameters in the time domain, using Generalized Filtering. A more efficient scheme for resting state data was subsequently introduced which operates in the frequency domain, called DCM for Cross-Spectral Density (CSD). Both of these can be applied to large-scale brain networks by constraining the connectivity parameters based on the functional connectivity. Another recent development for resting state analysis is Regression DCM implemented in the Tapas software collection (see Software implementations). Regression DCM operates in the frequency domain, but linearizes the model under certain simplifications, such as having a fixed (canonical) haemodynamic response function. This enables rapid estimation of large-scale brain networks. EEG / MEG. DCM for EEG and MEG data uses more biologically detailed neural models than DCM for fMRI, due to the higher temporal resolution of these measurement techniques. These models can be classed into physiological models, which recapitulate neural circuitry, and phenomenological models, which focus on reproducing particular data features. The physiological models can be further subdivided into two classes. Conductance-based models derive from the equivalent circuit representation of the cell membrane developed by Hodgkin and Huxley in the 1950s. Convolution models were introduced by Wilson & Cowan and Freeman in the 1970s and involve a convolution of pre-synaptic input by a synaptic kernel function. Specific neural models from each of these classes are implemented in DCM. Model estimation. Model inversion or estimation is implemented in DCM using variational Bayes under the Laplace assumption. This provides two useful quantities. The first, the log marginal likelihood or model evidence formula_10, is the probability of observing the data under a given model. Generally, this cannot be calculated explicitly and is approximated by a quantity called the negative variational free energy formula_11, referred to in machine learning as the Evidence Lower Bound (ELBO). Hypotheses are tested by comparing the evidence for different models based on their free energy, a procedure called Bayesian model comparison. Model estimation also provides estimates of the parameters formula_12, for example connection strengths, which maximise the free energy. 
Where models differ only in their priors, Bayesian Model Reduction can be used to derive the evidence and parameters of nested or reduced models analytically and efficiently. Model comparison. Neuroimaging studies typically investigate effects that are conserved at the group level, or which differ between subjects. There are two predominant approaches for group-level analysis: random effects Bayesian Model Selection (BMS) and Parametric Empirical Bayes (PEB). Random Effects BMS posits that subjects differ in terms of which model generated their data - e.g. drawing a random subject from the population, there might be a 25% chance that their brain is structured like model 1 and a 75% chance that it is structured like model 2. The analysis pipeline for the BMS approach proceeds in a series of steps, in which models are specified and estimated for each subject and the evidence for each model is then assessed at the group level. Alternatively, Parametric Empirical Bayes (PEB) can be used, which specifies a hierarchical model over parameters (e.g., connection strengths). It eschews the notion of different models at the level of individual subjects, and assumes that people differ in the (parametric) strength of connections. The PEB approach models distinct sources of variability in connection strengths across subjects using fixed effects and between-subject variability (random effects). In the PEB procedure, DCMs are estimated for each subject and the resulting parameters are then modelled hierarchically at the group level. Validation. Developments in DCM have been validated using a number of different approaches. Limitations / drawbacks. DCM is a hypothesis-driven approach for investigating the interactions among pre-defined regions of interest. It is not ideally suited for exploratory analyses. Although methods have been implemented for automatically searching over reduced models (Bayesian Model Reduction) and for modelling large-scale brain networks, these methods require an explicit specification of model space. In neuroimaging, approaches such as psychophysiological interaction (PPI) analysis may be more appropriate for exploratory use, especially for discovering key nodes for subsequent DCM analysis. The variational Bayesian methods used for model estimation in DCM are based on the Laplace assumption, which treats the posterior over parameters as Gaussian. This approximation can fail in the context of highly non-linear models, where local minima may preclude the free energy from serving as a tight bound on log model evidence. Sampling approaches provide the gold standard; however, they are time-consuming and have typically been used to validate the variational approximations in DCM. Software implementations. DCM is implemented in the Statistical Parametric Mapping software package, which serves as the canonical or reference implementation (http://www.fil.ion.ucl.ac.uk/spm/software/spm12/). It has been re-implemented and developed in the Tapas software collection (https://www.tnu.ethz.ch/en/software/tapas.html) and the VBA toolbox (https://mbb-team.github.io/VBA-toolbox/). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
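To give a feel for the generative models involved, the following Python sketch simulates a bilinear neural state equation of the kind commonly quoted for DCM for fMRI, in which dz/dt = (A + u_mod*B)z + C*u_drive. The connectivity matrices, the boxcar inputs, and the simplistic noisy linear readout (standing in for the detailed haemodynamic observation model) are illustrative assumptions; an actual DCM analysis would invert such a generative model with variational Bayes rather than merely simulate it.
import numpy as np
rng = np.random.default_rng(0)
# Two-region toy network: A is fixed connectivity, B modulates the connection
# from region 2 to region 1 when the modulatory input is on, and C drives region 1.
A = np.array([[-0.5, 0.2],
              [0.3, -0.5]])
B = np.array([[0.0, 0.4],
              [0.0, 0.0]])
C = np.array([[1.0],
              [0.0]])
def u(t):
    # Experimental inputs: a repeating driving boxcar and a sustained modulatory input.
    drive = 1.0 if (t % 20.0) < 2.0 else 0.0
    modulation = 1.0 if t > 50.0 else 0.0
    return np.array([drive]), modulation
def simulate(T=100.0, dt=0.01):
    n_steps = int(T / dt)
    z = np.zeros(2)
    ys = []
    for step in range(n_steps):
        t = step * dt
        u_drive, u_mod = u(t)
        dz = (A + u_mod * B) @ z + C @ u_drive         # bilinear neural model
        z = z + dt * dz                                # forward Euler integration
        ys.append(z + 0.01 * rng.standard_normal(2))   # toy noisy observation
    return np.array(ys)
y = simulate()
print(y.shape, y[:5])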
[ { "math_id": 0, "text": "\\begin{align}\n\\dot{z}&=f(z,u,\\theta^{(n)}) \\\\\ny&=g(z,\\theta^{(h)})+\\epsilon\n\\end{align}" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "\\dot{z}" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\theta^{(n)}" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "y" }, { "math_id": 7, "text": "g" }, { "math_id": 8, "text": "\\theta^{(h)}" }, { "math_id": 9, "text": "\\epsilon" }, { "math_id": 10, "text": "\\ln{p(y|m)}" }, { "math_id": 11, "text": "F" }, { "math_id": 12, "text": "p(\\theta|y)" } ]
https://en.wikipedia.org/wiki?curid=57381799
573875
Measurement in quantum mechanics
Interaction of a quantum system with a classical observer In quantum physics, a measurement is the testing or manipulation of a physical system to yield a numerical result. A fundamental feature of quantum theory is that the predictions it makes are probabilistic. The procedure for finding a probability involves combining a quantum state, which mathematically describes a quantum system, with a mathematical representation of the measurement to be performed on that system. The formula for this calculation is known as the Born rule. For example, a quantum particle like an electron can be described by a quantum state that associates to each point in space a complex number called a probability amplitude. Applying the Born rule to these amplitudes gives the probabilities that the electron will be found in one region or another when an experiment is performed to locate it. This is the best the theory can do; it cannot say for certain where the electron will be found. The same quantum state can also be used to make a prediction of how the electron will be "moving," if an experiment is performed to measure its momentum instead of its position. The uncertainty principle implies that, whatever the quantum state, the range of predictions for the electron's position and the range of predictions for its momentum cannot both be narrow. Some quantum states imply a near-certain prediction of the result of a position measurement, but the result of a momentum measurement will be highly unpredictable, and vice versa. Furthermore, the fact that nature violates the statistical conditions known as Bell inequalities indicates that the unpredictability of quantum measurement results cannot be explained away as due to ignorance about "local hidden variables" within quantum systems. Measuring a quantum system generally changes the quantum state that describes that system. This is a central feature of quantum mechanics, one that is both mathematically intricate and conceptually subtle. The mathematical tools for making predictions about what measurement outcomes may occur, and how quantum states can change, were developed during the 20th century and make use of linear algebra and functional analysis. Quantum physics has proven to be an empirical success and to have wide-ranging applicability. However, on a more philosophical level, debates continue about the meaning of the measurement concept. Mathematical formalism. "Observables" as self-adjoint operators. In quantum mechanics, each physical system is associated with a Hilbert space, each element of which represents a possible state of the physical system. The approach codified by John von Neumann represents a measurement upon a physical system by a self-adjoint operator on that Hilbert space termed an "observable". These observables play the role of measurable quantities familiar from classical physics: position, momentum, energy, angular momentum and so on. The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. Many treatments of the theory focus on the finite-dimensional case, as the mathematics involved is somewhat less demanding. 
Indeed, introductory physics texts on quantum mechanics often gloss over mathematical technicalities that arise for continuous-valued observables and infinite-dimensional Hilbert spaces, such as the distinction between bounded and unbounded operators; questions of convergence (whether the limit of a sequence of Hilbert-space elements also belongs to the Hilbert space), exotic possibilities for sets of eigenvalues, like Cantor sets; and so forth. These issues can be satisfactorily resolved using spectral theory; the present article will avoid them whenever possible. Projective measurement. The eigenvectors of a von Neumann observable form an orthonormal basis for the Hilbert space, and each possible outcome of that measurement corresponds to one of the vectors comprising the basis. A density operator is a positive-semidefinite operator on the Hilbert space whose trace is equal to 1. For each measurement that can be defined, the probability distribution over the outcomes of that measurement can be computed from the density operator. The procedure for doing so is the Born rule, which states that formula_0 where formula_1 is the density operator, and formula_2 is the projection operator onto the basis vector corresponding to the measurement outcome formula_3. The average of the eigenvalues of a von Neumann observable, weighted by the Born rule probabilities, is the expectation value of that observable. For an observable formula_4, the expectation value given a quantum state formula_1 is formula_5 A density operator that is a rank-1 projection is known as a "pure" quantum state, and all quantum states that are not pure are designated "mixed". Pure states are also known as "wavefunctions". Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., formula_6 for some outcome formula_7). Any mixed state can be written as a convex combination of pure states, though not in a unique way. The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it. The Born rule associates a probability with each unit vector in the Hilbert space, in such a way that these probabilities sum to 1 for any set of unit vectors comprising an orthonormal basis. Moreover, the probability associated with a unit vector is a function of the density operator and the unit vector, and not of additional information like a choice of basis for that vector to be embedded in. Gleason's theorem establishes the converse: all assignments of probabilities to unit vectors (or, equivalently, to the operators that project onto them) that satisfy these conditions take the form of applying the Born rule to some density operator. Generalized measurement (POVM). In functional analysis and quantum measurement theory, a positive-operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalisation of projection-valued measures (PVMs) and, correspondingly, quantum measurements described by POVMs are a generalisation of quantum measurement described by PVMs. In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see Schrödinger–HJW theorem); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. 
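To make the Born rule concrete, the following Python sketch computes outcome probabilities and an expectation value for a single qubit. The particular density operator, the helper names and the use of NumPy are illustrative assumptions rather than anything prescribed by the formalism itself.
import numpy as np
def born_probability(rho, basis_vector):
    # Probability of the outcome associated with a basis vector: tr(rho |v><v|).
    v = basis_vector.reshape(-1, 1)
    projector = v @ v.conj().T
    return float(np.real(np.trace(rho @ projector)))
def expectation(rho, observable):
    # Expectation value of a self-adjoint observable: tr(rho A).
    return float(np.real(np.trace(rho @ observable)))
# A mixed single-qubit state (illustrative numbers): 75% |0><0| + 25% |+><+|.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket_plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = 0.75 * np.outer(ket0, ket0.conj()) + 0.25 * np.outer(ket_plus, ket_plus.conj())
ket1 = np.array([0.0, 1.0], dtype=complex)
print(born_probability(rho, ket0), born_probability(rho, ket1))  # 0.875 and 0.125, summing to 1
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
print(expectation(rho, sigma_z))                                 # probability-weighted average of +1 and -1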
POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information. In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices formula_8 on a Hilbert space formula_9 that sum to the identity matrix, formula_10 In quantum mechanics, the POVM element formula_11 is associated with the measurement outcome formula_12, such that the probability of obtaining it when making a measurement on the quantum state formula_1 is given by formula_13, where formula_14 is the trace operator. When the quantum state being measured is a pure state formula_15 this formula reduces to formula_16. State change due to measurement. A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element into a product: formula_17 The Kraus operators formula_18, named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products formula_19 are. If upon performing the measurement the outcome formula_20 is obtained, then the initial state formula_1 is updated to formula_21 An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM is itself a PVM, then the Kraus operators can be taken to be the projectors onto the eigenspaces of the von Neumann observable: formula_22 If the initial state formula_1 is pure, and the projectors formula_2 have rank 1, they can be written as projectors onto the vectors formula_15 and formula_23, respectively. The formula simplifies thus to formula_24 Lüders rule has historically been known as the "reduction of the wave packet" or the "collapse of the wavefunction". The pure state formula_23 implies a probability-one prediction for any von Neumann observable that has formula_23 as an eigenvector. Introductory texts on quantum theory often express this by saying that if a quantum measurement is repeated in quick succession, the same outcome will occur both times. This is an oversimplification, since the physical implementation of a quantum measurement may involve a process like the absorption of a photon; after the measurement, the photon does not exist to be measured again. We can define a linear, trace-preserving, completely positive map, by summing over all the possible post-measurement states of a POVM without the normalisation: formula_25 It is an example of a quantum channel, and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost. Examples. The prototypical example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. 
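A small numerical sketch can illustrate these definitions. The three-outcome "trine" POVM used below, and the choice of Kraus operators as the positive square roots of the POVM elements, are illustrative assumptions; a given POVM is compatible with many different Kraus decompositions and hence many different state-update rules.
import numpy as np
# A three-outcome "trine" POVM on a qubit: E_k = (2/3)|v_k><v_k| with three
# real unit vectors 120 degrees apart (an illustrative choice of POVM).
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
vs = [np.array([np.cos(a), np.sin(a)], dtype=complex) for a in angles]
povm = [(2.0 / 3.0) * np.outer(v, v.conj()) for v in vs]
assert np.allclose(sum(povm), np.eye(2))      # the elements sum to the identity
def probabilities(rho, elements):
    return [float(np.real(np.trace(rho @ E))) for E in elements]
def sqrtm_psd(E):
    # Square root of a positive semi-definite matrix via its eigendecomposition.
    w, V = np.linalg.eigh(E)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
def update(rho, E):
    # Post-measurement state using the Kraus choice A = sqrt(E), one valid decomposition.
    A = sqrtm_psd(E)
    new = A @ rho @ A.conj().T
    return new / np.trace(new)
psi = np.array([1.0, 0.0], dtype=complex)     # start in the pure state |0>
rho = np.outer(psi, psi.conj())
p = probabilities(rho, povm)
print(p, sum(p))                              # the probabilities sum to 1
print(update(rho, povm[1]))                   # state after observing outcome 1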
A pure state for a qubit can be written as a linear combination of two orthogonal basis states formula_26 and formula_27 with complex coefficients: formula_28 A measurement in the formula_29 basis will yield outcome formula_26 with probability formula_30 and outcome formula_27 with probability formula_31, so by normalization, formula_32 An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices, which provide a basis for formula_33 self-adjoint matrices: formula_34 where the real numbers formula_35 are the coordinates of a point within the unit ball and formula_36 POVM elements can be represented likewise, though the trace of a POVM element is not fixed to equal 1. The Pauli matrices are traceless and orthogonal to one another with respect to the Hilbert–Schmidt inner product, and so the coordinates formula_35 of the state formula_1 are the expectation values of the three von Neumann measurements defined by the Pauli matrices. If such a measurement is applied to a qubit, then by the Lüders rule, the state will update to the eigenvector of that Pauli matrix corresponding to the measurement outcome. The eigenvectors of formula_37 are the basis states formula_38 and formula_39, and a measurement of formula_37 is often called a measurement in the "computational basis." After a measurement in the computational basis, the outcome of a formula_40 or formula_41 measurement is maximally uncertain. A pair of qubits together form a system whose Hilbert space is 4-dimensional. One significant von Neumann measurement on this system is that defined by the Bell basis, a set of four maximally entangled states: formula_42 A common and useful example of quantum mechanics applied to a continuous degree of freedom is the quantum harmonic oscillator. This system is defined by the Hamiltonian formula_43 where formula_44, the momentum operator formula_45 and the position operator formula_46 are self-adjoint operators on the Hilbert space of square-integrable functions on the real line. The energy eigenstates solve the time-independent Schrödinger equation: formula_47 These eigenvalues can be shown to be given by formula_48 and these values give the possible numerical outcomes of an energy measurement upon the oscillator. The set of possible outcomes of a "position" measurement on a harmonic oscillator is continuous, and so predictions are stated in terms of a probability density function formula_49 that gives the probability of the measurement outcome lying in the infinitesimal interval from formula_7 to formula_50. History of the measurement concept. The "old quantum theory". The old quantum theory is a collection of results from the years 1900–1925 which predate modern quantum mechanics. The theory was never complete or self-consistent, but was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include Planck's calculation of the blackbody radiation spectrum, Einstein's explanation of the photoelectric effect, Einstein and Debye's work on the specific heat of solids, Bohr and van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects. The Stern–Gerlach experiment, proposed in 1921 and implemented in 1922, became a prototypical example of a quantum measurement having a discrete set of possible outcomes. 
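The Pauli decomposition of a qubit state can likewise be checked numerically. The sketch below assumes the standard Bloch-ball parametrization, in which the density matrix is half the identity plus half the coordinate-weighted sum of the Pauli matrices, so that each coordinate is recovered as the expectation value of the corresponding Pauli measurement; the specific coordinates are arbitrary illustrative values.
import numpy as np
I2 = np.eye(2, dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
def qubit_state(rx, ry, rz):
    # Density matrix (I + r . sigma)/2 for a point r inside the unit ball.
    return 0.5 * (I2 + rx * sigma_x + ry * sigma_y + rz * sigma_z)
def bloch_coordinates(rho):
    # Recover the coordinates as expectation values of the three Pauli measurements.
    return tuple(float(np.real(np.trace(rho @ s))) for s in (sigma_x, sigma_y, sigma_z))
rho = qubit_state(0.3, -0.2, 0.5)          # illustrative coordinates with |r| < 1
print(bloch_coordinates(rho))               # recovers (0.3, -0.2, 0.5)
evals = np.linalg.eigvalsh(rho)
print(evals, evals.sum())                   # eigenvalues are non-negative and sum to 1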
In the original experiment, silver atoms were sent through a spatially varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment are deflected, due to the magnetic field gradient, from a straight path. The screen reveals discrete points of accumulation, rather than a continuous distribution, owing to the particles' quantized spin. Transition to the “new” quantum theory. A 1925 paper by Heisenberg, known in English as "Quantum theoretical re-interpretation of kinematic and mechanical relations", marked a pivotal moment in the maturation of quantum physics. Heisenberg sought to develop a theory of atomic phenomena that relied only on "observable" quantities. At the time, and in contrast with the later standard presentation of quantum mechanics, Heisenberg did not regard the position of an electron bound within an atom as "observable". Instead, his principal quantities of interest were the frequencies of light emitted or absorbed by atoms. The uncertainty principle dates to this period. It is frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment where one attempts to measure an electron's position and momentum simultaneously. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position-momentum uncertainty principle is due to Kennard, Pauli, and Weyl, and its generalization to arbitrary pairs of noncommuting observables is due to Robertson and Schrödinger. Writing formula_46 and formula_45 for the self-adjoint operators representing position and momentum respectively, a standard deviation of position can be defined as formula_51 and likewise for the momentum: formula_52 The Kennard–Pauli–Weyl uncertainty relation is formula_53 This inequality means that no preparation of a quantum particle can imply simultaneously precise predictions for a measurement of position and for a measurement of momentum. The Robertson inequality generalizes this to the case of an arbitrary pair of self-adjoint operators formula_4 and formula_54. The commutator of these two operators is formula_55 and this provides the lower bound on the product of standard deviations: formula_56 Substituting in the canonical commutation relation formula_57, an expression first postulated by Max Born in 1925, recovers the Kennard–Pauli–Weyl statement of the uncertainty principle. From uncertainty to no-hidden-variables. The existence of the uncertainty principle naturally raises the question of whether quantum mechanics can be understood as an approximation to a more exact theory. Do there exist "hidden variables", more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory can provide? A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. Bell published the theorem now known by his name in 1964, investigating more deeply a thought experiment originally proposed in 1935 by Einstein, Podolsky and Rosen. According to Bell's theorem, if nature actually operates in accord with any theory of "local" hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. 
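The Robertson inequality is easy to verify numerically for a concrete case. The sketch below takes the pair of Pauli observables sigma_x and sigma_y on an arbitrarily chosen qubit state; the state and the choice of observables are illustrative assumptions, and the commutator bound is computed directly from its definition.
import numpy as np
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
def std_dev(psi, A):
    # Standard deviation of observable A in the pure state |psi>.
    exp_A = np.vdot(psi, A @ psi).real
    exp_A2 = np.vdot(psi, A @ A @ psi).real
    return np.sqrt(max(exp_A2 - exp_A ** 2, 0.0))
def robertson_bound(psi, A, B):
    # Right-hand side |<[A, B]>| / 2 of the Robertson inequality.
    comm = A @ B - B @ A
    return 0.5 * abs(np.vdot(psi, comm @ psi))
# An arbitrary normalized qubit state, chosen only for illustration.
psi = np.array([np.cos(0.3), np.exp(0.4j) * np.sin(0.3)], dtype=complex)
lhs = std_dev(psi, sigma_x) * std_dev(psi, sigma_y)
rhs = robertson_bound(psi, sigma_x, sigma_y)
print(lhs, rhs, lhs >= rhs - 1e-12)   # the product of spreads respects the lower bound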
If a Bell test is performed in a laboratory and the results are "not" thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist. Such results would support the position that there is no way to explain the phenomena of quantum mechanics in terms of a more fundamental description of nature that is more in line with the rules of classical physics. Many types of Bell test have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". To date, Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave. Quantum systems as measuring devices. The Robertson–Schrödinger uncertainty principle establishes that when two observables do not commute, there is a tradeoff in predictability between them. The Wigner–Araki–Yanase theorem demonstrates another consequence of non-commutativity: the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured. Further investigation in this line led to the formulation of the Wigner–Yanase skew information. Historically, experiments in quantum physics have often been described in semiclassical terms. For example, the spin of an atom in a Stern–Gerlach experiment might be treated as a quantum degree of freedom, while the atom is regarded as moving through a magnetic field described by the classical theory of Maxwell's equations. But the devices used to build the experimental apparatus are themselves physical systems, and so quantum mechanics should be applicable to them as well. Beginning in the 1950s, Rosenfeld, von Weizsäcker and others tried to develop consistency conditions that expressed when a quantum-mechanical system could be treated as a measuring apparatus. One proposal for a criterion regarding when a system used as part of a measuring device can be modeled semiclassically relies on the Wigner function, a quasiprobability distribution that can be treated as a probability distribution on phase space in those cases where it is everywhere non-negative. Decoherence. A quantum state for an imperfectly isolated system will generally evolve to be entangled with the quantum state for the environment. Consequently, even if the system's initial state is pure, the state at a later time, found by taking the partial trace of the joint system-environment state, will be mixed. This phenomenon of entanglement produced by system-environment interactions tends to obscure the more exotic features of quantum mechanics that the system could in principle manifest. Quantum decoherence, as this effect is known, was first studied in detail during the 1970s. (Earlier investigations into how classical physics might be obtained as a limit of quantum mechanics had explored the subject of imperfectly isolated systems, but the role of entanglement was not fully appreciated.) A significant portion of the effort involved in quantum computing is to avoid the deleterious effects of decoherence. To illustrate, let formula_58 denote the initial state of the system, formula_59 the initial state of the environment and formula_60 the Hamiltonian specifying the system-environment interaction. 
The density operator formula_59 can be diagonalized and written as a linear combination of the projectors onto its eigenvectors: formula_61 Expressing time evolution for a duration formula_62 by the unitary operator formula_63, the state for the system after this evolution is formula_64 which evaluates to formula_65 The quantities surrounding formula_58 can be identified as Kraus operators, and so this defines a quantum channel. Specifying a form of interaction between system and environment can establish a set of "pointer states," states for the system that are (approximately) stable, apart from overall phase factors, with respect to environmental fluctuations. A set of pointer states defines a preferred orthonormal basis for the system's Hilbert space. Quantum information and computation. Quantum information science studies how information science and its application as technology depend on quantum-mechanical phenomena. Understanding measurement in quantum physics is important for this field in many ways, some of which are briefly surveyed here. Measurement, entropy, and distinguishability. The von Neumann entropy is a measure of the statistical uncertainty represented by a quantum state. For a density matrix formula_1, the von Neumann entropy is formula_66 writing formula_1 in terms of its basis of eigenvectors, formula_67 the von Neumann entropy is formula_68 This is the Shannon entropy of the set of eigenvalues interpreted as a probability distribution, and so the von Neumann entropy is the Shannon entropy of the random variable defined by measuring in the eigenbasis of formula_1. Consequently, the von Neumann entropy vanishes when formula_1 is pure. The von Neumann entropy of formula_1 can equivalently be characterized as the minimum Shannon entropy for a measurement given the quantum state formula_1, with the minimization over all POVMs with rank-1 elements. Many other quantities used in quantum information theory also find motivation and justification in terms of measurements. For example, the trace distance between quantum states is equal to the largest "difference in probability" that those two quantum states can imply for a measurement outcome: formula_69 Similarly, the fidelity of two quantum states, defined by formula_70 expresses the probability that one state will pass a test for identifying a successful preparation of the other. The trace distance provides bounds on the fidelity via the Fuchs–van de Graaf inequalities: formula_71 Quantum circuits. Quantum circuits are a model for quantum computation in which a computation is a sequence of quantum gates followed by measurements. The gates are reversible transformations on a quantum mechanical analog of an "n"-bit register. This analogous structure is referred to as an "n"-qubit register. Measurements, drawn on a circuit diagram as stylized pointer dials, indicate where and how a result is obtained from the quantum computer after the steps of the computation are executed. Without loss of generality, one can work with the standard circuit model, in which the set of gates are single-qubit unitary transformations and controlled NOT gates on pairs of qubits, and all measurements are in the computational basis. Measurement-based quantum computation. Measurement-based quantum computation (MBQC) is a model of quantum computing in which the answer to a question is, informally speaking, created in the act of measuring the physical system that serves as the computer. Quantum tomography. 
Quantum state tomography is a process by which, given a set of data representing the results of quantum measurements, a quantum state consistent with those measurement results is computed. It is named by analogy with tomography, the reconstruction of three-dimensional images from slices taken through them, as in a CT scan. Tomography of quantum states can be extended to tomography of quantum channels and even of measurements. Quantum metrology. Quantum metrology is the use of quantum physics to aid the measurement of quantities that, generally, had meaning in classical physics, such as exploiting quantum effects to increase the precision with which a length can be measured. A celebrated example is the introduction of squeezed light into the LIGO experiment, which increased its sensitivity to gravitational waves. Laboratory implementations. The range of physical procedures to which the mathematics of quantum measurement can be applied is very broad. In the early years of the subject, laboratory procedures involved the recording of spectral lines, the darkening of photographic film, the observation of scintillations, finding tracks in cloud chambers, and hearing clicks from Geiger counters. Language from this era persists, such as the description of measurement outcomes in the abstract as "detector clicks". The double-slit experiment is a prototypical illustration of quantum interference, typically described using electrons or photons. The first interference experiment to be carried out in a regime where both wave-like and particle-like aspects of photon behavior are significant was G. I. Taylor's test in 1909. Taylor used screens of smoked glass to attenuate the light passing through his apparatus, to the extent that, in modern language, only one photon would be illuminating the interferometer slits at a time. He recorded the interference patterns on photographic plates; for the dimmest light, the exposure time required was roughly three months. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi implemented the double-slit experiment using single electrons and a television tube. A quarter-century later, a team at the University of Vienna performed an interference experiment with buckyballs, in which the buckyballs that passed through the interferometer were ionized by a laser, and the ions then induced the emission of electrons, emissions which were in turn amplified and detected by an electron multiplier. Modern quantum optics experiments can employ single-photon detectors. For example, in the "BIG Bell test" of 2018, several of the laboratory setups used single-photon avalanche diodes. Another laboratory setup used superconducting qubits. The standard method for performing measurements upon superconducting qubits is to couple a qubit with a resonator in such a way that the characteristic frequency of the resonator shifts according to the state for the qubit, and detecting this shift by observing how the resonator reacts to a probe signal. Interpretations of quantum mechanics. Despite the consensus among scientists that quantum physics is in practice a successful theory, disagreements persist on a more philosophical level. Many debates in the area known as quantum foundations concern the role of measurement in quantum mechanics. 
Recurring questions include which interpretation of probability theory is best suited for the probabilities calculated from the Born rule; and whether the apparent randomness of quantum measurement outcomes is fundamental, or a consequence of a deeper deterministic process. Worldviews that present answers to questions like these are known as "interpretations" of quantum mechanics; as the physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear." A central concern within quantum foundations is the "quantum measurement problem," though how this problem is delimited, and whether it should be counted as one question or multiple separate issues, are contested topics. Of primary interest is the seeming disparity between apparently distinct types of time evolution. Von Neumann declared that quantum mechanics contains "two fundamentally different types" of quantum-state change. First, there are those changes involving a measurement process, and second, there is unitary time evolution in the absence of measurement. The former is stochastic and discontinuous, writes von Neumann, and the latter deterministic and continuous. This dichotomy has set the tone for much later debate. Some interpretations of quantum mechanics find the reliance upon two different types of time evolution distasteful and regard the ambiguity of when to invoke one or the other as a deficiency of the way quantum theory was historically presented. To bolster these interpretations, their proponents have worked to derive ways of regarding "measurement" as a secondary concept and deducing the seemingly stochastic effect of measurement processes as approximations to more fundamental deterministic dynamics. However, consensus has not been achieved among proponents of the correct way to implement this program, and in particular how to justify the use of the Born rule to calculate probabilities. Other interpretations regard quantum states as statistical information about quantum systems, thus asserting that abrupt and discontinuous changes of quantum states are not problematic, simply reflecting updates of the available information. Of this line of thought, Bell asked, ""Whose" information? Information about "what"?" Answers to these questions vary among proponents of the informationally-oriented interpretations. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P(x_i) = \\operatorname{tr}(\\Pi_i \\rho)," }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "\\Pi_i" }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": " \\langle A \\rangle = \\operatorname{tr} (A\\rho)." }, { "math_id": 6, "text": "P(x) = 1" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "\\{F_i\\} " }, { "math_id": 9, "text": " \\mathcal{H} " }, { "math_id": 10, "text": "\\sum_{i=1}^n F_i = \\operatorname{I}." }, { "math_id": 11, "text": "F_i" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "\\text{Prob}(i) = \\operatorname{tr}(\\rho F_i) " }, { "math_id": 14, "text": "\\operatorname{tr}" }, { "math_id": 15, "text": "|\\psi\\rangle" }, { "math_id": 16, "text": "\\text{Prob}(i) = \\operatorname{tr}(|\\psi\\rangle\\langle\\psi| F_i) = \\langle\\psi|F_i|\\psi\\rangle" }, { "math_id": 17, "text": "E_i = A^\\dagger_{i} A_{i}." }, { "math_id": 18, "text": "A_{i}" }, { "math_id": 19, "text": "A^\\dagger_{i} A_{i}" }, { "math_id": 20, "text": "E_i" }, { "math_id": 21, "text": "\\rho \\to \\rho' = \\frac{A_{i} \\rho A^\\dagger_{i}}{\\mathrm{Prob}(i)} = \\frac{A_{i} \\rho A^\\dagger_{i}}{\\operatorname{tr} (\\rho E_i)}." }, { "math_id": 22, "text": "\\rho \\to \\rho' = \\frac{\\Pi_i \\rho \\Pi_i}{\\operatorname{tr} (\\rho \\Pi_i)}." }, { "math_id": 23, "text": "|i\\rangle" }, { "math_id": 24, "text": "\\rho = |\\psi\\rangle\\langle\\psi| \\to \\rho' = \\frac{|i\\rangle\\langle i | \\psi\\rangle\\langle\\psi | i \\rangle\\langle i|}{|\\langle i |\\psi \\rangle|^2} = |i\\rangle\\langle i|." }, { "math_id": 25, "text": "\\rho \\to \\sum_i A_i \\rho A^\\dagger_i." }, { "math_id": 26, "text": "|0 \\rangle " }, { "math_id": 27, "text": "|1 \\rangle " }, { "math_id": 28, "text": "| \\psi \\rangle = \\alpha |0 \\rangle + \\beta |1 \\rangle " }, { "math_id": 29, "text": "(|0\\rangle, |1\\rangle)" }, { "math_id": 30, "text": "| \\alpha |^2" }, { "math_id": 31, "text": "| \\beta |^2" }, { "math_id": 32, "text": "| \\alpha |^2 + | \\beta |^2 = 1." }, { "math_id": 33, "text": "2 \\times 2" }, { "math_id": 34, "text": "\\rho = \\tfrac{1}{2}\\left(I + r_x \\sigma_x + r_y \\sigma_y + r_z \\sigma_z\\right)," }, { "math_id": 35, "text": "(r_x, r_y, r_z)" }, { "math_id": 36, "text": "\n \\sigma_x =\n \\begin{pmatrix}\n 0&1\\\\\n 1&0\n \\end{pmatrix}, \\quad\n \\sigma_y =\n \\begin{pmatrix}\n 0&-i\\\\\n i&0\n \\end{pmatrix}, \\quad\n \\sigma_z =\n \\begin{pmatrix}\n 1&0\\\\\n 0&-1\n \\end{pmatrix} ." }, { "math_id": 37, "text": "\\sigma_z" }, { "math_id": 38, "text": "|0\\rangle" }, { "math_id": 39, "text": "|1\\rangle" }, { "math_id": 40, "text": "\\sigma_x" }, { "math_id": 41, "text": "\\sigma_y" }, { "math_id": 42, "text": "\\begin{align}\n|\\Phi^+\\rangle &= \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |0\\rangle_B + |1\\rangle_A \\otimes |1\\rangle_B) \\\\\n|\\Phi^-\\rangle &= \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |0\\rangle_B - |1\\rangle_A \\otimes |1\\rangle_B) \\\\\n|\\Psi^+\\rangle &= \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |1\\rangle_B + |1\\rangle_A \\otimes |0\\rangle_B) \\\\\n|\\Psi^-\\rangle &= \\frac{1}{\\sqrt{2}} (|0\\rangle_A \\otimes |1\\rangle_B - |1\\rangle_A \\otimes |0\\rangle_B)\n\\end{align}" }, { "math_id": 43, "text": "{H} = \\frac{{p}^2}{2m} + \\frac{1}{2}m\\omega^2 {x}^2," }, { "math_id": 44, "text": "{H}" }, { "math_id": 45, "text": "{p}" }, { "math_id": 46, "text": "{x}" }, { "math_id": 47, "text": "{H} |n\\rangle = E_n |n\\rangle." 
}, { "math_id": 48, "text": "E_n = \\hbar\\omega\\left(n + \\tfrac{1}{2}\\right)," }, { "math_id": 49, "text": "P(x)" }, { "math_id": 50, "text": "x + dx" }, { "math_id": 51, "text": "\\sigma_x=\\sqrt{\\langle {x}^2 \\rangle-\\langle {x}\\rangle^2}," }, { "math_id": 52, "text": "\\sigma_p=\\sqrt{\\langle {p}^2 \\rangle-\\langle {p}\\rangle^2}." }, { "math_id": 53, "text": "\\sigma_x \\sigma_p \\geq \\frac{\\hbar}{2}." }, { "math_id": 54, "text": "B" }, { "math_id": 55, "text": "[A,B]=AB-BA," }, { "math_id": 56, "text": "\\sigma_A \\sigma_B \\geq \\left| \\frac{1}{2i}\\langle[A,B]\\rangle \\right| = \\frac{1}{2}\\left|\\langle[A,B]\\rangle \\right|." }, { "math_id": 57, "text": "[{x},{p}] = i\\hbar" }, { "math_id": 58, "text": "\\rho_S" }, { "math_id": 59, "text": "\\rho_E" }, { "math_id": 60, "text": "H" }, { "math_id": 61, "text": "\\rho_E = \\sum_i p_i |\\psi_i\\rangle\\langle \\psi_i|." }, { "math_id": 62, "text": "t" }, { "math_id": 63, "text": "U = e^{-iHt/\\hbar}" }, { "math_id": 64, "text": "\\rho_S' = {\\rm tr}_E U \\left[\\rho_S \\otimes \\left(\\sum_i p_i |\\psi_i\\rangle\\langle \\psi_i|\\right)\\right] U^\\dagger," }, { "math_id": 65, "text": "\\rho_S' = \\sum_{ij} \\sqrt{p_i} \\langle \\psi_j | U | \\psi_i \\rangle \\rho_S \\sqrt{p_i}\\langle \\psi_i | U^\\dagger | \\psi_j \\rangle." }, { "math_id": 66, "text": "S(\\rho) = -{\\rm tr}(\\rho \\log \\rho);" }, { "math_id": 67, "text": "\\rho = \\sum_i \\lambda_i |i\\rangle\\langle i|," }, { "math_id": 68, "text": "S(\\rho) = -\\sum_i \\lambda_i \\log \\lambda_i." }, { "math_id": 69, "text": "\\frac{1}{2}||\\rho-\\sigma|| = \\max_{0\\leq E \\leq I} [{\\rm tr}(E \\rho) - {\\rm tr}(E \\sigma)]." }, { "math_id": 70, "text": "F(\\rho, \\sigma) = \\left(\\operatorname{Tr} \\sqrt{\\sqrt{\\rho} \\sigma \\sqrt{\\rho}}\\right)^2," }, { "math_id": 71, "text": "1 - \\sqrt{F(\\rho,\\sigma)} \\leq \\frac{1}{2}||\\rho-\\sigma|| \\leq \\sqrt{1 - F(\\rho,\\sigma)}." } ]
https://en.wikipedia.org/wiki?curid=573875
57389784
Presheaf with transfers
In algebraic geometry, a presheaf with transfers is, roughly, a presheaf that, like cohomology theory, comes with pushforwards, “transfer” maps. Precisely, it is, by definition, a contravariant additive functor from the category of finite correspondences (defined below) to the category of abelian groups (in category theory, “presheaf” is another term for a contravariant functor). When a presheaf "F" with transfers is restricted to the subcategory of smooth separated schemes, it can be viewed as a presheaf on the category with "extra" maps formula_0, not coming from morphisms of schemes but also from finite correspondences from "X" to "Y" A presheaf "F" with transfers is said to be formula_1-homotopy invariant if formula_2 for every "X". For example, Chow groups as well as motivic cohomology groups form presheaves with transfers. Finite correspondence. Let formula_3 be algebraic schemes (i.e., separated and of finite type over a field) and suppose formula_4 is smooth. Then an elementary correspondence is an irreducible closed subscheme formula_5, formula_6 some connected component of "X", such that the projection formula_7 is finite and surjective. Let formula_8 be the free abelian group generated by elementary correspondences from "X" to "Y"; elements of formula_8 are then called finite correspondences. The category of finite correspondences, denoted by formula_9, is the category where the objects are smooth algebraic schemes over a field; where a Hom set is given as: formula_10 and where the composition is defined as in intersection theory: given elementary correspondences formula_11 from formula_4 to formula_12 and formula_13 from formula_12 to formula_14, their composition is: formula_15 where formula_16 denotes the intersection product and formula_17, etc. Note that the category formula_9 is an additive category since each Hom set formula_8 is an abelian group. This category contains the category formula_18 of smooth algebraic schemes as a subcategory in the following sense: there is a faithful functor formula_19 that sends an object to itself and a morphism formula_20 to the graph of formula_21. With the product of schemes taken as the monoid operation, the category formula_9 is a symmetric monoidal category. Sheaves with transfers. The basic notion underlying all of the different theories are presheaves with transfers. These are contravariant additive functorsformula_22and their associated category is typically denoted formula_23, or just formula_24 if the underlying field is understood. Each of the categories in this section are abelian categories, hence they are suitable for doing homological algebra. Etale sheaves with transfers. These are defined as presheaves with transfers such that the restriction to any scheme formula_4 is an etale sheaf. That is, if formula_25 is an etale cover, and formula_26 is a presheaf with transfers, it is an Etale sheaf with transfers if the sequenceformula_27is exact and there is an isomorphismformula_28for any fixed smooth schemes formula_29. Nisnevich sheaves with transfers. There is a similar definition for Nisnevich sheaf with transfers, where the Etale topology is switched with the Nisnevich topology. Examples. Units. The sheaf of units formula_30 is a presheaf with transfers. Any correspondence formula_31 induces a finite map of degree formula_32 over formula_4, hence there is the induced morphismformula_33showing it is a presheaf with transfers. Representable functors. 
One of the basic examples of presheaves with transfers are given by representable functors. Given a smooth scheme formula_4 there is a presheaf with transfers formula_34 sending formula_35. Representable functor associated to a point. The associated presheaf with transfers of formula_36 is denoted formula_37. Pointed schemes. Another class of elementary examples comes from pointed schemes formula_38 with formula_39. This morphism induces a morphism formula_40 whose cokernel is denoted formula_41. There is a splitting coming from the structure morphism formula_42, so there is an induced map formula_43, hence formula_44. Representable functor associated to A1-0. There is a representable functor associated to the pointed scheme formula_45 denoted formula_46. Smash product of pointed schemes. Given a finite family of pointed schemes formula_47 there is an associated presheaf with transfers formula_48, also denoted formula_49 from their Smash product. This is defined as the cokernel offormula_50For example, given two pointed schemes formula_51, there is the associated presheaf with transfers formula_52 equal to the cokernel offormula_53This is analogous to the smash product in topology since formula_54 where the equivalence relation mods out formula_55. Wedge of single space. A finite wedge of a pointed space formula_38 is denoted formula_56. One example of this construction is formula_57, which is used in the definition of the motivic complexes formula_58 used in Motivic cohomology. Homotopy invariant sheaves. A presheaf with transfers formula_26 is homotopy invariant if the projection morphism formula_59 induces an isomorphism formula_60 for every smooth scheme formula_4. There is a construction associating a homotopy invariant sheaf for every presheaf with transfers formula_26 using an analogue of simplicial homology. Simplicial homology. There is a schemeformula_61giving a cosimplicial scheme formula_62, where the morphisms formula_63 are given by formula_64. That is,formula_65gives the induced morphism formula_66. Then, to a presheaf with transfers formula_26, there is an associated complex of presheaves with transfers formula_67 sendingformula_68and has the induced chain morphismsformula_69giving a complex of presheaves with transfers. The homology invaritant presheaves with transfers formula_70 are homotopy invariant. In particular, formula_71 is the universal homotopy invariant presheaf with transfers associated to formula_26. Relation with Chow group of zero cycles. Denote formula_72. There is an induced surjection formula_73 which is an isomorphism for formula_4 projective. Zeroth homology of Ztr(X). The zeroth homology of formula_74 is formula_75 where homotopy equivalence is given as follows. Two finite correspondences formula_76 are formula_1-homotopy equivalent if there is a morphism formula_77 such that formula_78 and formula_79. Motivic complexes. For Voevodsky's category of mixed motives, the motive formula_80 associated to formula_4, is the class of formula_81 in formula_82. One of the elementary motivic complexes are formula_58 for formula_83, defined by the class offormula_84For an abelian group formula_85, such as formula_86, there is a motivic complex formula_87. These give the motivic cohomology groups defined byformula_88since the motivic complexes formula_58 restrict to a complex of Zariksi sheaves of formula_4. These are called the formula_89-th motivic cohomology groups of weight formula_90. 
They can also be extended to any abelian group formula_85, formula_91 giving motivic cohomology with coefficients in formula_85 of weight formula_90. Special cases. There are a few special cases which can be analyzed explicitly, namely when formula_92. These results can be found in the fourth lecture of the Clay Math book. Z(0). In this case, formula_93 which is quasi-isomorphic to formula_37 (top of page 17), hence the weight formula_94 cohomology groups are isomorphic to formula_95 where formula_96. Z(1). This case requires more work, but the end result is a quasi-isomorphism between formula_97 and formula_98. This gives the two motivic cohomology groups formula_99 where the middle cohomology groups are Zariski cohomology. General case: Z(n). In general, over a perfect field formula_100, there is a nice description of formula_101 in terms of presheaves with transfers formula_102. There is a quasi-isomorphism formula_103 hence formula_104 which is found using splitting techniques along with a series of quasi-isomorphisms. The details are in lecture 15 of the Clay Math book. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "F(Y) \\to F(X)" }, { "math_id": 1, "text": "\\mathbb{A}^1" }, { "math_id": 2, "text": "F(X) \\simeq F(X \\times \\mathbb{A}^1)" }, { "math_id": 3, "text": "X, Y" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "W \\subset X_i \\times Y" }, { "math_id": 6, "text": "X_i" }, { "math_id": 7, "text": "\\operatorname{Supp}(W) \\to X_i" }, { "math_id": 8, "text": "\\operatorname{Cor}(X, Y)" }, { "math_id": 9, "text": "Cor" }, { "math_id": 10, "text": "\\operatorname{Hom}(X, Y) = \\operatorname{Cor}(X, Y)" }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "Y" }, { "math_id": 13, "text": "\\beta" }, { "math_id": 14, "text": "Z" }, { "math_id": 15, "text": "\\beta \\circ \\alpha = p_{{13}, *}(p^*_{12} \\alpha \\cdot p^*_{23} \\beta)" }, { "math_id": 16, "text": "\\cdot" }, { "math_id": 17, "text": "p_{12}: X \\times Y \\times Z \\to X \\times Y" }, { "math_id": 18, "text": "\\textbf{Sm}" }, { "math_id": 19, "text": "\\textbf{Sm} \\to Cor" }, { "math_id": 20, "text": "f: X \\to Y" }, { "math_id": 21, "text": "f" }, { "math_id": 22, "text": "F:\\text{Cor}_k \\to \\text{Ab}" }, { "math_id": 23, "text": "\\mathbf{PST}(k)" }, { "math_id": 24, "text": "\\mathbf{PST}" }, { "math_id": 25, "text": "U \\to X" }, { "math_id": 26, "text": "F" }, { "math_id": 27, "text": "0 \\to F(X) \\xrightarrow{\\text{diag}} F(U) \\xrightarrow{(+,-)} F(U\\times_XU)" }, { "math_id": 28, "text": "F(X\\coprod Y) = F(X)\\oplus F(Y)" }, { "math_id": 29, "text": "X,Y" }, { "math_id": 30, "text": "\\mathcal{O}^*" }, { "math_id": 31, "text": "W \\subset X \\times Y" }, { "math_id": 32, "text": "N" }, { "math_id": 33, "text": "\\mathcal{O}^*(Y) \\to \\mathcal{O}^*(W) \\xrightarrow{N} \\mathcal{O}^*(X)" }, { "math_id": 34, "text": "\\mathbb{Z}_{tr}(X)" }, { "math_id": 35, "text": "U \\mapsto \\text{Hom}_{Cor}(U,X)" }, { "math_id": 36, "text": "\\text{Spec}(k)" }, { "math_id": 37, "text": "\\mathbb{Z}" }, { "math_id": 38, "text": "(X,x)" }, { "math_id": 39, "text": "x: \\text{Spec}(k) \\to X" }, { "math_id": 40, "text": "x_*:\\mathbb{Z} \\to \\mathbb{Z}_{tr}(X)" }, { "math_id": 41, "text": "\\mathbb{Z}_{tr}(X,x)" }, { "math_id": 42, "text": "X \\to \\text{Spec}(k)" }, { "math_id": 43, "text": "\\mathbb{Z}_{tr}(X) \\to \\mathbb{Z}" }, { "math_id": 44, "text": "\\mathbb{Z}_{tr}(X) \\cong \\mathbb{Z}\\oplus\\mathbb{Z}_{tr}(X,x)" }, { "math_id": 45, "text": "\\mathbb{G}_m = (\\mathbb{A}^1-\\{0\\},1)" }, { "math_id": 46, "text": "\\mathbb{Z}_{tr}(\\mathbb{G}_m)" }, { "math_id": 47, "text": "(X_i, x_i)" }, { "math_id": 48, "text": "\\mathbb{Z}_{tr}((X_1,x_1)\\wedge\\cdots\\wedge(X_n,x_n))" }, { "math_id": 49, "text": "\\mathbb{Z}_{tr}(X_1\\wedge\\cdots\\wedge X_n)" }, { "math_id": 50, "text": "\\text{coker}\\left( \\bigoplus_i \\mathbb{Z}_{tr}(X_1\\times \\cdots \\times \\hat{X}_i \\times \\cdots \\times X_n) \\xrightarrow{id\\times \\cdots \\times x_i \\times \\cdots \\times id} \\mathbb{Z}_{tr}(X_1\\times\\cdots\\times X_n) \\right)" }, { "math_id": 51, "text": "(X,x),(Y,y)" }, { "math_id": 52, "text": "\\mathbb{Z}_{tr}(X\\wedge Y)" }, { "math_id": 53, "text": "\\mathbb{Z}_{tr}(X)\\oplus \\mathbb{Z}_{tr}(Y) \\xrightarrow{ \\begin{bmatrix}1\\times y & x\\times 1 \\end{bmatrix}} \\mathbb{Z}_{tr}(X\\times Y)" }, { "math_id": 54, "text": "X\\wedge Y = (X \\times Y) / (X \\vee Y)" }, { "math_id": 55, "text": "X\\times \\{y\\} \\cup \\{x\\}\\times Y" }, { "math_id": 56, "text": "\\mathbb{Z}_{tr}(X^{\\wedge q}) = \\mathbb{Z}_{tr}(X\\wedge \\cdots \\wedge X)" }, { "math_id": 57, "text": 
"\\mathbb{Z}_{tr}(\\mathbb{G}_m^{\\wedge q})" }, { "math_id": 58, "text": "\\mathbb{Z}(q)" }, { "math_id": 59, "text": "p:X\\times\\mathbb{A}^1 \\to X" }, { "math_id": 60, "text": "p^*:F(X) \\to F(X\\times \\mathbb{A}^1)" }, { "math_id": 61, "text": "\\Delta^n = \\text{Spec}\\left( \\frac{k[x_0,\\ldots,x_n]}{\\sum_{0 \\leq i \\leq n} x_i - 1} \\right)" }, { "math_id": 62, "text": "\\Delta^*" }, { "math_id": 63, "text": "\\partial_j:\\Delta^n \\to \\Delta^{n+1}" }, { "math_id": 64, "text": "x_j = 0" }, { "math_id": 65, "text": "\\frac{k[x_0,\\ldots,x_{n+1}]}{(\\sum_{0 \\leq i \\leq n} x_i - 1)} \\to \\frac{k[x_0,\\ldots,x_{n+1}]}{(\\sum_{0 \\leq i \\leq n} x_i - 1, x_j)} " }, { "math_id": 66, "text": "\\partial_j" }, { "math_id": 67, "text": "C_*F" }, { "math_id": 68, "text": "C_iF: U \\mapsto F(U \\times \\Delta^i)" }, { "math_id": 69, "text": "\\sum_{i=0}^j (-1)^i \\partial_i^*: C_jF \\to C_{j-1}F" }, { "math_id": 70, "text": "H_i(C_*F)" }, { "math_id": 71, "text": "H_0(C_*F)" }, { "math_id": 72, "text": "H_0^{sing}(X/k) := H_0(C_*\\mathbb{Z}_{tr}(X))(\\text{Spec}(k))" }, { "math_id": 73, "text": "H_0^{sing}(X/k) \\to \\text{CH}_0(X)" }, { "math_id": 74, "text": "H_0(C_*\\mathbb{Z}_{tr}(Y))(X) " }, { "math_id": 75, "text": "\\text{Hom}_{Cor}(X,Y)/\\mathbb{A}^1 \\text{ homotopy}" }, { "math_id": 76, "text": "f,g:X \\to Y" }, { "math_id": 77, "text": "h:X\\times\\mathbb{A}^1 \\to X" }, { "math_id": 78, "text": "h|_{X\\times 0} = f" }, { "math_id": 79, "text": "h|_{X \\times 1} = g" }, { "math_id": 80, "text": "M(X)" }, { "math_id": 81, "text": "C_*\\mathbb{Z}_{tr}(X)" }, { "math_id": 82, "text": "DM_{Nis}^{eff,-}(k,R)" }, { "math_id": 83, "text": "q \\geq 1" }, { "math_id": 84, "text": "\\mathbb{Z}(q) = C_*\\mathbb{Z}_{tr}(\\mathbb{G}_m^{\\wedge q})[-q]" }, { "math_id": 85, "text": "A" }, { "math_id": 86, "text": "\\mathbb{Z}/\\ell" }, { "math_id": 87, "text": "A(q) = \\mathbb{Z}(q) \\otimes A" }, { "math_id": 88, "text": "H^{p,q}(X,\\mathbb{Z}) = \\mathbb{H}_{Zar}^p(X,\\mathbb{Z}(q))" }, { "math_id": 89, "text": "p" }, { "math_id": 90, "text": "q" }, { "math_id": 91, "text": "H^{p,q}(X,A) = \\mathbb{H}_{Zar}^p(X,A(q))" }, { "math_id": 92, "text": "q = 0,1" }, { "math_id": 93, "text": "\\mathbb{Z}(0) \\cong \\mathbb{Z}_{tr}(\\mathbb{G}_m^{\\wedge 0})" }, { "math_id": 94, "text": "0" }, { "math_id": 95, "text": "H^{p,0}(X,\\mathbb{Z}) = \\begin{cases}\n\\mathbb{Z}(X) & \\text{if } p = 0 \\\\\n0 & \\text{otherwise}\n\\end{cases}" }, { "math_id": 96, "text": "\\mathbb{Z}(X) = \\text{Hom}_{Cor}(X,\\text{Spec}(k))" }, { "math_id": 97, "text": "\\mathbb{Z}(1)" }, { "math_id": 98, "text": "\\mathcal{O}^*[-1]" }, { "math_id": 99, "text": "\\begin{align}\nH^{1,1}(X,\\mathbb{Z}) &= H^0_{Zar}(X,\\mathcal{O}^*) = \\mathcal{O}^*(X) \\\\\nH^{2,1}(X,\\mathbb{Z}) &= H^1_{Zar}(X,\\mathcal{O}^*) = \\text{Pic}(X)\n\\end{align}" }, { "math_id": 100, "text": "k" }, { "math_id": 101, "text": "\\mathbb{Z}(n)" }, { "math_id": 102, "text": "\\mathbb{Z}_{tr}(\\mathbb{P}^n)" }, { "math_id": 103, "text": "C_*(\\mathbb{Z}_{tr}(\\mathbb{P}^n) / \\mathbb{Z}_{tr}(\\mathbb{P}^{n-1})) \\simeq \nC_*\\mathbb{Z}_{tr}(\\mathbb{G}_m^{\\wedge q})[n] " }, { "math_id": 104, "text": "\\mathbb{Z}(n) \\simeq C_{*}(\\mathbb {Z} _{tr}(\\mathbb {P} ^{n})/\\mathbb {Z} _{tr}(\\mathbb {P} ^{n-1}))[-2n] " } ]
https://en.wikipedia.org/wiki?curid=57389784
5739241
Cotorsion group
In abelian group theory, an abelian group is said to be cotorsion if every extension of it by a torsion-free group splits. If the group is formula_0, this says that formula_1 for all torsion-free groups formula_2. It suffices to check the condition for formula_2 the group of rational numbers. More generally, a module "M" over a ring "R" is said to be a cotorsion module if Ext1("F","M")=0 for all flat modules "F". This is equivalent to the definition for abelian groups (considered as modules over the ring Z of integers) because over Z flat modules are the same as torsion-free modules. Some properties of cotorsion groups: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "Ext(F,M) = 0" }, { "math_id": 2, "text": "F" } ]
https://en.wikipedia.org/wiki?curid=5739241
5739585
Pure subgroup
In mathematics, especially in the area of algebra studying the theory of abelian groups, a pure subgroup is a generalization of direct summand. It has found many uses in abelian group theory and related areas. Definition. A subgroup formula_0 of a (typically abelian) group formula_1 is said to be pure if whenever an element of formula_0 has an formula_2 root in formula_1, it necessarily has an formula_2 root in formula_0. Formally: formula_3, the existence of an formula_4 in G such that formula_5 the existence of a formula_6 in S such that formula_7. Origins. Pure subgroups are also called isolated subgroups or serving subgroups and were first investigated in Prüfer's 1923 paper, which described conditions for the decomposition of primary abelian groups as direct sums of cyclic groups using pure subgroups. The work of Prüfer was complemented by Kulikoff where many results were proved again using pure subgroups systematically. In particular, a proof was given that pure subgroups of finite exponent are direct summands. A more complete discussion of pure subgroups, their relation to infinite abelian group theory, and a survey of their literature is given in Irving Kaplansky's little red book. Examples. Since in a finitely generated abelian group the torsion subgroup is a direct summand, one might ask if the torsion subgroup is always a direct summand of an abelian group. It turns out that it is not always a summand, but it "is" a pure subgroup. Under certain mild conditions, pure subgroups are direct summands. So, one can still recover the desired result under those conditions, as in Kulikoff's paper. Pure subgroups can be used as an intermediate property between a result on direct summands with finiteness conditions and a full result on direct summands with less restrictive finiteness conditions. Another example of this use is Prüfer's paper, where the fact that "finite torsion abelian groups are direct sums of cyclic groups" is extended to the result that "all torsion abelian groups of finite exponent are direct sums of cyclic groups" via an intermediate consideration of pure subgroups. Generalizations. Pure subgroups were generalized in several ways in the theory of abelian groups and modules. Pure submodules were defined in a variety of ways, but eventually settled on the modern definition in terms of tensor products or systems of equations; earlier definitions were usually more direct generalizations such as the single equation used above for n'th roots. Pure injective and pure projective modules follow closely from the ideas of Prüfer's 1923 paper. While pure projective modules have not found as many applications as pure injectives, they are more closely related to the original work: A module is pure projective if it is a direct summand of a direct sum of finitely presented modules. In the case of the integers and abelian groups a pure projective module amounts to a direct sum of cyclic groups.
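As a concrete illustration of the original definition for abelian groups, written additively (so that an "n"th root of "a" is an element "x" with "nx" = "a"): the torsion subgroup "T" of an abelian group "G" is always pure, since if "nx" = "a" with "a" of finite order "m", then ("nm")"x" = "ma" = 0, so "x" is itself a torsion element and already lies in "T". By contrast, the subgroup 2Z of the integers Z is not pure, since the element 2 is divisible by 2 within Z (2 = 2·1) but not within 2Z.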
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "n^{\\text{th}}" }, { "math_id": 3, "text": "\\forall n \\in\\Z, a \\in S" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "x^n = a \\Rightarrow" }, { "math_id": 6, "text": "y" }, { "math_id": 7, "text": "y^n = a" } ]
https://en.wikipedia.org/wiki?curid=5739585
5739636
Algebraically compact group
In mathematics, in the realm of abelian group theory, a group is said to be algebraically compact if it is a direct summand of every abelian group containing it as a pure subgroup. Equivalent characterizations of algebraic compactness: Relations with other properties:
[ { "math_id": 0, "text": "\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=5739636
5740025
Stochastic volatility
When variance is a random variable In statistics, stochastic volatility models are those in which the variance of a stochastic process is itself randomly distributed. They are used in the field of mathematical finance to evaluate derivative securities, such as options. The name derives from the models' treatment of the underlying security's volatility as a random process, governed by state variables such as the price level of the underlying security, the tendency of volatility to revert to some long-run mean value, and the variance of the volatility process itself, among others. Stochastic volatility models are one approach to resolve a shortcoming of the Black–Scholes model. In particular, models based on Black-Scholes assume that the underlying volatility is constant over the life of the derivative, and unaffected by the changes in the price level of the underlying security. However, these models cannot explain long-observed features of the implied volatility surface such as volatility smile and skew, which indicate that implied volatility does tend to vary with respect to strike price and expiry. By assuming that the volatility of the underlying price is a stochastic process rather than a constant, it becomes possible to model derivatives more accurately. A middle ground between the bare Black-Scholes model and stochastic volatility models is covered by local volatility models. In these models the underlying volatility does not feature any new randomness but it isn't a constant either. In local volatility models the volatility is a non-trivial function of the underlying asset, without any extra randomness. According to this definition, models like constant elasticity of variance would be local volatility models, although they are sometimes classified as stochastic volatility models. The classification can be a little ambiguous in some cases. The early history of stochastic volatility has multiple roots (i.e. stochastic process, option pricing and econometrics), it is reviewed in Chapter 1 of Neil Shephard (2005) "Stochastic Volatility," Oxford University Press. Basic model. Starting from a constant volatility approach, assume that the derivative's underlying asset price follows a standard model for geometric Brownian motion: formula_0 where formula_1 is the constant drift (i.e. expected return) of the security price formula_2, formula_3 is the constant volatility, and formula_4 is a standard Wiener process with zero mean and unit rate of variance. The explicit solution of this stochastic differential equation is formula_5 The maximum likelihood estimator to estimate the constant volatility formula_3 for given stock prices formula_2 at different times formula_6 is formula_7 its expected value is formula_8 This basic model with constant volatility formula_3 is the starting point for non-stochastic volatility models such as Black–Scholes model and Cox–Ross–Rubinstein model. For a stochastic volatility model, replace the constant volatility formula_9 with a function formula_10 that models the variance of formula_11. This variance function is also modeled as Brownian motion, and the form of formula_10 depends on the particular SV model under study. formula_12 formula_13 where formula_14 and formula_15 are some functions of formula_16, and formula_17 is another standard gaussian that is correlated with formula_18 with constant correlation factor formula_19. Heston model. 
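A minimal Euler–Maruyama sketch of the Heston dynamics spelled out in the next paragraph is given below. The discretisation scheme (a "full truncation" of the variance) and all numerical parameter values are illustrative assumptions of this sketch, not part of the model's definition.

```python
import numpy as np

# Illustrative Euler-Maruyama simulation of the Heston dynamics described in the next
# paragraph: dS = mu*S*dt + sqrt(nu)*S*dW and dnu = theta*(omega - nu)*dt + xi*sqrt(nu)*dB,
# with corr(dW, dB) = rho. All numerical values are hypothetical; taking max(nu, 0) ("full
# truncation") is one common way to keep the discretised variance usable, not the only one.

rng = np.random.default_rng(0)
mu, omega, theta, xi, rho = 0.05, 0.04, 1.5, 0.3, -0.7
S, nu = 100.0, 0.04                 # initial price and variance
T, n_steps = 1.0, 252
dt = T / n_steps

prices = [S]
for _ in range(n_steps):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()      # correlated shocks
    nu_pos = max(nu, 0.0)
    S *= np.exp((mu - 0.5 * nu_pos) * dt + np.sqrt(nu_pos * dt) * z1)  # log-Euler step for S
    nu += theta * (omega - nu_pos) * dt + xi * np.sqrt(nu_pos * dt) * z2
    prices.append(S)

print(prices[-1])   # one simulated terminal price
```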
The popular Heston model is a commonly used SV model, in which the randomness of the variance process varies as the square root of variance. In this case, the differential equation for variance takes the form: formula_20 where formula_21 is the mean long-term variance, formula_22 is the rate at which the variance reverts toward its long-term mean, formula_23 is the volatility of the variance process, and formula_24 is, like formula_25, a gaussian with zero mean and formula_26 variance. However, formula_25 and formula_24 are correlated with the constant correlation value formula_27. In other words, the Heston SV model assumes that the variance is a random process that Some parametrisation of the volatility surface, such as 'SVI', are based on the Heston model. CEV model. The CEV model describes the relationship between volatility and price, introducing stochastic volatility: formula_28 Conceptually, in some markets volatility rises when prices rise (e.g. commodities), so formula_29. In other markets, volatility tends to rise as prices fall, modelled with formula_30. Some argue that because the CEV model does not incorporate its own stochastic process for volatility, it is not truly a stochastic volatility model. Instead, they call it a local volatility model. SABR volatility model. The SABR model (Stochastic Alpha, Beta, Rho), introduced by Hagan et al. describes a single forward formula_31 (related to any asset e.g. an index, interest rate, bond, currency or equity) under stochastic volatility formula_9: formula_32 formula_33 The initial values formula_34 and formula_35 are the current forward price and volatility, whereas formula_36 and formula_37 are two correlated Wiener processes (i.e. Brownian motions) with correlation coefficient formula_38. The constant parameters formula_39 are such that formula_40. The main feature of the SABR model is to be able to reproduce the smile effect of the volatility smile. GARCH model. The Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model is another popular model for estimating stochastic volatility. It assumes that the randomness of the variance process varies with the variance, as opposed to the square root of the variance as in the Heston model. The standard GARCH(1,1) model has the following form for the continuous variance differential: formula_41 The GARCH model has been extended via numerous variants, including the NGARCH, TGARCH, IGARCH, LGARCH, EGARCH, GJR-GARCH, Power GARCH, Component GARCH, etc. Strictly, however, the conditional volatilities from GARCH models are not stochastic since at time "t" the volatility is completely pre-determined (deterministic) given previous values. 3/2 model. The 3/2 model is similar to the Heston model, but assumes that the randomness of the variance process varies with formula_42. The form of the variance differential is: formula_43 However the meaning of the parameters is different from Heston model. In this model, both mean reverting and volatility of variance parameters are stochastic quantities given by formula_44 and formula_45 respectively. Rough volatility models. Using estimation of volatility from high frequency data, smoothness of the volatility process has been questioned. It has been found that log-volatility behaves as a fractional Brownian motion with Hurst exponent of order formula_46, at any reasonable timescale. This led to adopting a fractional stochastic volatility (FSV) model, leading to an overall Rough FSV (RFSV) where "rough" is to highlight that formula_47. 
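The roughness claim can be illustrated numerically. The sketch below (an illustration only, with an arbitrary grid and random seed) samples a fractional Brownian motion with Hurst exponent H = 0.1 directly from its covariance function and then re-estimates H from the scaling of mean squared increments.

```python
import numpy as np

# Illustrative sketch: sample a fractional Brownian motion with Hurst exponent H = 0.1 by
# Cholesky factorisation of its covariance cov(B_H(s), B_H(t)) = (s^2H + t^2H - |t-s|^2H)/2,
# then recover H from E[(B_H(t+d) - B_H(t))^2] = d^(2H). Grid size and seed are arbitrary.

rng = np.random.default_rng(1)
H, n = 0.1, 500
t = np.arange(1, n + 1) / n                       # strictly positive time grid
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
path = np.linalg.cholesky(cov) @ rng.standard_normal(n)   # one sample path

m1 = np.mean(np.diff(path) ** 2)                  # mean squared increment at lag 1
m2 = np.mean((path[2:] - path[:-2]) ** 2)         # mean squared increment at lag 2
print(0.5 * np.log2(m2 / m1))                     # rough estimate of H, near 0.1
```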
The RFSV model is consistent with time series data, allowing for improved forecasts of realized volatility. Calibration and estimation. Once a particular SV model is chosen, it must be calibrated against existing market data. Calibration is the process of identifying the set of model parameters that are most likely given the observed data. One popular technique is to use maximum likelihood estimation (MLE). For instance, in the Heston model, the set of model parameters formula_48 can be estimated by applying an MLE algorithm such as the Powell Directed Set method to observations of historic underlying security prices. In this case, you start with an estimate for formula_49, compute the residual errors when applying the historic price data to the resulting model, and then adjust formula_50 to try to minimize these errors. Once the calibration has been performed, it is standard practice to re-calibrate the model periodically. An alternative to calibration is statistical estimation, thereby accounting for parameter uncertainty. Many frequentist and Bayesian methods have been proposed and implemented, typically for a subset of the abovementioned models. The following list contains extension packages for the open source statistical software R that have been specifically designed for heteroskedasticity estimation. The first three cater for GARCH-type models with deterministic volatilities; the fourth deals with stochastic volatility estimation. Many numerical methods have been developed over time for pricing financial assets such as options under stochastic volatility models. A recently developed application is the local stochastic volatility model. This local stochastic volatility model gives better results in pricing new financial assets such as forex options. There are also alternate statistical estimation libraries in other languages such as Python: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " dS_t = \\mu S_t\\,dt + \\sigma S_t\\,dW_t \\, " }, { "math_id": 1, "text": "\\mu \\," }, { "math_id": 2, "text": "S_t \\," }, { "math_id": 3, "text": "\\sigma \\," }, { "math_id": 4, "text": "dW_t \\," }, { "math_id": 5, "text": "S_t= S_0 e^{(\\mu- \\frac{1}{2} \\sigma^2) t+ \\sigma W_t}. " }, { "math_id": 6, "text": "t_i \\," }, { "math_id": 7, "text": "\n\\begin{align}\n\\widehat{\\sigma}^2 &= \\left(\\frac 1 n \\sum_{i=1}^n \\frac{(\\ln S_{t_i}- \\ln S_{t_{i-1}})^2}{t_i-t_{i-1}} \\right) - \\frac 1 n \\frac{(\\ln S_{t_n}- \\ln S_{t_0})^2}{t_n-t_0}\\\\\n& = \\frac 1 n \\sum_{i=1}^n (t_i-t_{i-1})\\left(\\frac{\\ln \\frac{S_{t_i}}{S_{t_{i-1}}}}{t_i-t_{i-1}} - \\frac{\\ln \\frac{S_{t_n}}{S_{t_0}}}{t_n-t_0}\\right)^2;\n\\end{align}\n" }, { "math_id": 8, "text": "\\operatorname E \\left[ \\widehat{\\sigma}^2\\right]= \\frac{n-1}{n} \\sigma^2." }, { "math_id": 9, "text": "\\sigma" }, { "math_id": 10, "text": "\\nu_t" }, { "math_id": 11, "text": "S_t" }, { "math_id": 12, "text": " dS_t = \\mu S_t\\,dt + \\sqrt{\\nu_t} S_t\\,dW_t \\," }, { "math_id": 13, "text": " d\\nu_t = \\alpha_{\\nu,t}\\,dt + \\beta_{\\nu,t}\\,dB_t \\," }, { "math_id": 14, "text": "\\alpha_{\\nu,t} " }, { "math_id": 15, "text": "\\beta_{\\nu,t} " }, { "math_id": 16, "text": "\\nu " }, { "math_id": 17, "text": "dB_t " }, { "math_id": 18, "text": "dW_t " }, { "math_id": 19, "text": "\\rho " }, { "math_id": 20, "text": " d\\nu_t = \\theta(\\omega - \\nu_t)\\,dt + \\xi \\sqrt{\\nu_t}\\,dB_t \\," }, { "math_id": 21, "text": "\\omega" }, { "math_id": 22, "text": "\\theta" }, { "math_id": 23, "text": "\\xi" }, { "math_id": 24, "text": "dB_t" }, { "math_id": 25, "text": "dW_t" }, { "math_id": 26, "text": "dt" }, { "math_id": 27, "text": "\\rho" }, { "math_id": 28, "text": "dS_t=\\mu S_t \\, dt + \\sigma S_t^{\\, \\gamma} \\, dW_t" }, { "math_id": 29, "text": "\\gamma > 1" }, { "math_id": 30, "text": "\\gamma < 1" }, { "math_id": 31, "text": "F" }, { "math_id": 32, "text": "dF_t=\\sigma_t F^\\beta_t\\, dW_t," }, { "math_id": 33, "text": "d\\sigma_t=\\alpha\\sigma_t\\, dZ_t," }, { "math_id": 34, "text": "F_0" }, { "math_id": 35, "text": "\\sigma_0" }, { "math_id": 36, "text": "W_t" }, { "math_id": 37, "text": "Z_t" }, { "math_id": 38, "text": "-1<\\rho<1" }, { "math_id": 39, "text": "\\beta,\\;\\alpha" }, { "math_id": 40, "text": "0\\leq\\beta\\leq 1,\\;\\alpha\\geq 0" }, { "math_id": 41, "text": " d\\nu_t = \\theta(\\omega - \\nu_t)\\,dt + \\xi \\nu_t\\,dB_t \\," }, { "math_id": 42, "text": "\\nu_t^{3/2}" }, { "math_id": 43, "text": " d\\nu_t = \\nu_t(\\omega - \\theta\\nu_t)\\,dt + \\xi \\nu_t^{3/2} \\,dB_t. \\," }, { "math_id": 44, "text": " \\theta\\nu_t" }, { "math_id": 45, "text": " \\xi\\nu_t" }, { "math_id": 46, "text": "H = 0.1" }, { "math_id": 47, "text": "H < 1/2" }, { "math_id": 48, "text": "\\Psi_0 = \\{\\omega, \\theta, \\xi, \\rho\\} \\," }, { "math_id": 49, "text": "\\Psi_0 \\," }, { "math_id": 50, "text": "\\Psi \\," } ]
https://en.wikipedia.org/wiki?curid=5740025
574024
Hilbert transform
Integral transform and linear operator In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function "u"("t") of a real variable and produces another function of a real variable H("u")("t"). The Hilbert transform is given by the Cauchy principal value of the convolution with the function formula_0 (see the definition below). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see below). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal "u"("t"). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions. Definition. The Hilbert transform of u can be thought of as the convolution of "u"("t") with the function "h"("t") = 1/(π"t"), known as the Cauchy kernel. Because 1/t is not integrable across "t" = 0, the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) "u"("t") is given by formula_1 provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/(π"t"). Alternatively, by changing variables, the principal-value integral can be written explicitly as formula_2 When the Hilbert transform is applied twice in succession to a function u, the result is formula_3 provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is formula_4. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of "u"("t") (see below). For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if "f"("z") is analytic in the upper half complex plane {"z" : Im{"z"} &gt; 0}, and "u"("t") = Re{"f" ("t" + 0·"i")}, then Im{"f"("t" + 0·"i")} = H("u")("t") up to an additive constant, provided this Hilbert transform exists. Notation. In signal processing the Hilbert transform of "u"("t") is commonly denoted by formula_5. However, in mathematics, this notation is already extensively used to denote the Fourier transform of "u"("t"). Occasionally, the Hilbert transform may be denoted by formula_6. Furthermore, many sources define the Hilbert transform as the negative of the one defined here. History. The Hilbert transform arose in Hilbert's 1905 work on a problem Riemann posed concerning analytic functions, which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle. Some of his earlier work related to the Discrete Hilbert Transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation. Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the integral case. These results were restricted to the spaces "L"2 and ℓ2.
In 1928, Marcel Riesz proved that the Hilbert transform can be defined for "u" in formula_7 (Lp space) for 1 &lt; "p" &lt; ∞, that the Hilbert transform is a bounded operator on formula_7 for 1 &lt; "p" &lt; ∞, and that similar results hold for the Hilbert transform on the circle as well as the discrete Hilbert transform. The Hilbert transform was a motivating example for Antoni Zygmund and Alberto Calderón during their study of singular integrals. Their investigations have played a fundamental role in modern harmonic analysis. Various generalizations of the Hilbert transform, such as the bilinear and trilinear Hilbert transforms, are still active areas of research today. Relationship with the Fourier transform. The Hilbert transform is a multiplier operator. The multiplier of H is "σ"H("ω") = −"i" sgn("ω"), where sgn is the signum function. Therefore: formula_8 where formula_9 denotes the Fourier transform. Since sgn("x") = sgn(2π"x"), it follows that this result applies to the three common definitions of formula_10. By Euler's formula, formula_11 Therefore, H("u")("t") has the effect of shifting the phase of the negative frequency components of "u"("t") by +90° (π/2 radians) and the phase of the positive frequency components by −90°, and "i"·H("u")("t") has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation (i.e., a multiplication by −1). When the Hilbert transform is applied twice, the phase of the negative and positive frequency components of "u"("t") are respectively shifted by +180° and −180°, which are equivalent amounts. The signal is negated; i.e., H(H("u")) = −"u", because formula_12 Table of selected Hilbert transforms. In the following table, the frequency parameter formula_13 is real. Notes. An extensive table of Hilbert transforms is available. Note that the Hilbert transform of a constant is zero. Domain of definition. It is by no means obvious that the Hilbert transform is well-defined at all, as the improper integral defining it must converge in a suitable sense. However, the Hilbert transform is well-defined for a broad class of functions, namely those in formula_7 for 1 &lt; "p" &lt; ∞. More precisely, if u is in formula_7 for 1 &lt; "p" &lt; ∞, then the limit defining the improper integral formula_14 exists for almost every t. The limit function is also in formula_7 and is in fact the limit in the mean of the improper integral as well. That is, formula_15 as "ε" → 0 in the Lp norm, as well as pointwise almost everywhere, by the Titchmarsh theorem. In the case "p" = 1, the Hilbert transform still converges pointwise almost everywhere, but may itself fail to be integrable, even locally. In particular, convergence in the mean does not in general happen in this case. The Hilbert transform of an "L"1 function does converge, however, in "L"1-weak, and the Hilbert transform is a bounded operator from "L"1 to "L"1,w. (In particular, since the Hilbert transform is also a multiplier operator on "L"2, Marcinkiewicz interpolation and a duality argument furnish an alternative proof that H is bounded on "L""p".) Properties. Boundedness. If 1 &lt; "p" &lt; ∞, then the Hilbert transform on formula_7 is a bounded linear operator, meaning that there exists a constant Cp such that formula_16 for all formula_17.
The best constant formula_18 is given by formula_19 An easy way to find the best formula_18 for formula_20 being a power of 2 is through the so-called Cotlar's identity that formula_21 for all real valued f. The same best constants hold for the periodic Hilbert transform. The boundedness of the Hilbert transform implies the formula_7 convergence of the symmetric partial sum operator formula_22 to f in formula_7. Anti-self adjointness. The Hilbert transform is an anti-self adjoint operator relative to the duality pairing between formula_7 and the dual space formula_23, where p and q are Hölder conjugates and 1 &lt; "p", "q" &lt; ∞. Symbolically, formula_24 for formula_17 and formula_25. Inverse transform. The Hilbert transform is an anti-involution, meaning that formula_26 provided each transform is well-defined. Since H preserves the space formula_7, this implies in particular that the Hilbert transform is invertible on formula_7, and that formula_27 Complex structure. Because H2 = −I ("I" is the identity operator) on the real Banach space of "real"-valued functions in formula_7, the Hilbert transform defines a linear complex structure on this Banach space. In particular, when "p" = 2, the Hilbert transform gives the Hilbert space of real-valued functions in formula_28 the structure of a "complex" Hilbert space. The (complex) eigenstates of the Hilbert transform admit representations as holomorphic functions in the upper and lower half-planes in the Hardy space H2 by the Paley–Wiener theorem. Differentiation. Formally, the derivative of the Hilbert transform is the Hilbert transform of the derivative, i.e. these two linear operators commute: formula_29 Iterating this identity, formula_30 This is rigorously true as stated provided u and its first k derivatives belong to formula_7. One can check this easily in the frequency domain, where differentiation becomes multiplication by ω. Convolutions. The Hilbert transform can formally be realized as a convolution with the tempered distribution formula_31 Thus formally, formula_32 However, "a priori" this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported functions (which are distributions "a fortiori") are dense in "Lp". Alternatively, one may use the fact that "h"("t") is the distributional derivative of the function log|"t"|/"π"; to wit formula_33 For most operational purposes the Hilbert transform can be treated as a convolution. For example, in a formal sense, the Hilbert transform of a convolution is the convolution of the Hilbert transform applied on "only one" of either of the factors: formula_34 This is rigorously true if u and v are compactly supported distributions since, in that case, formula_35 By passing to an appropriate limit, it is thus also true if "u" ∈ "Lp" and "v" ∈ "Lq" provided that formula_36 from a theorem due to Titchmarsh. Invariance. The Hilbert transform has the following invariance properties on formula_28. Up to a multiplicative constant, the Hilbert transform is the only bounded operator on L2 with these properties. In fact there is a wider set of operators that commute with the Hilbert transform. The group formula_38 acts by unitary operators U"g" on the space formula_28 by the formula formula_39 This unitary representation is an example of a principal series representation of formula_40 In this case it is reducible, splitting as the orthogonal sum of two invariant subspaces, Hardy space formula_41 and its conjugate. 
These are the spaces of "L"2 boundary values of holomorphic functions on the upper and lower halfplanes. formula_41 and its conjugate consist of exactly those "L"2 functions with Fourier transforms vanishing on the negative and positive parts of the real axis respectively. Since the Hilbert transform is equal to H = −"i" (2"P" − I), with P being the orthogonal projection from formula_28 onto formula_42 and I the identity operator, it follows that formula_43 and its orthogonal complement are eigenspaces of H for the eigenvalues ±"i". In other words, H commutes with the operators Ug. The restrictions of the operators Ug to formula_43 and its conjugate give irreducible representations of formula_38 – the so-called limit of discrete series representations. Extending the domain of definition. Hilbert transform of distributions. It is further possible to extend the Hilbert transform to certain spaces of distributions . Since the Hilbert transform commutes with differentiation, and is a bounded operator on Lp, H restricts to give a continuous transform on the inverse limit of Sobolev spaces: formula_44 The Hilbert transform can then be defined on the dual space of formula_45, denoted formula_46, consisting of Lp distributions. This is accomplished by the duality pairing: For formula_47, define: formula_48 It is possible to define the Hilbert transform on the space of tempered distributions as well by an approach due to Gel'fand and Shilov, but considerably more care is needed because of the singularity in the integral. Hilbert transform of bounded functions. The Hilbert transform can be defined for functions in formula_49 as well, but it requires some modifications and caveats. Properly understood, the Hilbert transform maps formula_49 to the Banach space of bounded mean oscillation (BMO) classes. Interpreted naïvely, the Hilbert transform of a bounded function is clearly ill-defined. For instance, with "u" = sgn("x"), the integral defining H("u") diverges almost everywhere to ±∞. To alleviate such difficulties, the Hilbert transform of an "L"∞ function is therefore defined by the following regularized form of the integral formula_50 where as above "h"("x") = and formula_51 The modified transform H agrees with the original transform up to an additive constant on functions of compact support from a general result by Calderón and Zygmund. Furthermore, the resulting integral converges pointwise almost everywhere, and with respect to the BMO norm, to a function of bounded mean oscillation. A deep result of Fefferman's work is that a function is of bounded mean oscillation if and only if it has the form for some formula_52. Conjugate functions. The Hilbert transform can be understood in terms of a pair of functions "f"("x") and "g"("x") such that the function formula_53 is the boundary value of a holomorphic function "F"("z") in the upper half-plane. Under these circumstances, if f and g are sufficiently integrable, then one is the Hilbert transform of the other. 
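Before turning to the detailed construction, a rough numerical illustration of this boundary-value statement may help, with the caveat that a sampled, periodized signal stands in for a genuine Lp function and that the DFT-based approximation of H is the same illustrative shortcut sketched earlier.

import numpy as np

n = 2048
t = np.arange(n) / n
u = np.cos(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)    # a real trigonometric signal

omega = np.fft.fftfreq(n)
Hu = np.fft.ifft(-1j * np.sign(omega) * np.fft.fft(u)).real          # discrete approximation of H(u)
F = np.fft.fft(u + 1j * Hu)                                          # spectrum of f + i g with g = H(f)

print(np.abs(F[omega < 0]).max(), np.abs(F[omega > 0]).max())        # negative-frequency side is negligible

The negative-frequency coefficients vanish to rounding error, which is the discrete counterpart of f + i H(f) being the boundary value of a function holomorphic in the upper half-plane.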
Suppose that formula_54 Then, by the theory of the Poisson integral, f admits a unique harmonic extension into the upper half-plane, and this extension is given by formula_55 which is the convolution of f with the Poisson kernel formula_56 Furthermore, there is a unique harmonic function v defined in the upper half-plane such that "F"("z") = "u"("z") + "i v"("z") is holomorphic and formula_57 This harmonic function is obtained from f by taking a convolution with the "conjugate Poisson kernel" formula_58 Thus formula_59 Indeed, the real and imaginary parts of the Cauchy kernel are formula_60 so that "F" = "u" + "i v" is holomorphic by Cauchy's integral formula. The function v obtained from u in this way is called the harmonic conjugate of u. The (non-tangential) boundary limit of "v"("x","y") as "y" → 0 is the Hilbert transform of f. Thus, succinctly, formula_61 Titchmarsh's theorem. Titchmarsh's theorem (named for E. C. Titchmarsh who included it in his 1937 work) makes precise the relationship between the boundary values of holomorphic functions in the upper half-plane and the Hilbert transform. It gives necessary and sufficient conditions for a complex-valued square-integrable function "F"("x") on the real line to be the boundary value of a function in the Hardy space H2("U") of holomorphic functions in the upper half-plane U. The theorem states that the following conditions for a complex-valued square-integrable function formula_62 are equivalent: A weaker result is true for functions of class Lp for "p" &gt; 1. Specifically, if "F"("z") is a holomorphic function such that formula_65 for all y, then there is a complex-valued function "F"("x") in formula_7 such that "F"("x" + "i y") → "F"("x") in the Lp norm as "y" → 0 (as well as holding pointwise almost everywhere). Furthermore, formula_66 where f is a real-valued function in formula_7 and g is the Hilbert transform (of class Lp) of f. This is not true in the case "p" = 1. In fact, the Hilbert transform of an "L"1 function f need not converge in the mean to another "L"1 function. Nevertheless, the Hilbert transform of f does converge almost everywhere to a finite function g such that formula_67 This result is directly analogous to one by Andrey Kolmogorov for Hardy functions in the disc. Although usually called Titchmarsh's theorem, the result aggregates much work of others, including Hardy, Paley and Wiener (see Paley–Wiener theorem), as well as work by Riesz, Hille, and Tamarkin Riemann–Hilbert problem. One form of the Riemann–Hilbert problem seeks to identify pairs of functions "F"+ and "F"− such that "F"+ is holomorphic on the upper half-plane and "F"− is holomorphic on the lower half-plane, such that for x along the real axis, formula_68 where "f"("x") is some given real-valued function of formula_69. The left-hand side of this equation may be understood either as the difference of the limits of "F"± from the appropriate half-planes, or as a hyperfunction distribution. Two functions of this form are a solution of the Riemann–Hilbert problem. Formally, if "F"± solve the Riemann–Hilbert problem formula_70 then the Hilbert transform of "f"("x") is given by formula_71 Hilbert transform on the circle. For a periodic function f the circular Hilbert transform is defined: formula_72 The circular Hilbert transform is used in giving a characterization of Hardy space and in the study of the conjugate function in Fourier series. 
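The circular transform is convenient for experiments because the principal-value integral can be approximated by placing quadrature nodes half a grid step away from the singularity, so that the large contributions on either side of the pole nearly cancel. The sketch below is an ad hoc construction for illustration, not a library routine; applied to f(t) = cos t it should reproduce the conjugate function sin x.

import numpy as np

def circular_hilbert(f, x, n=2048):
    # Approximate (1/2pi) p.v. integral of f(t)*cot((x - t)/2) dt over [0, 2pi)
    # using midpoint nodes offset from x, which straddle the pole symmetrically.
    h = 2 * np.pi / n
    t = x + (np.arange(n) + 0.5) * h          # nodes never coincide with x
    return np.mean(f(t) / np.tan((x - t) / 2))

xs = np.linspace(0.0, 2 * np.pi, 9)
approx = np.array([circular_hilbert(np.cos, x) for x in xs])
print(np.max(np.abs(approx - np.sin(xs))))    # expected to be very small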
The kernel, formula_73 is known as the Hilbert kernel since it was in this form the Hilbert transform was originally studied. The Hilbert kernel (for the circular Hilbert transform) can be obtained by making the Cauchy kernel &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄1 periodic. More precisely, for "x" ≠ 0 formula_74 Many results about the circular Hilbert transform may be derived from the corresponding results for the Hilbert transform from this correspondence. Another more direct connection is provided by the Cayley transform "C"("x") = ("x" – "i") / ("x" + "i"), which carries the real line onto the circle and the upper half plane onto the unit disk. It induces a unitary map formula_75 of "L"2(T) onto formula_76 The operator U carries the Hardy space "H"2(T) onto the Hardy space formula_41. Hilbert transform in signal processing. Bedrosian's theorem. Bedrosian's theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or formula_77 where "f"LP and "f"HP are the low- and high-pass signals respectively. A category of communication signals to which this applies is called the "narrowband signal model." A member of that category is amplitude modulation of a high-frequency sinusoidal "carrier": formula_78 where "u""m"("t") is the narrow bandwidth "message" waveform, such as voice or music. Then by Bedrosian's theorem: formula_79 Analytic representation. A specific type of conjugate function is: formula_80 known as the "analytic representation" of formula_81 The name reflects its mathematical tractability, due largely to Euler's formula. Applying Bedrosian's theorem to the narrowband model, the analytic representation is: A Fourier transform property indicates that this complex heterodyne operation can shift all the negative frequency components of "u""m"("t") above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms. Angle (phase/frequency) modulation. The form: formula_82 is called angle modulation, which includes both phase modulation and frequency modulation. The instantaneous frequency is  formula_83  For sufficiently large ω, compared to formula_84: formula_85 and: formula_86 Single sideband modulation (SSB). When "u""m"("t") in Eq.1 is also an analytic representation (of a message waveform), that is: formula_87 the result is single-sideband modulation: formula_88 whose transmitted component is: formula_89 Causality. The function formula_90 presents two causality-based challenges to practical implementation in a convolution (in addition to its undefined value at 0): Discrete Hilbert transform. For a discrete function, formula_94 with discrete-time Fourier transform (DTFT), formula_95, and discrete Hilbert transform formula_96 the DTFT of formula_97 in the region −"π" &lt; ω &lt; "π" is given by: formula_98 The inverse DTFT, using the convolution theorem, is: formula_99 where formula_100 which is an infinite impulse response (IIR). Practical considerations Method 1: Direct convolution of streaming formula_101 data with an FIR approximation of formula_102 which we will designate by formula_103 Examples of truncated formula_104 are shown in figures 1 and 2. Fig 1 has an odd number of anti-symmetric coefficients and is called Type III. 
This type inherently exhibits responses of zero magnitude at frequencies 0 and Nyquist, resulting in a bandpass filter shape. A Type IV design (even number of anti-symmetric coefficients) is shown in Fig 2. It has a highpass frequency response. Type III is the usual choice. for these reasons: The abrupt truncation of formula_104 creates a rippling (Gibbs effect) of the flat frequency response. That can be mitigated by use of a window function to taper formula_106 to zero. Method 2: Piecewise convolution. It is well known that direct convolution is computationally much more intensive than methods like overlap-save that give access to the efficiencies of the Fast Fourier transform via the convolution theorem. Specifically, the discrete Fourier transform (DFT) of a segment of formula_101 is multiplied pointwise with a DFT of the formula_106 sequence. An inverse DFT is done on the product, and the transient artifacts at the leading and trailing edges of the segment are discarded. Over-lapping input segments prevent gaps in the output stream. An equivalent time domain description is that segments of length formula_107 (an arbitrary parameter) are convolved with the periodic function: formula_108 When the duration of non-zero values of formula_109 is formula_110 the output sequence includes formula_111 samples of formula_112  formula_113 outputs are discarded from each block of formula_114 and the input blocks are overlapped by that amount to prevent gaps. Method 3: Same as method 2, except the DFT of formula_109 is replaced by samples of the formula_115 distribution (whose real and imaginary components are all just formula_116 or formula_117) That convolves formula_101 with a periodic summation: formula_118 for some arbitrary parameter, formula_120 formula_104 is not an FIR, so the edge effects extend throughout the entire transform. Deciding what to delete and the corresponding amount of overlap is an application-dependent design issue. Fig 3 depicts the difference between methods 2 and 3. Only half of the antisymmetric impulse response is shown, and only the non-zero coefficients. The blue graph corresponds to method 2 where formula_104 is truncated by a rectangular window function, rather than tapered. It is generated by a Matlab function, hilb(65). Its transient effects are exactly known and readily discarded. The frequency response, which is determined by the function argument, is the only application-dependent design issue. The red graph is formula_121 corresponding to method 3. It is the inverse DFT of the formula_115 distribution. Specifically, it is the function that is convolved with a segment of formula_101 by the MATLAB function, hilbert(u,512). The real part of the output sequence is the original input sequence, so that the complex output is an analytic representation of formula_122 When the input is a segment of a pure cosine, the resulting convolution for two different values of formula_107 is depicted in Fig 4 (red and blue plots). Edge effects prevent the result from being a pure sine function (green plot). Since formula_119 is not an FIR sequence, the theoretical extent of the effects is the entire output sequence. But the differences from a sine function diminish with distance from the edges. Parameter formula_107 is the output sequence length. If it exceeds the length of the input sequence, the input is modified by appending zero-valued elements. In most cases, that reduces the magnitude of the edge distortions. 
But their duration is dominated by the inherent rise and fall times of the formula_104 impulse response. Fig 5 is an example of piecewise convolution, using both methods 2 (in blue) and 3 (red dots). A sine function is created by computing the Discrete Hilbert transform of a cosine function, which was processed in four overlapping segments, and pieced back together. As the FIR result (blue) shows, the distortions apparent in the IIR result (red) are not caused by the difference between formula_104 and formula_119 (green and red in Fig 3). The fact that formula_119 is tapered ("windowed") is actually helpful in this context. The real problem is that it's not windowed enough. Effectively, formula_123 whereas the overlap-save method needs formula_124 Number-theoretic Hilbert transform. The number theoretic Hilbert transform is an extension of the discrete Hilbert transform to integers modulo an appropriate prime number. In this it follows the generalization of discrete Fourier transform to number theoretic transforms. The number theoretic Hilbert transform can be used to generate sets of orthogonal discrete sequences. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Page citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "1/(\\pi t)" }, { "math_id": 1, "text": "\n \\operatorname{H}(u)(t) = \\frac{1}{\\pi}\\, \\operatorname{p.v.} \\int_{-\\infty}^{+\\infty} \\frac{u(\\tau)}{t - \\tau}\\,\\mathrm{d}\\tau,\n" }, { "math_id": 2, "text": "\n \\operatorname{H}(u)(t) = \\frac{2}{\\pi}\\, \\lim_{\\varepsilon \\to 0} \\int_\\varepsilon^\\infty \\frac{u(t - \\tau) - u(t + \\tau)}{2\\tau} \\,\\mathrm{d}\\tau.\n" }, { "math_id": 3, "text": "\n \\operatorname{H}\\bigl(\\operatorname{H}(u)\\bigr)(t) = -u(t),\n" }, { "math_id": 4, "text": "-\\operatorname{H}" }, { "math_id": 5, "text": "\\hat{u}(t)" }, { "math_id": 6, "text": "\\tilde{u}(t)" }, { "math_id": 7, "text": "L^p(\\mathbb{R})" }, { "math_id": 8, "text": "\\mathcal{F}\\bigl(\\operatorname{H}(u)\\bigr)(\\omega) = -i \\sgn(\\omega) \\cdot \\mathcal{F}(u)(\\omega) ," }, { "math_id": 9, "text": "\\mathcal{F}" }, { "math_id": 10, "text": " \\mathcal{F}" }, { "math_id": 11, "text": "\\sigma_\\operatorname{H}(\\omega) = \\begin{cases}\n ~~i = e^{+i\\pi/2}, & \\text{for } \\omega < 0,\\\\\n ~~ 0, & \\text{for } \\omega = 0,\\\\\n -i = e^{-i\\pi/2}, & \\text{for } \\omega > 0.\n\\end{cases}" }, { "math_id": 12, "text": "\\left(\\sigma_\\operatorname{H}(\\omega)\\right)^2 = e^{\\pm i\\pi} = -1 \\quad \\text{for } \\omega \\neq 0 ." }, { "math_id": 13, "text": "\\omega" }, { "math_id": 14, "text": "\\operatorname{H}(u)(t) = \\frac{2}{\\pi} \\lim_{\\varepsilon \\to 0} \\int_\\varepsilon^\\infty \\frac{u(t - \\tau) - u(t + \\tau)}{2\\tau}\\,d\\tau" }, { "math_id": 15, "text": "\\frac{2}{\\pi} \\int_\\varepsilon^\\infty \\frac{u(t - \\tau) - u(t + \\tau)}{2\\tau}\\,\\mathrm{d}\\tau \\to \\operatorname{H}(u)(t)" }, { "math_id": 16, "text": "\\left\\|\\operatorname{H}u\\right\\|_p \\le C_p \\left\\|u\\right\\|_p " }, { "math_id": 17, "text": "u \\isin L^p(\\mathbb{R})" }, { "math_id": 18, "text": "C_p" }, { "math_id": 19, "text": "C_p = \\begin{cases}\n \\tan \\frac{\\pi}{2p} & \\text{for} ~ 1 < p \\leq 2, \\\\[4pt] \n \\cot \\frac{\\pi}{2p} & \\text{for} ~ 2 < p < \\infty.\n\\end{cases}" }, { "math_id": 20, "text": "p" }, { "math_id": 21, "text": " (\\operatorname{H}f)^2 =f^2 +2\\operatorname{H}(f\\operatorname{H}f)" }, { "math_id": 22, "text": "S_R f = \\int_{-R}^R \\hat{f}(\\xi) e^{2\\pi i x\\xi} \\, \\mathrm{d}\\xi " }, { "math_id": 23, "text": "L^q(\\mathbb{R})" }, { "math_id": 24, "text": "\\langle \\operatorname{H} u, v \\rangle = \\langle u, -\\operatorname{H} v \\rangle" }, { "math_id": 25, "text": "v \\isin L^q(\\mathbb{R})" }, { "math_id": 26, "text": "\\operatorname{H}\\bigl(\\operatorname{H}\\left(u\\right)\\bigr) = -u" }, { "math_id": 27, "text": "\\operatorname{H}^{-1} = -\\operatorname{H}" }, { "math_id": 28, "text": "L^2(\\mathbb{R})" }, { "math_id": 29, "text": "\\operatorname{H}\\left(\\frac{ \\mathrm{d}u}{\\mathrm{d}t}\\right) = \\frac{\\mathrm d}{\\mathrm{d}t}\\operatorname{H}(u)" }, { "math_id": 30, "text": "\\operatorname{H}\\left(\\frac{\\mathrm{d}^ku}{\\mathrm{d}t^k}\\right) = \\frac{\\mathrm{d}^k}{\\mathrm{d}t^k}\\operatorname{H}(u)" }, { "math_id": 31, "text": "h(t) = \\operatorname{p.v.} \\frac{1}{ \\pi \\, t }" }, { "math_id": 32, "text": "\\operatorname{H}(u) = h*u" }, { "math_id": 33, "text": "\\operatorname{H}(u)(t) = \\frac{\\mathrm{d}}{\\mathrm{d}t}\\left(\\frac{1}{\\pi} \\left(u*\\log\\bigl|\\cdot\\bigr|\\right)(t)\\right)" }, { "math_id": 34, "text": "\\operatorname{H}(u*v) = \\operatorname{H}(u)*v = u*\\operatorname{H}(v)" }, { "math_id": 35, "text": " h*(u*v) = (h*u)*v = u*(h*v)" }, { "math_id": 36, "text": " 1 < 
\\frac{1}{p} + \\frac{1}{q} " }, { "math_id": 37, "text": "\\mathbb{R}." }, { "math_id": 38, "text": "\\text{SL}(2,\\mathbb{R})" }, { "math_id": 39, "text": "\\operatorname{U}_{g}^{-1} f(x) = \\frac{1}{ c x + d } \\, f \\left( \\frac{ ax + b }{ cx + d } \\right) \\,,\\qquad g = \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} ~,\\qquad \\text{ for }~ a d - b c = \\pm 1 . " }, { "math_id": 40, "text": "~\\text{SL}(2,\\mathbb{R})~." }, { "math_id": 41, "text": "H^2(\\mathbb{R})" }, { "math_id": 42, "text": "\\operatorname{H}^2(\\mathbb{R})," }, { "math_id": 43, "text": "\\operatorname{H}^2(\\mathbb{R})" }, { "math_id": 44, "text": "\\mathcal{D}_{L^p} = \\underset{n \\to \\infty}{\\underset{\\longleftarrow}{\\lim}} W^{n,p}(\\mathbb{R})" }, { "math_id": 45, "text": "\\mathcal{D}_{L^p}" }, { "math_id": 46, "text": "\\mathcal{D}_{L^p}'" }, { "math_id": 47, "text": " u\\in \\mathcal{D}'_{L^p} " }, { "math_id": 48, "text": "\\operatorname{H}(u)\\in \\mathcal{D}'_{L^p} = \\langle \\operatorname{H}u, v \\rangle \\ \\triangleq \\ \\langle u, -\\operatorname{H}v\\rangle,\\ \\text{for all} \\ v\\in\\mathcal{D}_{L^p} ." }, { "math_id": 49, "text": "L^\\infty (\\mathbb{R})" }, { "math_id": 50, "text": "\\operatorname{H}(u)(t) = \\operatorname{p.v.} \\int_{-\\infty}^\\infty u(\\tau)\\left\\{h(t - \\tau)- h_0(-\\tau)\\right\\} \\, \\mathrm{d}\\tau" }, { "math_id": 51, "text": "h_0(x) = \\begin{cases}\n0 & \\text{for} ~ |x| < 1 \\\\\n\\frac{1}{\\pi \\, x} & \\text{for} ~ |x| \\ge 1\n\\end{cases}" }, { "math_id": 52, "text": " f,g \\isin L^\\infty (\\mathbb{R})" }, { "math_id": 53, "text": "F(x) = f(x) + i\\,g(x)" }, { "math_id": 54, "text": "f \\isin L^p(\\mathbb{R})." }, { "math_id": 55, "text": "u(x + iy) = u(x, y) = \\frac{1}{\\pi} \\int_{-\\infty}^\\infty f(s)\\;\\frac{y}{(x - s)^2 + y^2} \\; \\mathrm{d}s" }, { "math_id": 56, "text": "P(x, y) = \\frac{ y }{ \\pi\\, \\left( x^2 + y^2 \\right) }" }, { "math_id": 57, "text": "\\lim_{y \\to \\infty} v\\,(x + i\\,y) = 0" }, { "math_id": 58, "text": "Q(x, y) = \\frac{ x }{ \\pi\\, \\left(x^2 + y^2\\right) } ." }, { "math_id": 59, "text": "v(x, y) = \\frac{1}{\\pi}\\int_{-\\infty}^\\infty f(s)\\;\\frac{x - s}{\\,(x - s)^2 + y^2\\,}\\;\\mathrm{d}s ." }, { "math_id": 60, "text": "\\frac{i}{\\pi\\,z} = P(x, y) + i\\,Q(x, y)" }, { "math_id": 61, "text": "\\operatorname{H}(f) = \\lim_{y \\to 0} Q(-, y) \\star f" }, { "math_id": 62, "text": "F : \\mathbb{R} \\to \\mathbb{C}" }, { "math_id": 63, "text": " \\int_{-\\infty}^\\infty |F(x + i\\,y)|^2\\;\\mathrm{d}x < K " }, { "math_id": 64, "text": "\\mathcal{F}(F)(x)" }, { "math_id": 65, "text": "\\int_{-\\infty}^\\infty |F(x + i\\,y)|^p\\;\\mathrm{d}x < K " }, { "math_id": 66, "text": "F(x) = f(x) - i\\,g(x)" }, { "math_id": 67, "text": "\\int_{-\\infty}^\\infty \\frac{ |g(x)|^p }{ 1 + x^2 } \\; \\mathrm{d}x < \\infty" }, { "math_id": 68, "text": "F_{+}(x) - F_{-}(x) = f(x)" }, { "math_id": 69, "text": "x \\isin \\mathbb{R}" }, { "math_id": 70, "text": "f(x) = F_{+}(x) - F_{-}(x)" }, { "math_id": 71, "text": "H(f)(x) = -i \\bigl( F_{+}(x) + F_{-}(x) \\bigr) ." 
}, { "math_id": 72, "text": "\\tilde f(x) \\triangleq \\frac{1}{ 2\\pi } \\operatorname{p.v.} \\int_0^{2\\pi} f(t)\\,\\cot\\left(\\frac{ x - t }{2}\\right)\\,\\mathrm{d}t" }, { "math_id": 73, "text": "\\cot\\left(\\frac{ x - t }{2}\\right)" }, { "math_id": 74, "text": "\\frac{1}{\\,2\\,}\\cot\\left(\\frac{x}{2}\\right) = \\frac{1}{x} + \\sum_{n=1}^\\infty \\left(\\frac{1}{x + 2n\\pi} + \\frac{1}{\\,x - 2n\\pi\\,} \\right)" }, { "math_id": 75, "text": " U\\,f(x) = \\frac{1}{(x + i)\\,\\sqrt{\\pi}} \\, f\\left(C\\left(x\\right)\\right) " }, { "math_id": 76, "text": "L^2 (\\mathbb{R})." }, { "math_id": 77, "text": "\\operatorname{H}\\left(f_\\text{LP}(t)\\cdot f_\\text{HP}(t)\\right) = f_\\text{LP}(t)\\cdot \\operatorname{H}\\left(f_\\text{HP}(t)\\right)," }, { "math_id": 78, "text": "u(t) = u_m(t) \\cdot \\cos(\\omega t + \\varphi)," }, { "math_id": 79, "text": "\\operatorname{H}(u)(t) = \n\\begin{cases}\n+u_m(t) \\cdot \\sin(\\omega t + \\varphi), & \\omega > 0, \\\\\n-u_m(t) \\cdot \\sin(\\omega t + \\varphi), & \\omega < 0.\n\\end{cases}\n" }, { "math_id": 80, "text": "u_a(t) \\triangleq u(t) + i\\cdot H(u)(t)," }, { "math_id": 81, "text": "u(t)." }, { "math_id": 82, "text": "u(t) = A \\cdot \\cos(\\omega t + \\varphi_m(t))" }, { "math_id": 83, "text": "\\omega + \\varphi_m^\\prime(t)." }, { "math_id": 84, "text": "\\varphi_m^\\prime" }, { "math_id": 85, "text": "\\operatorname{H}(u)(t) \\approx A \\cdot \\sin(\\omega t + \\varphi_m(t))" }, { "math_id": 86, "text": "u_a(t) \\approx A \\cdot e^{i(\\omega t + \\varphi_m(t))}." }, { "math_id": 87, "text": "u_m(t) = m(t) + i \\cdot \\widehat{m}(t)" }, { "math_id": 88, "text": "u_a(t) = (m(t) + i \\cdot \\widehat{m}(t)) \\cdot e^{i(\\omega t + \\varphi)}" }, { "math_id": 89, "text": "\\begin{align}\n u(t) &= \\operatorname{Re}\\{u_a(t)\\}\\\\\n &= m(t)\\cdot \\cos(\\omega t + \\varphi) - \\widehat{m}(t)\\cdot \\sin(\\omega t + \\varphi)\n\\end{align}" }, { "math_id": 90, "text": "h(t) = 1/(\\pi t)" }, { "math_id": 91, "text": "h(t-\\tau)," }, { "math_id": 92, "text": "\\tau." }, { "math_id": 93, "text": "\\tau" }, { "math_id": 94, "text": "u[n]," }, { "math_id": 95, "text": "U(\\omega)" }, { "math_id": 96, "text": "\\widehat u[n]," }, { "math_id": 97, "text": "\\widehat u[n]" }, { "math_id": 98, "text": "\\operatorname{DTFT} (\\widehat u) = U(\\omega)\\cdot (-i\\cdot \\sgn(\\omega))." }, { "math_id": 99, "text": "\n\\begin{align}\n\\widehat u[n] &= {\\scriptstyle \\mathrm{DTFT}^{-1}} (U(\\omega))\\ *\\ {\\scriptstyle \\mathrm{DTFT}^{-1}} (-i\\cdot \\sgn(\\omega))\\\\\n&= u[n]\\ *\\ \\frac{1}{2 \\pi}\\int_{-\\pi}^{\\pi} (-i\\cdot \\sgn(\\omega))\\cdot e^{i \\omega n} \\,\\mathrm{d}\\omega\\\\\n&= u[n]\\ *\\ \\underbrace{\\frac{1}{2 \\pi}\\left[\\int_{-\\pi}^0 i\\cdot e^{i \\omega n} \\,\\mathrm{d}\\omega - \\int_0^\\pi i\\cdot e^{i \\omega n} \\,\\mathrm{d}\\omega \\right]}_{h[n]},\n\\end{align}\n" }, { "math_id": 100, "text": "h[n]\\ \\triangleq \\ \n\\begin{cases}\n 0, & \\text{for }n\\text{ even}\\\\\n\\frac 2 {\\pi n} & \\text{for }n\\text{ odd},\n\\end{cases}" }, { "math_id": 101, "text": "u[n]" }, { "math_id": 102, "text": "h[n]," }, { "math_id": 103, "text": "\\tilde h[n]." }, { "math_id": 104, "text": "h[n]" }, { "math_id": 105, "text": "\\tfrac{1}{2}" }, { "math_id": 106, "text": "\\tilde h[n]" }, { "math_id": 107, "text": "N" }, { "math_id": 108, "text": "\\tilde{h}_N[n]\\ \\triangleq \\sum_{m=-\\infty}^\\infty \\tilde{h}[n - mN]." 
}, { "math_id": 109, "text": "\\tilde{h}[n]" }, { "math_id": 110, "text": "M < N," }, { "math_id": 111, "text": "N-M+1" }, { "math_id": 112, "text": "\\widehat u." }, { "math_id": 113, "text": "M-1" }, { "math_id": 114, "text": "N," }, { "math_id": 115, "text": "-i \\operatorname{sgn}(\\omega)" }, { "math_id": 116, "text": "0" }, { "math_id": 117, "text": "\\pm 1." }, { "math_id": 118, "text": "h_N[n]\\ \\triangleq \\sum_{m=-\\infty}^\\infty h[n - mN]," }, { "math_id": 119, "text": "h_N[n]" }, { "math_id": 120, "text": "N." }, { "math_id": 121, "text": "h_{512}[n]," }, { "math_id": 122, "text": "u[n]." }, { "math_id": 123, "text": "M=N," }, { "math_id": 124, "text": "M < N." } ]
https://en.wikipedia.org/wiki?curid=574024
57407405
Intensity (measure theory)
In the mathematical discipline of measure theory, the intensity of a measure is the average value the measure assigns to an interval of length one. Definition. Let formula_0 be a measure on the real numbers. Then the intensity formula_1 of formula_0 is defined as formula_2 if the limit exists and is independent of formula_3 for all formula_4. Example. Consider the Lebesgue measure formula_5. For a fixed formula_3 we have formula_6 so formula_7 Therefore the Lebesgue measure has intensity one. Properties. The set of all measures formula_8 for which the intensity is well defined is a measurable subset of the set of all measures on formula_9. The mapping formula_10, defined by formula_11, is measurable.
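As an informal numerical illustration (an added sketch, not drawn from the literature), consider the measure that places a unit point mass at every integer. Evaluating the ratio from the definition over increasingly long intervals suggests, as expected, an intensity of one; the Python fragment below is a minimal way to see this.

import math

def mu_interval(a, b):
    # Mass that unit point masses at the integers assign to the half-open interval (a, b].
    return math.floor(b) - math.floor(a)

s = 0.3
for t in [10, 100, 1000, 10000]:
    print(t, mu_interval(-s, t - s) / t)    # the ratio settles at 1, so the intensity is 1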
[ { "math_id": 0, "text": " \\mu " }, { "math_id": 1, "text": " \\overline \\mu " }, { "math_id": 2, "text": " \\overline \\mu:= \\lim_{|t| \\to \\infty} \\frac{\\mu((-s,t-s])}{t} " }, { "math_id": 3, "text": " s " }, { "math_id": 4, "text": " s \\in \\R " }, { "math_id": 5, "text": " \\lambda " }, { "math_id": 6, "text": " \\lambda((-s,t-s])=(t-s)-(-s)=t, " }, { "math_id": 7, "text": " \\overline \\lambda:= \\lim_{|t| \\to \\infty} \\frac{\\lambda((-s,t-s])}{t}= \\lim_{|t| \\to \\infty} \\frac t t =1. " }, { "math_id": 8, "text": " M " }, { "math_id": 9, "text": " \\R " }, { "math_id": 10, "text": " I \\colon M \\to \\mathbb R " }, { "math_id": 11, "text": " I(\\mu) = \\overline \\mu " } ]
https://en.wikipedia.org/wiki?curid=57407405
57409796
Stuart–Landau equation
The Stuart–Landau equation describes the behavior of a nonlinear oscillating system near the Hopf bifurcation, named after John Trevor Stuart and Lev Landau. In 1944, Landau proposed an equation for the evolution of the magnitude of the disturbance, now called the Landau equation, to explain the transition to turbulence based on a phenomenological argument, and an attempt to derive this equation from the hydrodynamic equations was made by Stuart for plane Poiseuille flow in 1958. The formal derivation of the "Landau equation" was given by Stuart, Watson and Palm in 1960. The perturbation in the vicinity of the bifurcation is governed by the following equation formula_0 where formula_1 is the complex amplitude of the perturbation, formula_2 is the complex growth rate, formula_3 is the complex Landau constant, and formula_4 is its real part. The evolution of the actual disturbance is given by the real part of formula_5, i.e., by formula_6. Here the real part of the growth rate is taken to be positive, i.e., formula_7, because otherwise the system is stable in the linear sense; that is to say, for infinitesimal disturbances (formula_8 is a small number) the nonlinear term in the above equation is negligible in comparison to the other two terms, in which case the amplitude grows in time only if formula_7. The "Landau constant" is also taken to be positive, formula_9, because otherwise the amplitude would grow indefinitely (see the equations below and the general solution in the next section). The "Landau equation" is the equation for the magnitude of the disturbance, formula_10 which can also be re-written as formula_11 Similarly, the equation for the phase is given by formula_12 For non-homogeneous systems, i.e., when formula_13 depends on spatial coordinates, see the Ginzburg–Landau equation. Due to its universality, the equation finds application in many fields, such as hydrodynamic stability and the Belousov–Zhabotinsky reaction. General solution. The "Landau equation" is linear when it is written for the dependent variable formula_14, formula_15 The general solution for formula_16 of the above equation is formula_17 As formula_18, the magnitude of the disturbance formula_8 approaches a constant value that is independent of its initial value, i.e., formula_19 when formula_20. The above solution implies that formula_8 does not have a real solution if formula_21 and formula_7. The associated solution for the phase function formula_22 is given by formula_23 As formula_20, the phase varies linearly with time, formula_24 It is instructive to consider a hydrodynamic stability case in which, according to linear stability analysis, the flow is stable when formula_25 and unstable otherwise, where formula_26 is the Reynolds number and formula_27 is the critical Reynolds number; a familiar example here is the critical Reynolds number, formula_28, corresponding to the transition to the Kármán vortex street in the problem of flow past a cylinder. The growth rate formula_29 is negative when formula_30 and positive when formula_31, and therefore in the neighbourhood formula_32 it may be written as formula_33 wherein the constant is positive. Thus, the limiting amplitude is given by formula_34 Negative Landau constant. When the Landau constant is negative, formula_21, a negative term of higher order must be included to arrest the unbounded growth of the perturbation. In this case, the Landau equation becomes formula_35 The limiting amplitude then becomes formula_36 where the plus sign corresponds to the stable branch and the minus sign to the unstable branch.
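For the case of positive growth rate and positive Landau constant treated in the general solution above, the limiting amplitude can be checked with a few lines of code; the sketch below (with arbitrarily chosen parameter values) integrates the magnitude equation by an explicit Euler step and compares the late-time amplitude with the predicted value.

import math

sigma_r, l_r = 0.5, 2.0              # illustrative positive values of the growth rate and Landau constant
A = 1e-3                             # small initial disturbance
dt = 1e-3
for _ in range(int(60 / dt)):        # integrate d|A|/dt = sigma_r*|A| - (l_r/2)*|A|**3
    A += dt * (sigma_r * A - 0.5 * l_r * A**3)

print(A, math.sqrt(2 * sigma_r / l_r))   # the integrated amplitude approaches (2*sigma_r/l_r)**0.5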
There exists a critical value formula_37 at which the above two roots are equal (formula_38), such that formula_39, indicating that the flow in the region formula_40 is "metastable": in the metastable region, the flow is stable to infinitesimal perturbations, but not to finite-amplitude perturbations. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\frac{dA}{dt} = \\sigma A - \\frac{l}{2} A |A|^2." }, { "math_id": 1, "text": "A = |A| e^{i\\phi}" }, { "math_id": 2, "text": "\\sigma = \\sigma_r + i\\sigma_i" }, { "math_id": 3, "text": "l = l_r + i l_i" }, { "math_id": 4, "text": "l_r" }, { "math_id": 5, "text": "A(t)" }, { "math_id": 6, "text": "|A|\\cos\\phi" }, { "math_id": 7, "text": "\\sigma_r>0" }, { "math_id": 8, "text": "|A|" }, { "math_id": 9, "text": "l_r>0" }, { "math_id": 10, "text": "\\frac{d|A|^2}{dt} = 2\\sigma_r |A|^2 - l_r |A|^4," }, { "math_id": 11, "text": "\\frac{d|A|}{dt} = \\sigma_r |A| -\\frac{l_r}{2} |A|^3." }, { "math_id": 12, "text": "\\frac{d\\phi}{dt}= \\sigma_i-\\frac{l_i}{2} |A|^2." }, { "math_id": 13, "text": "A" }, { "math_id": 14, "text": "|A|^{-2}" }, { "math_id": 15, "text": "\\frac{d|A|^{-2}}{dt} + 2\\sigma_r |A|^{-2} = l_r." }, { "math_id": 16, "text": "\\sigma_r\\neq 0" }, { "math_id": 17, "text": "|A(t)|^{-2} = \\frac{l_r}{2\\sigma_r} + \\left(|A(0)|^{-2} - \\frac{l_r}{2\\sigma_r}\\right)e^{-2\\sigma_r t}." }, { "math_id": 18, "text": "t\\rightarrow\\infty" }, { "math_id": 19, "text": "|A|_{\\mathrm{max}}\\rightarrow(2\\sigma_r/l_r)^{1/2}" }, { "math_id": 20, "text": "t\\gg 1/\\sigma_r" }, { "math_id": 21, "text": "l_r<0" }, { "math_id": 22, "text": "\\phi(t)" }, { "math_id": 23, "text": "\\phi(t)-\\phi(0) = \\sigma_i t - \\frac{l_i}{2l_r} \\ln \\left[1+ \\frac{|A(0)|^2l_r}{2\\sigma_r}(e^{2\\sigma_r t}-1)\\right]." }, { "math_id": 24, "text": "\\phi \\sim (\\sigma_i/\\sigma_r-l_i/l_r)\\sigma_rt." }, { "math_id": 25, "text": "Re\\leq Re_{\\mathrm{cr}}" }, { "math_id": 26, "text": "Re" }, { "math_id": 27, "text": "Re_{\\mathrm{cr}}" }, { "math_id": 28, "text": "Re_{\\mathrm{cr}}\\approx 50" }, { "math_id": 29, "text": "\\sigma_r" }, { "math_id": 30, "text": "Re<Re_{\\mathrm{cr}}" }, { "math_id": 31, "text": "Re>Re_{\\mathrm{cr}}" }, { "math_id": 32, "text": "Re\\rightarrow Re_{\\mathrm{cr}}" }, { "math_id": 33, "text": "\\sigma_r=\\text{const}.\\times (Re-Re_{\\mathrm{cr}})" }, { "math_id": 34, "text": "|A|_{\\mathrm{max}} \\propto \\sqrt{Re-Re_{\\mathrm{cr}}}." }, { "math_id": 35, "text": "\\frac{d|A|^2}{dt} = 2\\sigma_r |A|^2 - l_r |A|^4 - \\beta_r|A|^6, \\quad \\beta_r>0." }, { "math_id": 36, "text": "|A|_{\\mathrm{max}}\\rightarrow \\frac{|l_r|}{2\\beta_r} \\pm \\sqrt{\\frac{l_r^2}{4\\beta_r^2}+\\frac{2|l_r|\\sigma_r}{\\beta_r}}, \\quad \\text{as} \\quad t\\gg 1/\\sigma_r" }, { "math_id": 37, "text": "Re_{\\mathrm{cr}}'" }, { "math_id": 38, "text": "\\sigma_r = -|l_r|/8\\beta_r" }, { "math_id": 39, "text": "Re_{\\mathrm{cr}}'<Re_{\\mathrm{cr}}" }, { "math_id": 40, "text": "Re_{\\mathrm{cr}}'<Re<Re_{\\mathrm{cr}}" } ]
https://en.wikipedia.org/wiki?curid=57409796
57411
Multiplication algorithm
Algorithm to multiply two numbers A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic. The oldest and simplest method, known since antiquity as long multiplication or grade-school multiplication, consists of multiplying every digit in the first number by every digit in the second and adding the results. This has a time complexity of formula_0, where "n" is the number of digits. When done by hand, this may also be reframed as grid method multiplication or lattice multiplication. In software, this may be called "shift and add" due to bitshifts and addition being the only two operations needed. In 1960, Anatoly Karatsuba discovered Karatsuba multiplication, unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiply complex numbers quickly.) Done recursively, this has a time complexity of formula_1. Splitting numbers into more than two parts results in Toom-Cook multiplication; for example, using three parts results in the Toom-3 algorithm. Using many parts can set the exponent arbitrarily close to 1, but the constant factor also grows, making it impractical. In 1968, the Schönhage-Strassen algorithm, which makes use of a Fourier transform over a modulus, was discovered. It has a time complexity of formula_2. In 2007, Martin Fürer proposed an algorithm with complexity formula_3. In 2014, Harvey, Joris van der Hoeven, and Lecerf proposed one with complexity formula_4, thus making the implicit constant explicit; this was improved to formula_5 in 2018. Lastly, in 2019, Harvey and van der Hoeven came up with an algorithm with complexity formula_6. This matches a guess by Schönhage and Strassen that this would be the optimal bound, although this remains a conjecture today. Integer multiplication algorithms can also be used to multiply polynomials by means of the method of Kronecker substitution. Long multiplication. If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits. This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed. Example. This example uses "long multiplication" to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product). 23958233 × 5830 00000000 ( = 23,958,233 × 0) 71874699 ( = 23,958,233 × 30) 191665864 ( = 23,958,233 × 800) + 119791165 ( = 23,958,233 × 5,000) 139676498390 ( = 139,676,498,390) Other notations. In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier: 23958233 · 5830 119791165 191665864 71874699 00000000 139676498390 Below pseudocode describes the process of above multiplication. 
It keeps only one row to maintain the sum which finally becomes the result. Note that the '+=' operator is used to denote sum to existing value and store operation (akin to languages such as Java and C) for compactness. multiply(a[1..p], b[1..q], base) // Operands containing rightmost digits at index 1 product = [1..p+q] // Allocate space for result for b_i = 1 to q // for all digits in b carry = 0 for a_i = 1 to p // for all digits in a product[a_i + b_i - 1] += carry + a[a_i] * b[b_i] carry = product[a_i + b_i - 1] / base product[a_i + b_i - 1] = product[a_i + b_i - 1] mod base product[b_i + p] = carry // last digit comes from final carry return product Usage in computers. Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2"w", where "w" is the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers with "n" digits using this method, one needs about "n"2 operations. More formally, multiplying two "n"-digit numbers using long multiplication requires Θ("n"2) single-digit operations (additions and multiplications). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base, "b", such that, for example, 8"b" is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than "b". This process is called "normalization". Richard Brent used this approach in his Fortran package, MP. Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. In base two, long multiplication is sometimes called "shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such as Booth encoding) for various integer and floating-point sizes in hardware multipliers or in microcode. On currently available processors, a bit-wise shift instruction is usually (but not always) faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition. ((x « 2) + x) « 1 # Here 10*x is computed as (x*2^2 + x)*2 (x « 3) + (x « 1) # Here 10*x is computed as x*2^3 + x*2 In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form formula_7 or formula_8 often can be converted to such a short sequence. Algorithms for multiplying by hand. In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers or multiplication tables are unavailable. Grid method. 
The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s. Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage. The calculation 34 × 13, for example, could be computed using the grid: followed by addition to obtain 442, either in a single sum (see right), or through forming the row-by-row totals (300 + 40) + (90 + 12) = 340 + 102 = 442. This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage. The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need. Lattice multiplication. Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented 6 different variants of this method in this 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire. Napier's bones, or Napier's rods also used this method, as published by Napier in 1617, the year of his death. As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. It is found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002. Example. The pictures on the right show how to calculate 345 × 12 using lattice multiplication. As a more complicated example, consider the picture below displaying the computation of 23,958,233 multiplied by 5,830 (multiplier); the result is 139,676,498,390. Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side. The products fill the lattice and the sum of those products (on the diagonal) are along the left and bottom sides. Then those sums are totaled as shown. Russian peasant multiplication. The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication. The algorithm was in use in ancient Egypt. Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. 
The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers. Description. On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product. Examples. This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33. Decimal: Binary: 11 3 1011 11 5 6 101 110 2 12 10 1100 1 24 1 11000 33 100001 Describing the steps explicitly: The method works because multiplication is distributive, so: formula_9 A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830): Decimal: Binary: 5830 23958233 1011011000110 1011011011001001011011001 2915 47916466 101101100011 10110110110010010110110010 1457 95832932 10110110001 101101101100100101101100100 728 191665864 1011011000 1011011011001001011011001000 364 383331728 101101100 10110110110010010110110010000 182 766663456 10110110 101101101100100101101100100000 91 1533326912 1011011 1011011011001001011011001000000 45 3066653824 101101 10110110110010010110110010000000 22 6133307648 10110 101101101100100101101100100000000 11 12266615296 1011 1011011011001001011011001000000000 5 24533230592 101 10110110110010010110110010000000000 2 49066461184 10 101101101100100101101100100000000000 1 98132922368 1 1011011011001001011011001000000000000 ———————————— 1022143253354344244353353243222210110 (before carry) 139676498390 10000010000101010111100011100111010110 Quarter square multiplication. This formula can in some cases be used, to make multiplication tasks easier to complete: formula_10 In the case where formula_11 and formula_12 are integers, we have that formula_13 because formula_14 and formula_15 are either both even or both odd. This means that formula_16 and it's sufficient to (pre-)compute the integral part of squares divided by 4 like in the following example. Examples. Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9×9. If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3. History of quarter square multiplication. In prehistoric time, quarter square multiplication involved floor function; that some sources attribute to Babylonian mathematics (2000–1600 BC). Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856, and a table from 1 to 200000 by Joseph Blater in 1888. Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier. In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier. 
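The table-lookup idea is easy to prototype; the fragment below (an illustrative sketch, not a description of Johnson's circuit) precomputes the integer part of "n"2/4 and forms each product as the difference of two lookups, reproducing the 9 × 3 example above.

qsq = [n * n // 4 for n in range(19)]      # quarter squares for 0..18, enough for single-digit factors

def quarter_square_multiply(a, b):
    if a < b:                              # keep a - b non-negative for the table lookup
        a, b = b, a
    return qsq[a + b] - qsq[a - b]

print(quarter_square_multiply(9, 3))       # 36 - 9 = 27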
To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 29−1=511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255) or 29−1=511 entries (using for negative differences the technique of 2-complements and 9-bit masking, which avoids testing the sign of differences), each entry being 16-bit wide (the entry values are from (0²/4)=0 to (510²/4)=65025). The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502. Computational complexity of multiplication. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: What is the fastest algorithm for multiplication of two formula_17-digit numbers? A line of research in theoretical computer science is about the number of single-bit arithmetic operations necessary to multiply two formula_17-bit integers. This is known as the computational complexity of multiplication. Usual algorithms done by hand have asymptotic complexity of formula_0, but in 1960 Anatoly Karatsuba discovered that better complexity was possible (with the Karatsuba algorithm). Currently, the algorithm with the best computational complexity is a 2019 algorithm of David Harvey and Joris van der Hoeven, which uses the strategies of using number-theoretic transforms introduced with the Schönhage–Strassen algorithm to multiply integers using only formula_6 operations. This is conjectured to be the best possible algorithm, but lower bounds of formula_18 are not known. Karatsuba multiplication. Karatsuba multiplication is an O("n"log23) ≈ O("n"1.585) divide and conquer algorithm, that uses recursion to merge together sub calculations. By rewriting the formula, one makes it possible to do sub calculations / recursion. By doing recursion, one can solve this in a fast manner. Let formula_11 and formula_12 be represented as formula_17-digit strings in some base formula_19. For any positive integer formula_20 less than formula_17, one can write the two given numbers as formula_21 formula_22 where formula_23 and formula_24 are less than formula_25. The product is then formula_26 where formula_27 formula_28 formula_29 These formulae require four multiplications and were known to Charles Babbage. Karatsuba observed that formula_30 can be computed in only three multiplications, at the cost of a few extra additions. With formula_31 and formula_32 as before one can observe that formula_33 Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of "n"; typical implementations therefore switch to long multiplication for small values of "n". General case with multiplication of N numbers. By exploring patterns after expansion, one see following: formula_34 &lt;br&gt; formula_35 &lt;br&gt; formula_36 &lt;br&gt; formula_37 &lt;br&gt; formula_38 Each summand is associated to a unique binary number from 0 to formula_39, for example formula_40 etc. Furthermore; B is powered to number of 1, in this binary string, multiplied with m. If we express this in fewer terms, we get: formula_41, where formula_42 means digit in number i at position j. 
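A compact recursive sketch of the three-multiplication scheme follows, using Python integers and base 10 for readability; the single-digit cutoff and the helper name are illustrative choices, and a practical implementation would fall back to schoolbook multiplication below a tuned threshold.

def karatsuba(x, y):
    # Multiply non-negative integers using three recursive multiplications per level.
    if x < 10 or y < 10:                   # small operands: use ordinary multiplication
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m                            # the split point B^m in the notation above
    x1, x0 = divmod(x, B)                  # x = x1*B^m + x0
    y1, y0 = divmod(y, B)                  # y = y1*B^m + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return z2 * B * B + z1 * B + z0

print(karatsuba(23958233, 5830))           # 139676498390, matching the long multiplication example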
Notice that formula_43 formula_44&lt;br&gt; formula_45&lt;br&gt; formula_46 History. Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication, and can thus be viewed as the starting point for the theory of fast multiplications. Toom–Cook. Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-"3N" multiplication for the cost of five size-"N" multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3. Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers. Schönhage–Strassen. Every number in base B, can be written as a polynomial: formula_47 Furthermore, multiplication of two numbers could be thought of as a product of two polynomials: formula_48 Because,for formula_49: formula_50, we have a convolution. By using fft (fast fourier transformation) with convolution rule, we can get formula_51 ● formula_52. That is; formula_53 ● formula_54, where formula_55 is the corresponding coefficient in fourier space. This can also be written as: fft(a * b) = fft(a) ● fft(b). We have the same coefficient due to linearity under fourier transformation, and because these polynomials only consist of one unique term per coefficient: formula_56 and formula_57 Convolution rule: formula_58 ● formula_59 We have reduced our convolution problem to product problem, through fft. By finding ifft (polynomial interpolation), for each formula_60, one get the desired coefficients. Algorithm uses divide and conquer strategy, to divide problem to subproblems. It has a time complexity of O("n" log("n") log(log("n"))). History. The algorithm was invented by Strassen (1968). It was made practical and theoretical guarantees were provided in 1971 by Schönhage and Strassen resulting in the Schönhage–Strassen algorithm. Further improvements. In 2007 the asymptotic complexity of integer multiplication was improved by the Swiss mathematician Martin Fürer of Pennsylvania State University to "n" log("n") 2Θ(log*("n")) using Fourier transforms over complex numbers, where log* denotes the iterated logarithm. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. In context of the above material, what these latter authors have achieved is to find "N" much less than 23"k" + 1, so that "Z"/"NZ" has a (2"m")th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs. In 2014, Harvey, Joris van der Hoeven and Lecerf gave a new algorithm that achieves a running time of formula_61, making explicit the implied constant in the formula_62 exponent. They also proposed a variant of their algorithm which achieves formula_63 but whose validity relies on standard conjectures about the distribution of Mersenne primes. 
In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization of Fermat primes that conjecturally achieves a complexity bound of formula_63. This matches the 2015 conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture. In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by Minkowski's theorem to prove an unconditional complexity bound of formula_63. In March 2019, David Harvey and Joris van der Hoeven announced their discovery of an "O"("n" log "n") multiplication algorithm. It was published in the "Annals of Mathematics" in 2021. Because Schönhage and Strassen predicted that "n" log("n") is the "best possible" result, Harvey said: "...our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously." Lower bounds. There is a trivial lower bound of Ω("n") for multiplying two "n"-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of AC0["p"] for any prime "p", meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MOD"p" gates that can compute a product. This follows from a constant-depth reduction of MOD"q" to multiplication. Lower bounds for multiplication are also known for some classes of branching programs. Complex number multiplication. Complex multiplication normally involves four multiplications and two additions. formula_64 Or formula_65 As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation as Karatsuba's algorithm. The product ("a" + "bi") · ("c" + "di") can be calculated in the following way. "k"1 = "c" · ("a" + "b") "k"2 = "a" · ("d" − "c") "k"3 = "b" · ("c" + "d") Real part = "k"1 − "k"3 Imaginary part = "k"1 + "k"2. This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point. For fast Fourier transforms (FFTs) (or any linear transformation) the complex multiplies are by constant coefficients "c" + "di" (called twiddle factors in FFTs), in which case two of the additions ("d"−"c" and "c"+"d") can be precomputed. Hence, only three multiplies and three adds are required. However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units. Polynomial multiplication. All the above multiplication algorithms can also be expanded to multiply polynomials. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication. 
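A minimal sketch of Kronecker substitution, for polynomials with small non-negative integer coefficients: evaluate both polynomials at a sufficiently large power of ten (a power of two would be used in practice), multiply the two resulting integers with any of the integer algorithms above, and read the product's coefficients back off in fixed-width blocks. The helper name and the choice of base are illustrative.

def kronecker_multiply(p, q):
    # p and q are coefficient lists, lowest degree first, with small non-negative integer entries.
    bound = min(len(p), len(q)) * max(p) * max(q) + 1   # no coefficient of the product can reach this
    k = len(str(bound))                                 # decimal digits reserved per coefficient
    B = 10 ** k
    a = sum(c * B ** i for i, c in enumerate(p))        # p evaluated at B
    b = sum(c * B ** i for i, c in enumerate(q))        # q evaluated at B
    prod = a * b                                        # one big-integer multiplication
    coeffs = []
    for _ in range(len(p) + len(q) - 1):
        prod, r = divmod(prod, B)
        coeffs.append(r)
    return coeffs

# (5 + 2x + 3x^2)(4 + 5x) = 20 + 33x + 22x^2 + 15x^3
print(kronecker_multiply([5, 2, 3], [4, 5]))            # [20, 33, 22, 15]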
Long multiplication methods can be generalised to allow the multiplication of algebraic formulae: 14ac - 3ab + 2 multiplied by ac - ab + 1

              14ac    -3ab      2
                ac     -ab      1
    ----------------------------------------------
   14a2c2   -3a2bc     2ac
           -14a2bc            3a2b2    -2ab
                      14ac             -3ab      2
    ----------------------------------------------
   14a2c2  -17a2bc    16ac    3a2b2    -5ab     +2

As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.

        t    cwt    qtr
       23     12      2
                     47 x
    ---------------------
      141     94     94
      940    470
       29     23
    ---------------------
     1110    587     94
     1110      7      2
    =================
    Answer: 1110 ton 7 cwt 2 qtr

First multiply the quarters by 47, the result 94 is written into the first workspace. Next, multiply cwt 12*47 = (2 + 10)*47 but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47 yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down. The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "O(n^2)" }, { "math_id": 1, "text": "O(n^{\\log_2 3})" }, { "math_id": 2, "text": "O(n\\log n\\log\\log n)" }, { "math_id": 3, "text": "O(n\\log n 2^{\\Theta(\\log^* n)})" }, { "math_id": 4, "text": "O(n\\log n 2^{3\\log^* n})" }, { "math_id": 5, "text": "O(n\\log n 2^{2\\log^* n})" }, { "math_id": 6, "text": "O(n\\log n)" }, { "math_id": 7, "text": "2^n" }, { "math_id": 8, "text": "2^n \\pm 1" }, { "math_id": 9, "text": "\n\\begin{align}\n3 \\times 11 & = 3 \\times (1\\times 2^0 + 1\\times 2^1 + 0\\times 2^2 + 1\\times 2^3) \\\\\n& = 3 \\times (1 + 2 + 8) \\\\\n& = 3 + 6 + 24 \\\\\n& = 33.\n\\end{align}\n" }, { "math_id": 10, "text": "\n \\frac{\\left(x+y\\right)^2}{4} - \\frac{\\left(x-y\\right)^2}{4} =\n\\frac{1}{4}\\left(\\left(x^2+2xy+y^2\\right) - \\left(x^2-2xy+y^2\\right)\\right) =\n\\frac{1}{4}\\left(4xy\\right) = xy.\n" }, { "math_id": 11, "text": "x" }, { "math_id": 12, "text": "y" }, { "math_id": 13, "text": " (x+y)^2 \\equiv (x-y)^2 \\bmod 4" }, { "math_id": 14, "text": "x+y" }, { "math_id": 15, "text": "x-y" }, { "math_id": 16, "text": "\\begin{align}\nxy &= \\frac14(x+y)^2 - \\frac14(x-y)^2 \\\\\n &= \\left((x+y)^2 \\text{ div } 4\\right)- \\left((x-y)^2 \\text{ div } 4\\right)\n\\end{align}" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "\\Omega(n\\log n)" }, { "math_id": 19, "text": "B" }, { "math_id": 20, "text": "m" }, { "math_id": 21, "text": "x = x_1 B^m + x_0," }, { "math_id": 22, "text": "y = y_1 B^m + y_0," }, { "math_id": 23, "text": "x_0" }, { "math_id": 24, "text": "y_0" }, { "math_id": 25, "text": "B^m" }, { "math_id": 26, "text": "\n\\begin{align}\nxy &= (x_1 B^m + x_0)(y_1 B^m + y_0) \\\\\n &= x_1 y_1 B^{2m} + (x_1 y_0 + x_0 y_1) B^m + x_0 y_0 \\\\\n &= z_2 B^{2m} + z_1 B^m + z_0, \\\\\n\\end{align}\n" }, { "math_id": 27, "text": "z_2 = x_1 y_1," }, { "math_id": 28, "text": "z_1 = x_1 y_0 + x_0 y_1," }, { "math_id": 29, "text": "z_0 = x_0 y_0." }, { "math_id": 30, "text": "xy" }, { "math_id": 31, "text": "z_0" }, { "math_id": 32, "text": "z_2" }, { "math_id": 33, "text": "\n\\begin{align}\nz_1 &= x_1 y_0 + x_0 y_1 \\\\\n &= x_1 y_0 + x_0 y_1 + x_1 y_1 - x_1 y_1 + x_0 y_0 - x_0 y_0 \\\\\n &= x_1 y_0 + x_0 y_0 + x_0 y_1 + x_1 y_1 - x_1 y_1 - x_0 y_0 \\\\\n &= (x_1 + x_0) y_0 + (x_0 + x_1) y_1 - x_1 y_1 - x_0 y_0 \\\\\n &= (x_1 + x_0) (y_0 + y_1) - x_1 y_1 - x_0 y_0 \\\\\n &= (x_1 + x_0) (y_1 + y_0) - z_2 - z_0. 
\\\\\n\\end{align}\n" }, { "math_id": 34, "text": " (x_1 B^{ m} + x_0) (y_1 B^{m} + y_0) (z_1 B^{ m} + z_0) (a_1 B^{ m} + a_0) = " }, { "math_id": 35, "text": " a_1 x_1 y_1 z_1 B^{4m} + a_1 x_1 y_1 z_0 B^{3m} + a_1 x_1 y_0 z_1 B^{3 m} + a_1 x_0 y_1 z_1 B^{3 m} " }, { "math_id": 36, "text": "+ a_0 x_1 y_1 z_1 B^{3 m} + a_1 x_1 y_0 z_0 B^{2 m} + a_1 x_0 y_1 z_0 B^{2 m} + a_0 x_1 y_1 z_0 B^{2 m}" }, { "math_id": 37, "text": " + a_1 x_0 y_0 z_1 B^{2 m} + a_0 x_1 y_0 z_1 B^{2 m} + a_0 x_0 y_1 z_1 B^{2 m} + a_1 x_0 y_0 z_0 B^{ m}" }, { "math_id": 38, "text": "+ a_0 x_1 y_0 z_0 B^{m} + a_0 x_0 y_1 z_0 B^{m} + a_0 x_0 y_0 z_1 B^{ m} + a_0 x_0 y_0 z_0 " }, { "math_id": 39, "text": " 2^{N+1}-1 " }, { "math_id": 40, "text": " a_1 x_1 y_1 z_1 \\longleftrightarrow 1111,\\ a_1 x_0 y_1 z_0 \\longleftrightarrow 1010 " }, { "math_id": 41, "text": " \\prod_{j=1}^N (x_{j,1} B^{ m} + x_{j,0}) = \\sum_{i=1}^{2^{N+1}-1}\n\n\\prod_{j=1}^N x_{j,c(i,j)}B^{m\\sum_{j=1}^N c(i,j)} = \\sum_{j=0}^{N}z_jB^{jm} \n" }, { "math_id": 42, "text": " c(i,j) " }, { "math_id": 43, "text": " c(i,j) \\in \\{0,1\\} " }, { "math_id": 44, "text": "\n\nz_{0} = \\prod_{j=1}^N x_{j,0}\n" }, { "math_id": 45, "text": "\nz_{N} = \\prod_{j=1}^N x_{j,1}\n" }, { "math_id": 46, "text": "\nz_{N-1} = \\prod_{j=1}^N (x_{j,0} + x_{j,1}) - \\sum_{i \\ne N-1}^{N} z_i\n\n" }, { "math_id": 47, "text": " X = \\sum_{i=0}^N {x_iB^i} " }, { "math_id": 48, "text": "XY = (\\sum_{i=0}^N {x_iB^i})(\\sum_{j=0}^N {y_iB^j}) " }, { "math_id": 49, "text": " B^k " }, { "math_id": 50, "text": "c_k =\\sum_{(i,j):i+j=k} {a_ib_j} = \\sum_{i=0}^k {a_ib_{k-i}} " }, { "math_id": 51, "text": " \\hat{f}(a * b) = \\hat{f}(\\sum_{i=0}^k {a_ib_{k-i}}) = \\hat{f}(a)" }, { "math_id": 52, "text": " \\hat{f}(b) " }, { "math_id": 53, "text": " C_k = a_k " }, { "math_id": 54, "text": " b_k " }, { "math_id": 55, "text": " C_k " }, { "math_id": 56, "text": " \\hat{f}(x^n) = \\left(\\frac{i}{2\\pi}\\right)^n \\delta^{(n)} " }, { "math_id": 57, "text": " \\hat{f}(a\\, X(\\xi) + b\\, Y(\\xi)) = a\\, \\hat{X}(\\xi) + b\\, \\hat{Y}(\\xi)" }, { "math_id": 58, "text": " \\hat{f}(X * Y) = \\ \\hat{f}(X) " }, { "math_id": 59, "text": " \\hat{f}(Y) " }, { "math_id": 60, "text": "c_k " }, { "math_id": 61, "text": "O(n\\log n \\cdot 2^{3\\log^* n})" }, { "math_id": 62, "text": "O(\\log^* n)" }, { "math_id": 63, "text": "O(n\\log n \\cdot 2^{2\\log^* n})" }, { "math_id": 64, "text": "(a+bi) (c+di) = (ac-bd) + (bc+ad)i." }, { "math_id": 65, "text": "\n\\begin{array}{c|c|c}\n\\times & a & bi \\\\\n\\hline\nc & ac & bci \\\\\n\\hline\ndi & adi & -bd\n\\end{array}\n" } ]
https://en.wikipedia.org/wiki?curid=57411
57411567
Bonnie Gold
American mathematics educator Bonnie Gold (born 1948) is an American mathematician, mathematical logician, philosopher of mathematics, and mathematics educator. She is a professor emerita of mathematics at Monmouth University. Education and career. Gold completed her Ph.D. in 1976 at Cornell University, under the supervision of Michael D. Morley. She was the chair of the mathematics department at Wabash College before moving to Monmouth, where she also became department chair. Contributions. The research from Gold's dissertation, "Compact and formula_0-compact formulas in formula_1", was later published in the journal "Archiv für Mathematische Logik und Grundlagenforschung", and concerned infinitary logic. With Sandra Z. Keith and William A. Marion she co-edited "Assessment Practices in Undergraduate Mathematics", published by the Mathematical Association of America (MAA) in 1999. With Roger A. Simons, Gold is also the editor of another book, "Proof and Other Dilemmas: Mathematics and Philosophy" (MAA, 2008). Her essay "How your philosophy of mathematics impacts your teaching" was selected for inclusion in "The Best Writing on Mathematics 2012". In it, she argues that the philosophy of mathematics affects the teaching of mathematics even when the teacher's philosophical principles are implicit and unexamined. Recognition. In 2012, Gold became the winner of the 22nd Louise Hay Award of the Association for Women in Mathematics for her contributions to mathematics education. The award citation noted her work in educational assessment for undergraduate study in mathematics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\omega" }, { "math_id": 1, "text": "L_{\\omega_{1},\\omega}" } ]
https://en.wikipedia.org/wiki?curid=57411567
57414658
Tropical cryptography
Cryptography using tropical algebra In tropical analysis, tropical cryptography refers to the study of a class of cryptographic protocols built upon tropical algebras. In many cases, tropical cryptographic schemes have arisen from adapting classical (non-tropical) schemes to instead rely on tropical algebras. The case for the use of tropical algebras in cryptography rests on at least two key features of tropical mathematics: in the tropical world, there is no classical multiplication (a computationally expensive operation), and the problem of solving systems of tropical polynomial equations has been shown to be NP-hard. Basic Definitions. The key mathematical object at the heart of tropical cryptography is the tropical semiring formula_0 (also known as the min-plus algebra), or a generalization thereof. The operations are defined as follows for formula_1: formula_2 &lt;br&gt; formula_3 It is easily verified that with formula_4 as the additive identity, these binary operations on formula_5 form a semiring. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
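A minimal sketch of the two operations defined above, with Python's built-in math.inf standing in for the additive identity. The tropical (min-plus) matrix product at the end is included only as an illustration of how the semiring is typically used in such protocols; it is not defined in the text above.

import math

INF = math.inf   # additive identity of the tropical semiring

def t_add(x, y):
    # tropical addition: the minimum of the two operands
    return min(x, y)

def t_mul(x, y):
    # tropical multiplication: the ordinary sum
    return x + y

def t_matmul(A, B):
    # min-plus product of two square matrices (illustrative only)
    n = len(A)
    return [[min(t_mul(A[i][k], B[k][j]) for k in range(n)) for j in range(n)] for i in range(n)]

print(t_add(3, 7), t_mul(3, 7))                            # 3 10
print(t_matmul([[0, 2], [INF, 0]], [[1, INF], [3, 0]]))    # [[1, 2], [3, 0]]

Note that no classical multiplication appears anywhere in these operations, which is one of the features motivating their cryptographic use.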
[ { "math_id": 0, "text": "(\\mathbb{R} \\cup \\{\\infty\\},\\oplus,\\otimes)" }, { "math_id": 1, "text": "x,y \\in \\mathbb{R} \\cup \\{\\infty\\}" }, { "math_id": 2, "text": "x \\oplus y = \\min\\{x,y\\}" }, { "math_id": 3, "text": "x \\otimes y = x + y" }, { "math_id": 4, "text": "\\infty" }, { "math_id": 5, "text": "\\mathbb{R} \\cup \\{\\infty\\}" } ]
https://en.wikipedia.org/wiki?curid=57414658
57418849
Probe tip
Instrument used to physically scan the surface of a sample A probe tip is an instrument used in scanning probe microscopes (SPMs) to scan the surface of a sample and make nano-scale images of surfaces and structures. The probe tip is mounted on the end of a cantilever and can be as sharp as a single atom. In microscopy, probe tip geometry (length, width, shape, aspect ratio, and tip apex radius) and the composition (material properties) of both the tip and the surface being probed directly affect resolution and imaging quality. Tip size and shape are extremely important in monitoring and detecting interactions between surfaces. SPMs can precisely measure electrostatic forces, magnetic forces, chemical bonding, Van der Waals forces, and capillary forces. SPMs can also reveal the morphology and topography of a surface. The use of probe-based tools began with the invention of scanning tunneling microscopy (STM) and atomic force microscopy (AFM), collectively called scanning probe microscopy (SPM) by Gerd Binnig and Heinrich Rohrer at the IBM Zurich research laboratory in 1982. It opened a new era for probing the nano-scale world of individual atoms and molecules as well as studying surface science, due to their unprecedented capability to characterize the mechanical, chemical, magnetic, and optical functionalities of various samples at nanometer-scale resolution in a vacuum, ambient, or fluid environment. The increasing demand for sub-nanometer probe tips is attributable to their robustness and versatility. Applications of sub-nanometer probe tips exist in the fields of nanolithography, nanoelectronics, biosensor, electrochemistry, semiconductor, micromachining and biological studies. History and development. Increasingly sharp probe tips have been of interest to researchers for applications in the material, life, and biological sciences, as they can map surface structure and material properties at molecular or atomic dimensions. The history of the probe tip can be traced back to 1859 with a predecessor of the modern gramophone, called the phonautograph. During the later development of the gramophone, the hog's hair used in the phonautograph was replaced with a needle used to reproduce sound. In 1940, a pantograph was built utilizing a shielded probe and adjustable tip. A stylus was free moving allowing it to slide vertically in contact with the paper. In 1948, a circuit was employed in the probe tip to measure peak voltage, creating what may be considered the first scanning tunneling microscope (STM). The fabrication of electrochemically etched sharp tungsten, copper, nickel and molybdenum tips were reported by Muller in 1937. A revolution in sharp tips then occurred, producing a variety of tips with different shapes, sizes, and aspect ratios. They composed of tungsten wire, silicon, diamond and carbon nanotubes with Si-based circuit technologies. This allowed the production of tips for numerous applications in the broad spectrum of nanotechnological fields. Following the development of STM, atomic force microscopy (AFM) was developed by Gerd Binnig, Calvin F. Quate, and Christoph Gerber in 1986. Their instrument used a broken piece of diamond as the tip with a hand-cut gold foil cantilever. Focused ion and electron beam techniques for the fabrication of strong, stable, reproducible Si3N4 pyramidal tips with 1.0 μm length and 0.1 μm diameter were reported by Russell in 1992. 
Significant advancement also came through the introduction of micro-fabrication methods for the creation of precise conical or pyramidal silicon and silicon nitride tips. Numerous research experiments were conducted to explore fabrication of comparatively less expensive and more robust tungsten tips, focusing on a need to attain less than 50 nm radius of curvature. A new era in the field of fabrication of probe tips was reached when the carbon nanotube, an approximately 1 nm cylindrical shell of graphene, was introduced. The use of single wall carbon nanotubes makes the tips more flexible and less vulnerable to breaking or crushing during imaging. Probe tips made from carbon nano-tubes can be used to obtain high-resolution images of both soft and weakly adsorbed biomolecules like DNA on surfaces with molecular resolution. Multifunctional hydrogel nano-probe techniques also advanced tip fabrication and resulted in increased applicability for inorganic and biological samples in both air and liquid. The biggest advantage of this mechanical method is that the tip can be made in different shapes, such as hemispherical, embedded spherical, pyramidal, and distorted pyramidal, with diameters ranging from 10 nm – 1000 nm. This covers applications including topography or functional imaging, force spectroscopy on soft matter, and biological, chemical and physical sensors. Table 1 summarizes various methods for fabricating probe tips, and the associated materials and applications. Tunneling current and force measurement principle. The tip itself does not have any working principle for imaging; depending on the instrumentation, mode of application, and the nature of the sample under investigation, the probe tip may follow different principles to image the surface of the sample. For example, when a tip is integrated with STM, it measures the tunneling current that arises from the interaction between the sample and the tip. In AFM, the deflection caused by short-ranged forces is measured as the tip raster-scans across the surface. A conductive tip is essential for STM instrumentation, whereas AFM can use either a conductive or a non-conductive probe tip. Although the probe tip is used in various techniques with different principles, only its use in STM and AFM is discussed in detail here. Conductive probe tip. As the name implies, STM utilizes the tunneling charge transfer principle from tip to surface or vice versa, thereby recording the current response. This concept originates from the particle-in-a-box model: if the confining potential for a particle is finite, the electron may be found outside the potential well, in a classically forbidden region. This phenomenon is called tunneling. The expression for the transmission (charge transfer) probability derived from the Schrödinger equation is as follows: formula_0 where formula_1 formula_2 and formula_3 is the Planck constant Non-conductive probe tip. Non-conductive nanoscale tips are widely used for AFM measurements. For a non-conducting tip, surface forces acting on the tip/cantilever are responsible for deflection or attraction of the tip. These attractive or repulsive forces are used to probe surface topography, chemical specificity, and magnetic and electronic properties. The distance-dependent forces between the substrate surface and the tip are responsible for imaging in AFM. These interactions include van der Waals forces, capillary forces, electrostatic forces, Casimir forces, and solvation forces. 
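The numbers below give a feel for the transmission expression above. The sketch assumes the standard rectangular-barrier textbook approximation T ≈ 16ε(1 − ε)·exp(−2kd), with ε = E/V as defined above, k = √(2m(V − E))/ħ, and a barrier width d; the exponential factor, the barrier width and the example energies are assumptions added here, not values taken from the text.

import math

def transmission(E, V, d, m=9.109e-31, hbar=1.055e-34):
    # Rectangular-barrier tunneling probability, T ~ 16*eps*(1 - eps)*exp(-2*k*d),
    # with eps = E/V and k = sqrt(2*m*(V - E))/hbar.  E and V are in joules, d in metres.
    eps = E / V
    k = math.sqrt(2.0 * m * (V - E)) / hbar
    return 16.0 * eps * (1.0 - eps) * math.exp(-2.0 * k * d)

eV = 1.602e-19
for gap_nm in (0.4, 0.5, 0.6):
    print(gap_nm, "nm gap:", transmission(1.0 * eV, 4.0 * eV, gap_nm * 1e-9))

With these illustrative values the transmission falls by roughly a factor of five to ten for every additional ångström of tip-sample separation, which is why the tunneling current in STM is such a sensitive probe of surface structure.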
One unique repulsion force is Pauli Exclusion repulsive force, which is responsible for single-atom imaging as in references and Figures 10 &amp; 11 (contact region in Fig. 1). Fabrication methods. Tip fabrication techniques fall into two broad classifications, mechanical and physicochemical. In the early stage of the development of probe tips, mechanical procedures were popular because of the ease of fabrication. Mechanical methods. Reported mechanical methods in fabricating tips include cutting, grinding, and pulling.; an example would be cutting a wire at certain angles with a razor blade, wire cutter, or scissors. Another mechanical method for tip preparation is fragmentation of bulk pieces into small pointy pieces. Grinding a metal wire or rod into a sharp tip was also a method used. These mechanical procedures usually leave rugged surfaces with many tiny asperities protruding from the apex, which led to atomic resolution on flat surfaces. However, irregular shape and large macroscopic radius of curvature result in poor reproducibility and decreased stability especially for probing rough surfaces. Another main disadvantage of making probes by this method is that it creates many mini tips which lead to many different signals, yielding error in imaging. Cutting, grinding and pulling procedures can only be adapted for metallic tips like W, Ag, Pt, Ir, Pt-Ir and gold. Non-metallic tips cannot be fabricated by these methods. In contrast, a sophisticated mechanical method for tip fabrication is based on the hydro-gel method. This method is based on a bottom-up strategy to make probe tips by a molecular self-assembly process. A cantilever is formed in a mould by curing the pre-polymer solution, then it is brought into contact with the mould of the tip which also contains the pre-polymer solution. The polymer is cured with ultraviolet light which helps to provide a firm attachment of the cantilever to the probe. This fabrication method is shown in Fig. 2. Physio-chemical procedures. Physiochemical procedures are fabrication methods of choice, which yield extremely sharp and symmetric tips, with more reproducibility compared to mechanical fabrication-based tips. Among physicochemical methods, the electrochemical etching method is one of the most popular methods. Etching is a two or more step procedure. The "zone electropolishing" is the second step which further sharpens the tip in a very controlled manner. Other physicochemical methods include chemical vapor deposition and electron beam deposition onto pre-existing tips. Other tip fabrication methods include field ion microscopy and ion milling. In field ion microscopy techniques, consecutive field evaporation of single atoms yields specific atomic configuration at the probe tip, which yields very high resolution. Fabrication through etching. Electrochemical etching is one of the most widely accepted metallic probe tip fabrication methods. Three commonly used electrochemical etching methods for tungsten tip fabrication are single lamella drop-off methods, double lamella drop-off method, and submerged method. Various cone shape tips can be fabricated by this method by minor changes in the experimental setup. A DC potential is applied between the tip and a metallic electrode (usually W wire) immersed in solution (Figure 3 a-c); electrochemical reactions at cathode and anode in basic solutions (2M KOH or 2M NaOH) are usually used. 
The overall etching process involved is as follows:
Anode: W(s) + 8OH− → WO4^2− + 4H2O + 6e− (E = 1.05 V)
Cathode: 6H2O + 6e− → 3H2(g) + 6OH− (E = −2.48 V)
Overall: W(s) + 2OH− + 2H2O(l) → WO4^2− + 3H2(g) (E = −1.43 V)
Here, all the potentials are reported vs. SHE. The schematic of probe tip production through the electrochemical etching method is shown in Fig. 3. In the electrochemical etching process, W is etched at the liquid, solid, and air interface; this is due to surface tension, as shown in Fig. 3. Etching is called static if the W wire is kept stationary. Once the tip is etched, the lower part falls off because the tensile strength of the etched neck becomes lower than the weight of the lower part of the wire. The irregular shape is produced by the shifting of the meniscus. However, slow etching rates can produce regular tips when the current flows slowly through the electrochemical cells. Dynamic etching involves slowly pulling up the wire from the solution, or sometimes the wire is moved up and down (oscillating wire), producing smooth tips. Submerged method. In this method, a metal wire is vertically etched, reducing the diameter from 0.25 mm to ~20 nm. A schematic diagram for probe tip fabrication with the submerged electrochemical etching method is illustrated in Fig 4. These tips can be used for high-quality STM images. Lamella method. In the double lamella method, the lower part of the metal is etched away, and the upper part of the tip is not etched further. Further etching of the upper part of the wire is prevented by covering it with a polymer coating. This method is usually limited to laboratory fabrication. The double lamella method schematic is shown in Fig. 5. Single atom tip preparation. Transition metals like Cu, Au and Ag adsorb single molecules linearly on their surface due to weak van der Waals forces. This linear projection of single molecules allows interactions of the terminal atoms of the tip with the atoms of the substrate, resulting in Pauli repulsion for single molecule or atom mapping studies. Gaseous deposition on the tip is carried out in an ultrahigh vacuum (5 x 10−8 mbar) chamber at a low temperature (10 K). Tips with Xe, Kr, NO, CH4 or CO deposited on them have been successfully prepared and used for imaging studies. However, these tip preparations rely on the attachment of single atoms or molecules on the tip, and the resulting atomic structure of the tip is not known exactly. The attachment of simple molecules on metal surfaces is very tedious and requires great skill; as such, this method is not widely used. Chemical vapor deposition (CVD). Sharp tips used in SPM are fragile, and prone to wear and tear under high working loads. Diamond is considered the best option to address this issue. Diamond tips for SPMs are fabricated by fracturing, grinding and polishing bulk diamond, resulting in a considerable loss of diamond. One alternative is depositing a thin diamond film on silicon tips by CVD. In CVD, diamond is deposited directly on silicon or W cantilevers. In this method, the flow of methane and hydrogen gas is controlled to maintain an internal pressure of 40 Torr inside the chamber. CH4 and H2 dissociate at 2100 °C with the help of the Ta filament, and nucleation sites are created on the tip of the cantilever. Once CVD is complete, the flow of CH4 is stopped and the chamber is cooled under the flow of H2. 
A schematic diagram of a CVD setup used for diamond tip fabrication for AFM application is shown in Fig. 6. Reactive ion etching (RIE) fabrication. A groove or structure is made on a substrate to form a template. The desired material is then deposited in that template. Once the tip is formed, the template is etched off, leaving the tip and cantilever. Fig. 7 illustrates diamond tip fabrication on silicon wafers using this method. Focused ion beam (FIB) milling. FIB milling is a sharpening method for probe tips in SPM. A blunt tip is first fabricated by other etching methods, such as CVD, or the use of a pyramid mold for pyramidal tips. This tip is then sharpened by FIB milling as shown in Fig. 8. The diameter of the focused ion beam, which directly affects the tip's final diameter, is controlled through a programmable aperture. Glue. This method is used to attach carbon nanotubes to a cantilever or blunt tip. A strong adhesive (such as soft acrylic glue) is used to bind CNT with the silicon cantilever. CNT is robust, stiff and increases the durability of probe tips, and can be used for both contact and tapping mode. Cleaning procedures. Electrochemically etched tips are usually covered with contaminants on their surfaces which cannot be removed simply by rinsing in water, acetone or ethanol. Some oxide layers on metallic tips, especially on tungsten, need to be removed by post-fabrication treatment. Annealing. To clean W sharp tips, it is highly desirable to remove contaminant and the oxide layer. In this method a tip is heated in an UHV chamber at elevated temperature which desorb the contaminated layer. The reaction details are shown below. 2WO3 + W → 3WO2 ↑ WO2 → W (sublimation at formula_41075K) At elevated temperature, trioxides of W are converted to WO2 which sublimates around 1075K, and cleaned metallic W surfaces are left behind. An additional advantage provided by annealing is the healing of crystallographic defects produced by fabrication, and the process also smoothens the tip surface. HF chemical cleaning. In the HF cleaning method, a freshly prepared tip is dipped in 15% concentrated hydrofluoric acid for 10 to 30 seconds, which dissolves the oxides of W. Ion milling. In this method, argon ions are directed at the tip surface to remove the contaminant layer by sputtering. The tip is rotated in a flux of argon ions at a certain angle, in a way that allows the beam to target the apex. The bombardment of ions at the tip depletes the contaminants and also results in a reduction of the radius of the tip. The bombardment time needs to be finely tuned with respect to the shape of the tip. Sometimes, short annealing is required after ion milling. Self-sputtering. This method is very similar to ion milling, but in this procedure, the UHV chamber is filled with neon at a pressure of 10−4 mbar. When a negative voltage is applied on the tip, a strong electric field (produced by tip under negative potential) will ionize the neon gas, and these positively charged ions are accelerated back to the tip, where they cause sputtering. The sputtering removes contaminants and some atoms from the tip which, like ion milling, reduces the apex radius. By changing the field strength, one can tune the radius of the tip to 20 nm. Coating. The surface of silicon-based tips cannot be easily controlled because they usually carry the silanol group. The Si surface is hydrophilic and can be contaminated easily by the environment. Another disadvantage of Si tips is the wear and tear of the tip. 
It is important to coat the Si tip to prevent tip deterioration, and the tip coating may also enhance image quality. To coat a tip, an adhesive layer is pasted (usually chromium layer on 5 nm thick titanium) and then gold is deposited by vapor deposition (40-100 nm or less). Sometimes, the coating layer reduces the tunnelling current detection capability of probe tips. Characterization. The most important aspect of a probe tip is imaging the surfaces efficiently at nanometre dimensions. Some concerns involving credibility of the imaging or measurement of the sample arise when the shape of the tip is not determined accurately. For example, when an unknown tip is used to measure a linewidth pattern or other high aspect ratio feature of a surface, there may remain some confusion when determining the contribution of the tip and of the sample in the acquired image. Consequently, it is important to fully and accurately characterize the tips. Probe tips can be characterized for their shape, size, sharpness, bluntness, aspect ratio, radius of curvature, geometry and composition using many advanced instrumental techniques. For example, electron field emission measurement, scanning electron microscopy (SEM), transmission electron microscopy (TEM), scanning tunnelling spectroscopy as well as more easily accessible optical microscope. In some cases, optical microscopy cannot provide exact measurements for small tips in nanoscale due to the resolution limitation of the optical microscopy. Electron field emission current measurement. In the electron field emission current measurement method, a high voltage is applied between the tip and another electrode, followed by measuring field emission current employing Fowler-Nordheim curves formula_5. Large fields-emission current measurements may indicate that the tip is sharp, and low field-emission current indicates that the tip is blunt, molten or mechanically damaged. A minimum voltage is essential to facilitate the release of electrons from the surface of the tip which in turn indirectly is used to obtain the tip curvature. Although this method has several advantages, a disadvantage is that the high electric field required for producing strong electric force can melt the apex of the tip, or might change the crystallographic tip nature. Scanning electron microscopy and transmission electron microscopy. The size and shape of the tip can be obtained by scanning electron microscopy and transmission electron microscopy measurements. In addition, transmission electron microscopy (TEM) images are helpful to detect any layer of insulating materials on the surface of the tip as well as to estimate the size of the layer. These oxides are formed gradually on the surface of tip soon after fabrication, due to the oxidation of the metallic tip by reacting with the O2 present in the surrounding atmosphere. Scanning electron microscopy (SEM) has a resolution limitation of below 4 nm, so TEM may be needed to observe even a single atom theoretically and practically. Tip grain down to 1-3 nm, thin polycrystalline oxides, or carbon or graphite layers at the tip apex, are routinely measured using TEM. The orientation of tip crystal, which is the angle between the tip plane in the single-crystal and the tip normal, can be estimated. Optical microscopy. In the past, optical microscopes were the only method used to investigate whether the tip is bent, through microscale imaging at many microscales. 
This is because the resolution limitation of an optical microscope is about 200 nm. Imaging software, including ImageJ, allows determination of the curvature, and aspect ratio of the tip. One drawback of this method is that it renders an image of tip, which is an object due to the uncertainty in the nanoscale dimension. This problem can be resolved by taking images of the tip multiple times, followed by combining them into an image by confocal microscope with some fluorescent material coating on the tip. It is also a time-consuming process due to the necessity of monitoring the wear or damage or degradation of the tip by collision with the surface during scanning the surface after each scan. Scanning tunneling spectroscopy. The scanning tunneling spectroscopy (STS) is spectroscopic form of STM. Spectroscopic data based on curvature is obtained to analyze the existence of any oxides or impurities on the tip. This is done by monitoring the linearity of the curve, which represents metallic tunnel junction. Generally, the curve is non-linear; hence, the tip has a gap-like shape around zero bias voltage for oxidized or impure tip, whereas the opposite is observed for sharp pure un-oxidized tip. Auger electron spectroscopy, X-ray photoelectron spectroscopy. In Auger electron spectroscopy (AES), any oxides present on the tip surface are sputtered out during in-depth analysis with argon ion beam generated by differentially pumped ion pump, followed by comparing the sputtering rate of the oxide with experimental sputtering yields. These Auger measurements may estimate the nature of oxides because of the surface contamination. Composition can also be revealed, and in some cases, thickness of the oxide layer down to 1-3 nm can be estimated. X-ray photoelectron spectroscopy also performs similar characterization for the chemical and surface composition, by providing information on the binding energy of the surface elements. Overall, the aforementioned characterization methods of tips can be categorized into three major classes. They are as follows: Applications. Probes tips have a wide variety of applications in different fields of science and technology. One of the major areas where probe tips are used is for application in SPM i.e., STM and AFM. For example, carbon nanotube tips in conjunction with AFM provides an excellent tool for surface characterization in the nanometer realm. CNT tips are also used in tapping-mode Scanning Force Microscopy (SFM), which is a technique where a tip taps a surface by a cantilever driven near resonant frequency of the cantilever. The CNT probe tips fabricated using CVD technique can be used for imaging of biological macromolecules, semiconductor and chemical structure. For example, it is possible to obtain an intermittent AFM contact image of IgM macromolecules with excellent resolution using a single CNT tip. Individual CNT tips can be used for high resolution imaging of protein molecules. In another application, multiwall carbon nanotube (MWCNT) and single wall carbon nanotube (SWCNT) tips were used to image amyloid β (1-40) derived protofibrils and fibrils by tapping mode AFM. Functionalized probes can be used in Chemical Force Microscopy (CFM) to measure intermolecular forces and map chemical functionality. Functionalized SWCNT probes can be used for chemically sensitive imaging with high lateral resolution and to study binding energy in chemical and biological system. 
Probe tips that have been functionalized with either hydrophobic or hydrophilic molecules can be used to measure the adhesive interaction between hydrophobic-hydrophobic, hydrophobic-hydrophilic, and hydrophilic-hydrophilic molecules. From these adhesive interactions, the friction image of a patterned sample surface can be obtained. Probe tips used in force microscopy can provide imaging of the structure and dynamics of adsorbates at the nanometer scale. Self-assembled functionalized organic thiols on the surface of Au-coated Si3N4 probe tips have been used to study the interaction between molecular groups. Again, carbon nanotube probe tips in conjunction with AFM can be used for probing crevices that occur in microelectronic circuits with improved lateral resolution. Functionality-modified probe tips have been used to measure the binding force between single protein-ligand pairs. Probe tips have been used in tapping-mode techniques to provide information about the elastic properties of materials. Probe tips are also used in mass spectrometry. Enzymatically active probe tips have been used for the enzymatic degradation of analytes. They have also been used as devices to introduce samples into the mass spectrometer. For example, trypsin-activated gold (Au/trypsin) probe tips can be used for the peptide mapping of hen egg lysozyme. Atomically sharp probe tips can be used for imaging a single atom in a molecule. An example of visualizing single atoms in a water cluster can be seen in Fig. 10. By visualizing single atoms in molecules present on a surface, scientists can determine bond length, bond order and discrepancies, if any, in conjugation, which was previously thought to be impossible in experimental work. Fig. 9 shows the experimentally determined bond order in a poly-aromatic compound, which was thought to be very hard in the past. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "T=16\\epsilon(1-\\epsilon)^{-2k}" }, { "math_id": 1, "text": "\\epsilon = \\frac{E}{V} = \\text{Kinetic energy/potential energy}" }, { "math_id": 2, "text": "k = 2\\pi \\sqrt{\\frac{2mE}{h}}" }, { "math_id": 3, "text": "h" }, { "math_id": 4, "text": "\\backsim" }, { "math_id": 5, "text": "[\\log_{10}(1/V^2) vs. (1/V)] " } ]
https://en.wikipedia.org/wiki?curid=57418849
57421458
Binomial process
A binomial process is a special point process in probability theory. Definition. Let formula_0 be a probability distribution and formula_1 be a fixed natural number. Let formula_2 be i.i.d. random variables with distribution formula_0, so formula_3 for all formula_4. Then the binomial process based on "n" and "P" is the random measure formula_5 where formula_6 Properties. Name. The name of a binomial process is derived from the fact that for all measurable sets formula_7 the random variable formula_8 follows a binomial distribution with parameters formula_9 and formula_1: formula_10 Laplace-transform. The Laplace transform of a binomial process is given by formula_11 for all positive measurable functions formula_12. Intensity measure. The intensity measure formula_13 of a binomial process formula_14 is given by formula_15 Generalizations. A generalization of binomial processes is the mixed binomial process. In these point processes, the number of points is not deterministic as it is with binomial processes, but is determined by a random variable formula_16. Therefore, mixed binomial processes conditioned on formula_17 are binomial processes based on formula_1 and formula_0.
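A small simulation sketch of the definition above, with P taken to be the uniform distribution on [0, 1] and A = [0, 0.3]; these concrete choices and the function names are illustrative only. The empirical mean of the point count in A approaches the intensity measure n·P(A).

import random

def binomial_process(n, sampler):
    # The n i.i.d. points drawn from P
    return [sampler() for _ in range(n)]

def xi(points, A):
    # xi(A): number of the points that fall in the set A (here an interval)
    lo, hi = A
    return sum(lo <= x <= hi for x in points)

random.seed(1)
n, A = 20, (0.0, 0.3)
counts = [xi(binomial_process(n, random.random), A) for _ in range(10000)]
print(sum(counts) / len(counts))   # close to n * P(A) = 20 * 0.3 = 6

Each individual count follows a binomial distribution with parameters P(A) and n, which is the property the process is named after.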
[ { "math_id": 0, "text": " P " }, { "math_id": 1, "text": " n " }, { "math_id": 2, "text": " X_1, X_2, \\dots, X_n " }, { "math_id": 3, "text": " X_i \\sim P " }, { "math_id": 4, "text": " i \\in \\{1, 2, \\dots, n \\}" }, { "math_id": 5, "text": " \\xi= \\sum_{i=1}^n \\delta_{X_i}, " }, { "math_id": 6, "text": "\\delta_{X_i(A)}=\\begin{cases}1, &\\text{if }X_i\\in A,\\\\ 0, &\\text{otherwise}.\\end{cases}" }, { "math_id": 7, "text": " A " }, { "math_id": 8, "text": " \\xi(A) " }, { "math_id": 9, "text": " P(A) " }, { "math_id": 10, "text": " \\xi(A) \\sim \\operatorname{Bin}(n,P(A))." }, { "math_id": 11, "text": " \\mathcal L_{P,n}(f)= \\left[ \\int \\exp(-f(x)) \\mathrm P(dx) \\right]^n " }, { "math_id": 12, "text": " f " }, { "math_id": 13, "text": " \\operatorname{E}\\xi " }, { "math_id": 14, "text": " \\xi " }, { "math_id": 15, "text": " \\operatorname{E}\\xi =n P." }, { "math_id": 16, "text": " K " }, { "math_id": 17, "text": " K=n " } ]
https://en.wikipedia.org/wiki?curid=57421458
57421934
Heat flux measurements of thermal insulation
Heat flux measurements of thermal insulation are applied in laboratory and industrial environments to obtain reference or in-situ measurements of the thermal properties of an insulation material. Thermal insulation is tested using nondestructive testing techniques relying on heat flux sensors. Procedures and requirements for in-situ measurements are standardized in the ASTM C1041 standard: "Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers". On-site methods. On-site heat flux measurements are often focused on testing the thermal transport properties of, for example, pipes, tanks, ovens and boilers, by calculating the heat flux "q" or the apparent thermal conductivity "formula_0". The real-time energy gain or loss is measured under pseudo-steady-state conditions with minimal disturbance by a heat flux transducer (HFT). This on-site method is for flat surfaces (non-pipes) only. Measurement procedure. After successful application of these preparations, connect the sensor to a datalogger or integrating voltmeter and wait until pseudo steady-state is achieved. It is advised to average the readings over a short time period when steady-state is achieved. This voltage measurement is the final measurement, but for good measure these steps should be applied at multiple relevant locations on the insulation. Calculation and precision. The heat flux formula_1 can be calculated from the voltage by: formula_2 where "V" is the voltage measured by the HFT (measured in volt, V) and "S" is the sensitivity of the HFT (measured in volts per watt per square meter, formula_3). The apparent thermal conductivity can be calculated from: formula_4 where "q" is the heat flux calculated from the HFT (measured in watt per square meter, formula_5), "D" is the thickness of the insulation material (measured in millimeter, mm), "formula_6" is the temperature of the process surface, the inside of the material, and "formula_7" is the temperature of the surface near the HFT, the outside of the material. The interpretation and precision of the results depend on the section of measurement, the choice of HFT and external conditions. The correct heat flux sensor and measurement test section are of importance for a good in-situ measurement and should be based on manufacturer recommendations, past experience and careful consideration of the testing area. Standards. ASTM C1041: Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers References. <templatestyles src="Reflist/styles.css" /> Bibliography. Sweden, 1979. (Draft Translation, March 1982, U.S. Army Corps of Engineers)
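A minimal sketch of the two formulas in the calculation section above. The numerical values used (sensor voltage, sensitivity, thickness and temperatures) are made-up illustrations, not values taken from the standard.

def heat_flux(V, S):
    # q = V / S, with V in volts and S in V/(W/m^2); the result is in W/m^2
    return V / S

def apparent_conductivity(q, D_mm, t1, t2):
    # lambda = q * D / (t1 - t2); D is converted from mm to m so the result is in W/(m*K)
    return q * (D_mm / 1000.0) / (t1 - t2)

q = heat_flux(2.5e-3, 50e-6)                       # 2.5 mV reading, sensitivity 50 uV per W/m^2
print(q)                                           # 50.0 W/m^2
print(apparent_conductivity(q, 50, 120.0, 30.0))   # about 0.028 W/(m*K) for 50 mm of insulation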
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "q = \\frac{V}{S}" }, { "math_id": 3, "text": "\\frac{Vm^{2}}{W}" }, { "math_id": 4, "text": "\\lambda = \\frac{{qD}}{t_{1}-t_{2}}" }, { "math_id": 5, "text": "\\frac{W}{m^{2}}" }, { "math_id": 6, "text": "t_1" }, { "math_id": 7, "text": "t_2" } ]
https://en.wikipedia.org/wiki?curid=57421934
57425265
Loupekine snarks
Two related snarks in graph theory In the mathematical field of graph theory, the Loupekine snarks are two snarks, both with 22 vertices and 33 edges. The first Loupekine snark graph can be described as follows (using SageMath syntax): lou1 = Graph({1:[2,3,4], 5:[6,10],6:[7],7:[8],8:[9],9:[10], 11:[16,12],12:[13],13:[14],14:[15],15:[16], 17:[2,5,16],18:[2,10,11], 19:[3,7,12],20:[3,6,13], 21:[9,4,14],22:[4,8,15]}). The second Loupekine snark is obtained (up to an isomorphism) by replacing edges 5–6 and 11–12 by edges 5–12 and 6–11 in the first graph. Properties. Both snarks share the same invariants (as given in the boxes). The set of all automorphisms of a graph forms a group under composition. For both Loupekine snarks, this group is the dihedral group formula_0 (identified as [12,4] in the Small Groups Database). The orbits under the action of formula_0 are: {1}, {2, 3, 4}, {17, 18, 19, 20, 21, 22}, and {5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}. The characteristic polynomials are different, namely: formula_1 and formula_2
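The adjacency list above is SageMath syntax. The plain-Python sketch below uses the networkx library (a choice made here, not part of the article) to build the first snark from the same dictionary, perform the stated edge replacement to obtain the second, and check the shared invariants.

import networkx as nx

adj = {1: [2, 3, 4], 5: [6, 10], 6: [7], 7: [8], 8: [9], 9: [10],
       11: [16, 12], 12: [13], 13: [14], 14: [15], 15: [16],
       17: [2, 5, 16], 18: [2, 10, 11], 19: [3, 7, 12], 20: [3, 6, 13],
       21: [9, 4, 14], 22: [4, 8, 15]}

lou1 = nx.Graph(adj)
lou2 = lou1.copy()
lou2.remove_edges_from([(5, 6), (11, 12)])   # replace edges 5-6 and 11-12 ...
lou2.add_edges_from([(5, 12), (6, 11)])      # ... by edges 5-12 and 6-11

for G in (lou1, lou2):
    print(G.number_of_nodes(), G.number_of_edges(),   # 22 vertices, 33 edges
          all(d == 3 for _, d in G.degree()))         # cubic: True
print(nx.is_isomorphic(lou1, lou2))                   # False; the two snarks are not isomorphic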
[ { "math_id": 0, "text": "D_6" }, { "math_id": 1, "text": "\\chi_1= (x - 3) (x + 2)^{3} (x^{3} + x^{2} - 4x - 2) (x^{3} - 2x^{2} - x + 1)^{2} \\cdot (x - 2) (x^{2} + 2x - 2) (x^{3} - 3x + 1)^{2} " }, { "math_id": 2, "text": "\\chi_2= (x - 3) (x + 2)^{3} (x^{3} + x^{2} - 4x - 2) (x^{3} - 2x^{2} - x + 1)^{2} \\cdot x (x^{2} - 2) (x^{3} - 5x + 3)^{2} " } ]
https://en.wikipedia.org/wiki?curid=57425265
57425816
Contorted aromatics
Hydrocarbon compounds composed of rings fused such that the molecule is nonplanar In organic chemistry, contorted aromatics, or more precisely contorted polycyclic aromatic hydrocarbons, are polycyclic aromatic hydrocarbons (PAHs) in which the fused aromatic molecules deviate from the usual planarity. Introduction. The comparison of the structures of graphene and fullerene can help in comprehending the origin of curved pi surfaces and contortion in PAHs. Both graphene and fullerene resemble each other in having sp2 hybridized carbons; however, they exhibit different geometries (allotropes of carbon). The fact that fullerenes have five-membered rings incorporated among six-membered rings makes them spherical, whereas graphene remains planar because it consists exclusively of six-membered rings. In general, the ideal length and angle for a C–C bond in such an sp2 network are about 1.42 Å and 2π/3 respectively, whereas in the corannulene structure a five-membered ring is surrounded by six-membered rings, contributing to non-planarity in the molecule. This leads to changes in bond angles and bond lengths, rendering the structure non-planar. Such contortions in the structure of PAHs are known as ‘arching distortions’. The hydrogen and carbon atoms of the PAH molecule which are either closest to one another due to the non-planar structure or suffer from angle strain are known as the "saturated" ones and may serve as a source of ‘splitting distortions’ (see fig. 2). The region of a contorted molecule having saturated hydrogens and carbons is known as the bay region (having white spheres or saturated hydrogens) (see fig. 4). Another reason for making the PAH molecules contorted may be the size of these molecules. Theoretical vibrational frequency studies on coronenes ("n" = 2-12) using quantum chemical calculations (Hartree-Fock and DFT) show an accelerated loss of planarity with increasing size of the PAH molecules, in spite of having a stable structure due to conjugation, delocalization and aromatization. A gas phase coronene can be expected to switch to non-planar geometry around "n" = 9-12. The combined strain of arching and splitting distortion is thought to force the PAH molecules out of the plane and generate contortions. A switch to umbrella geometry may occur at "n" = 12 (see fig. 3). Figure 4 illustrates the strain energies carried by three different PAH molecules. The white spheres represent saturated hydrogens and the grey spheres the leading carbons contributing to strain energy due to non-planar distortions. Table 1 shows the energy values for non-planarity (strain) of PAHs in kcal/mol for five-membered and six-membered ring molecules with common helicene and coronene units, with reference to fullerenes C60 = 483.91 kcal/mol and C70 = 492.58 kcal/mol. The shaded parts indicate the bay regions in PAH molecules bearing the most strain. The strain energy is expressed in units of 10-2 kcal/mol. The ENP non-planar strain (contortion) energies indicate how much a PAH molecule deviates from the standard planar structure in terms of bond angles and bond lengths. The standard for PAHs is usually graphite. An ENP is introduced in the molecular structure whenever it deviates from the standard structural geometries to get strain relief. The ENP values for a variety of PAHs are shown in Table 1. Two types of contortions or ENP can be observed based on the data in Table 1. The PAH molecules shown in Table 1 are based on two kinds of motifs: the ones based on helicenes and the others on corannulenes. 
The small ENP values (ranging from 0.25-8 kcal/mol) represented by the helicene based molecules and are because of weak contortions in their structures. The contributing factor seems to be the presence of only splitting distortions in the bay region of these PAHs. However, the other group based on corannulenes show higher ENP values (ranging from 1.86-116 kcal/mol). The higher ENP values suggest stronger contortions which may have originated due to the higher number of fused rings and a five-membered ring core in PAH molecules along with the bay region contortions. Based on the above discussion it can be cautiously inferred that the combined effects of non-planar strain caused by arching and splitting distortions as well as the size of PAH molecules give birth to contortions in PAHs structures. These unparalleled contorted molecules exhibit wide absorption spectrum and enhanced charge transport properties which make them potential candidates for electronic and optoelectronic applications. The curvature in these molecules cause slight shifting of the frontier orbitals from ideal parallel symmetry. The departure from parallel overlap induces modification in HOMO and LUMO eventually giving birth to changed optical properties. Inversion of these molecules in solution may allow changes in orbital geometry, thereby, broadening the absorption and emission spectra which is useful for light-emitting diode (LED) applications. Some derivatives of corannulenes act as blue emitters. The efficient charge transport in contorted aromatic molecules is related with self-assembly and crystal packing. The discotic contorted molecules in crystals tilt relative to the columnar axis because of the interaction of protons with the electronic cloud of neighboring PAHs molecules. This tilt forbids the latitudinal stacking of molecules and allows only the longitudinal stacking (see fig 6). This property enhances the linear overlap of orbitals and promote the charge carrier mobility. All these properties make the contorted PAHs excellent candidates for semiconductor, organic field effect transistors (OFETs), and organic photovoltaic (OPV) devices applications. Discs and ribbons (see Fig. 1) constitute the major classes of contorted aromatic structures. The discs possess a concave π molecular surface and upon substitution with a hydrocarbon side chain can be self- assembled to turn into columns figure5. These columns can express the desired properties of nanoscale phase separation, charge separation and charge transport in films. The concave discs can behave as molecular sensors for electron deficient aromatic molecules. Once in contact with electron acceptors like fullerenes they show typical characteristics of a p-n junction in shape of a ball and socket joint model Figure 7. The ribbons, on the other hand, can be envisioned as pieces of graphene being spun into ribbons owing to contortions. They are good materials for OFETs as electron transporting materials. With the incorporation of donor polymers they behave as potential alternative for non-fullerene based organic solar cells. The focus on developing the non-fullerene organic semiconductor is because of some potential downsides of fullerene-based materials for n-type organic molecules. Fullerenes fail to fulfil the basic criterion of good organic semiconductors by not having good absorption in UV-Vis region and poor tunability upon substitution with various groups. 
These demands have fuelled research towards finding new materials including contorted aromatic molecule which possess fundamental qualities of wide absorption range and better charge transportation. These molecules also provide controlled π-π stacking in small domains and excellent charge percolation pathways. History. The Nobel Prize of 1953 won by Hermann Staudinger for characterizing macromolecules as polymers opened the gateway for a new field of materials. Ever since that time the polymers have been replacing conventional materials like wood, metals and now are widely used as conductors and semiconductors in commercialized products. The age of plastic based society really began in the wake of 1953’s Nobel prize. It was later revealed in 1970 that some polymers can exhibit appropriate electrical conductivity. The thought would have been consolidated by the fact that graphite a purely carbon-based material was capable of electric conduction due to its expansive π conjugation system. The organic conductors are however, not as good as metal-based conductors. By late 1970s efforts were made for fabricating polymer-based power transmission lines, light weight motors and novel approaches for achieving superconductivity. Currently massive worldwide efforts for achieving higher power conversion efficiencies in OPVs, better hole carrier mobilities in OFETs in π-conjugated polymer domain are underway. The discovery of iodine doped polyacetylene and its electrical properties in 1977 fuelled the research work towards finding better and more efficient conjugated organic polymers. The ‘2000’ Nobel Prize in chemistry by Alan J. Heeger, Alan G. Macdiarmid, and Hideki Shirakawa in recognition of their contribution towards unravelling the polyacetylene figure electronics set another milestone in the journey of finding conducting polymers. Over the past two decades developments have been made in synthesizing new polymers like polythiophenes, polyphenylenes and polyphenylene sulphides and small organic molecules. The contorted aromatic molecules due to their properties of making ladder polymers, self-assembly and innate charge percolation pathways because of small domains and π stacking other than providing conjugations throughout their structure became another focal point for their use in a variety electronic application. These contorted molecules offer a delicate balance and phase separation which can promote the power conversion efficiency (PCE) close to theoretical limit of 20%. Widespread futuristic studies on synthesizing new small contorted aromatic molecules capable of being stable over longer period and higher PCE are being carried out. From inorganic semiconductors to Contorted Aromatic Small molecules. One may keep getting pestered by the simple question of why we need the organic solar cells? No wonder if the commercial inorganic silicon based solar cells had been doing fine no one would have ever thought of an alternative way of getting at the same point. The answer is very simple. Inorganic solar cells have their own pros and cons. But the point of concern would be to address the issues which have limit their wide spread use so far. One of the major drags in commercializing this technology on massive scale is the cost. Highly purified silicon production needs a lot of energy which would add to the cost of this technology on one hand and hurt the green chemistry on the other. 
Efforts are underway to bring about new ways to aggregate as much stacks of silicon material as possible to increase the PCE which does not seem to help as far as cost reduction is concerned. The inorganic solar panels are heavy and non-flexible. These factors count towards limiting this technology for mass adaptation and may not be able to compete with other green and cheap energy sources like wind and hydro power. Multi junction silicon based solar cells however, are the best so far for providing highest PCE 44.4%. All the above-mentioned challenges have been pushing the scientific community to seek for new opportunities. The organic solar cells seem to supersede their predecessors in many aspects but the challenge of competing with PCE remains. Unlike the inorganic solar cells, the material requirement for fabrication of OPV is much smaller. They can be printed out using printing tools which would help in low cost commercialization of these materials. Solution processable organic semiconductors can be easily fabricated into thin films. In short OPVs, OFETs and organic light emitting diodes OLEDs are light weight, flexible, easy to fabricate, inexpensive and tunable to demand as compared to the conventional inorganic materials. The downsides of organic small contorted molecules as semiconductors is their less reliable long-term usability and low PCE so far. The highest PCE for organic solar cell achieved so far is 11%. When it comes to the reasons that limit OPVs power conversion efficiencies, we need to understand the structural and chemical limitations that existing organic small molecules offer. The OPVs make use of a pair of electron acceptor fullerene or another non-fullerene (n-type) and electron donor molecules bond (p-type) creating heterojunctions within the active film. There are distinctive designs in which these molecules can be used in blends. Mostly planar heterojunction (Fig.8) is used for fullerene based organic semiconductors in which the two molecules end up on top of each other to create a ball and socket joint model1. Bulk heterojunctions (Fig 8) have reportedly produced better PCEs when solution processed in single active layer. The use of fullerene in OPVs and semiconductors generate a variety of challenges that are detrimental for good light harvesting and power generation. Fullerenes do not give a strong absorption in visible (ε = 724 L mol−1 cm−1) and near-IR region (ε = 7500 L mol−1 cm−1) which reduces the charge generation capability for these devices. Since conduction in organic semiconductors is thermally activated phenomenon of hopping electrons through sp2 hybridized p-orbitals, therefore electron acceptors with better absorption in visible-near IR regions are superior performers. The fullerenes offer less tunability and poor electronic communication between the cage (C60 or C70) and its substituent. Another challenge is high probability of exciton recombination in fullerenes-based electron acceptors. Contrary to the inorganic semiconductors, the excitons formed in organic materials possess a higher binding energy (0.3-1.0 eV). This issue is resolved at the acceptor donor interface by matching HOMO (donor) and LUMO (acceptor) energy gaps. Planar heterojunction (PHJ) is the commonly used OPV architecture in which donor and acceptors molecules are joined with one another. 
Owing to their non-polar, non-planar topology and shape, contorted PAHs are expected to offer a balance between miscibility and self-aggregation that favours optimal charge transport to the electrodes. These contorted conjugated aromatic molecules support charge transport through the resonance of their electrons and, because of suitable HOMO and LUMO offsets at the donor–acceptor interface, hinder electron–hole recombination while still allowing separate charge percolation pathways for the opposing charges. Self-Assembled Materials. These contorted molecules show the remarkable phenomenon of self-assembly. Data collected from OFETs and OPVs show that they transport charge efficiently: c-HBC, c-OCBC and c-DBTTC act as electron donors and c-PDIs as electron acceptors. The contorted, non-planar structure allows these molecules to sustain sufficient charge transport in self-assembled layers. The self-assembly characteristics change for alkyl-chain derivatives of c-HBC, which form orthorhombic crystalline cables that act as p-type semiconductors. The contorted peripheral edges of these molecules provide unique intermolecular contacts that make them efficient in charge transport. The tetra-dodecyloxy side chains in c-HBC promote self-assembly, and these materials are deposited in the form of columnar hexagonal liquid crystals in which the main aromatic core transports the charges and the side chains act as insulators. In thin films, the columns arrange themselves parallel to the surface and allow lateral charge transport. Designing and Synthesis of Contorted PAHs. Influenced by fullerenes themselves, several structural features, summarized in Figure 9, are incorporated into the non-fullerene-based electron acceptors. Cyclopentafused Contorted PAHs. More recently, a Pd-catalyzed cyclopentannulation followed by Scholl cyclodehydrogenation, devised by Kyle Plunkett's research group, has been found valuable for synthesizing contorted polyaromatic molecules built around a five-membered-ring core. The approach rests on the idea that the five-membered cyclopenta-fused aceanthrylene core is itself a fragment of fullerene and, in its substituted form, can assist conjugation in the molecule. The resonance structures in Figure 10 show that the five-membered aromatic ring core can accept a pair of electrons in one of its (antiaromatic) resonance structures to afford cyclopentadienyl anion (aromatic) rings, and can thus behave as a good electron-acceptor molecule; the tendency to reach the low-energy aromatic structure makes it a good electron-acceptor core. This methodology allows relatively easy extension of the conjugated aromatic core of the molecule, which serves as a conduit for charge transfer. These contorted aromatic molecules provide better solubility and solution processability while still exhibiting π stacking in the solid state. The contortions thus enhance the π-stacking ability through a lock-and-key-like model, giving a reasonable phase separation favourable for isotropic charge transport. Synthetic Scheme I shows the use of the above-mentioned cyclopentadienyl anion scaffold in synthesizing contorted aromatic molecules: the reaction of dibromoanthracene with 3,3′-dialkoxy-1,1′-diphenylacetylene (R = CH3, C12H25) in the presence of Pd2(dba)3, P("o"-tol)3, KOAc, LiCl and DMF gives 1,2,6,7-tetra(3-alkoxyphenyl)cyclopenta["hi"]aceanthrylene, which after Scholl cyclodehydrogenation affords 2,7,13,18-Tetraalkoxytetrabenzo["f,h,r,t"]-rubicene. 
The crystal structure (Figure 11) of the compound 2,7,13,18-Tetraalkoxytetrabenzo["f,h,r,t"]-rubicene shown in Scheme I reveals its non-planar, contorted structure; the splay angles of the bay regions manifest the contortion. The synthesis of cyclopentafused contorted PAHs can therefore be accomplished by this method, and the contortion enhances the solubility of these molecules compared with their planar counterparts. Device Fabrication And Testing. These contorted PAH molecules are used, in the form of thin films, in the fabrication of OFET devices to test their semiconducting properties (Figure 12). The contorted PAHs are usually deposited from solution onto an insulating SiO2 dielectric layer on a Si substrate. Three electrodes, the source (S), drain (D) and gate (G), are attached to apply potentials and measure the current. A gate–source voltage VGS significantly above the threshold value generates a positive (p-type) channel at the semiconductor–dielectric interface. On applying a negative source–drain voltage VSD, holes flow from source to drain, which is equivalent to electrons flowing in the opposite direction. Increasing VGS increases the drain–source current IDS. The process continues until the current reaches its maximum "pinch-off" value, at which point the positive channel becomes saturated on one side. Data collected at different VGS values are plotted as an output plot, shown in Figure 5 for c-HBC with tetra-dodecyloxy side chains. p-Type semiconductors operate at negative VSD and VGS values, and n-type semiconductors at positive values. Plotting (IDS)1/2 against VG (the transfer plot) gives the slope used to calculate the mobility: μ = (2"L"/"WC"i) formula_0, where "L" is the channel length, "W" is the channel width, "C"i is the capacitance of the dielectric in F/cm2, and formula_0 is obtained from the slope of the transfer plot. The units of mobility are cm2/V·s. c-HBC, c-OCBC and c-DBTTC and many of their derivatives act as p-type semiconductors, while c-PDI and its oligomers act as electron acceptors. Experiments indicate that the photon conversion efficiency of contorted c-HBC in OPVs is better than that of planar HBC. The contorted shape enhances the electronic properties of the molecules by building intimate structural interfaces between donors and acceptors. The c-PDI dimers, on the other hand, show good electron mobility (~10−2 cm2 V−1 s−1), good electron-accepting ability and LUMO levels nearly identical to those of fullerenes. 
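The following short Python sketch illustrates the mobility extraction described above; all numerical values (channel dimensions, dielectric capacitance and transfer-plot slope) are hypothetical and are not taken from the cited device work.
# Hypothetical example of extracting the field-effect mobility from the slope
# of the transfer plot: mu = (2L / (W * Ci)) * (d(sqrt(I_DS))/dV_G)^2.
L = 50e-4       # channel length in cm (50 micrometres, assumed)
W = 0.1         # channel width in cm (1 mm, assumed)
Ci = 1.15e-8    # capacitance of roughly 300 nm of SiO2, in F/cm^2 (assumed)
slope = 2.0e-4  # slope of sqrt(I_DS) versus V_G, in A**0.5 per volt (assumed)
mu = (2 * L / (W * Ci)) * slope ** 2   # mobility in cm^2 V^-1 s^-1
print(round(mu, 3))                    # about 0.35 cm^2/(V s) for these numbers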
Applications. The general applications of contorted PAHs and organic semiconductors are as follows. Organic Light Emitting Diodes. Organic semiconductors are used as OLEDs in various electronic display applications. Philips launched the Sensotech Philishave in 2002, featuring the first OLED-based display panel; Kodak introduced the LS633 digital camera with its award-winning OLED display technology; and Sony produced a 27-inch OLED prototype TV. Organic Photovoltaic Cells. Another extensively researched application of organic semiconductors is light harvesting. The current photon conversion efficiency may be an obstacle to commercializing this technology, but it offers many attractive features such as low cost, flexibility and miniaturization. Organic Field Effect Transistors. OFETs are another exciting application of organic semiconductors. With the right prerequisites, this technology holds enough potential to revolutionise existing technologies. Smart Textiles. The flexibility and light weight of organic semiconductors offer futuristic applications such as smart fabrics for healthcare, military, sporting and space-exploration ventures, allowing real-time monitoring of the health of the person wearing them. Healthcare. Organic light-emitting diodes can also be used in photodynamic therapy for skin cancer and in the cosmetic industry for anti-aging treatments. Flexible Screen and Displays Units. Flexibility gives organic semiconductors another edge over inorganic semiconductors: this technology may allow screens capable of rolling up into small devices, and Philips has made a black-and-white prototype. Lab on a Chip. Organic semiconducting technology is replacing silicon and may one day achieve the goal of a lab on a chip. Contorted aromatic molecules offer a wide variety of tuning and suitability options; both electron-acceptor (n-type) and electron-donor (p-type) contorted molecules can be manufactured. Their non-planar structure gives rise to self-assembly, which allows miscibility and phase separation to be optimized simultaneously, and their charge mobility, about two orders of magnitude higher than that of the corresponding planar structures, points towards well-tuned charge percolation pathways. There is still a long way to go from a PCE of 11% to the theoretical maximum of 20%; if researchers are able to strike the right balance between miscibility and phase separation of these materials and tune the orbitals correctly, they may find the best organic semiconductors. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d\\sqrt{I}_{D-S}/(d_{V-G})^2" } ]
https://en.wikipedia.org/wiki?curid=57425816
57427675
Specific pump power
Pump energy-efficiency metric Specific Pump Power (SPP) is a metric in fluid dynamics that quantifies the energy-efficiency of pump systems. It is a measure of the electric power that is needed to operate a pump (or collection of pumps), relative to the volume flow rate. It is not constant for a given pump, but changes with both flow rate and pump pressure. This term 'SPP' is adapted from the established metric Specific fan power (SFP) for fans (blowers). It is commonly used when measuring the energy efficiency of buildings. Definition. The SPP for a specific operating point (combination of flow rate and pressure rise) for a pump system is defined as: formula_0 where: Just as for SFP (i.e. fan power), SPP is also related to pump pressure (pump head) and the pump system efficiency, as follows: formula_3 where: This equation is simply an application of Bernoulli's principle in the case where the inlet and outlet have the same diameter and same height. Observe that SPP is not a property of the pump alone, but is also dependent on the pressure drop of the circuit that the pump circulates fluid through. Thus, in order to minimize energy use for pump system, one must reduce the system pressure drop (e.g. use large diameter pipes and low flow rates) in addition to selecting pumps with good intrinsic efficiency (hydrodynamically efficient with an efficient motor). Applying the above equations enables us to estimate electrical power consumption in a number of ways: formula_6 where: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
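A worked example may help fix the units; the short Python sketch below uses invented values (a 10 L/s flow, a 50 kPa pressure rise and an overall efficiency of 0.5) to evaluate the relations above.
# Hypothetical illustration of the SPP relations; all numbers are invented.
q_v = 0.01        # volume flow rate in m^3/s (10 L/s)
delta_p = 50e3    # total pressure rise across the pump in Pa (50 kPa)
eta_tot = 0.5     # overall efficiency of pump, motor and drive
spp = delta_p / eta_tot       # SPP in Pa, i.e. J per m^3 pumped (here 100 kJ/m^3)
p_elec = q_v * spp            # electric power drawn by the pump system, in W
p_hydraulic = q_v * delta_p   # hydraulic power delivered to the fluid, in W
print(spp, p_elec, p_hydraulic)   # 100000.0 1000.0 500.0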
[ { "math_id": 0, "text": "SPP \\equiv {{\\sum P_{elec}} \\over q_v}" }, { "math_id": 1, "text": "{\\sum P_{elec}}" }, { "math_id": 2, "text": "{q_v}" }, { "math_id": 3, "text": "SPP ={\\Delta p_t \\over \\eta_{tot}}" }, { "math_id": 4, "text": "\\Delta p_t" }, { "math_id": 5, "text": "\\eta_{tot}" }, { "math_id": 6, "text": "{\\sum P_{elec}}={q_v \\cdot SPP}={q_v \\Delta p_t \\over \\eta_{tot}}={P_h \\over \\eta_{tot}}" }, { "math_id": 7, "text": "P_h" }, { "math_id": 8, "text": "=q_v \\Delta p_t" } ]
https://en.wikipedia.org/wiki?curid=57427675
57429392
Fusion of anyons
Anyon fusion is the process by which multiple anyons behave as one larger composite anyon. Anyon fusion is essential to understanding the physics of non-abelian anyons and how they can be used in quantum information. Abelian anyons. If formula_0 identical abelian anyons each with individual statistics formula_1 (that is, the system picks up a phase formula_2 when two individual anyons undergo adiabatic counterclockwise exchange) all fuse together, they together have statistics formula_3. This can be seen by noting that upon counterclockwise rotation of two composite anyons about each other, there are formula_4 pairs of individual anyons (one in the first composite anyon, one in the second composite anyon) that each contribute a phase formula_2. An analogous analysis applies to the fusion of non-identical abelian anyons. The statistics of the composite anyon is uniquely determined by the statistics of its components. Non-abelian anyon fusion rules. Non-abelian anyons have more complicated fusion relations. As a rule, in a system with non-abelian anyons, there is a composite particle whose statistics label is not uniquely determined by the statistics labels of its components, but rather exists as a quantum superposition (this is completely analogous to how two fermions known to each have spin 1/2 and 3/2 are together in quantum superposition of total spin 1 and 2). If the overall statistics of the fusion of all of several anyons is known, there is still ambiguity in the fusion of some subsets of those anyons, and each possibility is a unique quantum state. These multiple states provide a Hilbert space on which quantum computation can be done. Specifically, two non-abelian anyons labeled formula_5 and formula_6 have a fusion rule given by formula_7, where the formal sum over formula_8 goes over all labels of possible anyon types in the system (as well as the trivial label formula_9 denoting no particles), and each formula_10 is a nonnegative integer which denotes how many distinct quantum states there are in which formula_11 and formula_12 fuse into formula_13 (This is true in the abelian case as well, except in that case, for each formula_11 and formula_12, there is one type of anyon formula_13 for which formula_14 and for all other formula_13, formula_15 .) Each anyon type formula_11 should also have a conjugate antiparticle formula_16 among the list of possible anyon types, such that formula_17, i.e. it can annihilate with its antiparticle. The anyon type label does not specify all of the information about the anyon, but the information that it does indicate is topologically invariant under local perturbations. For example, the Fibonacci anyon system, one of the simplest, consists of labels formula_18 and formula_19 (formula_19 denotes a Fibonacci anyon), which satisfy fusion rule formula_20 (corresponding to formula_21) as well as the trivial rules formula_22 and formula_23 (corresponding to formula_24). The Ising anyon system consists of labels formula_18 , formula_25 and formula_26 , which satisfy fusion rules formula_27, formula_28, and the trivial rules. The formula_29 operation is commutative and associative, as it must be to physically make sense with fused anyons. Furthermore, it is possible to view the formula_30 coefficients as matrix entries formula_31 of a matrix with row and column indices formula_32 and formula_33; then the largest eigenvalue of this matrix is known as the quantum dimension formula_34 of anyon type formula_11. 
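As a concrete illustration of the last point, the following Python sketch (hypothetical code, not drawn from the article's sources) computes the quantum dimensions of the Fibonacci and Ising anyons as the largest eigenvalues of their fusion matrices.
# Quantum dimensions from fusion matrices; basis order (1, tau) and (1, psi, sigma).
import numpy as np
# (N_tau)^c_b for tau x 1 = tau and tau x tau = 1 + tau
N_tau = np.array([[0, 1],
                  [1, 1]])
# (N_sigma)^c_b for sigma x 1 = sigma, sigma x psi = sigma, sigma x sigma = 1 + psi
N_sigma = np.array([[0, 0, 1],
                    [0, 0, 1],
                    [1, 1, 0]])
d_tau = max(np.linalg.eigvals(N_tau).real)      # (1 + sqrt(5))/2, about 1.618
d_sigma = max(np.linalg.eigvals(N_sigma).real)  # sqrt(2), about 1.414
print(d_tau, d_sigma)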
Fusion rules can also be generalized to consider in how many ways formula_35 a collection formula_36 can be fused to a final anyon type formula_8. Hilbert spaces of fusion processes. The fusion process where formula_11 and formula_12 fuse into formula_13 corresponds to a formula_30 dimensional complex vector space formula_37, consisting of all the distinct orthonormal quantum states in which formula_11 and formula_12 fuse into formula_13. This forms a Hilbert space. When formula_38, such as in the Ising and Fibonacci examples, formula_39 is at most just a one dimensional space with one state. The direct sum formula_40 is a decomposition of formula_41 the tensor product of the Hilbert space of individual anyon formula_11 and the Hilbert space of individual anyon formula_6. In topological quantum field theory, formula_42 is the vector space associated with the pair of pants with waist labeled formula_13 and legs formula_11 and formula_12. More complicated Hilbert spaces can be constructed corresponding to the fusion of three or more particles, i.e. for the quantum systems where it is known that the formula_36 fuse into final anyon type formula_8. This Hilbert space formula_43 would describe, for example, the quantum system formed by starting with a quasiparticle formula_8 and, via some local physical procedure, splitting up that quasiparticle into quasiparticles formula_36 (because in such a system all the anyons must necessarily fuse back into formula_13 by topological invariance). There is an isomorphism between formula_44 and formula_45 for any formula_46. As mentioned in the previous section, the permutations of the labels are also isomorphic. One can understand the structure of formula_43 by considering fusion processes one pair of anyons at a time. There are many arbitrary ways one can do this, each of which can be used to derive a different decomposition of formula_47 into pairs of pants. One possible choice is to first fuse formula_48 and formula_49 into formula_50, then fuse formula_51 and formula_52 into formula_53, and so on. This approach shows us that formula_54, and correspondingly formula_55 where formula_56 is the matrix defined in the previous section. This decomposition manifestly indicates a choice of basis for the Hilbert space. Different arbitrary choices of the order in which to fuse anyons will correspond to different choices of basis.
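The product-of-matrices decomposition above can be evaluated directly. The hypothetical Python sketch below counts the fusion channels for m Fibonacci anyons, reproducing the familiar Fibonacci-number growth of the fusion space.
# Dimension of the fusion space of m Fibonacci anyons, using
# N^c_{a_1,...,a_m} = (N_{a_2} ... N_{a_m})^c_{a_1} with every a_j = tau.
import numpy as np
N_tau = np.array([[0, 1],    # entry [c][b] is N^c_{tau b}, basis order (1, tau)
                  [1, 1]])
def fibonacci_fusion_dims(m):
    """Number of ways m Fibonacci anyons fuse to total charge 1 and to tau."""
    prod = np.linalg.matrix_power(N_tau, m - 1)
    return int(prod[0, 1]), int(prod[1, 1])   # column 1 selects a_1 = tau
for m in range(2, 8):
    print(m, fibonacci_fusion_dims(m))   # (1, 1), (1, 2), (2, 3), (3, 5), ...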
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "\\alpha " }, { "math_id": 2, "text": " e^{i \\alpha} " }, { "math_id": 3, "text": " N^2 \\alpha " }, { "math_id": 4, "text": " N^2 " }, { "math_id": 5, "text": " a" }, { "math_id": 6, "text": " b " }, { "math_id": 7, "text": " a \\times b = \\sum_c N^c_{ab} c " }, { "math_id": 8, "text": " c " }, { "math_id": 9, "text": "c = 1" }, { "math_id": 10, "text": " N^c_{ab} " }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "b" }, { "math_id": 13, "text": "c" }, { "math_id": 14, "text": "N^c_{ab}=1" }, { "math_id": 15, "text": "N^c_{ab}=0" }, { "math_id": 16, "text": "\\bar{a}" }, { "math_id": 17, "text": " N^1_{a \\bar{a}} \\neq 0 " }, { "math_id": 18, "text": " 1 " }, { "math_id": 19, "text": " \\tau " }, { "math_id": 20, "text": " \\tau \\times \\tau = 1 + \\tau " }, { "math_id": 21, "text": "N^{\\tau}_{\\tau \\tau}=N^{1}_{\\tau \\tau} = 1" }, { "math_id": 22, "text": " \\tau \\times 1= \\tau " }, { "math_id": 23, "text": " 1 \\times 1 = 1 " }, { "math_id": 24, "text": "N^{\\tau}_{\\tau 1}=N^{1}_{1 1} = 1" }, { "math_id": 25, "text": " \\psi " }, { "math_id": 26, "text": " \\sigma " }, { "math_id": 27, "text": " \\sigma \\times \\sigma = 1 + \\psi " }, { "math_id": 28, "text": " \\sigma \\times \\psi= \\sigma " }, { "math_id": 29, "text": " \\times " }, { "math_id": 30, "text": " N^c_{ab} " }, { "math_id": 31, "text": " (N_a)^c_b " }, { "math_id": 32, "text": " b" }, { "math_id": 33, "text": " c" }, { "math_id": 34, "text": " d_a " }, { "math_id": 35, "text": " N^{c}_{a_1, a_2, \\ldots a_m} " }, { "math_id": 36, "text": " a_1, a_2, \\ldots a_m " }, { "math_id": 37, "text": " V^c_{ab} " }, { "math_id": 38, "text": " N^c_{ab} \\le 1" }, { "math_id": 39, "text": "V^c_{ab} " }, { "math_id": 40, "text": " \\bigoplus_c V^c_{ab} " }, { "math_id": 41, "text": " \\mathcal{H}_a \\otimes \\mathcal{H}_b" }, { "math_id": 42, "text": " V^{c}_{ab} " }, { "math_id": 43, "text": " V^{c}_{a_1, a_2, \\ldots a_m} " }, { "math_id": 44, "text": " V^{1}_{a_1, a_2, \\ldots a_m} " }, { "math_id": 45, "text": " V^{\\bar{a}_1, \\bar{a}_2, \\ldots \\bar{a}_j }_{a_{j+1}, a_{j+2}, \\ldots a_m} " }, { "math_id": 46, "text": " j " }, { "math_id": 47, "text": " V^{c}_{a_1, \\ldots a_m} " }, { "math_id": 48, "text": " a_1" }, { "math_id": 49, "text": "a_2 " }, { "math_id": 50, "text": "b_1" }, { "math_id": 51, "text": " b_1" }, { "math_id": 52, "text": "a_3 " }, { "math_id": 53, "text": "b_2" }, { "math_id": 54, "text": " V^{c}_{a_1, a_2, \\ldots a_m} = \\bigoplus_{\\lbrace b_j \\rbrace} \\left( V^{b_1}_{a_1,a_2}\\otimes V^{b_2}_{b_1,a_3}\\otimes V^{b_3}_{b_2,a_4}\\ldots V^{b_{m-2}}_{b_{m-3},a_{m-1}}\\otimes V^{c}_{b_{m-2},a_{m}} \\right)" }, { "math_id": 55, "text": " N^{c}_{a_1, \\ldots a_m} = \\left( \\prod_{j=2}^{m} N_{a_j}\\right)^c_{a_1}" }, { "math_id": 56, "text": " N_a " } ]
https://en.wikipedia.org/wiki?curid=57429392
57430168
Simple point process
A simple point process is a special type of point process in probability theory. In simple point processes, every point is assigned the weight one. Definition. Let formula_0 be a locally compact second countable Hausdorff space and let formula_1 be its Borel formula_2-algebra. A point process formula_3, interpreted as random measure on formula_4, is called a simple point process if it can be written as formula_5 for an index set formula_6 and random elements formula_7 which are almost everywhere pairwise distinct. Here formula_8 denotes the Dirac measure on the point formula_9. Examples. Simple point processes include many important classes of point processes such as Poisson processes, Cox processes and binomial processes. Uniqueness. If formula_10 is a generating ring of formula_1 then a simple point process formula_3 is uniquely determined by its values on the sets formula_11. This means that two simple point processes formula_3 and formula_12 have the same distributions iff formula_13
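A small simulation may make the definition concrete. The Python sketch below (hypothetical code) realises a simple point process on S = [0, 1] as a Poisson number of independent uniform points, so the points are almost surely pairwise distinct, and evaluates the resulting random measure on sets.
# Hypothetical sketch: a simple point process xi = sum_i delta_{X_i} on S = [0, 1].
import numpy as np
rng = np.random.default_rng(seed=1)
def sample_points(rate=10.0):
    """One realisation: a Poisson(rate) number of i.i.d. uniform points on [0, 1]."""
    return rng.uniform(0.0, 1.0, size=rng.poisson(rate))  # a.s. pairwise distinct
def xi(points, a, b):
    """xi([a, b)): every point carries weight one, so the measure is just a count."""
    return int(np.count_nonzero((points >= a) & (points < b)))
pts = sample_points()
print(len(pts), xi(pts, 0.0, 0.5), xi(pts, 0.5, 1.0))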
[ { "math_id": 0, "text": " S " }, { "math_id": 1, "text": " \\mathcal S " }, { "math_id": 2, "text": " \\sigma " }, { "math_id": 3, "text": " \\xi " }, { "math_id": 4, "text": " (S, \\mathcal S) " }, { "math_id": 5, "text": " \\xi =\\sum_{i \\in I} \\delta_{X_i} " }, { "math_id": 6, "text": " I " }, { "math_id": 7, "text": " X_i " }, { "math_id": 8, "text": " \\delta_x " }, { "math_id": 9, "text": " x " }, { "math_id": 10, "text": " \\mathcal I " }, { "math_id": 11, "text": " U \\in \\mathcal I " }, { "math_id": 12, "text": " \\zeta " }, { "math_id": 13, "text": " P(\\xi(U)=0) = P(\\zeta(U)=0) \\text{ for all } U \\in \\mathcal I" } ]
https://en.wikipedia.org/wiki?curid=57430168
574311
Doomsday argument
Doomsday scenario on human births The doomsday argument (DA), or Carter catastrophe, is a probabilistic argument that claims to predict the future population of the human species based on an estimation of the number of humans born to date. The doomsday argument was originally proposed by the astrophysicist Brandon Carter in 1983, leading to the initial name of the Carter catastrophe. The argument was subsequently championed by the philosopher John A. Leslie and has since been independently conceived by J. Richard Gott and Holger Bech Nielsen. Similar principles of eschatology were proposed earlier by Heinz von Foerster, among others. A more general form was given earlier in the Lindy effect, which proposes that for certain phenomena, the future life expectancy is proportional to (though not necessarily equal to) the current age and is based on a decreasing mortality rate over time. Summary. The premise of the argument is as follows: suppose that the total number of human beings that will ever exist is fixed. If so, the likelihood of a randomly selected person existing at a particular time in history would be proportional to the total population at that time. Given this, the argument posits that a person alive today should adjust their expectations about the future of the human race, because their existence provides information about the total number of humans that will ever live. If the total number of humans who were born or will ever be born is denoted by formula_0, then the Copernican principle suggests that any one human is equally likely (along with the other formula_1 humans) to find themselves in any position formula_2 of the total population formula_0, so that the fractional position formula_3 is assumed to be uniformly distributed on the interval (0,1) before one's absolute position is learned. The argument further assumes that formula_4 remains uniformly distributed on (0,1) even after the absolute position formula_2 is learned. For example, there is a 95% chance that formula_4 is in the interval (0.05,1), that is formula_5. In other words, one can assume with 95% certainty that any individual human would be within the last 95% of all the humans ever to be born. If the absolute position formula_2 is known, this argument implies a 95% confidence upper bound for formula_0, obtained by rearranging formula_6 to give formula_7. If Leslie's figure is used, then approximately 60 billion humans have been born so far, so it can be estimated that there is a 95% chance that the total number of humans formula_0 will be less than 20 formula_8 60 billion = 1.2 trillion. Assuming that the world population stabilizes at 10 billion and that life expectancy is 80 years, it can be estimated that the remaining 1,140 billion humans will be born within 9,120 years. Depending on the projection of the world population in the forthcoming centuries, estimates may vary, but the argument states that it is unlikely that more than 1.2 trillion humans will ever live. Aspects. Assume, for simplicity, that the total number of humans who will ever be born is 60 billion ("N"1), or 6,000 billion ("N"2). If there is no prior knowledge of the position that a currently living individual, "X", has in the history of humanity, one may instead compute how many humans were born before "X", and arrive at, say, 59,854,795,447, which would necessarily place "X" among the first 60 billion humans who have ever lived. It is possible to sum the probabilities for each value of "N" and, therefore, to compute a statistical 'confidence limit' on "N". 
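One way to carry out that calculation for the two candidate totals is sketched below in Python; this is only a hypothetical illustration of the flat-prior Bayes update, not code from any of the cited authors.
# Two-hypothesis version of the update: flat prior over N1 and N2, and
# P(n | N) = 1/N for birth ranks n up to N.
N1 = 60e9        # 60 billion
N2 = 6000e9      # 6,000 billion
prior = {N1: 0.5, N2: 0.5}
n = 59_854_795_447   # birth rank of the individual X in the example
likelihood = {N: (1.0 / N if n <= N else 0.0) for N in prior}
evidence = sum(likelihood[N] * prior[N] for N in prior)
posterior = {N: likelihood[N] * prior[N] / evidence for N in prior}
print(round(posterior[N1], 3))   # about 0.99, i.e. the smaller total is favoured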
For example, taking the numbers above, it is 99% certain that "N" is smaller than 6 trillion. Note that as remarked above, this argument assumes that the prior probability for "N" is flat, or 50% for "N"1 and 50% for "N"2 in the absence of any information about "X". On the other hand, it is possible to conclude, given "X", that "N"2 is more likely than "N"1 if a different prior is used for "N". More precisely, Bayes' theorem tells us that P("N"|"X") = P("X"|"N")P("N")/P("X"), and the conservative application of the Copernican principle tells us only how to calculate P("X"|"N"). Taking P("X") to be flat, we still have to assume the prior probability P("N") that the total number of humans is "N". If we conclude that "N"2 is much more likely than "N"1 (for example, because producing a larger population takes more time, increasing the chance that a low probability but cataclysmic natural event will take place in that time), then P("X"|"N") can become more heavily weighted towards the bigger value of "N". A further, more detailed discussion, as well as relevant distributions P("N"), are given below in the Rebuttals section. The doomsday argument does "not" say that humanity cannot or will not exist indefinitely. It does not put any upper limit on the number of humans that will ever exist nor provide a date for when humanity will become extinct. An abbreviated form of the argument "does" make these claims, by confusing probability with certainty. However, the actual conclusion for the version used above is that there is a 95% "chance" of extinction within 9,120 years and a 5% chance that some humans will still be alive at the end of that period. (The precise numbers vary among specific doomsday arguments.) Variations. This argument has generated a philosophical debate, and no consensus has yet emerged on its solution. The variants described below produce the DA by separate derivations. Gott's formulation: "vague prior" total population. Gott specifically proposes the functional form for the prior distribution of the number of people who will ever be born ("N"). Gott's DA used the vague prior distribution: formula_9. where Since Gott specifies the prior distribution of total humans, "P(N)", Bayes' theorem and the principle of indifference alone give us "P(N|n)", the probability of "N" humans being born if "n" is a random draw from "N": formula_10 This is Bayes' theorem for the posterior probability of the total population ever born of "N", conditioned on population born thus far of "n". Now, using the indifference principle: formula_11. The unconditioned "n" distribution of the current population is identical to the vague prior "N" probability density function, so: formula_12, giving P ("N" | "n") for each specific "N" (through a substitution into the posterior probability equation): formula_13. The easiest way to produce the doomsday estimate with a given confidence (say 95%) is to pretend that "N" is a continuous variable (since it is very large) and integrate over the probability density from "N" = "n" to "N" = "Z". (This will give a function for the probability that "N" ≤ "Z"): formula_14 formula_15 Defining "Z" = 20"n" gives: formula_16. 
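Numerically, this bound and the 9,120-year horizon quoted in the summary can be checked with the short sketch below (hypothetical Python, not code from Gott's or Leslie's papers).
# Numerical check of P(N <= Z) = (Z - n)/Z for the vague-prior posterior n/N^2.
n = 60e9            # births to date (Leslie's figure)
Z = 20 * n          # candidate upper bound on the total N
p = (Z - n) / Z     # probability that N <= 20 n
print(p)            # 0.95
# remaining births spread over a stable population of 10 billion with 80-year lives
years = (Z - n) / 10e9 * 80
print(years)        # 9120.0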
This is the simplest Bayesian derivation of the doomsday argument: The chance that the total number of humans that will ever be born ("N") is greater than twenty times the total that have been is below 5% The use of a vague prior distribution seems well-motivated as it assumes as little knowledge as possible about "N", given that some particular function must be chosen. It is equivalent to the assumption that the probability density of one's fractional position remains uniformly distributed even after learning of one's absolute position ("n"). Gott's "reference class" in his original 1993 paper was not the number of births, but the number of years "humans" had existed as a species, which he put at 200,000. Also, Gott tried to give a 95% confidence interval between a "minimum" survival time and a maximum. Because of the 2.5% chance that he gives to underestimating the minimum, he has only a 2.5% chance of overestimating the maximum. This equates to 97.5% confidence that extinction occurs before the upper boundary of his confidence interval, which can be used in the integral above with "Z" = 40"n", and "n" = 200,000 years: formula_17 This is how Gott produces a 97.5% confidence of extinction within "N" ≤ 8,000,000 years. The number he quoted was the likely time remaining, "N" − "n" = 7.8 million years. This was much higher than the temporal confidence bound produced by counting births, because it applied the principle of indifference to time. (Producing different estimates by sampling different parameters in the same hypothesis is Bertrand's paradox.) Similarly, there is a 97.5% chance that the present lies in the first 97.5% of human history, so there is a 97.5% chance that the total lifespan of humanity will be at least formula_18; In other words, Gott's argument gives a 95% confidence that humans will go extinct between 5,100 and 7.8 million years in the future. Gott has also tested this formulation against the Berlin Wall and Broadway and off-Broadway plays. Leslie's argument differs from Gott's version in that he does not assume a" vague prior" probability distribution for "N". Instead, he argues that the force of the doomsday argument resides purely in the increased probability of an early doomsday once you take into account your birth position, regardless of your prior probability distribution for "N". He calls this the "probability shift". Heinz von Foerster argued that humanity's abilities to construct societies, civilizations and technologies do not result in self-inhibition. Rather, societies' success varies directly with population size. Von Foerster found that this model fits some 25 data points from the birth of Jesus to 1958, with only 7% of the variance left unexplained. Several follow-up letters (1961, 1962, ...) were published in "Science" showing that von Foerster's equation was still on track. The data continued to fit up until 1973. The most remarkable thing about von Foerster's model was it predicted that the human population would reach infinity or a mathematical singularity, on Friday, November 13, 2026. In fact, von Foerster did not imply that the world population on that day could actually become infinite. The real implication was that the world population growth pattern followed for many centuries prior to 1960 was about to come to an end and be transformed into a radically different pattern. Note that this prediction began to be fulfilled just in a few years after the "doomsday" argument was published. Reference classes. 
The reference class from which "n" is drawn, and of which "N" is the ultimate size, is a crucial point of contention in the doomsday argument. The "standard" doomsday argument hypothesis skips over this point entirely, merely stating that the reference class is the number of "people". Given that you are human, the Copernican principle might be used to determine whether you were born exceptionally early; however, the term "human" has been heavily contested for practical and philosophical reasons. According to Nick Bostrom, consciousness is (part of) the discriminator between what is in and what is out of the reference class, and therefore extraterrestrial intelligence might have a significant impact on the calculation. The following sub-sections relate to different suggested reference classes, each of which has had the standard doomsday argument applied to it. SSSA: Sampling from observer-moments. Nick Bostrom, considering observation selection effects, has produced a Self-Sampling Assumption (SSA): "that you should think of yourself as if you were a random observer from a suitable reference class". If the "reference class" is the set of humans to ever be born, this gives "N" &lt; 20"n" with 95% confidence (the standard doomsday argument). However, he has refined this idea to apply to "observer-moments" rather than just observers. He has formalized this as: The strong self-sampling assumption (SSSA): Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class. An application of the principle underlying SSSA (though this application is nowhere expressly articulated by Bostrom) is: If the minute in which you read this article is randomly selected from every minute in every human's lifespan, then (with 95% confidence) this event has occurred after the first 5% of human observer-moments. If the mean lifespan in the future is twice the historic mean lifespan, this implies 95% confidence that "N" &lt; 10"n" (the average future human will account for twice the observer-moments of the average historic human). Therefore, the 95th percentile extinction-time estimate in this version is 4560 years. Rebuttals. We are in the earliest 5%, "a priori". One counterargument to the doomsday argument agrees with its statistical methods but disagrees with its extinction-time estimate. This position requires justifying why the observer cannot be assumed to be randomly selected from the set of all humans ever to be born, which implies that this set is not an appropriate reference class. Disagreeing with the doomsday argument in this way implies that the observer is within the first 5% of humans to be born. By analogy, if one is a member of 50,000 people in a collaborative project, the reasoning of the doomsday argument implies that there will never be more than a million members of that project, within a 95% confidence interval. However, if one's characteristics are typical of an early adopter, rather than typical of an average member over the project's lifespan, then it may not be reasonable to assume one has joined the project at a random point in its life. For instance, the mainstream of potential users will prefer to be involved when the project is nearly complete. However, if one were to enjoy the project's incompleteness, it is already known that he or she is unusual, before the discovery of his or her early involvement. 
If one has measurable attributes that set one apart from the typical long-run user, the project doomsday argument can be refuted based on the fact that one could expect to be within the first 5% of members, "a priori". The analogy to the total-human-population form of the argument is that confidence in a prediction of the distribution of human characteristics that places modern and historic humans outside the mainstream implies that it is already known, before examining "n", that one's position is likely to be very early in "N". This is an argument for changing the reference class. For example, if one is certain that 99% of humans who will ever live will be cyborgs, but that only a negligible fraction of humans who have been born to date are cyborgs, one could be equally certain that at least one hundred times as many people remain to be born as have been. Robin Hanson's paper sums up these criticisms of the doomsday argument: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;All else is not equal; we have good reasons for thinking we are not randomly selected humans from all who will ever live. Human extinction is distant, "a posteriori". The a posteriori observation that extinction-level events are rare could be offered as evidence that the doomsday argument's predictions are implausible; typically, extinctions of dominant species happen less often than once in a million years. Therefore, it is argued that human extinction is unlikely within the next ten millennia. (Another probabilistic argument, drawing a different conclusion than the doomsday argument.) In Bayesian terms, this response to the doomsday argument says that our knowledge of history (or ability to prevent disaster) produces a prior marginal for "N" with a minimum value in the trillions. If "N" is distributed uniformly from 10^12 to 10^13, for example, then the probability of "N" &lt; 1,200 billion inferred from "n" = 60 billion will be extremely small. This is an equally impeccable Bayesian calculation, rejecting the Copernican principle because we must be 'special observers' since there is no likely mechanism for humanity to go extinct within the next hundred thousand years. This response is accused of overlooking the technological threats to humanity's survival, to which earlier life was not subject, and is specifically rejected by most academic critics of the doomsday argument (arguably excepting Robin Hanson). The prior "N" distribution may make "n" very uninformative. Robin Hanson argues that "N"'s prior may be exponentially distributed: formula_19 Here, "c" and "q" are constants. If "q" is large, then our 95% confidence upper bound is on the uniform draw, not the exponential value of "N". The simplest way to compare this with Gott's Bayesian argument is to flatten the distribution from the vague prior by having the probability fall off more slowly with "N" (than inverse proportionally). This corresponds to the idea that humanity's growth may be exponential in time with doomsday having a vague prior probability density function in "time". 
This would mean that "N", the last birth, would have a distribution looking like the following: formula_20 This prior "N" distribution is all that is required (with the principle of indifference) to produce the inference of "N" from "n", and this is done in an identical way to the standard case, as described by Gott (equivalent to formula_21 = 1 in this distribution): formula_22 Substituting into the posterior probability equation): formula_23 Integrating the probability of any "N" above "xn": formula_24 For example, if "x" = 20, and formula_21 = 0.5, this becomes: formula_25 Therefore, with this prior, the chance of a trillion births is well over 20%, rather than the 5% chance given by the standard DA. If formula_21 is reduced further by assuming a flatter prior "N" distribution, then the limits on" N" given by "n" become weaker. An formula_21 of one reproduces Gott's calculation with a birth reference class, and formula_21 around 0.5 could approximate his temporal confidence interval calculation (if the population were expanding exponentially). As formula_26 (gets smaller) "n" becomes less and less informative about "N". In the limit this distribution approaches an (unbounded) uniform distribution, where all values of "N" are equally likely. This is Page et al.'s "Assumption 3", which they find few reasons to reject, "a priori". (Although all distributions with formula_27 are improper priors, this applies to Gott's vague-prior distribution also, and they can all be converted to produce proper integrals by postulating a finite upper population limit.) Since the probability of reaching a population of size 2"N" is usually thought of as the chance of reaching "N" multiplied by the survival probability from "N" to 2"N" it follows that Pr("N") must be a monotonically decreasing function of "N", but this doesn't necessarily require an inverse proportionality. Infinite expectation. Another objection to the doomsday argument is that the expected total human population is actually infinite. The calculation is as follows: The total human population N = n/f, where n is the human population to date and f is our fractional position in the total. We assume that f is uniformly distributed on (0,1]. The expectation of N is formula_28 For a similar example of counterintuitive infinite expectations, see the St. Petersburg paradox. Self-indication assumption: The possibility of not existing at all. One objection is that the possibility of a human existing at all depends on how many humans will ever exist ("N"). If this is a high number, then the possibility of their existing is higher than if only a few humans will ever exist. Since they do indeed exist, this is evidence that the number of humans that will ever exist is high. This objection, originally by Dennis Dieks (1992), is now known by Nick Bostrom's name for it: the "Self-Indication Assumption objection". It can be shown that some SIAs prevent any inference of "N" from "n" (the current population). Caves' rebuttal. The Bayesian argument by Carlton M. Caves states that the uniform distribution assumption is incompatible with the Copernican principle, not a consequence of it. Caves gives a number of examples to argue that Gott's rule is implausible. For instance, he says, imagine stumbling into a birthday party, about which you know nothing: Your friendly enquiry about the age of the celebrant elicits the reply that she is celebrating her ("t""p"=) 50th birthday. 
According to Gott, you can predict with 95% confidence that the woman will survive between [50]/39 = 1.28 years and 39[×50] = 1,950 years into the future. Since the wide range encompasses reasonable expectations regarding the woman's survival, it might not seem so bad, till one realizes that [Gott's rule] predicts that with probability 1/2 the woman will survive beyond 100 years old and with probability 1/3 beyond 150. Few of us would want to bet on the woman's survival using Gott's rule. "(See Caves' online paper below.)" Although this example exposes a weakness in J. Richard Gott's "Copernicus method" DA (that he does not specify when the "Copernicus method" can be applied) it is not precisely analogous with the modern DA; epistemological refinements of Gott's argument by philosophers such as Nick Bostrom specify that: Knowing the absolute birth rank ("n") must give no information on the total population ("N"). Careful DA variants specified with this rule aren't shown implausible by Caves' "Old Lady" example above, because the woman's age is given prior to the estimate of her lifespan. Since human age gives an estimate of survival time (via actuarial tables) Caves' Birthday party age-estimate could not fall into the class of DA problems defined with this proviso. To produce a comparable "Birthday Party Example" of the carefully specified Bayesian DA, we would need to completely exclude all prior knowledge of likely human life spans; in principle this could be done (e.g.: hypothetical Amnesia chamber). However, this would remove the modified example from everyday experience. To keep it in the everyday realm the lady's age must be "hidden" prior to the survival estimate being made. (Although this is no longer exactly the DA, it is much more comparable to it.) Without knowing the lady's age, the DA reasoning produces a "rule" to convert the birthday ("n") into a maximum lifespan with 50% confidence ("N"). Gott's Copernicus method rule is simply: Prob ("N" &lt; 2"n") = 50%. How accurate would this estimate turn out to be? Western demographics are now fairly uniform across ages, so a random birthday ("n") could be (very roughly) approximated by a U(0,"M"] draw where "M" is the maximum lifespan in the census. In this 'flat' model, everyone shares the same lifespan so "N" = "M". If "n" happens to be less than ("M")/2 then Gott's 2"n" estimate of "N" will be under "M", its true figure. The other half of the time 2"n" underestimates "M", and in this case (the one Caves highlights in his example) the subject will die before the 2"n" estimate is reached. In this "flat demographics" model Gott's 50% confidence figure is proven right 50% of the time. Self-referencing doomsday argument rebuttal. Some philosophers have suggested that only people who have contemplated the doomsday argument (DA) belong in the reference class "human". If that is the appropriate reference class, Carter defied his own prediction when he first described the argument (to the Royal Society). An attendant could have argued thus: Presently, only one person in the world understands the Doomsday argument, so by its own logic there is a 95% chance that it is a minor problem which will only ever interest twenty people, and I should ignore it. 
Jeff Dewynne and Professor Peter Landsberg suggested that this line of reasoning will create a paradox for the doomsday argument: If a member of the Royal Society did pass such a comment, it would indicate that they understood the DA sufficiently well that in fact 2 people could be considered to understand it, and thus there would be a 5% chance that 40 or more people would actually be interested. Also, of course, ignoring something because you only expect a small number of people to be interested in it is extremely short sighted—if this approach were to be taken, nothing new would ever be explored, if we assume no "a priori" knowledge of the nature of interest and attentional mechanisms. Conflation of future duration with total duration. Various authors have argued that the doomsday argument rests on an incorrect conflation of future duration with total duration. This occurs in the specification of the two time periods as "doom soon" and "doom deferred" which means that both periods are selected to occur "after" the observed value of the birth order. A rebuttal in Pisaturo (2009) argues that the doomsday argument relies on the equivalent of this equation: formula_29, where: "X" = the prior information; "Dp" = the data that past duration is "tp"; "HFS" = the hypothesis that the future duration of the phenomenon will be short; "HFL" = the hypothesis that the future duration of the phenomenon will be long; "HTS" = the hypothesis that the "total" duration of the phenomenon will be short—i.e., that "tt", the phenomenon's "total" longevity, = "tTS"; "HTL" = the hypothesis that the "total" duration of the phenomenon will be long—i.e., that "tt", the phenomenon's "total" longevity, = "tTL", with "tTL" &gt; "tTS". Pisaturo then observes: Clearly, this is an invalid application of Bayes' theorem, as it conflates future duration and total duration. Pisaturo takes numerical examples based on two possible corrections to this equation: considering only future durations and considering only total durations. In both cases, he concludes that the doomsday argument's claim, that there is a "Bayesian shift" in favor of the shorter future duration, is fallacious. This argument is also echoed in O'Neill (2014). In this work O'Neill argues that a unidirectional "Bayesian Shift" is an impossibility within the standard formulation of probability theory and is contradictory to the rules of probability. As with Pisaturo, he argues that the doomsday argument conflates future duration with total duration by specification of doom times that occur after the observed birth order. According to O'Neill: The reason for the hostility to the doomsday argument and its assertion of a "Bayesian shift" is that many people who are familiar with probability theory are implicitly aware of the absurdity of the claim that one can have an automatic unidirectional shift in beliefs regardless of the actual outcome that is observed. This is an example of the "reasoning to a foregone conclusion" that arises in certain kinds of failures of an underlying inferential mechanism. An examination of the inference problem used in the argument shows that this suspicion is indeed correct, and the doomsday argument is invalid. (pp. 216-217) Confusion over the meaning of confidence intervals. Gelman and Robert assert that the doomsday argument confuses frequentist confidence intervals with Bayesian credible intervals. Suppose that every individual knows their number "n" and uses it to estimate an upper bound on "N". 
Every individual has a different estimate, and these estimates are constructed so that 95% of them contain the true value of "N" and the other 5% do not. This, say Gelman and Robert, is the defining property of a frequentist lower-tailed 95% confidence interval. But, they say, "this does not mean that there is a 95% chance that any particular interval will contain the true value." That is, while 95% of the confidence intervals will contain the true value of "N", this is not the same as "N" being contained in the confidence interval with 95% probability. The latter is a different property and is the defining characteristic of a Bayesian credible interval. Gelman and Robert conclude: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;the Doomsday argument is the ultimate triumph of the idea, beloved among Bayesian educators, that our students and clients do not really understand Neyman–Pearson confidence intervals and inevitably give them the intuitive Bayesian interpretation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "N-1" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "f=n/N" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "f > 0.05" }, { "math_id": 6, "text": "n/N > 0.05" }, { "math_id": 7, "text": "N < 20n " }, { "math_id": 8, "text": "\\times" }, { "math_id": 9, "text": "P(N) = \\frac{k}{N}" }, { "math_id": 10, "text": "P(N\\mid n) = \\frac{P(n\\mid N) P(N)}{P(n)}." }, { "math_id": 11, "text": "P(n\\mid N) = \\frac{1}{N}" }, { "math_id": 12, "text": "P(n) = \\frac{k}{n}" }, { "math_id": 13, "text": "P(N\\mid n) = \\frac{n}{N^2}" }, { "math_id": 14, "text": "P(N \\leq Z) = \\int_{N=n}^{N=Z} P(N|n)\\,dN" }, { "math_id": 15, "text": " = \\frac{Z-n}{Z}" }, { "math_id": 16, "text": "P(N \\leq 20n) = \\frac{19}{20}" }, { "math_id": 17, "text": "P(N \\leq 40[200000]) = \\frac{39}{40}" }, { "math_id": 18, "text": "N \\geq 200000 \\times \\frac{40}{39} \\approx 205100~\\text{years}" }, { "math_id": 19, "text": "N = \\frac{e^{U(0, q]}}{c}" }, { "math_id": 20, "text": "\\Pr(N) = \\frac{k}{N^\\alpha}, 0 < \\alpha < 1.\n" }, { "math_id": 21, "text": "\\alpha" }, { "math_id": 22, "text": " \\Pr(n) = \\int_{N=n}^{N=\\infty} \\Pr(n\\mid N) \\Pr(N) \\,dN = \\int_{n}^{\\infty} \\frac{k}{N^{(\\alpha+1)}} \\,dN = \\frac{k}{{\\alpha}n^{\\alpha}}" }, { "math_id": 23, "text": "\\Pr(N\\mid n) = \\frac{{\\alpha}n^{\\alpha}}{N^{(1+\\alpha)}}." }, { "math_id": 24, "text": "\\Pr(N > xn) = \\int_{N=xn}^{N=\\infty} \\Pr(N\\mid n)\\,dN = \\frac{1}{x^{\\alpha}}." }, { "math_id": 25, "text": "\\Pr(N > 20n) = \\frac{1}{\\sqrt{20}} \\simeq 22.3\\%. " }, { "math_id": 26, "text": "\\alpha \\to 0" }, { "math_id": 27, "text": "\\alpha \\leq 1" }, { "math_id": 28, "text": " E(N) = \\int_{0}^{1} {n \\over f} \\, df = n [\\ln (f) ]_{0}^{1}= n \\ln (1) - n \\ln (0) = + \\infty ." }, { "math_id": 29, "text": " P(H_{TS}|D_pX)/P(H_{TL}|D_pX) = [P(H_{FS}|X)/P(H_{FL}|X)] \\cdot [P(D_p|H_{TS}X)/P(D_p|H_{TL}X)] " } ]
https://en.wikipedia.org/wiki?curid=574311
574312
Fashionable Nonsense
1997 book by Alan Sokal and Jean Bricmont Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science (UK: Intellectual Impostures), first published in French in 1997 as "Impostures intellectuelles", is a book by physicists Alan Sokal and Jean Bricmont. As part of the so-called science wars, Sokal and Bricmont criticize postmodernism in academia for the misuse of scientific and mathematical concepts in postmodern writing. The book was published in English in 1998, with revisions to the original French edition for greater relevance to debates in the English-speaking world. According to some reports, the response within the humanities was "polarized"; critics of Sokal and Bricmont charged that they lacked understanding of the writing they were scrutinizing. By contrast, responses from the scientific community were more supportive. Closely related to the book's subject matter is the 1996 Sokal affair, for which Sokal is best known, in which he managed to get a deliberately absurd article published in "Social Text", a critical theory journal. The article itself is included in "Fashionable Nonsense" as an appendix. Summary. "Fashionable Nonsense" examines two related topics: Incorrect use of scientific concepts versus scientific metaphors. The stated goal of the book is not to attack "philosophy, the humanities or the social sciences in general", but rather "to warn those who work in them (especially students) against some manifest cases of charlatanism." In particular, the authors aim to "deconstruct" the notion that some books and writers are difficult because they deal with profound and complicated ideas: "If the texts seem incomprehensible, it is for the excellent reason that they mean precisely nothing." Setting out to show how numerous key intellectuals have used concepts from the physical sciences and mathematics incorrectly, the authors intentionally provide lengthy extracts in order to avoid accusations of taking sentences out of context. The extracts are drawn from the works of Jacques Lacan, Julia Kristeva, Paul Virilio, Gilles Deleuze, Félix Guattari, Luce Irigaray, Bruno Latour, and Jean Baudrillard, who—in terms of the quantity of published works, invited presentations, and citations received—were some of the leading academics of continental philosophy, critical theory, psychoanalysis, and/or the social sciences at the time of publication. The book devotes a chapter to each of the above-mentioned authors, "the tip of the iceberg" of a group of intellectual practices that can be described as "mystification, deliberately obscure language, confused thinking and the misuse of scientific concepts." For example, Irigaray is criticised for asserting that E=mc2 is a "sexed equation" because "it privileges the speed of light over other speeds that are vitally necessary to us"; and for asserting that fluid mechanics is unfairly neglected because it deals with "feminine" fluids in contrast to "masculine" rigid mechanics. Similarly, Lacan is criticized for drawing an analogy between topology and mental illness that, in Sokal and Bricmont's view, is unsupported by any argument and is "not just false: it is gibberish." Sokal and Bricmont claim that they do not intend to analyze postmodernist thought in general. Rather, they aim to draw attention to the abuse of concepts from mathematics and physics, their areas of specialty. The authors define this abuse as any of the following behaviors: The postmodernist conception of science. 
Sokal and Bricmont highlight the rising tide of what they call "cognitive relativism", the belief that there are no objective truths but only local beliefs. They argue that this view is held by a number of people, including people who the authors label "postmodernists" and the Strong programme in the sociology of science, and that it is illogical, impractical, and dangerous. Their aim is "not to criticize the left, but to help defend it from a trendy segment of itself." Quoting Michael Albert, [T]here is nothing truthful, wise, humane, or strategic about confusing hostility to injustice and oppression, which is leftist, with hostility to science and rationality, which is nonsense. Reception. According to "New York Review of Books" editor Barbara Epstein, who was delighted by Sokal's hoax, the response to the book within the humanities was bitterly divided, with some delighted and some enraged; in some reading groups, reaction was polarized between impassioned supporters and equally impassioned opponents of Sokal. Support. Philosopher Thomas Nagel has supported Sokal and Bricmont, describing their book as consisting largely of "extensive quotations of scientific gibberish from name-brand French intellectuals, together with eerily patient explanations of why it is gibberish," and agreeing that "there does seem to be something about the Parisian scene that is particularly hospitable to reckless verbosity." Several scientists have expressed similar sentiments. Richard Dawkins, in a review of this book, said regarding the discussion of Lacan:We do not need the mathematical expertise of Sokal and Bricmont to assure us that the author of this stuff is a fake. Perhaps he is genuine when he speaks of non-scientific subjects? But a philosopher who is caught equating the erectile organ to the square root of minus one has, for my money, blown his credentials when it comes to things that I "don't" know anything about.Noam Chomsky called the book "very important", and said that "a lot of the so-called 'left' criticism [of science] seems to be pure nonsense." Criticism. Limiting her considerations to physics, science historian Mara Beller maintained that it was not entirely fair to blame contemporary postmodern philosophers for drawing nonsensical conclusions from quantum physics, since many such conclusions were drawn by some of the leading quantum physicists themselves, such as Bohr or Heisenberg when they ventured into philosophy. Regarding Lacan. Bruce Fink offers a critique in his book "Lacan to the Letter", in which he accuses Sokal and Bricmont of demanding that "serious writing" do nothing other than "convey clear meanings". Fink asserts that some concepts which the authors consider arbitrary or meaningless do have roots in the history of linguistics, and that Lacan is explicitly using mathematical concepts in a metaphoric way, not claiming that his concepts are mathematically founded. He takes Sokal and Bricmont to task for elevating a disagreement with Lacan's choice of writing styles to an attack on his thought, which, in Fink's assessment, they fail to understand. Fink says that "Lacan could easily assume that his faithful seminar public...would go to the library or the bookstore and 'bone up' on at least some of his passing allusions." Similar to Fink, a review by John Sturrock in the "London Review of Books" accuses Sokal and Bricmont of "linguistic reductionism", claiming that they misunderstood the genres and language uses of their intended quarries. 
This point has been disputed by Arkady Plotnitsky (one of the authors mentioned by Sokal in his original hoax). Plotnitsky says that "some of their claims concerning mathematical objects in question and specifically complex numbers are incorrect", specifically attacking their statement that complex numbers and irrational numbers "have nothing to do with one another". Plotnitsky here defends Lacan's view "of imaginary numbers as an extension of the idea of rational numbers—both in the general conceptual sense, extending to its ancient mathematical and philosophical origins...and in the sense of modern algebra." The first of these two senses refers to the fact that the extension of real numbers to complex numbers mirrors the extension of rationals to reals, as Plotnitsky points out with a quote from Leibniz: "From the irrationals are born the impossible or imaginary quantities whose nature is very strange but whose usefulness is not to be despised." Plotnitsky nevertheless agrees with Sokal and Bricmont that the "square root of −1" which Lacan discusses (and for which Plotnitsky introduces the symbol formula_0) is not, in spite of its identical name, "identical, directly linked, or even metaphorized via the mathematical square root of −1", and that the latter "is "not" the erectile organ". Regarding Irigaray. While Fink and Plotnitsky question Sokal and Bricmont's right to say what definitions of scientific terms are correct, cultural theorists and literary critics Andrew Milner and Jeff Browitt acknowledge that right, seeing it as "defend[ing] their disciplines against what they saw as a misappropriation of key terms and concepts" by writers such as Jacques Lacan and Luce Irigaray. However, they point out that Irigaray might still be correct in asserting that "E" = "mc"2 is a "masculinist" equation, since "the social genealogy of a proposition has no logical bearing on its truth value." In other words, gender factors may influence "which" of many possible scientific truths are discovered. They also suggest that, in criticising Irigaray, Sokal and Bricmont sometimes go beyond their area of expertise in the sciences and simply express a differing position on gender politics. Derrida. In his response, first published in "Le Monde" as "Sokal and Bricmont Aren't Serious", Jacques Derrida writes that the Sokal hoax is rather "sad", not only because Alan Sokal's name is now linked primarily to a hoax rather than science, but also because the chance to reflect seriously on this issue has been ruined for a broad public forum that deserves better. Derrida reminds his readers that science and philosophy have long debated their likenesses and differences in the discipline of epistemology, but certainly not with such an emphasis on the nationality of the philosophers or scientists. He calls it ridiculous and weird that there are intensities of treatment by the scientists, in particular, that he was "much less badly treated", when in fact he was the main target of the US press. Derrida then proceeds to question the validity of their attacks against a few words he made in an off-the-cuff response during a conference that took place thirty years prior to their publication. He suggests there are plenty of scientists who have pointed out the difficulty of attacking his response. He also writes that there is no "relativism" or a critique of Reason and the Enlightenment in his works. 
He then writes of his hope that in the future this work is pursued more seriously and with dignity at the level of the issues involved. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\scriptstyle (L)\\sqrt{-1}" } ]
https://en.wikipedia.org/wiki?curid=574312
574337
Cochran's theorem
In statistics, Cochran's theorem, devised by William G. Cochran, is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance. Examples. Sample mean and sample variance. If "X"1, ..., "X""n" are independent normally distributed random variables with mean "μ" and standard deviation "σ" then formula_0 is standard normal for each "i". Note that the total "Q" is equal to the sum of the squared "U"s as shown here: formula_1 which stems from the original assumption that formula_2. So instead we will calculate this quantity and later separate it into "Q""i"'s. It is possible to write formula_3 (here formula_4 is the sample mean). To see this identity, multiply throughout by formula_5 and note that formula_6 and expand to give formula_7 The third term is zero because it is equal to a constant times formula_8 and the second term has just "n" identical terms added together. Thus formula_9 and hence formula_10 Now formula_11 with formula_12 the matrix of ones which has rank 1. In turn formula_13 given that formula_14. This expression can also be obtained by expanding formula_15 in matrix notation. It can be shown that the rank of formula_16 is formula_17 as the addition of all its rows is equal to zero. Thus the conditions for Cochran's theorem are met. Cochran's theorem then states that "Q"1 and "Q"2 are independent, with chi-squared distributions with "n" − 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property "characterizes" the normal distribution – for no other distribution are the sample mean and sample variance independent. Distributions. The result for the distributions is written symbolically as formula_18 formula_19 Both these random variables are proportional to the true but unknown variance "σ"2. Thus their ratio does not depend on "σ"2 and, because they are statistically independent, the distribution of their ratio is given by formula_20 where "F"1,"n" − 1 is the F-distribution with 1 and "n" − 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution. Estimation of variance. To estimate the variance "σ"2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution formula_21 Cochran's theorem shows that formula_22 and the properties of the chi-squared distribution show that formula_23 Alternative formulation. The following version is often seen when considering linear regression. Suppose that formula_24 is a standard multivariate normal random vector (here formula_25 denotes the "n"-by-"n" identity matrix), and if formula_26 are all "n"-by-"n" symmetric matrices with formula_27. Then, on defining formula_28, any one of the following conditions implies the other two: (1) formula_29 (2) formula_30 for every "i" (thus each formula_31 is positive semidefinite), and (3) formula_32 is independent of formula_33 for formula_34 Statement. Let "U"1, ..., "U""N" be i.i.d. standard normally distributed random variables, and formula_35. Let formula_36 be symmetric matrices. Define "r""i" to be the rank of formula_37. Define formula_38, so that the "Q"i are quadratic forms. Further assume formula_39. Cochran's theorem states that the following are equivalent: formula_40, the "Q""i" are independent, and each "Q""i" has a chi-squared distribution with "r""i" degrees of freedom. Often it's stated as formula_41, where formula_42 is idempotent, and formula_43 is replaced by formula_44. But after an orthogonal transform, formula_45, and so we reduce to the above theorem. Proof.
Claim: Let formula_46 be a standard Gaussian in formula_47, then for any symmetric matrices formula_48, if formula_49 and formula_50 have the same distribution, then formula_48 have the same eigenvalues (up to multiplicity). &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Let the eigenvalues of formula_51 be formula_52, then calculate the characteristic function of formula_49. It comes out to be formula_53 For formula_49 and formula_50 to be equal, their characteristic functions must be equal, so formula_48 have the same eigenvalues (up to multiplicity). Claim: formula_54. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof formula_55. Since formula_56 is symmetric, and formula_57, by the previous claim, formula_56 has the same eigenvalues as 0. Lemma: If formula_58, all formula_59 symmetric, and have eigenvalues 0, 1, then they are simultaneously diagonalizable. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Fix i, and consider the eigenvectors v of formula_60 such that formula_61. Then we have formula_62, so all formula_63. Thus we obtain a split of formula_64 into formula_65, such that V is the 1-eigenspace of formula_60, and in the 0-eigenspaces of all other formula_66. Now induct by moving into formula_67. Now we prove the original theorem. We prove that the three cases are equivalent by proving that each case implies the next one in a cycle (formula_68). &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Case: All formula_69 are independent Fix some formula_70, define formula_71, and diagonalize formula_72 by an orthogonal transform formula_73. Then consider formula_74. It is diagonalized as well. Let formula_75, then it is also standard Gaussian. Then we have formula_76 Inspect their diagonal entries, to see that formula_77 implies that their nonzero diagonal entries are disjoint. Thus all eigenvalues of formula_72 are 0, 1, so formula_69 is a formula_78 dist with formula_79 degrees of freedom. Case: Each formula_69 is a formula_80 distribution. Fix any formula_70, diagonalize it by orthogonal transform formula_73, and reindex, so that formula_81. Then formula_82 for some formula_83, a spherical rotation of formula_84. Since formula_85, we get all formula_86. So all formula_87, and have eigenvalues formula_88. So diagonalize them simultaneously, add them up, to find formula_43. Case: formula_40. We first show that the matrices "B"("i") can be simultaneously diagonalized by an orthogonal matrix and that their non-zero eigenvalues are all equal to +1. Once that's shown, take this orthogonal transform to this simultaneous eigenbasis, in which the random vector formula_89 becomes formula_90, but all formula_91 are still independent and standard Gaussian. Then the result follows. Each of the matrices "B"("i") has rank "r""i" and thus "r""i" non-zero eigenvalues. For each "i", the sum formula_92 has at most rank formula_93. Since formula_94, it follows that "C"("i") has exactly rank "N" − "r""i". Therefore "B"("i") and "C"("i") can be simultaneously diagonalized. This can be shown by first diagonalizing "B"("i"), by the spectral theorem. In this basis, it is of the form: formula_95 Thus the lower formula_96 rows are zero. Since formula_97, it follows that these rows in "C"("i") in this basis contain a right block which is a formula_98 unit matrix, with zeros in the rest of these rows. But since "C"("i") has rank "N" − "r""i", it must be zero elsewhere. Thus it is diagonal in this basis as well. 
It follows that all the non-zero eigenvalues of both "B"("i") and "C"("i") are +1. This argument applies for all "i", thus all "B"("i") are positive semidefinite. Moreover, the above analysis can be repeated in the diagonal basis for formula_99. In this basis formula_100 is the identity of an formula_101 vector space, so it follows that both "B"(2) and formula_102 are simultaneously diagonalizable in this vector space (and hence also together with "B"(1)). By iteration it follows that all "B"-s are simultaneously diagonalizable. Thus there exists an orthogonal matrix formula_103 such that for all formula_70, formula_104 is diagonal, where any entry formula_105 with indices formula_106, formula_107, is equal to 1, while any entry with other indices is equal to 0.
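The sample mean and sample variance example above lends itself to a quick numerical check. The following Python sketch (not part of the original article; sample size, parameters and seed are arbitrary choices) simulates many samples, forms "Q"1 and "Q"2 as defined above, and verifies that their empirical moments match chi-squared distributions with "n" − 1 and 1 degrees of freedom and that the two quantities are uncorrelated, as independence requires.
# Numerical check (illustrative only) of the sample mean / sample variance
# example: Q1 and Q2 should be independent and chi-squared with n-1 and 1
# degrees of freedom respectively.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 200_000
mu, sigma = 3.0, 2.0

X = rng.normal(mu, sigma, size=(trials, n))
U = (X - mu) / sigma                          # standard normal variables
Ubar = U.mean(axis=1)

Q1 = ((U - Ubar[:, None]) ** 2).sum(axis=1)   # sum_i (U_i - Ubar)^2
Q2 = n * Ubar ** 2                            # n * Ubar^2

print("E[Q1] ~ n-1:", Q1.mean())              # chi^2_{n-1} has mean n-1 = 9
print("Var[Q1] ~ 2(n-1):", Q1.var())          # and variance 2(n-1) = 18
print("E[Q2] ~ 1:", Q2.mean())                # chi^2_1 has mean 1
print("corr(Q1, Q2) ~ 0:", np.corrcoef(Q1, Q2)[0, 1])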
[ { "math_id": 0, "text": "U_i = \\frac{X_i-\\mu}{\\sigma}" }, { "math_id": 1, "text": "\\sum_iQ_i=\\sum_{jik} U_j B_{jk}^{(i)} U_k = \\sum_{jk} U_j U_k \\sum_i B_{jk}^{(i)} =\n\\sum_{jk} U_j U_k\\delta_{jk} = \\sum_{j} U_j^2" }, { "math_id": 2, "text": "B_{1} + B_{2} \\ldots = I" }, { "math_id": 3, "text": "\n\\sum_{i=1}^n U_i^2=\\sum_{i=1}^n\\left(\\frac{X_i-\\overline{X}}{\\sigma}\\right)^2\n+ n\\left(\\frac{\\overline{X}-\\mu}{\\sigma}\\right)^2\n" }, { "math_id": 4, "text": "\\overline{X}" }, { "math_id": 5, "text": "\\sigma^2" }, { "math_id": 6, "text": "\n\\sum(X_i-\\mu)^2=\n\\sum(X_i-\\overline{X}+\\overline{X}-\\mu)^2\n" }, { "math_id": 7, "text": "\n\\sum(X_i-\\mu)^2=\n\\sum(X_i-\\overline{X})^2+\\sum(\\overline{X}-\\mu)^2+\n2\\sum(X_i-\\overline{X})(\\overline{X}-\\mu).\n" }, { "math_id": 8, "text": "\\sum(\\overline{X}-X_i)=0," }, { "math_id": 9, "text": "\n\\sum(X_i-\\mu)^2 = \\sum(X_i-\\overline{X})^2+n(\\overline{X}-\\mu)^2 ,\n" }, { "math_id": 10, "text": "\n\\sum\\left(\\frac{X_i-\\mu}{\\sigma}\\right)^2=\n\\sum\\left(\\frac{X_i-\\overline{X}}{\\sigma}\\right)^2\n+n\\left(\\frac{\\overline{X}-\\mu}{\\sigma}\\right)^2=\n\\overbrace{\\sum_i\\left(U_i-\\frac{1}{n}\\sum_j{U_j}\\right)^2}^{Q_1}\n+\\overbrace{\\frac{1}{n}\\left(\\sum_j{U_j}\\right)^2}^{Q_2}=\nQ_1+Q_2.\n" }, { "math_id": 11, "text": "B^{(2)}=\\frac{J_n}{n}" }, { "math_id": 12, "text": "J_n" }, { "math_id": 13, "text": "B^{(1)}= I_n-\\frac{J_n}{n}" }, { "math_id": 14, "text": "I_n=B^{(1)}+B^{(2)}" }, { "math_id": 15, "text": "Q_1" }, { "math_id": 16, "text": "B^{(1)}" }, { "math_id": 17, "text": "n-1" }, { "math_id": 18, "text": "\n\\sum\\left(X_i-\\overline{X}\\right)^2 \\sim \\sigma^2 \\chi^2_{n-1}.\n" }, { "math_id": 19, "text": "\nn(\\overline{X}-\\mu)^2\\sim \\sigma^2 \\chi^2_1,\n" }, { "math_id": 20, "text": "\n\\frac{n\\left(\\overline{X}-\\mu\\right)^2}\n{\\frac{1}{n-1}\\sum\\left(X_i-\\overline{X}\\right)^2}\\sim \\frac{\\chi^2_1}{\\frac{1}{n-1}\\chi^2_{n-1}}\n \\sim F_{1,n-1}\n" }, { "math_id": 21, "text": "\n\\widehat{\\sigma}^2=\n\\frac{1}{n}\\sum\\left(\nX_i-\\overline{X}\\right)^2. " }, { "math_id": 22, "text": "\n\\frac{n\\widehat{\\sigma}^2}{\\sigma^2}\\sim\\chi^2_{n-1}\n" }, { "math_id": 23, "text": "\\begin{align}\nE \\left(\\frac{n \\widehat{\\sigma}^2}{\\sigma^2}\\right) &= E \\left(\\chi^2_{n-1}\\right) \\\\ \n\\frac{n}{\\sigma^2}E \\left(\\widehat{\\sigma}^2\\right) &= (n-1) \\\\\nE \\left(\\widehat{\\sigma}^2\\right) &= \\frac{\\sigma^2 (n-1)}{n}\n\\end{align}" }, { "math_id": 24, "text": "Y\\sim N_n(0,\\sigma^2I_n)" }, { "math_id": 25, "text": "I_n" }, { "math_id": 26, "text": "A_1,\\ldots,A_k" }, { "math_id": 27, "text": "\\sum_{i=1}^kA_i=I_n" }, { "math_id": 28, "text": "r_i= \\operatorname{Rank}(A_i)" }, { "math_id": 29, "text": "\\sum_{i=1}^kr_i=n ," }, { "math_id": 30, "text": "Y^TA_iY\\sim\\sigma^2\\chi^2_{r_i}" }, { "math_id": 31, "text": "A_i" }, { "math_id": 32, "text": "Y^TA_iY" }, { "math_id": 33, "text": "Y^TA_jY" }, { "math_id": 34, "text": "i\\neq j ." 
}, { "math_id": 35, "text": "U = [U_1, ..., U_N]^T" }, { "math_id": 36, "text": "B^{(1)},B^{(2)},\\ldots, B^{(k)}" }, { "math_id": 37, "text": "B^{(i)}" }, { "math_id": 38, "text": "Q_i=U^T B^{(i)}U" }, { "math_id": 39, "text": "\\sum_i Q_i = U^T U" }, { "math_id": 40, "text": "r_1+\\cdots +r_k=N" }, { "math_id": 41, "text": "\\sum_i A_i = A" }, { "math_id": 42, "text": "A" }, { "math_id": 43, "text": "\\sum_i r_i = N" }, { "math_id": 44, "text": "\\sum_i r_i = rank(A)" }, { "math_id": 45, "text": "A = diag(I_M, 0)" }, { "math_id": 46, "text": "X" }, { "math_id": 47, "text": "\\R^n" }, { "math_id": 48, "text": "Q, Q'" }, { "math_id": 49, "text": "X^T Q X" }, { "math_id": 50, "text": "X^T Q' X" }, { "math_id": 51, "text": "Q" }, { "math_id": 52, "text": "\\lambda_1, ..., \\lambda_n" }, { "math_id": 53, "text": "\\phi(t) =\\left(\\prod_j (1-2i \\lambda_j t)\\right)^{-1/2}" }, { "math_id": 54, "text": "I = \\sum_i B_i" }, { "math_id": 55, "text": "U^T (I - \\sum_i B_i) U = 0" }, { "math_id": 56, "text": "(I - \\sum_i B_i)" }, { "math_id": 57, "text": "U^T (I - \\sum_i B_i) U =^d U^T 0 U" }, { "math_id": 58, "text": "\\sum_i M_i = I" }, { "math_id": 59, "text": "M_i" }, { "math_id": 60, "text": "M_i\n" }, { "math_id": 61, "text": "M_i v = v" }, { "math_id": 62, "text": "v^T v = v^T I v = v^T v + \\sum_{j\\neq i} v^T M_j v" }, { "math_id": 63, "text": "v^T M_j v = 0" }, { "math_id": 64, "text": "\\R^N" }, { "math_id": 65, "text": "V\\oplus V^\\perp" }, { "math_id": 66, "text": "M_j\n" }, { "math_id": 67, "text": "V^\\perp" }, { "math_id": 68, "text": "1 \\to 2 \\to 3 \\to 1" }, { "math_id": 69, "text": "Q_i" }, { "math_id": 70, "text": "i" }, { "math_id": 71, "text": "C_i = I - B_i = \\sum_{j\\neq i} B_j" }, { "math_id": 72, "text": "B_i" }, { "math_id": 73, "text": "O" }, { "math_id": 74, "text": "O C_i O^T = I - O B_i O^T" }, { "math_id": 75, "text": "W = OU" }, { "math_id": 76, "text": "Q_i = W^T (OB_i O^T) W; \\quad \\sum_{j\\neq i} Q_j = W^T (I - OB_i O^T) W" }, { "math_id": 77, "text": "Q_i \\perp \\sum_{j\\neq i} Q_j" }, { "math_id": 78, "text": "\\chi^2" }, { "math_id": 79, "text": "r_i" }, { "math_id": 80, "text": "\\chi^2(r_i)" }, { "math_id": 81, "text": "O B_i O^T = diag(\\lambda_1, ..., \\lambda_{r_i}, 0, ..., 0)" }, { "math_id": 82, "text": "Q_i = \\sum_j \\lambda_j {U'}_j^2" }, { "math_id": 83, "text": "U'_j" }, { "math_id": 84, "text": "U_i" }, { "math_id": 85, "text": "Q_i\\sim \\chi^2(r_i)" }, { "math_id": 86, "text": "\\lambda_j = 1" }, { "math_id": 87, "text": "B_i\\succeq 0" }, { "math_id": 88, "text": "0, 1" }, { "math_id": 89, "text": "[U_1, ..., U_N]^T" }, { "math_id": 90, "text": "[U'_1, ..., U'_N]^T" }, { "math_id": 91, "text": "U_i'" }, { "math_id": 92, "text": "C^{(i)} \\equiv \\sum_{j\\ne i}B^{(j)}" }, { "math_id": 93, "text": "\\sum_{j\\ne i}r_j = N-r_i" }, { "math_id": 94, "text": "B^{(i)}+C^{(i)} = I_{N \\times N}" }, { "math_id": 95, "text": "\\begin{bmatrix}\n\\lambda_1 & 0 & 0 & \\cdots & \\cdots & & 0 \\\\\n0 & \\lambda_2 & 0 & \\cdots & \\cdots & & 0 \\\\\n0 & 0 & \\ddots & & & & \\vdots \\\\\n\\vdots & \\vdots & & \\lambda_{r_i} & & \\\\\n\\vdots & \\vdots & & & 0 & \\\\\n0 & \\vdots & & & & \\ddots \\\\\n0 & 0 & \\ldots & & & & 0\n \\end{bmatrix}." 
}, { "math_id": 96, "text": "(N-r_i)" }, { "math_id": 97, "text": "C^{(i)} = I - B^{(i)}" }, { "math_id": 98, "text": "(N-r_i)\\times(N-r_i)" }, { "math_id": 99, "text": "C^{(1)} = B^{(2)} + \\sum_{j>2}B^{(j)}" }, { "math_id": 100, "text": "C^{(1)}" }, { "math_id": 101, "text": "(N-r_1)\\times(N-r_1)" }, { "math_id": 102, "text": "\\sum_{j>2}B^{(j)}" }, { "math_id": 103, "text": "S" }, { "math_id": 104, "text": " S^\\mathrm{T}B^{(i)} S \\equiv B^{(i)\\prime} " }, { "math_id": 105, "text": " B^{(i)\\prime}_{x,y} " }, { "math_id": 106, "text": "x = y" }, { "math_id": 107, "text": " \\sum_{j=1}^{i-1} r_j < x = y \\le \\sum_{j=1}^i r_j " } ]
https://en.wikipedia.org/wiki?curid=574337
57435638
Infinite-valued logic
Many-valued logic in which truth values comprise a continuous range In logic, an infinite-valued logic (or real-valued logic or infinitely-many-valued logic) is a many-valued logic in which truth values comprise a continuous range. Traditionally, in Aristotle's logic, logic other than bivalent logic was abnormal, as the law of the excluded middle precluded more than two possible values (i.e., "true" and "false") for any proposition. Modern three-valued logic (trivalent logic) allows for an additional possible truth value (i.e., "undecided") and is an example of finite-valued logic in which truth values are discrete, rather than continuous. Infinite-valued logic comprises continuous fuzzy logic, though fuzzy logic in some of its forms can further encompass finite-valued logic. For example, finite-valued logic can be applied in Boolean-valued modeling, description logics, and defuzzification of fuzzy logic. History. Isaac Newton and Gottfried Wilhelm Leibniz used both infinities and infinitesimals to develop the differential and integral calculus in the late 17th century. Richard Dedekind, who defined real numbers in terms of certain sets of rational numbers in the 19th century, also developed an axiom of continuity stating that a single correct value exists at the limit of any trial and error approximation. Felix Hausdorff demonstrated the logical possibility of an absolutely continuous ordering of words comprising bivalent values, each word having absolutely infinite length, in 1938. However, the definition of a random real number, meaning a real number that has no finite description whatsoever, remains somewhat in the realm of paradox. Jan Łukasiewicz developed a system of three-valued logic in 1920. He generalized the system to many-valued logics in 1922 and went on to develop logics with formula_0 (infinite within a range) truth values. Kurt Gödel developed a deductive system, applicable for both finite- and infinite-valued first-order logic (a formal logic in which a predicate can refer to a single subject) as well as for intermediate logic (a formal intuitionistic logic usable to provide proofs such as a consistency proof for arithmetic), and showed in 1932 that logical intuition cannot be characterized by finite-valued logic. The concept of expressing truth values as real numbers in the range between 0 and 1 can bring to mind the possibility of using complex numbers to express truth values. These truth values would have an imaginary dimension, for example between 0 and "i". Two- or higher-dimensional truth could potentially be useful in systems of paraconsistent logic. If practical applications were to arise for such systems, multidimensional infinite-valued logic could develop as a concept independent of real-valued logic. Lotfi A. Zadeh proposed a formal methodology of fuzzy logic and its applications in the early 1970s. By 1973, other researchers were applying the theory of Zadeh fuzzy controllers to various mechanical and industrial processes. The fuzzy modeling concept that evolved from this research was applied to neural networks in the 1980s and to machine learning in the 1990s. The formal methodology also led to generalizations of mathematical theories in the family of t-norm fuzzy logics. Examples. Basic fuzzy logic is the logic of continuous t-norms (binary operations on the real unit interval [0, 1]). 
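Since basic fuzzy logic is built on continuous t-norms, a minimal illustration may help; the following Python sketch (illustrative only, with function names of our own choosing) evaluates the three standard continuous t-norms (minimum, product and Łukasiewicz) together with the Łukasiewicz implication on a few truth values from the unit interval.
# Illustrative only: three standard continuous t-norms on [0, 1] and the
# Lukasiewicz residuum (implication). Names are ours, not from the article.
def t_min(a, b):          # Goedel (minimum) t-norm
    return min(a, b)

def t_prod(a, b):         # product t-norm
    return a * b

def t_luk(a, b):          # Lukasiewicz t-norm: max(0, a + b - 1)
    return max(0.0, a + b - 1.0)

def imp_luk(a, b):        # Lukasiewicz implication: min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

for a, b in [(0.2, 0.9), (0.5, 0.5), (0.8, 0.7)]:
    print(a, b, t_min(a, b), t_prod(a, b), t_luk(a, b), imp_luk(a, b))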
Applications involving fuzzy logic include facial recognition systems, home appliances, anti-lock braking systems, automatic transmissions, controllers for rapid transit systems and unmanned aerial vehicles, knowledge-based and engineering optimization systems, weather forecasting, pricing, and risk assessment modeling systems, medical diagnosis and treatment planning and commodities trading systems, and more. Fuzzy logic is used to optimize efficiency in thermostats for control of heating and cooling, for industrial automation and process control, computer animation, signal processing, and data analysis. Fuzzy logic has made significant contributions in the fields of machine learning and data mining. In infinitary logic, degrees of provability of propositions can be expressed in terms of infinite-valued logic that can be described via evaluated formulas, written as ordered pairs each consisting of a truth degree symbol and a formula. In mathematics, number-free semantics can express facts about classical mathematical notions and make them derivable by logical deductions in infinite-valued logic. T-norm fuzzy logics can be applied to eliminate references to real numbers from definitions and theorems, in order to simplify certain mathematical concepts and facilitate certain generalizations. A framework employed for number-free formalization of mathematical concepts is known as fuzzy class theory. Philosophical questions, including the Sorites paradox, have been considered based on an infinite-valued logic known as fuzzy epistemicism. The Sorites paradox suggests that if adding a grain of sand to something that is not a heap cannot create a heap, then a heap of sand cannot be created. A stepwise approach toward a limit, in which truth is gradually "leaked", tends to refute that suggestion. In the study of logic itself, infinite-valued logic has served as an aid to understand the nature of the human understanding of logical concepts. Kurt Gödel attempted to comprehend the human ability for logical intuition in terms of finite-valued logic before concluding that the ability is based on infinite-valued logic. Open questions remain regarding the handling, in natural language semantics, of indeterminate truth values. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
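The gradual "leaking" of truth mentioned above in connection with the Sorites paradox can be made concrete with a toy membership function. In the following sketch (illustrative; the grain thresholds are arbitrary assumptions, not taken from the article) each number of grains receives a degree of "heapness" in [0, 1] rather than a bivalent verdict.
# Toy illustration of graded truth for the Sorites paradox; the thresholds
# (100 and 10_000 grains) are arbitrary choices for the example.
def heap_degree(grains, lo=100, hi=10_000):
    """Degree in [0, 1] to which `grains` grains of sand form a heap."""
    if grains <= lo:
        return 0.0
    if grains >= hi:
        return 1.0
    return (grains - lo) / (hi - lo)      # linear ramp between lo and hi

for g in [10, 100, 1_000, 5_000, 10_000, 50_000]:
    print(g, round(heap_degree(g), 3))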
[ { "math_id": 0, "text": "\\aleph_0" } ]
https://en.wikipedia.org/wiki?curid=57435638
5743991
Mineral rights
Property rights to exploit an area for the minerals Mineral rights are property rights to exploit an area for the minerals it harbors. Mineral rights can be separate from property ownership (see Split estate). Mineral rights can refer to sedentary minerals that do not move below the Earth's surface or fluid minerals such as oil or natural gas. There are three major types of mineral property: unified estate, severed or split estate, and fractional ownership of minerals. Mineral estate. Owning mineral rights (often referred to as a "mineral interest" or a "mineral estate") gives the owner the right to exploit, mine, or produce any or all minerals they own. Minerals can refer to oil, gas, coal, metal ores, stones, sands, or salts. An owner of mineral rights may sell, lease, or donate those minerals to any person or company as they see fit. Mineral interests can be owned by private landowners, private companies, or federal, state or local governments. Sorting these rights are a large part of mineral exploration. A brief outline of rights and responsibilities of parties involved can be found here. Types of mineral estate. Unified estate. Unified estates, sometimes referred to as "fee simple" or "unified tenure" mean that the surface and mineral rights are not severed. Severed/split estate. This type of estate occurs when mineral and surface ownership are separated. This can occur from prior ownership of mineral rights or is commonly performed when land is passed between family generations. Today corporations own a significant portion of mineral rights beneath private individuals. Fractional ownership. Here a percentage of the mineral property is owned by two or more entities. This can occur when owners leave fractions of the rights to multiple children or grandchildren. Severed/split estate. Mineral estates can be severed, or separated, from surface estates. There are two main avenues to mineral rights severance: the surface property may be sold and the minerals retained, or the minerals may be sold and the surface property retained, though the former is more common. When mineral rights have been severed from the surface rights (or property rights), it is referred to as a "split estate." In a split estate, the owner of the mineral rights has the right to develop those minerals, regardless of who owns the surface rights. This is because in United States law, mineral rights trump surface rights. The U.S. historical precedent for this severance roots from western expansion and The Land Ordinance Act of 1785 and The Northwest Ordinance Act of 1789 at the cost of dispossessed Natives. Severability was further reinforced by the Homestead Act of 1862 (OHA) and the 1862 Railroad Act. Agricultural patents and the California gold rush of 1848 began placing lands that were mineral abundant into private hands and furthered the precedent of mineral rights outweighing surface rights. This was a crucial step in the development of an economic system based largely on private incentives and market transactions. An early case involving a property dispute between a father and son involving ownership of coal veins in Pennsylvania is cited stating; “One who has the exclusive right to mine coal upon a tract of land has the right of possession even as against the owner of the soil, so far as it is necessary to carry on mining operations.” (Turner v. Reynolds, 1854). 
A later case in Texas in 1862 set precedent by stating “it is a well-established doctrine from the earliest days of the common law, that the right to the minerals thus reserved carries with it the right to enter, dig and carry them away." (Cowan v. Hardeman, 1862). Some may argue that the U.S. justice system's enabling of this precedent is further exacerbated by industry lobbying that enables the status quo of favoring oil and gas development vs other innovations. This severability can create tension between mineral rights owners and surface rights owners if the surface rights owners do not want to allow the mineral rights owners to use their property to access their minerals. This is becoming ever more present in the light of recent unconventional oil and gas development (UOGD) made feasible by technological advancement such as hydraulic fracturing. Problems include water pollution, fluid storage issues and surface damages. These are especially common in the West Virginia gas wells of the Marcellus Shale. Often, companies will offer a surface rights owner a surface use agreement, which can provide financial compensation to the surface owner, or more commonly, offer some concessions on how the minerals are accessed. For example, some surface use agreements require the company to access the property from specific roads or points on the property. A major issue involving fluid mineral rights is the "rule of capture" whereby minerals capable of migrating beneath the Earth's surface can be extracted, even if the original source was another person's mineral property. Such claims typically are protected by various states' oil and gas regulatory agencies whose broader mandate is to promote conservation and minimize conflicts between mineral owners. Major elements. The five elements of a mineral right are: The owner of a mineral interest may separately convey any or all of the above-listed interests. Minerals may be possessed as a life estate, which does not permit a person to sell them, but merely that they own the minerals so long as they live. After this, the rights revert to a predesignated entity, such as a specific organization or person. It is possible for a mineral right owner to sever and sell an oil and gas royalty interest, while keeping the other mineral rights. In such case, if the oil lease expires, the royalty interest is extinguished, its purchaser has nothing, and the mineral owner still owns the minerals. Leasing. An owner of mineral rights may choose to lease those mineral rights to a company for development at any point. Signing a lease signals that both parties agree to the terms laid out in the lease. Lease terms typically include a price to be paid to the mineral rights owner for the minerals to be extracted, and a set of circumstances under which those minerals are to be extracted. For instance, a mineral rights owner might request that the company minimize any noise and light pollution when extracting the minerals. Leases are usually term-limited, meaning the company has a limited amount of time to develop the resources; if they do not begin development within that time-frame they forfeit their right to extract those minerals. The four components of mineral rights leasing are: Ownership. There are three distinct but related aspects of ownership. They are: Leasing. To bring oil and gas reserves to market, minerals are conveyed for a specified time to oil companies through a legally binding contract known as a lease. 
This arrangement between individual mineral owners and oil companies began prior to 1900 and still thrives today. Before exploration can begin, the mineral owner (lessor) and the oil company (lessee) must agree to certain terms regarding the rights, privileges and obligations of the respective parties during the exploration and possible production stages. Although there are numerous other important details, the basic structure of the lease is straightforward: in exchange for an up-front lease bonus payment, plus a royalty percentage of the value of any production, the mineral owner grants the oil company the right to drill for a period of time, known as the primary term. If the term of the oil or gas lease extends beyond the primary term, and a well was not drilled, then the Lessee is required to pay the lessor a delay rental. This delay rental could be $1 or more per acre. In some cases, no drilling occurs and the lease simply expires. The duration of the lease may be extended when drilling or production starts. This enters into the period of time known as the secondary term, which applies for as long as oil and gas is produced in paying quantities. Division order. A division order is not a contract. It is a stipulation, derived from the lease agreement and other agreements, as to what the operator of a well or an oil and/or gas purchaser will disburse in terms of revenue to the mineral owner and others. The purpose of the division order is to show how the mineral revenues are divided up between the oil company, the owners of the mineral rights (royalty owners) and the overriding royalty interest owners. The division order needs a signature, a current address and social security number for individual royalty owners or tax identification number for companies. Oil and gas lease. An oil and gas lease is a contract because it contains consideration, consent, legal tangible items and competency. Many other line items can be negotiated by the time the contract is complete. The rights of all parties are defined in agreements; and, when mineral production begins, the division order states how much revenue goes to each party involved. Royalty check. Mineral owners may receive a monthly royalty check if oil, gas, or any other substances of value are extracted from below the surface and either sold or used by an oil and gas operating company. Royalty statements include the production and revenue figures for both the individual owner and the entire well. The royalty paid is a function of the net value of the proceeds from the sale of the oil, gas, or other substance, multiplied by the owner's revenue interest decimal, less any amounts deducted for taxes or other deductions. The revenue decimal used to calculate the amount of an owner's royalty check is calculated with the following equation: Revenue interest decimal formula_0 It is common for royalty checks to fluctuate between pay periods due to monthly changes in oil or gas prices, or changes in the volumes produced by the associated oil or gas wells. Additionally, royalties may cease altogether if the associated wells quit producing marketable quantities of oil or gas, if the operating company has changed hands and the new operator has not yet established a new payment account for the owner, or if the operating company or product purchaser is missing appropriate paperwork or proper documentation of changes in ownership or contact information. Surface use agreement. 
A surface use agreement (SUA) is a contract between a property owner and a mineral rights holder that dictates how the mineral rights are to be developed. That is, when minerals are extracted by a company that does not own the overlying surface property, the company has the legal right to extract them regardless, so the SUA spells out the conditions under which it may do so. However, companies will often enter into voluntary negotiations with the surface rights owner to ensure that operations go smoothly. In such cases, the company will offer a SUA, in which property owners may ask for financial compensation or other concessions regarding how the minerals are extracted. See sample. References. <templatestyles src="Reflist/styles.css" />
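The royalty calculation pattern given in the royalty check section above can be illustrated with a short script. The interpretation of the symbols used here (net acres owned, acres in the pooled unit, royalty rate, price, volume produced and deductions) is an assumption made for the example; the article itself does not define them.
# Illustrative royalty computation following the pattern (A / U) * R * (P * Y - D);
# the meaning assigned to each symbol is an assumption for this example, not a
# definition taken from the article.
def royalty_payment(acres_owned, unit_acres, royalty_rate,
                    price_per_unit, units_produced, deductions):
    revenue_interest = (acres_owned / unit_acres) * royalty_rate
    return revenue_interest * (price_per_unit * units_produced - deductions)

# Example: 40 net mineral acres in a 640-acre unit, 1/8 royalty,
# 3,000 barrels sold at $70 each, with $5,000 in deductions.
check = royalty_payment(40, 640, 0.125, 70.0, 3000, 5000.0)
print(f"Monthly royalty check: ${check:,.2f}")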
[ { "math_id": 0, "text": "= (A \\div U) \\times R \\times (P \\times Y - D)" } ]
https://en.wikipedia.org/wiki?curid=5743991
5744042
Partial geometry
An incidence structure formula_0 consists of a set "P" of points, a set "L" of lines, and an incidence relation, or set of flags, formula_1; a point formula_2 is said to be "incident" with a line formula_3 if the pair ("p", "l") belongs to formula_1. It is a (finite) partial geometry if there are integers formula_4 such that: for any pair of distinct points formula_2 and formula_9, there is at most one line incident with both of them; each line is incident with formula_5 points; each point is incident with formula_6 lines; and if a point formula_2 and a line formula_3 are not incident, there are exactly formula_7 pairs (formula_9, formula_8) such that formula_2 is incident with formula_8, formula_9 is incident with formula_3, and formula_9 is incident with formula_8. A partial geometry with these parameters is denoted by formula_11; it has formula_10 points. Generalisations. A partial linear space formula_13 of order formula_14 is called a semipartial geometry if there are integers formula_15 such that: if a point formula_2 and a line formula_3 are not incident, then either formula_16 or exactly formula_7 points of formula_3 are collinear with formula_2; and every pair of non-collinear points has exactly formula_17 points collinear with both. A semipartial geometry is a partial geometry if and only if "μ" = "α"("t" + 1). It can be easily shown that the collinearity graph of such a geometry is strongly regular with parameters (1 + "s"("t" + 1) + "s"("t" + 1)"t"("s" − "α" + 1)/"μ", "s"("t" + 1), "s" − 1 + "t"("α" − 1), "μ"). A nice example of such a geometry is obtained by taking the affine points of formula_18 and only those lines that intersect the plane at infinity in a point of a fixed Baer subplane; it has parameters ("s", "t", "α", "μ") = ("q"2 − 1, "q"2 + "q", "q", "q"("q" + 1)).
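As a numerical companion to the definition above, the following sketch (illustrative, not from the article) computes the number of points of a partial geometry pg("s", "t", "α") and the parameters of its collinearity graph, which is strongly regular with degree "s"("t" + 1), λ = "s" − 1 + "t"("α" − 1) and μ = "α"("t" + 1).
# Illustrative check (not from the article): point count and collinearity-graph
# parameters of a partial geometry pg(s, t, alpha).
from fractions import Fraction

def pg_parameters(s, t, alpha):
    v = Fraction((s + 1) * (s * t + alpha), alpha)   # number of points
    if v.denominator != 1:
        raise ValueError("parameters are not feasible: point count not integral")
    k = s * (t + 1)                    # collinearity-graph degree
    lam = s - 1 + t * (alpha - 1)      # common neighbours of collinear points
    mu = alpha * (t + 1)               # common neighbours of non-collinear points
    return int(v), k, lam, mu

# pg(2, 2, 1) is the generalized quadrangle GQ(2, 2) on 15 points; the other
# tuples are arbitrary inputs used only to exercise the computation.
for s, t, a in [(2, 2, 1), (1, 1, 1), (4, 4, 2)]:
    print((s, t, a), pg_parameters(s, t, a))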
[ { "math_id": 0, "text": "C=(P,L,I)" }, { "math_id": 1, "text": "I \\subseteq P \\times L" }, { "math_id": 2, "text": "p" }, { "math_id": 3, "text": "l" }, { "math_id": 4, "text": "s,t,\\alpha\\geq 1" }, { "math_id": 5, "text": "s+1" }, { "math_id": 6, "text": "t+1" }, { "math_id": 7, "text": "\\alpha" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "q" }, { "math_id": 10, "text": "\\frac{(s+1)(s t+\\alpha)}{\\alpha}" }, { "math_id": 11, "text": "\\mathrm{pg}(s,t,\\alpha)" }, { "math_id": 12, "text": "S(2, s+1, ts+1)" }, { "math_id": 13, "text": "S=(P,L,I)" }, { "math_id": 14, "text": "s, t" }, { "math_id": 15, "text": "\\alpha\\geq 1, \\mu" }, { "math_id": 16, "text": "0" }, { "math_id": 17, "text": "\\mu" }, { "math_id": 18, "text": "\\mathrm{PG}(3, q^2)" } ]
https://en.wikipedia.org/wiki?curid=5744042
5744061
Epigraph (mathematics)
Region above a graph In mathematics, the epigraph or supergraph of a function formula_0 valued in the extended real numbers formula_1 is the set formula_2 consisting of all points in the Cartesian product formula_3 lying on or above the function's graph. Similarly, the strict epigraph formula_4 is the set of points in formula_3 lying strictly above its graph. Importantly, unlike the graph of formula_5 the epigraph always consists entirely of points in formula_3 (this is true of the graph only when formula_6 is real-valued). If the function takes formula_7 as a value then formula_8 will not be a subset of its epigraph formula_9 For example, if formula_10 then the point formula_11 will belong to formula_8 but not to formula_9 These two sets are nevertheless closely related because the graph can always be reconstructed from the epigraph, and vice versa. The study of continuous real-valued functions in real analysis has traditionally been closely associated with the study of their graphs, which are sets that provide geometric information (and intuition) about these functions. Epigraphs serve this same purpose in the fields of convex analysis and variational analysis, in which the primary focus is on convex functions valued in formula_12 instead of continuous functions valued in a vector space (such as formula_13 or formula_14). This is because in general, for such functions, geometric intuition is more readily obtained from a function's epigraph than from its graph. Similarly to how graphs are used in real analysis, the epigraph can often be used to give geometrical interpretations of a convex function's properties, to help formulate or prove hypotheses, or to aid in constructing counterexamples. Definition. The definition of the epigraph was inspired by that of the graph of a function, where the graph of formula_15 is defined to be the set formula_16 The epigraph or supergraph of a function formula_0 valued in the extended real numbers formula_1 is the set formula_17 where all sets being unioned in the last line are pairwise disjoint. In the union over formula_18 that appears above on the right hand side of the last line, the set formula_19 may be interpreted as being a "vertical ray" consisting of formula_20 and all points in formula_3 "directly above" it. Similarly, the set of points on or below the graph of a function is its hypograph. The strict epigraph is the epigraph with the graph removed: formula_21 where all sets being unioned in the last line are pairwise disjoint, and some may be empty.
The epigraph being a subset of a vector space allows for tools related to real analysis and functional analysis (and other fields) to be more readily applied. The domain (rather than the codomain) of the function is not particularly important for this definition; it can be any linear space or even an arbitrary set instead of formula_25. The strict epigraph formula_4 and the graph formula_8 are always disjoint. The epigraph of a function formula_0 is related to its graph and strict epigraph by formula_26 where set equality holds if and only if formula_6 is real-valued. However, formula_27 always holds. Reconstructing functions from epigraphs. The epigraph is empty if and only if the function is identically equal to infinity. Just as any function can be reconstructed from its graph, so too can any extended real-valued function formula_6 on formula_23 be reconstructed from its epigraph formula_28 (even when formula_6 takes on formula_7 as a value). Given formula_29 the value formula_30 can be reconstructed from the intersection formula_31 of formula_32 with the "vertical line" formula_33 passing through formula_34 as follows: The above observations can be combined to give a single formula for formula_30 in terms of formula_35 Specifically, for any formula_29 formula_36 where by definition, formula_37 This same formula can also be used to reconstruct formula_6 from its strict epigraph formula_38 Relationships between properties of functions and their epigraphs. A function is convex if and only if its epigraph is a convex set. The epigraph of a real affine function formula_39 is a halfspace in formula_40 A function is lower semicontinuous if and only if its epigraph is closed. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
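The reconstruction formula above, formula_36, can be made concrete with a small discretized example. The following Python sketch (illustrative only; the function and grid are our own choices) builds a finite approximation of the epigraph of "f"("x") = "x"2 and recovers "f"("x") as the infimum of the values lying above each "x".
# Illustrative sketch: build a discretized epigraph of f(x) = x^2 and recover
# f from it via f(x) = inf { r : (x, r) in epi f }. Grid choices are ours.
import numpy as np

def f(x):
    return x ** 2

xs = np.linspace(-2.0, 2.0, 9)
rs = np.linspace(-1.0, 5.0, 601)

# Discretized epigraph: grid points (x, r) with r >= f(x).
epi = {(x, r) for x in xs for r in rs if r >= f(x)}

for x in xs:
    rs_above = [r for (x2, r) in epi if x2 == x]
    recovered = min(rs_above) if rs_above else float("inf")   # inf over the slice
    print(f"x = {x:+.1f}  f(x) = {f(x):.2f}  recovered = {recovered:.2f}")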
[ { "math_id": 0, "text": "f : X \\to [-\\infty, \\infty]" }, { "math_id": 1, "text": "[-\\infty, \\infty] = \\Reals \\cup \\{\\pm \\infty\\}" }, { "math_id": 2, "text": "\\operatorname{epi} f = \\{(x, r) \\in X \\times \\Reals ~:~ r \\geq f(x)\\}" }, { "math_id": 3, "text": "X \\times \\Reals" }, { "math_id": 4, "text": "\\operatorname{epi}_S f" }, { "math_id": 5, "text": "f," }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "\\pm \\infty" }, { "math_id": 8, "text": "\\operatorname{graph} f" }, { "math_id": 9, "text": "\\operatorname{epi} f." }, { "math_id": 10, "text": "f\\left(x_0\\right) = \\infty" }, { "math_id": 11, "text": "\\left(x_0, f\\left(x_0\\right)\\right) = \\left(x_0, \\infty\\right)" }, { "math_id": 12, "text": "[-\\infty, \\infty]" }, { "math_id": 13, "text": "\\Reals" }, { "math_id": 14, "text": "\\Reals^2" }, { "math_id": 15, "text": "f : X \\to Y" }, { "math_id": 16, "text": "\\operatorname{graph} f := \\{(x, y) \\in X \\times Y ~:~ y = f(x)\\}." }, { "math_id": 17, "text": "\n\\begin{alignat}{4}\n\\operatorname{epi} f \n&= \\{(x, r) \\in X \\times \\Reals ~:~ r \\geq f(x)\\} \\\\\n&= \\left[f^{-1}(- \\infty) \\times \\Reals\\right] \\cup \\bigcup_{x \\in f^{-1}(\\Reals)} (\\{x\\} \\times [f(x), \\infty))\n\\end{alignat}\n" }, { "math_id": 18, "text": "x \\in f^{-1}(\\Reals)" }, { "math_id": 19, "text": "\\{x\\} \\times [f(x), \\infty)" }, { "math_id": 20, "text": "(x, f(x))" }, { "math_id": 21, "text": "\n\\begin{alignat}{4}\n\\operatorname{epi}_S f \n&= \\{(x, r) \\in X \\times \\Reals ~:~ r > f(x)\\} \\\\\n&= \\operatorname{epi} f \\setminus \\operatorname{graph} f \\\\\n&= \\bigcup_{x \\in X} \\left( \\{x\\} \\times (f(x), \\infty) \\right)\n\\end{alignat}\n" }, { "math_id": 22, "text": "X \\times [-\\infty, \\infty]." }, { "math_id": 23, "text": "X" }, { "math_id": 24, "text": "X \\times [-\\infty, \\infty]" }, { "math_id": 25, "text": "\\Reals^n" }, { "math_id": 26, "text": "\\,\\operatorname{epi} f \\,\\subseteq\\, \\operatorname{epi}_S f \\,\\cup\\, \\operatorname{graph} f" }, { "math_id": 27, "text": "\\operatorname{epi} f = \\left[\\operatorname{epi}_S f \\,\\cup\\, \\operatorname{graph} f\\right] \\,\\cap\\, [X \\times \\Reals]" }, { "math_id": 28, "text": "E := \\operatorname{epi} f" }, { "math_id": 29, "text": "x \\in X," }, { "math_id": 30, "text": "f(x)" }, { "math_id": 31, "text": "E \\cap (\\{x\\} \\times \\Reals)" }, { "math_id": 32, "text": "E" }, { "math_id": 33, "text": "\\{x\\} \\times \\Reals" }, { "math_id": 34, "text": "x" }, { "math_id": 35, "text": "E := \\operatorname{epi} f." }, { "math_id": 36, "text": "f(x) = \\inf_{} \\{r \\in \\Reals ~:~ (x, r) \\in E\\}" }, { "math_id": 37, "text": "\\inf_{} \\varnothing := \\infty." }, { "math_id": 38, "text": "E := \\operatorname{epi}_S f." }, { "math_id": 39, "text": "g : \\Reals^n \\to \\Reals" }, { "math_id": 40, "text": "\\Reals^{n+1}." } ]
https://en.wikipedia.org/wiki?curid=5744061
57448874
Bayesian model reduction
Bayesian model reduction is a method for computing the evidence and posterior over the parameters of Bayesian models that differ in their priors. A full model is fitted to data using standard approaches. Hypotheses are then tested by defining one or more 'reduced' models with alternative (and usually more restrictive) priors, which usually – in the limit – switch off certain parameters. The evidence and parameters of the reduced models can then be computed from the evidence and estimated (posterior) parameters of the full model using Bayesian model reduction. If the priors and posteriors are normally distributed, then there is an analytic solution which can be computed rapidly. This has multiple scientific and engineering applications: these include scoring the evidence for large numbers of models very quickly and facilitating the estimation of hierarchical models (Parametric Empirical Bayes). Theory. Consider some model with parameters formula_0 and a prior probability density on those parameters formula_1. The posterior belief about formula_0 after seeing the data formula_2 is given by Bayes rule: The second line of Equation 1 is the model evidence, which is the probability of observing the data given the model. In practice, the posterior cannot usually be computed analytically due to the difficulty in computing the integral over the parameters. Therefore, the posteriors are estimated using approaches such as MCMC sampling or variational Bayes. A reduced model can then be defined with an alternative set of priors formula_3: The objective of Bayesian model reduction is to compute the posterior formula_4 and evidence formula_5 of the reduced model from the posterior formula_2 and evidence formula_6 of the full model. Combining Equation 1 and Equation 2 and re-arranging, the reduced posterior formula_4 can be expressed as the product of the full posterior, the ratio of priors and the ratio of evidences: The evidence for the reduced model is obtained by integrating over the parameters of each side of the equation: And by re-arrangement: Gaussian priors and posteriors. Under Gaussian prior and posterior densities, as are used in the context of variational Bayes, Bayesian model reduction has a simple analytical solution. First define normal densities for the priors and posteriors: where the tilde symbol (~) indicates quantities relating to the reduced model and subscript zero – such as formula_7 – indicates parameters of the priors. For convenience we also define precision matrices, which are the inverse of each covariance matrix: The free energy of the full model formula_8 is an approximation (lower bound) on the log model evidence: formula_9 that is optimised explicitly in variational Bayes (or can be recovered from sampling approximations). The reduced model's free energy formula_10 and parameters formula_11 are then given by the expressions: Example. Consider a model with a parameter formula_0 and Gaussian prior formula_12, which is the Normal distribution with mean zero and standard deviation 0.5 (illustrated in the Figure, left). This prior says that without any data, the parameter is expected to have value zero, but we are willing to entertain positive or negative values (with a 99% confidence interval [−1.16,1.16]). The model with this prior is fitted to the data, to provide an estimate of the parameter formula_13 and the model evidence formula_6. To assess whether the parameter contributed to the model evidence, i.e. 
whether we learnt anything about this parameter, an alternative 'reduced' model is specified in which the parameter has a prior with a much smaller variance: e.g. formula_14. This is illustrated in the Figure (right). This prior effectively 'switches off' the parameter, saying that we are almost certain that it has value zero. The parameter formula_15 and evidence formula_5 for this reduced model are rapidly computed from the full model using Bayesian model reduction. The hypothesis that the parameter contributed to the model is then tested by comparing the full and reduced models via the Bayes factor, which is the ratio of model evidences: formula_16 The larger this ratio, the greater the evidence for the full model, which included the parameter as a free parameter. Conversely, the stronger the evidence for the reduced model, the more confident we can be that the parameter did not contribute. Note this method is not specific to comparing 'switched on' or 'switched off' parameters, and any intermediate setting of the priors could also be evaluated. Applications. Neuroimaging. Bayesian model reduction was initially developed for use in neuroimaging analysis, in the context of modelling brain connectivity, as part of the dynamic causal modelling framework (where it was originally referred to as post-hoc Bayesian model selection). Dynamic causal models (DCMs) are differential equation models of brain dynamics. The experimenter specifies multiple competing models which differ in their priors – e.g. in the choice of parameters which are fixed at their prior expectation of zero. Having fitted a single 'full' model with all parameters of interest informed by the data, Bayesian model reduction enables the evidence and parameters for competing models to be rapidly computed, in order to test hypotheses. These models can be specified manually by the experimenter, or searched over automatically, in order to 'prune' any redundant parameters which do not contribute to the evidence. Bayesian model reduction was subsequently generalised and applied to other forms of Bayesian models, for example parametric empirical Bayes (PEB) models of group effects. Here, it is used to compute the evidence and parameters for any given level of a hierarchical model under constraints (empirical priors) imposed by the level above. Neurobiology. Bayesian model reduction has been used to explain functions of the brain. By analogy to its use in eliminating redundant parameters from models of experimental data, it has been proposed that the brain eliminates redundant parameters of internal models of the world while offline (e.g. during sleep). Software implementations. Bayesian model reduction is implemented in the Statistical Parametric Mapping toolbox, in the Matlab function spm_log_evidence_reduce.m . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
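The Gaussian reduction described above can be sketched in a few lines of Python for the univariate example (full prior N(0, 0.5²), reduced prior N(0, 0.001²)). The reduction identities used below follow from combining the ratio-of-priors relation with Gaussian densities; they are written out here from first principles rather than quoted from the article or from SPM, so they should be checked against the cited literature before being relied upon, and the full posterior used in the example is an arbitrary stand-in.
# Sketch of Bayesian model reduction for univariate Gaussians. The reduction
# formulas below are our own derivation from the ratio-of-priors identity;
# verify against the cited papers before relying on them.
import math

def bmr(mu0, s0, mu, s, mu0_r, s0_r):
    """Full prior N(mu0, s0^2), full posterior N(mu, s^2),
    reduced prior N(mu0_r, s0_r^2). Returns reduced posterior and delta-F."""
    p0, p, p0_r = 1 / s0**2, 1 / s**2, 1 / s0_r**2        # precisions
    p_r = p + p0_r - p0                                    # reduced posterior precision
    mu_r = (p * mu + p0_r * mu0_r - p0 * mu0) / p_r        # reduced posterior mean
    dF = (0.5 * (math.log(p) - math.log(p_r) + math.log(p0_r) - math.log(p0))
          - 0.5 * (p * mu**2 + p0_r * mu0_r**2 - p0 * mu0**2 - p_r * mu_r**2))
    return mu_r, 1 / math.sqrt(p_r), dF                    # dF = F_reduced - F_full

# Full prior N(0, 0.5^2), reduced prior N(0, 0.001^2); the full posterior
# (mean 0.4, sd 0.2) is an arbitrary stand-in for a fitted model.
mu_r, s_r, dF = bmr(0.0, 0.5, 0.4, 0.2, 0.0, 0.001)
print(f"reduced posterior: N({mu_r:.4f}, {s_r:.4f}^2)")
print(f"log Bayes factor ln(p(y)/p_reduced(y)) = {-dF:.2f}")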
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "p(\\theta)" }, { "math_id": 2, "text": "p(\\theta\\mid y)" }, { "math_id": 3, "text": "\\tilde{p}(\\theta)" }, { "math_id": 4, "text": "\\tilde{p}(\\theta\\mid y)" }, { "math_id": 5, "text": "\\tilde{p}(y)" }, { "math_id": 6, "text": "p(y)" }, { "math_id": 7, "text": "\\mu_0" }, { "math_id": 8, "text": "F" }, { "math_id": 9, "text": "F\\approx \\ln{p(y)}" }, { "math_id": 10, "text": "\\tilde{F}" }, { "math_id": 11, "text": "(\\tilde{\\mu},\\tilde{\\Sigma})" }, { "math_id": 12, "text": "p(\\theta)=N(0,0.5^2)" }, { "math_id": 13, "text": "q(\\theta)" }, { "math_id": 14, "text": "\\tilde{p}_0=N(0,0.001^2)" }, { "math_id": 15, "text": "\\tilde{q}(\\theta)" }, { "math_id": 16, "text": "\\text{BF}=\\frac{p(y)}{\\tilde{p}(y)}" } ]
https://en.wikipedia.org/wiki?curid=57448874
574544
Circular motion
Object movement along a circular path &lt;templatestyles src="Hlist/styles.css"/&gt; In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular arc. It can be uniform, with a constant rate of rotation and constant tangential speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves the circular motion of its parts. The equations of motion describe the movement of the center of mass of a body, which remains at a constant distance from the axis of rotation. In circular motion, the distance between the body and a fixed point on its surface remains the same, i.e., the body is assumed rigid. Examples of circular motion include: special satellite orbits around the Earth (circular orbits), a ceiling fan's blades rotating around a hub, a stone that is tied to a rope and is being swung in circles, a car turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, and a gear turning inside a mechanism. Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion. Uniform circular motion. In physics, uniform circular motion describes the motion of a body traversing a circular path at a constant speed. Since the body describes circular motion, its distance from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not constant: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration; this centripetal acceleration is of constant magnitude and directed at all times toward the axis of rotation. This acceleration is, in turn, produced by a centripetal force which is also constant in magnitude and directed toward the axis of rotation. In the case of rotation around a fixed axis of a rigid body that is not negligibly small compared to the radius of the path, each particle of the body describes a uniform circular motion with the same angular velocity, but with velocity and acceleration varying with the position with respect to the axis. Formula. For motion in a circle of radius r, the circumference of the circle is "C" = 2"πr". If the period for one rotation is T, the angular rate of rotation, also known as angular velocity, ω is: formula_0 and the units are radians/second. The speed of the object traveling the circle is: formula_1 The angle θ swept out in a time t is: formula_2 The angular acceleration, α, of the particle is: formula_3 In the case of uniform circular motion, α will be zero. The acceleration due to change in the direction is: formula_4 The centripetal and centrifugal force can also be found using acceleration: formula_5 The vector relationships are shown in Figure 1. The axis of rotation is shown as a vector ω perpendicular to the plane of the orbit and with a magnitude "ω" = "dθ" / "dt". The direction of ω is chosen using the right-hand rule. With this convention for depicting rotation, the velocity is given by a vector cross product as formula_6 which is a vector perpendicular to both ω and r("t"), tangential to the orbit, and of magnitude "ω" "r". 
Likewise, the acceleration is given by formula_7 which is a vector perpendicular to both ω and v("t") of magnitude "ω" |v| = "ω"2 "r" and directed exactly opposite to r("t"). In the simplest case the speed, mass, and radius are constant. Consider a body of one kilogram, moving in a circle of radius one metre, with an angular velocity of one radian per second. In polar coordinates. During circular motion, the body moves on a curve that can be described in the polar coordinate system as a fixed distance "R" from the center of the orbit taken as the origin, oriented at an angle "θ"("t") from some reference direction. See Figure 4. The displacement "vector" formula_8 is the radial vector from the origin to the particle location: formula_9 where formula_10 is the unit vector parallel to the radius vector at time t and pointing away from the origin. It is convenient to introduce the unit vector orthogonal to formula_10 as well, namely formula_11. It is customary to orient formula_11 to point in the direction of travel along the orbit. The velocity is the time derivative of the displacement: formula_12 Because the radius of the circle is constant, the radial component of the velocity is zero. The unit vector formula_10 has a time-invariant magnitude of unity, so as time varies its tip always lies on a circle of unit radius, with an angle θ the same as the angle of formula_13. If the particle displacement rotates through an angle "dθ" in time "dt", so does formula_10, describing an arc on the unit circle of magnitude "dθ". See the unit circle at the left of Figure 4. Hence: formula_14 where the direction of the change must be perpendicular to formula_10 (or, in other words, along formula_11) because any change formula_15 in the direction of formula_10 would change the size of formula_10. The sign is positive because an increase in "dθ" implies the object and formula_10 have moved in the direction of formula_11. Hence the velocity becomes: formula_16 The acceleration of the body can also be broken into radial and tangential components. The acceleration is the time derivative of the velocity: formula_17 The time derivative of formula_11 is found the same way as for formula_10. Again, formula_11 is a unit vector and its tip traces a unit circle with an angle that is "π"/2 + "θ". Hence, an increase in angle "dθ" by formula_13 implies formula_11 traces an arc of magnitude "dθ", and as formula_11 is orthogonal to formula_10, we have: formula_18 where a negative sign is necessary to keep formula_11 orthogonal to formula_10. (Otherwise, the angle between formula_11 and formula_10 would "decrease" with an increase in "dθ".) See the unit circle at the left of Figure 4. Consequently, the acceleration is: formula_19 The centripetal acceleration is the radial component, which is directed radially inward: formula_20 while the tangential component changes the magnitude of the velocity: formula_21 Using complex numbers. Circular motion can be described using complex numbers. Let the x axis be the real axis and the formula_22 axis be the imaginary axis. The position of the body can then be given as formula_23, a complex "vector": formula_24 where "i" is the imaginary unit, and formula_25 is the argument of the complex number as a function of time, t. Since the radius is constant: formula_26 where a "dot" indicates differentiation in respect of time. 
With this notation, the velocity becomes: formula_27 and the acceleration becomes: formula_28 The first term is opposite in direction to the displacement vector and the second is perpendicular to it, just like the earlier results shown before. Velocity. Figure 1 illustrates velocity and acceleration vectors for uniform motion at four different points in the orbit. Because the velocity v is tangent to the circular path, no two velocities point in the same direction. Although the object has a constant "speed", its "direction" is always changing. This change in velocity is caused by an acceleration a, whose magnitude is (like that of the velocity) held constant, but whose direction also is always changing. The acceleration points radially inwards (centripetally) and is perpendicular to the velocity. This acceleration is known as centripetal acceleration. For a path of radius r, when an angle θ is swept out, the distance traveled on the periphery of the orbit is "s" = "rθ". Therefore, the speed of travel around the orbit is formula_29 where the angular rate of rotation is "ω". (By rearrangement, "ω" = "v"/"r".) Thus, "v" is a constant, and the velocity vector v also rotates with constant magnitude "v", at the same angular rate "ω". Relativistic circular motion. In this case, the three-acceleration vector is perpendicular to the three-velocity vector, formula_30 and the square of proper acceleration, expressed as a scalar invariant, the same in all reference frames, formula_31 becomes the expression for circular motion, formula_32 or, taking the positive square root and using the three-acceleration, we arrive at the proper acceleration for circular motion: formula_33 Acceleration. The left-hand circle in Figure 2 is the orbit showing the velocity vectors at two adjacent times. On the right, these two velocities are moved so their tails coincide. Because speed is constant, the velocity vectors on the right sweep out a circle as time advances. For a swept angle "dθ" = "ω" "dt" the change in v is a vector at right angles to v and of magnitude "v" "dθ", which in turn means that the magnitude of the acceleration is given by formula_34 Non-uniform circular motion. In a non-uniform circular motion, an object is moving in a circular path with a varying speed. Since the speed is changing, there is tangential acceleration in addition to normal acceleration. In a non-uniform circular motion, the net acceleration (a) is along the direction of Δ"v", which is directed inside the circle but does not pass through its center (see figure). The net acceleration may be resolved into two components: tangential acceleration and normal acceleration also known as the centripetal or radial acceleration. Unlike tangential acceleration, centripetal acceleration is present in both uniform and non-uniform circular motion. In a non-uniform circular motion, normal force does not always point in the opposite direction of weight. Here is an example with an object traveling in a straight path then looping a loop back into a straight path again. This diagram shows the normal force pointing in other directions rather than opposite to the weight force. The normal force is actually the sum of the radial and tangential forces. The component of weight force is responsible for the tangential force here (We have neglected frictional force). The radial force (centripetal force) is due to the change in the direction of velocity as discussed earlier. 
In a non-uniform circular motion, normal force and weight may point in the same direction. Both forces can point down, yet the object will remain in a circular path without falling straight down. First, let's see why normal force can point down in the first place. In the first diagram, let's say the object is a person sitting inside a plane, the two forces point down only when it reaches the top of the circle. The reason for this is that the normal force is the sum of the tangential force and centripetal force. The tangential force is zero at the top (as no work is performed when the motion is perpendicular to the direction of force applied. Here weight force is perpendicular to the direction of motion of the object at the top of the circle) and centripetal force points down, thus normal force will point down as well. From a logical standpoint, a person who is travelling in the plane will be upside down at the top of the circle. At that moment, the person's seat is actually pushing down on the person, which is the normal force. The reason why the object does not fall down when subjected to only downward forces is a simple one. Think about what keeps an object up after it is thrown. Once an object is thrown into the air, there is only the downward force of Earth's gravity that acts on the object. That does not mean that once an object is thrown in the air, it will fall instantly. What keeps that object up in the air is its velocity. The first of Newton's laws of motion states that an object's inertia keeps it in motion, and since the object in the air has a velocity, it will tend to keep moving in that direction. A varying angular speed for an object moving in a circular path can also be achieved if the rotating body does not have a homogeneous mass distribution. For inhomogeneous objects, it is necessary to approach the problem as in. One can deduce the formulae of speed, acceleration and jerk, assuming all the variables to depend on formula_35: formula_36 formula_37 formula_38 formula_39 formula_40 formula_41 formula_42 Further transformations may involve formula_43 and corresponding derivatives: formula_44 Applications. Solving applications dealing with non-uniform circular motion involves force analysis. With a uniform circular motion, the only force acting upon an object traveling in a circle is the centripetal force. In a non-uniform circular motion, there are additional forces acting on the object due to a non-zero tangential acceleration. Although there are additional forces acting upon the object, the sum of all the forces acting on the object will have to be equal to the centripetal force. formula_45 Radial acceleration is used when calculating the total force. Tangential acceleration is not used in calculating total force because it is not responsible for keeping the object in a circular path. The only acceleration responsible for keeping an object moving in a circle is the radial acceleration. Since the sum of all forces is the centripetal force, drawing centripetal force into a free body diagram is not necessary and usually not recommended. Using formula_46, we can draw free body diagrams to list all the forces acting on an object and then set it equal to formula_47. Afterward, we can solve for whatever is unknown (this can be mass, velocity, radius of curvature, coefficient of friction, normal force, etc.). For example, the visual above showing an object at the top of a semicircle would be expressed as formula_48. 
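To make this force analysis concrete, here is a hedged Python sketch (not from the original article; the mass, radius and speeds are arbitrary example values) for the top-of-the-circle case above, where formula_48 gives F_c = n + mg and therefore n = mv^2/r - mg.

# At the top of a vertical circle both the normal force n and the weight m*g
# point toward the center, so F_c = n + m*g and n = m*v**2/r - m*g.
g = 9.81  # gravitational acceleration, m/s^2

def normal_force_at_top(m, v, r):
    # Normal force on a mass m moving at speed v over the top of a circle of
    # radius r; a negative result means contact cannot be maintained at that speed.
    return m * v**2 / r - m * g

m, r = 70.0, 5.0              # e.g. a 70 kg rider on a 5 m loop
v_min = (g * r) ** 0.5        # speed at which the normal force drops to zero
print(v_min)                  # about 7.0 m/s
print(normal_force_at_top(m, 10.0, r))   # about 713.3 N at 10 m/s

Setting n = 0 gives the minimum speed sqrt(gr) needed at the top of the loop, a standard consequence of the analysis described above.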
In a uniform circular motion, the total acceleration of an object in a circular path is equal to the radial acceleration. Due to the presence of tangential acceleration in a non-uniform circular motion, that no longer holds true. To find the total acceleration of an object in non-uniform circular motion, find the vector sum of the tangential acceleration and the radial acceleration. formula_49 Radial acceleration is still equal to formula_50. Tangential acceleration is simply the derivative of the speed at any given point: formula_51. This root sum of squares of separate radial and tangential accelerations is only correct for circular motion; for general motion within a plane with polar coordinates formula_52, the Coriolis term formula_53 should be added to formula_54, whereas radial acceleration then becomes formula_55. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\omega = \\frac {2 \\pi}{T} = 2\\pi f = \\frac{d\\theta}{dt} " }, { "math_id": 1, "text": "v = \\frac{2 \\pi r}{T} = \\omega r" }, { "math_id": 2, "text": "\\theta = 2 \\pi \\frac{t}{T} = \\omega t" }, { "math_id": 3, "text": "\\alpha = \\frac{d\\omega}{dt}" }, { "math_id": 4, "text": "a_c = \\frac{v^2}{r} = \\omega^2 r" }, { "math_id": 5, "text": "F_c = \\dot{p} \\mathrel\\overset{\\dot{m} = 0}{=} ma_c = \\frac{mv^2}{r}" }, { "math_id": 6, "text": "\\mathbf{v} = \\boldsymbol \\omega \\times \\mathbf r ," }, { "math_id": 7, "text": "\\mathbf{a} = \\boldsymbol \\omega \\times \\mathbf v = \\boldsymbol \\omega \\times \\left( \\boldsymbol \\omega \\times \\mathbf r \\right) , " }, { "math_id": 8, "text": "\\mathbf{r}" }, { "math_id": 9, "text": "\\mathbf{r}(t) = R \\hat\\mathbf{u}_R(t)\\,," }, { "math_id": 10, "text": "\\hat\\mathbf{u}_R(t)" }, { "math_id": 11, "text": "\\hat\\mathbf{u}_\\theta(t)" }, { "math_id": 12, "text": "\\mathbf{v}(t) = \\frac{d}{dt} \\mathbf{r}(t) = \\frac{d R}{dt} \\hat\\mathbf{u}_R(t) + R \\frac{d \\hat\\mathbf{u}_R}{dt} \\, ." }, { "math_id": 13, "text": "\\mathbf{r}(t)" }, { "math_id": 14, "text": "\\frac{d \\hat\\mathbf{u}_R}{dt} = \\frac{d \\theta}{dt} \\hat\\mathbf{u}_\\theta(t) \\, ," }, { "math_id": 15, "text": "d\\hat\\mathbf{u}_R(t)" }, { "math_id": 16, "text": "\\mathbf{v}(t) = \\frac{d}{dt} \\mathbf{r}(t) = R\\frac{d \\hat\\mathbf{u}_R}{dt} = R \\frac{d \\theta}{dt} \\hat\\mathbf{u}_\\theta(t) = R \\omega \\hat\\mathbf{u}_\\theta(t) \\, ." }, { "math_id": 17, "text": "\\begin{align}\n\\mathbf{a}(t) &= \\frac{d}{dt} \\mathbf{v}(t) = \\frac{d}{dt} \\left(R \\omega \\hat\\mathbf{u}_\\theta(t) \\right) \\\\\n &= R \\left( \\frac{d \\omega}{dt} \\hat\\mathbf{u}_\\theta(t) + \\omega \\frac{d \\hat\\mathbf{u}_\\theta}{dt} \\right) \\, .\n\\end{align}" }, { "math_id": 18, "text": "\\frac{d \\hat\\mathbf{u}_\\theta}{dt} = -\\frac{d \\theta}{dt} \\hat\\mathbf{u}_R(t) = -\\omega \\hat\\mathbf{u}_R(t) \\, ," }, { "math_id": 19, "text": "\\begin{align}\n\\mathbf{a}(t) &= R \\left( \\frac{d \\omega}{dt} \\hat\\mathbf{u}_\\theta(t) + \\omega \\frac{d \\hat\\mathbf{u}_\\theta}{dt} \\right) \\\\\n &= R \\frac{d \\omega}{dt} \\hat\\mathbf{u}_\\theta(t) - \\omega^2 R \\hat\\mathbf{u}_R(t) \\,.\n\\end{align}" }, { "math_id": 20, "text": "\\mathbf{a}_R(t) = -\\omega^2 R \\hat\\mathbf{u}_R(t) \\, ," }, { "math_id": 21, "text": "\\mathbf{a}_\\theta(t) = R \\frac{d \\omega}{dt} \\hat\\mathbf{u}_\\theta(t) = \\frac{d R \\omega}{dt} \\hat\\mathbf{u}_\\theta(t) = \\frac{d \\left|\\mathbf{v}(t)\\right|}{dt} \\hat\\mathbf{u}_\\theta(t) \\, ." 
}, { "math_id": 22, "text": "y" }, { "math_id": 23, "text": "z" }, { "math_id": 24, "text": "z = x + iy = R\\left(\\cos[\\theta(t)] + i \\sin[\\theta(t)]\\right) = Re^{i\\theta(t)}\\,," }, { "math_id": 25, "text": "\\theta(t)" }, { "math_id": 26, "text": "\\dot{R} = \\ddot R = 0 \\, ," }, { "math_id": 27, "text": "v = \\dot{z}\n = \\frac{d}{dt}\\left(R e^{i\\theta[t]}\\right)\n = R \\frac{d}{dt}\\left(e^{i\\theta[t]}\\right)\n = R e^{i\\theta(t)} \\frac{d}{dt} \\left(i \\theta[t] \\right)\n = iR\\dot{\\theta}(t) e^{i\\theta(t)}\n = i\\omega R e^{i\\theta(t)} = i\\omega z\n" }, { "math_id": 28, "text": "\\begin{align}\n a &= \\dot{v} = i\\dot{\\omega} z + i\\omega\\dot{z} = \\left(i\\dot{\\omega} - \\omega^2\\right)z \\\\\n &= \\left(i\\dot{\\omega} - \\omega^2 \\right) R e^{i\\theta(t)} \\\\\n &= -\\omega^2 R e^{i\\theta(t)} + \\dot{\\omega} e^{i\\frac{\\pi}{2}} R e^{i\\theta(t)} \\, .\n\\end{align}" }, { "math_id": 29, "text": "v = r \\frac{d\\theta}{dt} = r\\omega ," }, { "math_id": 30, "text": "\\mathbf{u} \\cdot \\mathbf{a} = 0. " }, { "math_id": 31, "text": "\\alpha^2 = \\gamma^4 a^2 + \\gamma^6 \\left(\\mathbf{u} \\cdot \\mathbf{a}\\right)^2, " }, { "math_id": 32, "text": "\\alpha^2 = \\gamma^4 a^2. " }, { "math_id": 33, "text": "\\alpha = \\gamma^2 \\frac{v^2}{r}. " }, { "math_id": 34, "text": "a_c = v \\frac{d\\theta}{dt} = v\\omega = \\frac{v^2}{r}" }, { "math_id": 35, "text": " t " }, { "math_id": 36, "text": "\n\\mathbf{r} = R \\mathbf{u}_R\n" }, { "math_id": 37, "text": "\n\\dot \\mathbf{u}_R = \\omega\\mathbf{u}_{\\theta}\n" }, { "math_id": 38, "text": "\n\\dot \\mathbf{u}_{\\theta} = -\\omega\\mathbf{u}_R\n" }, { "math_id": 39, "text": "\n\\mathbf{v} = \\frac{d}{dt} \\mathbf{r} = \\dot \\mathbf{r}\n= \\dot R \\mathbf{u}_R + R \\omega \\mathbf{u}_{\\theta}\n" }, { "math_id": 40, "text": "\n\\mathbf{a} = \\frac{d}{dt} \\mathbf{v} = \\dot \\mathbf{v}\n= \\ddot R \\mathbf{u}_R\n+ \\left(\\dot R \\omega \\mathbf{u}_{\\theta}\n+ \\dot R \\omega \\mathbf{u}_{\\theta} \\right)\n+ R \\dot \\omega \\mathbf{u}_{\\theta}\n- R \\omega^2 \\mathbf{u}_{R} \n" }, { "math_id": 41, "text": "\n\\mathbf{j} = \\frac{d}{dt} \\mathbf{a} = \\dot \\mathbf{a}\n= \\dot\\ddot R \\mathbf{u}_R \n+ \\ddot R \\omega \\mathbf{u}_{\\theta}\n\n+ \\left(2\\ddot R \\omega \\mathbf{u}_{\\theta}\n+ 2\\dot R \\dot\\omega \\mathbf{u}_{\\theta}\n- 2\\dot R \\omega^2\\mathbf{u}_R \\right)\n\n+ \\dot R \\dot\\omega \\mathbf{u}_{\\theta}\n+ R \\ddot\\omega \\mathbf{u}_{\\theta}\n- R \\dot\\omega \\omega\\mathbf{u}_{R}\n\n- \\dot R \\omega^2 \\mathbf{u}_{R}\n- R 2\\dot\\omega\\omega \\mathbf{u}_{R}\n- R \\omega^3 \\mathbf{u}_{\\theta}\n" }, { "math_id": 42, "text": "\n\\mathbf{j}\n= \\left( \\dot\\ddot R - 3\\dot R \\omega^2 - 3R \\dot\\omega \\omega \\right) \\mathbf{u}_R \n+ \\left( 3 \\ddot R \\omega + 3\\dot R \\dot\\omega + R \\ddot \\omega - R \\omega^3 \\right) \\mathbf{u}_{\\theta} \n" }, { "math_id": 43, "text": "curvature = c = \\frac{1}{R}, \\omega = \\frac{v}{R} = vc " }, { "math_id": 44, "text": "\\begin{align}\n\\dot R &= -\\frac{\\dot c}{c^2} \\\\\n\\ddot R &= \\frac{2\\left(\\dot c\\right)^2}{c^3} - \\frac{\\ddot c}{c^2} \\\\\n\\dot \\omega &= \\frac{\\dot v R - \\dot R v}{R^2} = \\dot v c + v \\dot c\\\\\n\\end{align}" }, { "math_id": 45, "text": "\\begin{align}\n F_\\text{net} &= ma \\\\\n\n\n &= ma_r \\\\\n &= \\frac{mv^2}{r} \\\\\n &= F_c\n\\end{align}" }, { "math_id": 46, "text": "F_\\text{net} = F_c" }, { "math_id": 47, "text": "F_c" }, { "math_id": 48, "text": "F_c = n + mg" }, { "math_id": 49, 
"text": "\\sqrt{a_r^2 + a_t^2} = a" }, { "math_id": 50, "text": "\\frac{v^2}{r}" }, { "math_id": 51, "text": "a_t = \\frac{dv}{dt} " }, { "math_id": 52, "text": "(r, \\theta)" }, { "math_id": 53, "text": "a_c = 2 \\left(\\frac{dr}{dt}\\right)\\left(\\frac{d\\theta}{dt}\\right)" }, { "math_id": 54, "text": "a_t" }, { "math_id": 55, "text": "a_r = \\frac{-v^2}{r} + \\frac{d^2 r}{dt^2}" } ]
https://en.wikipedia.org/wiki?curid=574544
57474470
Functional correlation
Dimensionality reduction technique In statistics, functional correlation is a dimensionality reduction technique used to quantify the correlation and dependence between two variables when the data is functional. Several approaches have been developed to quantify the relation between two functional variables. Overview. A pair of real-valued random functions formula_0 and formula_1 with formula_2, a compact interval, can be viewed as realizations of square-integrable stochastic processes in a Hilbert space. Since both formula_3 and formula_4 are infinite-dimensional, some kind of dimension reduction is required to explore their relationship. Notions of correlation for functional data include the following. Functional canonical correlation coefficient (FCCA). FCCA is a direct extension of multivariate canonical correlation. For a pair of random functions formula_5 and formula_6 the first canonical coefficient formula_7 is defined as: where formula_8 denotes the inner product in Lp space (p=2) i.e. formula_9 formula_10 The formula_11 canonical coefficient formula_12, given formula_13, is defined as: where formula_14 is uncorrelated with all previous pairs formula_15. Thus FCCA implements projections in the directions of formula_16 and formula_17 for formula_3 and formula_4 respectively, such that their linear combinations (inner products) formula_18 are maximally correlated. formula_3 and formula_4 are uncorrelated if all their canonical correlations are zero, equivalently, if and only if formula_19. Alternative formulation. The cross-covariance operator for two random functions formula_3 and formula_4 is defined as formula_20 and analogously the auto-covariance operators for formula_21, for formula_22 and using formula_23, the formula_11 canonical coefficient formula_12 in (2) can be re-written as, where formula_14 is uncorrelated with all previous pairs formula_24 Maximizing (3) is equivalent to finding the eigenvalues and eigenvectors of the operator formula_25. Challenges. Since formula_26 and formula_27 are compact operators, the square root of the auto-covariance operator of formula_28 processes may not be invertible. So the existence of formula_29, and hence computing its eigenvalues and eigenvectors, is an ill-posed problem. As a consequence of this inverse problem, overfitting may occur, which may lead to an unstable correlation coefficient. Due to this inverse problem, formula_30 tends to be biased upwards and therefore close to 1, and hence is difficult to interpret. FCCA also requires densely recorded functional data so that the inner products in (2) can be accurately evaluated. Possible solutions. Some possible solutions to this problem have been discussed. Functional singular correlation analysis (FSCA). FSCA bypasses the inverse problem by replacing correlation with covariance as the objective function in (2). FSCA aims to quantify the dependency of formula_33 by implementing the concept of functional singular-value decomposition for the cross-covariance operator. FSCA can be viewed as an extension of analyses using singular-value decomposition of vector data to functional data. For a pair of random functions formula_5 and formula_6 with smooth mean functions formula_34 and formula_35 and smooth covariance functions, FSCA aims at a "functional covariance" corresponding to the first singular value of the cross-covariance operator formula_36, which is attained at functions formula_37.
A standardized version of this serves as a functional correlation and is defined as The singular representation of the cross-covariance can be employed to find a solution to the maximization problem (4). Analogously, we can extend this concept to find the next formula_38 ordered singular correlation coefficients formula_39. Correlation as angle between functions. In the multivariate case, the inner product of two vectors formula_40 and formula_41 is defined as, formula_42 where formula_43 is the angle between formula_40 and formula_41. This can be extended to the space of square integrable random functions. For this notion to be a meaningful measure of alignment of shapes, the integrals of the functions, which are the projections on the constant function 1, are subtracted. This part corresponds to a "static part" and the remainder can be thought of as a "dynamic part" for each random function. The cosine of the formula_44 angle between these "dynamic parts" then provides a correlation measure of functional shapes. Denoting formula_45 and formula_46 the standardized curves may be defined as formula_47 formula_48 and the correlation is defined as, References. &lt;templatestyles src="Reflist/styles.css" /&gt;
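The displayed expression for this correlation is omitted in the text above, but from the construction it is the inner product of the two standardized curves formula_47 and formula_48, i.e. the cosine of the angle between the centered, normalized functions. The following Python sketch (an illustration, not part of the original article) approximates that quantity for two curves sampled on a common uniform grid; the example curves, the grid and the use of a simple Riemann sum are arbitrary choices, and the mean is subtracted as the projection on the constant function (on a unit-length interval this coincides with the integral formula_45).

import numpy as np

def functional_shape_correlation(x, y, t):
    # Cosine-of-angle correlation between curves x(t), y(t) sampled on a
    # common uniformly spaced grid t: remove each curve's projection on the
    # constant function, normalize in L2, and take the L2 inner product.
    dt = t[1] - t[0]
    def standardize(f):
        f_dyn = f - f.mean()                            # "dynamic part" of the curve
        return f_dyn / np.sqrt(np.sum(f_dyn**2) * dt)   # unit L2 norm
    xs, ys = standardize(np.asarray(x, float)), standardize(np.asarray(y, float))
    return float(np.sum(xs * ys) * dt)                  # inner product = cos(angle)

t = np.linspace(0.0, 1.0, 501)
x = np.sin(2 * np.pi * t)
y = 0.8 * np.sin(2 * np.pi * t) + 0.1 * np.cos(4 * np.pi * t)
print(functional_shape_correlation(x, y, t))            # close to 1 for similarly shaped curves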
[ { "math_id": 0, "text": " \\textstyle X(t)" }, { "math_id": 1, "text": " \\textstyle Y(t)" }, { "math_id": 2, "text": " t \\in T" }, { "math_id": 3, "text": " X " }, { "math_id": 4, "text": " Y " }, { "math_id": 5, "text": " X \\in \\mathcal{L}^2(\\mathcal{I_X}) " }, { "math_id": 6, "text": " Y \\in \\mathcal{L}^2(\\mathcal{I_Y}) " }, { "math_id": 7, "text": " \\rho_1 " }, { "math_id": 8, "text": "\\langle \\cdot,\\cdot\\rangle" }, { "math_id": 9, "text": "\\langle f_1,f_2\\rangle = \\int_{\\mathcal{I}} f_1(t) f_2(t) \\, dt, " }, { "math_id": 10, "text": " \\quad f_1, f_2 \\in \\mathcal{L}^2(\\mathcal{I}) " }, { "math_id": 11, "text": " k^{th}" }, { "math_id": 12, "text": " \\rho_k " }, { "math_id": 13, "text": "\\rho_1, \\rho_2, \\ldots, \\rho_{k-1} " }, { "math_id": 14, "text": " (U_k, V_k)=(\\langle u_k, X\\rangle , \\langle v_k, Y\\rangle )" }, { "math_id": 15, "text": "(U_j, V_j)=(\\langle u_j, X\\rangle , \\langle v_j, Y\\rangle )_{j=1,2,\\ldots,k-1} " }, { "math_id": 16, "text": " U_k " }, { "math_id": 17, "text": " V_k " }, { "math_id": 18, "text": " (U_k, V_k) " }, { "math_id": 19, "text": " \\rho_1 =0 " }, { "math_id": 20, "text": " \\Sigma_{XY} : \\mathcal{L}^2(\\mathcal{I_X}) \\rightarrow \\mathcal{L}^2(\\mathcal{I_Y}) : \\Sigma_{XY} v(t) = \\int \\operatorname{cov}( X(t), Y(t)) v(s) \\, ds ;\\quad v \\in \\mathcal{L}^2(\\mathcal{I_Y}) " }, { "math_id": 21, "text": " X,\\Sigma_{XX} " }, { "math_id": 22, "text": " Y,\\Sigma_{YY} " }, { "math_id": 23, "text": " \\operatorname{cov} (\\langle u, X\\rangle , \\langle v, Y\\rangle ) = \\langle u, \\Sigma_{XY}Y \\rangle " }, { "math_id": 24, "text": "(U_j, V_j)=(\\langle u_j, X\\rangle , \\langle v_j, Y\\rangle )_{j=1,2,..,k-1} " }, { "math_id": 25, "text": " R= \\Sigma^{-1/2}_{XX} \\Sigma_{XY} \\Sigma^{-1/2}_{YY} " }, { "math_id": 26, "text": " \\Sigma_{XX} " }, { "math_id": 27, "text": " \\Sigma_{YY} " }, { "math_id": 28, "text": " \\mathcal{L^2} " }, { "math_id": 29, "text": " R " }, { "math_id": 30, "text": " \\rho_1" }, { "math_id": 31, "text": " l^2 " }, { "math_id": 32, "text": "\\mathcal{L}^2" }, { "math_id": 33, "text": " X,Y" }, { "math_id": 34, "text": " \\mu_X(t)=\\mathbb{E}(X(t))" }, { "math_id": 35, "text": " \\mu_Y(t)=\\mathbb{E}(Y(t))" }, { "math_id": 36, "text": "\\Sigma_{XY}" }, { "math_id": 37, "text": "u_1 \\in \\mathcal{L}^2(\\mathcal{I_X}) , v_1 \\in \\mathcal{L}^2(\\mathcal{I_Y})" }, { "math_id": 38, "text": "k " }, { "math_id": 39, "text": " \\rho_1, \\rho_2,\\ldots,\\rho_k " }, { "math_id": 40, "text": "a " }, { "math_id": 41, "text": "b " }, { "math_id": 42, "text": " \\langle a,b \\rangle = \\|a\\| \\|b\\| \\cos\\alpha" }, { "math_id": 43, "text": " \\alpha " }, { "math_id": 44, "text": "L^2" }, { "math_id": 45, "text": " M_1=\\langle X,1 \\rangle" }, { "math_id": 46, "text": " M_2=\\langle Y,1 \\rangle" }, { "math_id": 47, "text": " X^*(t)=\\left(X(t)-M_1(t)\\right) \\Big / \\left(\\int (X(t)-M_1(t))^2 \\, dt\\right)^{1/2}" }, { "math_id": 48, "text": " Y^*(t)=\\left(Y(t)-M_2(t)\\right) \\Big / \\left(\\int (Y(t)-M_2(t))^2 \\,dt\\right)^{1/2}" } ]
https://en.wikipedia.org/wiki?curid=57474470
5747450
Heronian mean
Number between two given numbers In mathematics, the Heronian mean "H" of two non-negative real numbers "A" and "B" is given by the formula formula_0 It is named after Hero of Alexandria. Properties. Just like all means, the Heronian mean is symmetric (it does not depend on the order in which its two arguments are given) and idempotent (the mean of any number with itself is the same number). The Heronian mean of the numbers "A" and "B" is a weighted mean of their arithmetic and geometric means: formula_1 Therefore, it lies between these two means, and between the two given numbers. Application in solid geometry. The Heronian mean may be used in finding the volume of a frustum of a pyramid or cone. The volume is equal to the product of the height of the frustum and the Heronian mean of the areas of the opposing parallel faces. A version of this formula, for square frusta, appears in the Moscow Mathematical Papyrus from Ancient Egyptian mathematics, whose content dates to roughly 1850 BC. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
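A short Python sketch (illustrative only, not part of the original article; the function names and test values are arbitrary) implements the Heronian mean and the frustum-volume application described above, and checks it against the classical square-frustum formula.

from math import sqrt, isclose

def heronian_mean(a, b):
    # Heronian mean of two non-negative reals: (a + sqrt(a*b) + b) / 3
    return (a + sqrt(a * b) + b) / 3

def frustum_volume(height, area_bottom, area_top):
    # Volume of a frustum: height times the Heronian mean of the two
    # parallel face areas.
    return height * heronian_mean(area_bottom, area_top)

a, b = 4.0, 9.0
print(heronian_mean(a, b))                                  # (4 + 6 + 9) / 3 = 6.333...
print(sqrt(a * b) <= heronian_mean(a, b) <= (a + b) / 2)    # True: lies between GM and AM

# Square frustum with side lengths s1, s2: V = h/3 * (s1**2 + s1*s2 + s2**2)
h, s1, s2 = 6.0, 4.0, 2.0
print(isclose(frustum_volume(h, s1**2, s2**2), h / 3 * (s1**2 + s1*s2 + s2**2)))  # True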
[ { "math_id": 0, "text": "H = \\frac{1}{3} \\left(A + \\sqrt{A B} +B \\right)." }, { "math_id": 1, "text": " H = \\frac{2}{3}\\cdot\\frac{A+B}{2} + \\frac{1}{3}\\cdot\\sqrt{A B}." } ]
https://en.wikipedia.org/wiki?curid=5747450
5747525
TCP window scale option
Transmission control protocol configurable option The TCP window scale option is an option to increase the receive window size allowed in Transmission Control Protocol above its former maximum value of 65,535 bytes. This TCP option, along with several others, is defined in RFC 1323, which deals with long fat networks (LFNs). TCP windows. The throughput of a TCP communication is limited by two windows: the congestion window and the receive window. The congestion window tries not to exceed the capacity of the network (congestion control); the receive window tries not to exceed the capacity of the receiver to process data (flow control). The receiver may be overwhelmed by data if, for example, it is very busy (such as a Web server). Each TCP segment contains the current value of the receive window. If, for example, a sender receives an ACK which acknowledges byte 4000 and specifies a receive window of 10000 (bytes), the sender will not send packets after byte 14000, even if the congestion window allows it. Theory. The TCP window scale option is needed for efficient transfer of data when the bandwidth-delay product (BDP) is greater than 64 KB. For instance, if a T1 transmission line of 1.5 Mbit/s were used over a satellite link with a 513 millisecond round-trip time (RTT), the bandwidth-delay product would be formula_0 bits or about 96,187 bytes. Using a maximum buffer size of 64 KB only allows the buffer to be filled to (65,535 / 96,187) = 68% of the theoretical maximum speed of 1.5 Mbit/s, or 1.02 Mbit/s. By using the window scale option, the receive window size may be increased up to a maximum value of formula_1 bytes, or about 1 GiB. This is done by specifying a one-byte shift count in the header options field. The true receive window size is left-shifted by the value in the shift count. A maximum value of 14 may be used for the shift count. This would allow a single TCP connection to transfer data over the example satellite link at 1.5 Mbit/s, utilizing all of the available bandwidth. Essentially, not more than one full transmission window can be transferred within one round-trip time period. The window scale option enables a single TCP connection to fully utilize an LFN with a BDP of up to 1 GB, e.g. a 10 Gbit/s link with a round-trip time of 800 ms. Possible side effects. Because some firewalls do not properly implement TCP Window Scaling, it can cause a user's Internet connection to malfunction intermittently for a few minutes, then appear to start working again for no reason. There is also an issue if a firewall does not support the TCP extensions. Configuration of operating systems. Windows. TCP Window Scaling has been implemented in Windows since Windows 2000. It is enabled by default in Windows Vista / Server 2008 and newer, but can be turned off manually if required. Windows Vista and Windows 7 have a fixed default TCP receive buffer of 64 kB, scaling up to 16 MB through "autotuning", limiting manual TCP tuning over long fat networks. Linux. Linux kernels (from 2.6.8, August 2004) have enabled TCP Window Scaling by default. The configuration parameters are found in the /proc filesystem, see pseudo-file /proc/sys/net/ipv4/tcp_window_scaling and its companions /proc/sys/net/ipv4/tcp_rmem and /proc/sys/net/ipv4/tcp_wmem (more information: , section sysctl). Scaling can be turned off by setting the kernel parameter to zero, for example with the command sysctl -w net.ipv4.tcp_window_scaling=0. To maintain the change after a restart, include the line "net.ipv4.tcp_window_scaling=0" in /etc/sysctl.conf (or /etc/sysctl.d/99-sysctl.conf as of systemd 207). FreeBSD, OpenBSD, NetBSD and Mac OS X.
The default setting for FreeBSD, OpenBSD, NetBSD and Mac OS X is to have window scaling (and other features related to RFC 1323) enabled. To verify their status, a user can check the value of the "net.inet.tcp.rfc1323" variable via the sysctl command (for example, sysctl net.inet.tcp.rfc1323). A value of 1 (output "net.inet.tcp.rfc1323=1") means scaling is enabled, 0 means "disabled". If enabled, it can be turned off by issuing a command such as sysctl -w net.inet.tcp.rfc1323=0. This setting is lost across a system restart. To ensure that it is set at boot time, add the following line to "/etc/sysctl.conf": codice_0 However, on macOS 10.14 this command produces the error "sysctl: unknown oid 'net.inet.tcp.rfc1323'".
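To tie the numbers in the Theory section together, the following Python sketch (illustrative only, not part of the original article; the helper names are arbitrary) computes the bandwidth-delay product for the 1.5 Mbit/s satellite example and the smallest window-scale shift count whose scaled 65,535-byte window covers it.

def bdp_bytes(bandwidth_bit_s, rtt_s):
    # Bandwidth-delay product in bytes
    return bandwidth_bit_s * rtt_s / 8

def min_window_scale(bdp, base_window=65535, max_shift=14):
    # Smallest shift count (0..14) such that base_window << shift >= bdp,
    # or None if even the maximum scaled window is too small
    for shift in range(max_shift + 1):
        if (base_window << shift) >= bdp:
            return shift
    return None

bdp = bdp_bytes(1.5e6, 0.513)       # the satellite example: 1.5 Mbit/s, 513 ms RTT
print(int(bdp))                     # 96187 bytes, as in the Theory section
print(round(65535 / bdp, 2))        # 0.68: only about 68% of the link is usable unscaled
print(min_window_scale(bdp))        # 1: a shift count of 1 already covers this BDP
print(65535 << 14)                  # 1073725440 bytes, about 1 GiB with the maximum shift of 14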
[ { "math_id": 0, "text": "\\scriptstyle 1,500,000 \\times 0.513 = 769,500" }, { "math_id": 1, "text": "1,073,725,440\\ \\scriptstyle \\left(= (2^{16}-1)\\times(2^{14}) = 65,535 \\times 16,384)\\right)" } ]
https://en.wikipedia.org/wiki?curid=5747525